Artificial Intelligence (AI) should – in the hands of the right people – prove to be a massive benefit to humanity. AI’s ability to process vast data sets quickly to identify and act on trends should help in important tasks like developing new drugs, improving transport systems and solving the mysteries of the Universe.
For all its “intelligence”, however, AI is also quite dumb. Computer systems must be given instructions and training before they can solve problems – and this is where the potential for evil arises.
How AI could go bad
Like all technologies, Artificial Intelligence will become more affordable over time, allowing almost anyone to begin building AI systems. In the relatively near future, criminals will be able to jump on the AI bandwagon – and that is when computer systems will start to become “evil”.
A research institute based at New York University has been looking into the issue of evil artificial intelligence. Its researchers have identified three key ways in which AI could be turned against us.
1. Simplified cybercrime targeting
Cyberattacks are increasing in sophistication, but many still rely on brute force to take websites offline or to “guess” passwords. By applying Artificial Intelligence to these tasks, however, hackers can automate much of the initial reconnaissance an attack requires. The AI helps to suggest the most effective avenues of attack, allowing cybercriminals to better organise their resources and improve their chances of successfully breaching their victims’ defences.
2. Real world attacks
In the movie The Terminator we are presented with a terrifying vision of the future where a rogue Artificial Intelligence system is trying to wipe out humanity using time-travelling killer robots. We may never see anything like this happen in real life, but some analysts suggest that AI could be used for real world attacks.
Using drones equipped with facial recognition software, it may be possible, for instance, to launch airborne attacks against specific individuals. Once the AI detects its target, the drone can fire on them – without the need for a human pilot.
3. More efficient online manipulation
The world is still trying to come to terms with scandals like the misuse of personal data by Cambridge Analytica. But as AI matures, this kind of manipulation will become even more sophisticated. Intelligent machines will be able to use our social data against us to manipulate our emotions, opinions and behaviour.
Totalitarian governments could exploit this control to identify political dissenters, while cybercriminals could use it to blackmail us for profit.
Time for a serious debate about AI
Artificial Intelligence should be good for humanity – but there is a very real danger that it will also be used for harm. Unfortunately, there is no easy answer to this dilemma. Eventually we may see a code of ethics created to encourage organisations using AI to behave responsibly – but such a code is unlikely to be enforceable against determined criminals.
It may be the case that we need to build Artificial Intelligence platforms to police other AI systems – because it’s actually people who are the problem.