Is Artificial Intelligence Safe for Humanity?
Stefano V. Albrecht
Recently, a number of prominent scientists, inventors, and entrepreneurs have openly voiced concerns about the potential dangers and risks of artificial intelligence. Artificial intelligence, as an academic subject, is the study of intelligent behaviour exhibited by machines, including software and robots. Precise definitions of intelligence vary, but they usually include elements such as inference (drawing conclusions from evidence), planning (setting goals and constructing plans to achieve them), and learning (improving performance based on experience).
What dangers and risks does artificial intelligence pose? One major fear is that the development of intelligent machines could reach a point beyond which meaningful human control is no longer feasible. In other words, machines could become smarter than humans and develop wills of their own. Another risk is that intelligent machines may not be constrained by the same social, ethical, and legal rules that govern human decision making, and that they may lack “common sense”. As a result, an intelligent machine may attempt to achieve its goals in ways not anticipated, and possibly unintended, by its human designer.
Many science fiction films have explored the potential dangers of artificial intelligence, from “Skynet” in the Terminator series to the more recent Ex Machina. (Prof. Bob Fisher of The University of Edinburgh maintains a list of films related to artificial intelligence.) However, while such scenarios seem distant, a potential threat already exists in the form of “lethal autonomous weapons”, or “killer robots” as they are sometimes called. These terms refer to machines that can select and engage targets with no, or only limited, human intervention. Examples include automatic machine-gun turrets and flying drones equipped with missiles.
Arguments can be made both for and against such technology. Some scientists believe that as machines surpass humans in tasks such as communication, coordination, and targeting, they could reduce both civilian and military casualties. On the other hand, autonomous weapons raise several concerns, such as a possible arms race between nations as the technology becomes widely available, and a lowered threshold for armed conflict when machines are deployed in place of human soldiers. As a result, a number of organisations have called for a ban on the development and deployment of autonomous weapons, for example through a protocol under the UN Convention on Certain Conventional Weapons.
The need to understand and control the potential dangers posed by artificial intelligence has long been recognised in the scientific community, and it has gained significant momentum recently when the US-based Future of Life Institute received a donation of 10 million US dollars to fund research in this area. As the institute states, the goal is to “maximize the societal benefit of AI, explicitly focusing not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial”. As an example, my colleague Dr. Adrian Weller at Cambridge University received a grant to investigate “self-policing” machines, whose purpose is to police other intelligent machines and recognise undesirable activity.
The question of whether a machine's behaviour is desirable, when considered in the human sphere, is deeply connected to ethics and moral values. Earlier this year, Prof. Benjamin Kuipers from the University of Michigan gave a guest lecture at The University of Edinburgh in which he examined how robots could make moral decisions. One point that became apparent during the lecture is that some social dilemmas are hard enough for humans (such as whether to kill an innocent bystander in order to save other people), let alone for machines. Another important issue is that of legal responsibility when a machine causes an accident, such as a driverless car that accidentally kills a pedestrian: is the owner or the manufacturer legally accountable?
It is evident that substantial further research and debate are required to understand the potential benefits and risks of artificial intelligence. The coming decades will see a further proliferation of computer technology, and it remains a significant challenge to maximise the benefits while minimising the risks.
Edinburgh University Science Magazine
Issue 18, p. 22