Is artificial intelligence safe for humanity?

Author: Stefano Albrecht

Date: 2015-01-01

This essay originally appeared in:
Edinburgh University Science Magazine
Issue 18, p. 22, 2015

Recently, a number of prominent scientists, inventors, and entrepreneurs have openly voiced their concerns about the potential dangers and risks of artificial intelligence. Artificial intelligence (AI), as an academic subject, is the study of intelligent behaviour exhibited by machines, including software and robots. The precise definition of intelligence varies, but it usually includes elements such as inference (drawing conclusions from evidence), planning (setting goals and constructing plans to achieve them), and learning (improving performance based on experience).
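To make these three elements a little more concrete, here is a toy sketch in Python (not part of the original essay; the corridor world, the noise and reliability figures, and all names are invented purely for illustration) of an agent that infers its position from noisy evidence, plans a route to a goal, and learns from experience how reliable its own motors are:

import random
from collections import Counter, deque

N_CELLS, GOAL = 10, 9   # a 10-cell corridor; the goal is the last cell

def sense(true_pos, n=5, noise=0.2):
    # Noisy sensor: usually reports the true cell, occasionally a neighbour.
    readings = []
    for _ in range(n):
        error = random.choice([-1, 1]) if random.random() < noise else 0
        readings.append(min(N_CELLS - 1, max(0, true_pos + error)))
    return readings

def infer(readings):
    # Inference: draw a conclusion (the most likely position) from
    # evidence (the readings), here by simple majority vote.
    return Counter(readings).most_common(1)[0][0]

def plan(start, goal):
    # Planning: breadth-first search for a sequence of moves (+1 or -1)
    # that leads from the start cell to the goal cell.
    frontier, visited = deque([(start, [])]), {start}
    while frontier:
        pos, path = frontier.popleft()
        if pos == goal:
            return path
        for step in (1, -1):
            nxt = pos + step
            if 0 <= nxt < N_CELLS and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [step]))
    return []

# Learning: the agent's motors are unreliable, so it tracks how often
# its moves actually succeed, improving its estimate with experience.
pos, succeeded, tried = 0, 0, 0
while pos != GOAL:
    believed = infer(sense(pos))          # where do I think I am?
    for step in plan(believed, GOAL):     # how do I get to the goal?
        tried += 1
        if random.random() < 0.8:         # a move succeeds 80% of the time
            pos = min(N_CELLS - 1, max(0, pos + step))
            succeeded += 1
print(f"goal reached; learned motor reliability ~ {succeeded / tried:.2f}")

Real AI systems are of course vastly more sophisticated, but these same three ingredients recur throughout the field.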

What dangers and risks does artificial intelligence pose? One of the major fears is that the pace of development of intelligent machines might reach a point beyond which meaningful human control is no longer feasible. In other words, a machine could become smarter than humans and develop a will of its own. Another risk is that intelligent machines may not be constrained by the same social, ethical, and legal rules that govern human decision making, and that they may lack common sense. As a result, an intelligent machine may attempt to achieve its goals in ways not anticipated, and possibly unintended, by its human designer.

Many science fiction films have explored the potential dangers of artificial intelligence, from the Terminator series to the more recent Ex Machina (Edinburgh's School of Informatics hosts a list of films related to artificial intelligence). While such scenarios seem distant, a potential threat already exists in the form of lethal autonomous weapons, or “killer robots” as they are sometimes called: machines that can select and engage targets with little or no human intervention. Examples include automatic machine-gun turrets and missile-equipped flying drones.

Arguments can be made both for and against such technology. Some scientists believe that as machines surpass humans at tasks such as communication, coordination, and targeting, they could reduce both civilian and military casualties. On the other hand, autonomous weapons raise several concerns, including a possible arms race between nations as the technology becomes widely available, and a lowered threshold for armed conflict when machines can be sent into battle in place of human soldiers. As a result, a number of organisations have called for a ban on the development and deployment of autonomous weapons.

The need to understand and control the potential dangers posed by artificial intelligence has long been recognised in the scientific community, and the effort recently gained significant momentum when the US-based Future of Life Institute received a $10 million donation to fund research in this area. The institute's goal is to “maximize the societal benefit of AI, explicitly focusing not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial”. As an example, Dr. Adrian Weller at Cambridge University received a grant to investigate “self-policing” machines, whose purpose is to police other intelligent machines and recognise undesirable activity.

The question of whether a machine's behaviour is desirable, when considered in the human sphere, is deeply connected to ethics and moral values. Earlier this year, Professor Benjamin Kuipers from the University of Michigan gave a guest lecture at the University of Edinburgh in which he examined how robots could make moral decisions. One point that became apparent is that some moral dilemmas are hard enough for humans (such as whether to kill an innocent bystander in order to save other people), let alone for machines. Another important issue is that of legal responsibility when a machine causes an accident: if a driverless car accidentally kills a pedestrian, is the owner or the manufacturer legally accountable?

It is evident that substantial further research and debate are required to understand the potential benefits and risks of artificial intelligence. The coming decades will see a further proliferation of computer technology, and it remains a significant challenge to maximise the benefits while minimising the risks.