By Rich Pell // May 28, 2015
In a commentary in the journal Nature, artificial intelligence (AI) expert Stuart Russell, a professor at the University of California, Berkeley, calls on colleagues to “take a stand” on lethal autonomous weapons systems (LAWS).
In his article, Russell notes that the AI and robotics communities face an ethical choice between supporting and opposing the development of LAWS. Such weapons – which would operate on the battlefield without any human control – are becoming increasingly feasible and have been described as the “third revolution” in warfare, after gunpowder and nuclear weapons.
These could include armed autonomous airborne drones as well as ground-based robotic systems capable of including humans among their targets. The technology for such weapons is already here, says Russell: “For example, the technology already demonstrated for self-driving cars, together with the human-like tactical control learned by DeepMind’s DQN (deep Q-network) system, could support urban search-and-destroy missions.”
Russell cites two U.S. DARPA (Defense Advanced Research Projects Agency) programs as examples that foreshadow the planned use of LAWS:
- The Fast Lightweight Autonomy (FLA) program aims to enable “a new class of algorithms for minimalistic high-speed navigation in cluttered environments” – i.e., the capability for tiny autonomous UAVs to fly at speeds of up to 20 m/s in urban areas and inside buildings without operator control or GPS waypoints.
- The Collaborative Operations in Denied Environment (CODE) program focuses on improving the ability of groups of unmanned aircraft systems to work together under a single human commander’s supervision, and aims to “develop algorithms and software that would extend the mission capabilities of existing unmanned aircraft systems well beyond the current state of the art.”
While most countries agree on the need for “meaningful human control” over the decisions made by robotic weapons, Russell worries that the term “meaningful” has not been clearly defined, and that allowing machines to decide whom to kill “could violate fundamental principles of human dignity.” Further, there is concern over the potential for such technologies to spill over from the military realm into peacetime policing of civilian populations.
Ultimately, if LAWS technologies are taken to their logical extreme, Russell foresees a dire future: “One can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless.”