It must be a bit of a slow week at the United Nations. At a meeting of the UN Convention on Certain Conventional Weapons in Geneva on Tuesday, diplomats discussed a potential ban on the use of killer robots, should the technology ever be invented. And this isn't just a brief chat; it's the start of a four-day summit. According to the agenda, a whole host of experts will be talking about killer robots: their potential uses, the issues that could arise, foreseeable next steps in the technology's development, and its possible impacts.
“All too often international law only responds to atrocities and suffering once it has happened,” Michael Moeller, acting head of the UN's European headquarters in Geneva, told diplomats at the opening of the summit. “You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control.”
Lethal force is lawful only if it meets three cumulative requirements governing when and how much force may be used: it must be necessary to protect human life, constitute a last resort, and be proportionate to the threat. Each of these prerequisites involves a qualitative assessment of a specific situation, and because the range of possible scenarios is effectively infinite, robots could not be pre-programmed to handle every circumstance.
To be fair to the UN, I think we can all agree that prevention is preferable to cure. And "lethal autonomous weapons systems" are not a million miles away from the rather terrifying-looking human-operated drones that the U.S. and several other militaries around the world currently employ.
Also, über-genius Stephen Hawking — who, let's face it, knows a darn sight more than the rest of us about basically everything — recently joined computer science professor Stuart Russell of the University of California, Berkeley, and physics professor Max Tegmark of the Massachusetts Institute of Technology to write an op-ed for The Independent warning that creating artificial intelligence could be both the greatest and the last thing that humans ever do.
"Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes," the professors wrote.
So really, we guess this is actually a "Good on you, UN!" moment. If only we could get the ridiculous Terminator images out of our minds...