
Elon Musk calls for a ban on killer robots

Posted by Alex Perryman on 25th August 2017

Asimov’s First Law of Robotics reads simply enough: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

I guess the staff at BAE Systems probably weren’t fans of ‘Golden Age’ sci-fi literature. The Taranis drone, currently under development by BAE and trialled in 2013, is essentially a killer robot: autonomous weaponry capable of bombing you into the Stone Age without any pesky human interaction whatsoever.

Another good example is Samsung’s SGR-A1 Sentry Gun, currently in operation along the South Korean side of the Korean Demilitarized Zone. It is supposedly capable of turning you into mush completely autonomously, even when the operator happens to be on his fag break (although it’s not clear whether this feature is actually being used).

Photo credit: kodiax2, ‘Cylon’, via photopin (license)

All in all, autonomous military ordnance is popping up everywhere, from drones and warships in the US to tanks in Russia.

Even though such systems can blow you to bits without any human interaction, I suspect it would be a lax government that, with today’s technology, would fire up the killbots without some human control in play. You can rest assured that, should a major military contractor blow up your house, they will at least have had a human hit the button. Possibly an intern.

But here’s the thing — robotics and AI technology are advancing really fast. Sophisticated, capable killbots seem to be only a few years away. At the same time, the allure of such systems is proving hard for military organisations to ignore.

This is the situation that has informed Elon Musk’s latest headline-grabbing escapade. As The Guardian recently reported, in a letter read out at the opening of the International Joint Conference on Artificial Intelligence on Monday, Elon Musk and 116 specialists (founders of robotics and AI companies) called for the United Nations to impose an outright ban on the development and use of killer robots.

There are good arguments on both sides of the fence. The advent of autonomous, decision-making weaponry could mean less risk to soldiers’ lives, and a lower incidence of battlefield problems such as post-traumatic stress disorder. Machines might also be demonstrably less prone to error, and certainly to emotional influence.

On the other hand, it is, by definition, somewhat inhuman. Does it represent an abnegation of responsibility to devolve your murder spree to a robot? Shouldn’t an entity capable of taking a life feel the ramifications of that death, and be held individually and legally accountable for it? And can a robot, lacking a true, emotional moral compass, really ‘decide’ whether it is right to kill someone?

Perhaps the bigger issue is the potential for a new arms race, with countries building up enormous reserves of hyper-accurate autonomous ordnance. And what will these devices be pointing their guns at? Each other? Military installations? Civilians? And should the development of increasingly sophisticated autonomous killing machines be permitted, surely it’s only a matter of time before such devices are copied or otherwise acquired by non-military organisations?

Asimov, whose ‘I, Robot’ stories grappled with exactly these questions, would likely have agreed with the sentiments of Musk and the 116 founders: that lethal robotics, unlike soldiers, can be controlled absolutely, and could therefore be used as ‘weapons that despots and terrorists [can] use against innocent populations’. But Musk and the founders are also concerned about the potential for a rapid escalation, a ‘third revolution’ in warfare (with gunpowder and atomic weapons as the first two).

There are no easy answers here. Perhaps lethal robotics and AI do represent a ‘Pandora’s box’, as Musk says. Perhaps those who use such devices should be treated as pariahs, as is the case with chemical weapons, and with the blinding laser weapons prohibited under the UN’s Convention on Certain Conventional Weapons (CCW). However, with such a distinct tactical advantage on offer, the possible need to defend against such systems being developed elsewhere, and the slow ‘creep’ towards automation via partially-automated weaponry, the concern is that this could be one Pandora’s box we may not be able to close.

Find the prospect of killer robots alarming? Perhaps, as Wildfire’s Alex Warren has noted, we should encourage the development of ‘useless robots’ instead.

Alex Perryman

Alex joined Wildfire in 2007. He is renowned for his ability to pick up complex technologies and new industries extremely quickly.