When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place.
We founded Clarifai 4 Good, where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took its social responsibility seriously.
I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war.
Some background: In 2014, Stephen Hawking and Elon Musk led an effort with thousands of AI researchers to collectively pledge never to contribute research to the development of lethal autonomous weapons systems, weapons that could seek out a target and end a life without a human in the decision-making loop. The researchers argued that the technology to create such weapons was just around the corner and would be disastrous for humanity.
I signed that pledge, and in January of this year, I wrote a letter to the CEO of Clarifai, Matt Zeiler, asking him to sign it, too. I was not at all expecting his response. As The New York Times reported on Friday, Matt called a companywide meeting and announced that he was totally willing to sell autonomous weapons technology to our government.
I could not abide being part of that, so I quit. My objections were not based just on a fear that killer robots are “just around the corner.” They already exist and are used in combat today.
Now, don’t go running for the underground bunker just yet. We’re not talking about something like the Terminator or HAL from “2001: A Space Odyssey.” A scenario like that would require something like artificial general intelligence, or AGI: basically, a sentient being.
In my opinion, that won’t happen in my lifetime. On the other end of the spectrum, there are some people who describe things like landmines or homing missiles as “semiautonomous.” That’s not what I’m talking about either.
The core issue is whether a robot should be able to select and acquire its own target from a list of potential ones and attack that target without a human approving each kill. One example of a fully autonomous weapon that’s in use today is the Israeli Harpy 2 drone (or Harop), which seeks out enemy radar signals on its own. If it finds a signal, the drone goes into a kamikaze dive and blows up its target.
Fortunately, there are only a few of these kinds of machines in operation today, and as of right now, they usually operate with a human as the one who decides whether to pull the trigger. Those supporting the creation of autonomous weapons systems would prefer that human to be not “in the loop” but “on the loop”: supervising the quick work of the robot in selecting and destroying targets, but not having to approve every last kill.
When presented with the Harop, a lot of people look at it and say, “It’s scary, but it’s not genuinely freaking me out.” But imagine a drone acquiring a target with a technology like face recognition. Imagine this: You’re walking down the street when a drone pops into your field of vision, scans your face, and makes a decision about whether you get to live or die.
Suddenly, the question “Where in the decision loop does the human belong?” becomes a deadly serious one.
What the generals are thinking
On the battlefield, human-controlled drones already play a critical role in surveillance and target location. If you add machine learning to the mix, you鈥檙e looking at a system that can sift through exponentially increasing numbers of potential threats over a vast area.
But there are vast technical challenges with streaming high-definition video halfway around the world. Say you’re a remote drone pilot and you’ve just found a terrorist about to do something bad. You’re authorized to stop them, and all of a sudden, you lose your video feed. Even if it’s just for a few seconds, by the time the drone recovers, it might be too late.
What if you can’t stream at all? Signal jamming is pretty common in warfare today. Your person in the loop is now completely useless.
That’s where generals get to thinking: Wouldn’t it be great if you didn’t have to depend on a video link at all? What if you could program your drone to be self-contained? Give it clear instructions, and just press Go.
That’s the argument for autonomous weapons: Machine learning will make war more efficient. Plus, Russia and China are already working on this technology, so we might as well do the same.
Sounds reasonable, right?
Okay, here are six reasons why killer robots are genuinely terrifying
There are a number of reasons why we shouldn’t accept these arguments:
1. Accidents. Predictive technologies like face recognition or object localization are guaranteed to have error rates, so a case of mistaken identity can be deadly. These technologies often fail disproportionately on people with darker skin tones or certain facial features, whose lives would be doubly subject to this threat.
Also, drones go rogue sometimes. It doesn’t happen often, but software always has bugs. Imagine a self-contained, solar-powered drone that has instructions to find a certain individual whose face is programmed into its memory. Now imagine it rejecting your command to shut it down.
2. Hacking. If your killer robot has a way to receive commands at all (for example, by executing a “kill switch” to turn it off), it is vulnerable to hacking. That means a powerful swarm of drone weapons could be turned off, or turned against us.
3. The “black box” problem. AI has an “explainability” problem. Your algorithm did XYZ, and everyone wants to know why, but because of the way that machine learning works, even its programmers often can’t know why an algorithm reached the outcome that it did. It’s a black box. Now, when you enter the realm of autonomous weapons and ask, “Why did you kill that person?” the complete lack of an answer simply will not do: morally, legally, or practically.
4. Morality & Context. A robot doesn鈥檛 have moral context to prioritize one kind of life over another. A robot will only see that you鈥檙e carrying a weapon and 鈥渒now鈥 that its mission is to shoot with deadly force. It should not be news that terrorists often exploit locals and innocents. In such scenarios, a soldier will be able to use their human, moral judgment in deciding how to react 鈥 and can be held accountable for those decisions. The best object localization software today is able to look at a video and say, 鈥淚 found a person.鈥 That鈥檚 all. It can鈥檛 tell whether that person was somehow coerced into doing work for the enemy.
5. War at Machine Speed. How long does it take you to multiply 35 by 12? A machine can do thousands of such calculations in the time it takes us to blink. If a machine is programmed to make quick decisions about how and when to fire a weapon, it’s going to do it in ways we humans can’t even anticipate. Early experiments with swarm technology have shown that no matter how you structure the inter-drone communications, the outcomes are different every time. The humans simply press the button, watch the fight, and wait for it to be over so that they can try to understand the what, when, and why of it.
Add 3D printing to the mix, and now it’s cheap and easy to create an army of millions of tiny (but lethal) robots, each one thousands of times faster than a human being. Such a swarm could overwhelm a city in minutes. There will be no way for a human to defend themselves against an enemy of that scale or speed, or even understand what’s happening.
6. Escalation. Autonomous drones would further distance the trigger-pullers from the violence itself and generally make killing more cost-free for governments. If you don’t have to put your soldiers in harm’s way, it becomes that much easier to decide to take lives. This distance also puts up a psychological barrier between the humans dispatching the drones and their targets.
Humans actually find it very difficult to kill, even in military combat. In his book “Men Against Fire,” S.L.A. Marshall reports that over 70 percent of bullets fired in WWII were not aimed with the intent to kill. Think about firing squads. Why would you have seven people line up to shoot a single person? It’s to protect the shooters’ psychological safety, of course. No one will know whose bullet it truly was that did the deed.
If you turn your robot on and it decides to kill a child, was it really you who destroyed that life?
There is still time to ask our government to agree to ban this technology outright
In the end, there are many companies out there working to “democratize” powerful technology like face recognition and object localization. But these technologies are “dual-use,” meaning they can be used not only for everyday civilian purposes but also for targeting people with killer drones.
Project Maven, for instance, is a Defense Department contract that’s currently being worked on at Microsoft and Amazon (as well as at startups like Clarifai). Google employees were successful in persuading the company to withdraw from the contract because they feared it would be used to this end. Project Maven might just be about “counting things,” as the Pentagon claims. It might also be a targeting system for autonomous killer drones, and there is absolutely no way for a tech worker to tell.
With so many tech companies participating in work that contributes to the reality of killer robots in our future, it’s important to remember that major powers won’t be the only ones to have autonomous drones. Even if killer robots won’t be the Gatling gun of World War III, they will do a lot of damage to populations living in countries all around the world.
We must remind our government that humanity has been successful in instituting international norms that condemn the use of chemical and biological weapons. When the stakes are this high and the people of the world object, there are steps that governments can take to prevent mass killings. We can do the same thing with autonomous weapons systems, but the time to act is now.
Official Defense Department policy currently states that there must be a “human in the loop” for every kill decision, but that policy is under debate right now, and there’s a loophole in it that would allow an autonomous weapon to be approved. We must work together to ensure this loophole is closed.
That’s why I refuse to work for any company that participates in Project Maven or otherwise contributes dual-use research to the Pentagon. Policymakers need to promise us that they will stop the pursuit of lethal autonomous weapons systems once and for all.
I don’t regret my time at Clarifai. I do miss everyone I left behind. I truly hope that the industry changes course and agrees to take responsibility for its work to ensure that the things we build in the private sector won’t be used for killing people. More importantly, I hope our government begins working internationally to ensure that autonomous weapons are banned to the same degree as biological ones.