Amazon Drivers Placed Under Robot Surveillance Microscope
Last month we learned that Amazon is installing AI cameras that will constantly scrutinize drivers inside the cabins of its delivery vehicles, and inform their bosses when the camera thinks they've done something questionable.
The device Amazon is installing (called "Driveri," pronounced "driver eye") has cameras pointing in four directions, one of which is toward the driver. In a video posted online, the company says the "camera records 100 percent of the time when you're out on your route," and watches for 16 behaviors that will "trigger Driveri to upload recorded footage." These include not only accidents but also such things as following another car too closely, making a U-turn, failing to wear the seatbelt, obstructing the camera, "hard" braking or accelerating, and appearing to be distracted or drowsy (or what the AI interprets as those activities, anyway). Sometimes the robot camera will shout commands at you, such as "maintain safe distance!" or "please slow down!" One driver told CNBC that if the camera decides you're drowsy, it will tell you to pull over for at least 15 minutes, and that if you don't comply, you may get a call from your boss.
The cameras in this system are not streamed live to management; this is an AI monitoring system. The device itself decides when to send video clips to the bosses and when to issue verbal alerts to drivers. But as we have long argued, nobody should make the mistake of thinking that we can't suffer many forms of privacy harm when being monitored by machines, not least because those machines are programmed to "snitch" to actual humans when they see something they think is bad. The company that makes Driveri, Netradyne, also advertises that its product keeps scores on drivers that are updated, and provided to management, in real time. (Such a function is not mentioned in Amazon's video.)
Given how bad AI is at understanding the subtleties of human behavior and dealing with anomalies, this system could lead to real fairness and accuracy issues. Automated test proctoring software, which also uses video to monitor people for subtle behaviors (in this case, cheating), has certainly been plagued with problems. Machine vision is very brittle and can fail in unexpected ways, even at fundamentals like recognizing a stop sign. Netradyne boasts that "every stop sign & traffic signal is identified and analyzed for compliance measurement." But what happens when the AI thinks it sees a stop sign where there is none, and flags the driver for "running" it?
Ideally a human being would review the video and exonerate the driver, but given how automated Amazon's management is, we don't know how often that will happen. Workers in Amazon's warehouses, for example, are constantly supervised by robots that judge whether they're moving packages quickly enough. If they don't like what they see, those robots issue warnings and even terminations, without any human input.
Amazon touts the system as a beneficial safety measure. It could indeed reduce accidents (though that should be proven), but as a society we're going to need to figure out how far we will allow ourselves to be overseen by automated AI cameras that engage in intrusive monitoring, judging, nagging, and reporting of our behaviors. Potential fairness issues aside, that kind of monitoring would probably make anyone miserable. There are almost certainly ways to use AI to protect the safety of workers that feel empowering and protective rather than infantilizing and oppressive.
Meanwhile, this kind of robot monitoring is becoming an increasingly prominent sore spot for workers. Some UPS drivers, for example, have objected to that company's use of such cameras. (UPS drivers, unlike Amazon's, are unionized and actually employed by the company whose uniforms they wear.)
Amazon workers' complaints about robot management are part of broader criticism of the company for unethical labor practices. The company has been sued by the New York attorney general for failing to protect workers against COVID-19 and for retaliating against those who complained, and was fined last month by the Federal Trade Commission for withholding tips from its drivers. Amazon drivers in particular reportedly face grueling working conditions, and critics charge that the company places performance demands on them that push them to drive unsafely, while evading responsibility for the resulting accidents by insisting that they're contractors. The Amazon drivers I have spoken to confirmed that they are urged to drive safely but also pushed to complete an unrealistic number of deliveries within a shift.
Driveri thus looks like a company's attempt to use technology to solve a problem that its own managerial practices and profit drive may be creating. These technologies are like factory farms that pump our food with antibiotics: an attempt to use technology to unnaturally suppress the side effects of unhealthy and inhumane practices. This is something we've already seen in the trucking industry: instead of giving drivers protections from unhealthy productivity demands, they get micro-surveillance. And workers end up squeezed on both ends.
That squeeze may only increase as the AI is refined. For example, if sunglasses defeat Driveri's drowsiness and inattentiveness detectors, drivers may be told they aren't allowed to wear them. That could be just the beginning of many ways they are forced to conform their behavior, movements, and dress to the needs of the AI that is watching them. We've already seen that happen in other areas; we're no longer allowed to smile in our passport photos, for example, because it reduces the effectiveness of face recognition technology. Ultimately, the technology threatens to usher in a modern-day version of Taylorism, a 19th-century industrial movement also known as "scientific management" that involved monitoring and controlling the minutiae of industrial workers' bodily movements to maximize their productivity.
The issues raised by AI video monitoring extend far beyond Amazon and its particular practices. To begin with, Amazon is not the only company experimenting with this kind of robot surveillance; a number of trucking companies, for example, are imposing it on their drivers. More broadly, as AI cameras get smarter, many institutions will have their own incentives to use them to visually monitor people. We could soon see not just employers but a wide range of other institutions, from businesses to government agencies, deploying this technology: anyone who wants to enforce a rule, protect an asset, or gain a new efficiency.
Technological monitoring of workers has long taken place through data-collection devices, down to and including the time clock, but these new tools don't require expensive or specialized equipment, or efforts to get workers to use them properly. All that's needed is a camera. And improving AI is likely to open up ever-wider possibilities for automated visual monitoring, as we discussed in our 2019 report, The Dawn of Robot Surveillance.
Employees like drivers and factory workers whose jobs are most at risk of being supplanted by AI (but for now are just being managed by it) will be the first to be placed under oppressive AI surveillance microscopes, and we should support their rights to maximize their self-determination through unionization and other measures. But AI monitoring will soon move beyond those groups, starting with less powerful people across our society, who, like Amazon's nonmanagerial workforce, are disproportionately people of color and are likely to continue to bear the brunt of that surveillance. And ultimately, in one form or another, such monitoring is likely to affect everyone, and in the process further tilt power toward those who already have it.