I've always found radar speed signs to be interesting indicators of our relationship with technology, and I think how we relate to these signs can tell us something about privacy and technology.
I'm talking about the signs that tell you "this is the speed limit / this is your current speed." These devices, which seem increasingly common, do not identify you or your vehicle; they keep no record of your speed, and they don't report it to anyone. They are mere mirrors, reflecting our own behavior back to us. And yet they have consistently been found to be effective in getting drivers to reduce their speeds. I find this intriguing. I reckon that they work by sending drivers a message to the effect of, "you are speeding, and I know it."
Of course the "I" in this case is a computer. Does that matter? Here we come to the increasingly significant question of whether computers, or only humans, can invade our privacy. I argued in a 2012 piece that computers can very much invade our privacy, and that the principal reason is that, in the end, what we truly care about are consequences. Computers, like humans, can bring consequences down on our heads when they are permitted to scrutinize our lives: for example, when an algorithm flags a person as "suspicious."
At the same time, because it's consequences that people fear rather than scrutiny by some abstract "sentience," people learn to fear computers whose eavesdropping can reverberate later in their lives, and not to fear computers (or people) whose eavesdropping can't. And so, as I pointed out, humans often act brazenly in front of anonymous urban crowds, intimates, and servants or others over whom they hold power, precisely because in those circumstances they are relieved from having to worry that their words or behavior will come back to hurt them later.
As a result of that line of thinking, I've always thought that the effect of radar speed signs will inevitably diminish over time, as their lack of consequences gradually becomes apparent to us and sinks in at an intuitive level.
In some ways the signs could be compared to ineffective disciplinarians. Students often live in fear of a strict teacher, and are exceedingly cautious about what they do and say in front of them. But a teacher who is a lax disciplinarian, whose words are not backed up by consistent punitive action, will soon be flouted and ignored, no matter how tough they seem at first or how much they storm and yell. The differences between how we react to humans and computers are not so great.
I don't actually know as an empirical matter whether radar speed signs become less effective over time. But even if they do not, my overall point could still be correct: it's possible that they are effective not because people worry about the signs themselves, but simply because they interrupt the solipsism of driving and remind us that our speed is easily apparent to others, and thus of the risk that a police officer or disapproving neighbor will see us speeding. It's also possible that people worry that these signs are, in fact, identifying and reporting their behavior; anxiety about objects storing and reporting data about us is generally not paranoid these days. Looking online, I see that some sign manufacturers do offer the optional capability of maintaining statistics on vehicles' speeds. As far as I can tell, those speed measurements are stored in the aggregate and are not personally identifiable, though I suppose in theory the speed readings they collect, including time stamps, could be correlated with license plate recognition or video data and tied to a particular car.
There's also a reasonable argument to be made that the effectiveness of the signs will not wear off. In an interesting 2010 paper, Ryan Calo looks at how anthropomorphic or "human-like" technologies have been found to affect us in many of the same ways that we're affected by actual people. The potential chilling effects of monitoring machines, he writes, are well-established in "an extensive literature in communications and psychology evincing our hard-wired reaction" to human-like technologies. In one oft-cited study, merely hanging a poster with human eyes was found to significantly change people's behavior. And of relevance for my radar speed sign example, the studies find that, as Calo puts it:
Technology need not itself be anthropomorphic, in the sense of appearing human, to trigger social inhibitions; technologies commonly understood to stand in for people as a remote proxy tend to have the same effect.
He cites the example of security cameras, which do not resemble humans but were found to spark the same effect.
Overall, this literature reinforces my larger argument that when it comes to privacy, the human/machine distinction does not matter.
But it also suggests that I might be wrong in predicting that the chilling effects of technologies like radar speed signs will wear off over time. Much of the science that Calo describes portrays the chilling effects of anthropomorphic technologies as involuntary, subconscious, and hard-wired, with the implication that they will be constant. Calo worries that because of the hard-wired nature of these reactions, surrounding ourselves with "fake human" technologies could be a bad thing, pitching us into a self-conscious state of "constant psychological arousal" that comes from "the sensation of being observed and evaluated." In short, making us feel like we're always being watched. He writes,
One might argue that humans will adjust to social machines and software the way the rich adjust to servants, the poor adjust to living on top of many relatives, or the chronically ill get accustomed to pharmacists, nurses, orderlies, and doctors. We may, after a time, feel solitude among machines as we acclimate to their presence.
This claim is not as reassuring as it might seem. What evidence there is suggests that the effects do not wear off.
Calo cites two studies that suggest this, but the evidence is thin and there does not seem to have been extensive study of this question.
In any case, there's a key distinction between machines whose monitoring has the potential to reverberate in our lives, such as a surveillance camera recording footage we don't control, and "socially inert" monitoring, such as radar speed signs, that is not going to affect us later. The difference is between our animal-like response to a mere "mirror," and our ultimately more rational (if still often unconscious and intuitive) response to machines that pose genuine social threats to us. I think Calo is entirely right to worry about "social machines" where those machines are in fact truly social, but where they are not, we will adjust.