A close-up of a video surveillance unit set up in front of the U.S. Capitol building.
Community members, policymakers, and political leaders can make better decisions about new technology by asking these questions.
Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
July 15, 2022

If police in your community say they want to install a new surveillance technology (face recognition, cameras, or license plate scanners, for example), it's likely to be touted as the way to prevent all manner of evil, from terrorism to street crime to fraud to package theft. If we just record everything, surveillance boosters would have us believe, we can stop or solve crimes and life will be better. The authorities will also probably have specific stories they tell you, hypothetical or real, in which the technology saved the day.

How should we process those claims? If the technology can do some real good, should we accept it?

We humans naturally think in stories, and a compelling anecdote, narrative, or mental image, particularly one that evokes fear, frequently defeats all rational argumentation. But that's often a terrible way to make decisions that shape the fundamental contours of power in our society. Ideally, public debates around surveillance technologies would revolve not around particular scenarios, but around a more rational, systematic, and broadly humanistic vision of technology and its role in our society.

A sheriff's deputy prepares to fly a drone for an aerial demonstration of its capabilities.

Credit: AP Photo/Noah Berger, File

Surveillance opponents use stories too, but law enforcement and other operators of surveillance tech typically have a big advantage: they can put their success stories on television while burying their failures. In 2014, for example, the police in Chicago publicized the sentencing of a robber who may have been the first criminal caught by face recognition. But how many false leads did the police chase in that and other cases before they caught that first, highly publicized suspect? How many people were investigated, interrogated, intimidated, frightened, or had their privacy invaded because of this technology, to produce the success story that the police touted? We may never know.


So how do communities, policymakers, and political leaders avoid being snookered, whether by corporate or police public relations departments or by our own human tendency to be guided by stories and anecdotes? A good way to make more sophisticated decisions is to ask ourselves these six questions.


1. Does the technology work?

In many ways this is the threshold question, because if a technology doesn't work, then we can stop there. There's no reason to waste time debating privacy, or safety, or other values. Of course, most technologies work at least some of the time, in which case the question is: How well does it work? Does it fail 5 percent of the time or 95 percent? And how do we know? Can we trust the information we're given about that rate?

Take face recognition, for example. Vendors started pushing the technology hard right after 9/11, but at that time it was highly ineffective, and deployments, though dangerous, also verged on the silly. The dawn of machine learning made the technology much more effective, though it still has error rates that are central to conversations about the technology. New technologies in particular often perform badly, but local officials often don't have the expertise to cut through hype and sales jobs and to recognize them when they see them.


2. How effective is the technology?

Even if the technology does what it claims, does it solve the problem it's aimed at solving? Even a technology that works perfectly may not stop bad things very often, depending on the details and context of its deployment. A metal detector, for example, might detect metal 100 percent of the time, but fail to detect plastic explosives or ceramic guns. Even a face recognition algorithm that is nearly 100 percent effective can be defeated by things as simple as a baseball cap, mask, or sunglasses. There are many technological equivalents of the Maginot Line, the heavily fortified defensive frontier built by the French before World War II, which was rendered useless when Hitler's army simply went around it.


3. How big is the danger the technology will allegedly reduce?

How serious are the bad things the technology claims to prevent, and how frequent or likely are those things? If a technology only saves the day every 20 years, but "saving the day" means preventing a global pandemic or nuclear attack, that could justify steep costs. On the other hand, if success means preventing somebody from jaywalking, that would be a different balance even if it happens many times a day.


4. What are the negative side effects of the technology?

Even if a technology is effective and important, what are its downsides? We might be able to prevent the smuggling of weapons from other parts of the world if we close our borders, but nobody is willing to accept the enormous consequences that measure would have. We might cut down on domestic violence and other crimes if we allowed the government to install cameras in everyone's bedrooms, but we're not willing to accept the side effects of such a step. Side effects can include the loss of privacy, the possibility of abuse, chilling effects on creativity and freedom of expression, and disparate racial impacts that worsen existing social injustices (all of which could be produced by our example of face recognition), as well as more tangible things like pollution, noise, and economic harm.

"Security" is the most common justification for new surveillance, but that is a term that should be viewed holistically. It's true that theft or physical attack can harm people's happiness and make them feel unsafe, but so can many other things, such as oppressive surveillance and violent police officers. For example, if a "security" drone flies over my yard, do I have to worry that it will record me and my friends smoking weed, get my house raided by a SWAT team, and leave me with lasting feelings of violation and insecurity? That kind of degradation in people's security, properly conceived, is a side effect of surveillance technology that we should be especially alert to.


5. What are the opportunity costs of spending resources on the technology?

Every dollar spent on high-tech surveillance devices means a dollar not spent on other community improvements that might do much more to improve residents' lives. In a rational world, money would be spent first on measures that will bring the greatest improvements to the greatest number of people's lives, and something like expensive cameras to protect against rare or minor threats would not be allowed to vault to the top of the list just because they're sold via a vivid story. Face recognition, for example, in addition to producing bad side effects such as chilling effects, may soak up public funds that could be used to help a community address social problems, become more prosperous, and enjoy improved physical infrastructure.


6. Does the community want it?

A technology can't be evaluated without considering the answers to the above questions, but there's no mathematical formula for measuring those variables or computing how they should balance against each other. That will inevitably be a judgment call. But since we live in a democracy, that judgment should be made openly and democratically by each community, not unilaterally or in secret by police chiefs or other public servants. That's why we have been educating communities around the nation on the advantages of enacting "Community Control Over Police Surveillance," or CCOPS, bills, which require law enforcement to get permission from their city council (or other elected oversight body) before deploying new surveillance technologies. Seattle learned the wisdom of this the hard way in 2013 when it had to abandon drones it had quietly purchased because the community objected vehemently to the technology. Nowadays I see many of the smarter police chiefs consult with their communities before deploying a new surveillance technology, whether or not their city has enacted a CCOPS ordinance. A number of communities have banned their police from using face recognition, and there are surely others that would react badly if it were introduced.

The next time you hear someone pushing a new surveillance technology by telling a story about how it saved the day by stopping something bad, remember that it's important to dig deeper and seek a fuller picture of the technology and its place in your community.
