One of the central elements in last week's Trapwire story involves the application of “behavioral recognition,” also known as “video analytics,” to camera feeds. What are we to make of this technology?
In essence, video analytics is a form of artificial intelligence that tries to automatically derive meaning from a video feed. Face recognition, license plate recognition, and red light cameras are each examples of the automated extraction of meaning from a video feed, but what I'm focused on here are technologies that aim to offer more general analysis of behaviors that are taking place in a camera's field of view. Examples include the tracking of people throughout an area, zone or perimeter protection, determination of (and detection of deviations from) “normal” patterns of movement in an area, and the detection of abandoned objects. (This article at EE Times offers an extensive introduction to the technology.)
A video camera on its own is dumb, like your retina being hit by photons. Video analytics is an attempt to create a brain behind the eye to interpret those signals. Of course we know that this kind of thing is very, very difficult; when it comes to the visual realm, a computer that can analyze a stack of statistics in a flash can't tell a toaster from a toadstool. Still, we also know that computers' abilities in this area are rapidly improving.
Security agencies are still in the relatively early stages of experimenting with this technology. To a great extent, the civil liberties issues raised by video analytics are only an intensification of the issues raised by video surveillance itself, which we at the ACLU have long sought to limit (see our discussion of The Four Problems With Video Surveillance). After all, once you come into view of a camera, you may be under observation by a human being, a roomful of human beings, or perhaps even an entire television audience (should the video later be deemed significant for some reason). By itself that creates a significant potential for abuse, and for chilling effects.
What does automated video monitoring add to those effects? A few points:
- Like any tool, its acceptability depends on how it's used. Putting a camera at a nuclear power plant, set to alarm if someone walks into a restricted area? No problem. Pointing it at the platform of a busy subway station in order to somehow tease out “suspicious” activity? That's a lot more problematic.
- Quantity changes quality. As in many other areas (such as location tracking), the automation of surveillance naturally leads to a whole lot more surveillance, and from there to a lot of unjustified surveillance. One of the reasons that increasingly pervasive video surveillance cameras have not sparked much protest yet is that most cameras, most of the time, are not monitored. If automated monitoring becomes more effective and/or more common, that could change.
- Accuracy. Meaningful interpretation of complex and subtle human social spaces without high error rates is a very difficult problem. In some applications, very high false-alarm rates are likely. What we see with anti-terrorism “behavioral recognition” programs that involve human beings (as in the airline context) is that they seem to have only one result: a lot of innocent people get caught up and harassed by the authorities just because they stand out in some way. How likely are computers to do better than humans? The effects of false alarms depend on how they're handled: will a flag lead to human scrutiny of a video clip, or to a person being hassled by a cop?
- Intensified chilling effects. If video analytics operates by establishing normal patterns of movement in a public space and drawing attention to anything “out of the norm,” it could pressure Americans to conform and avoid standing out even more than existing video surveillance already does.
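To make that last point concrete, here is a deliberately simplified sketch (not any vendor's actual algorithm; the feature and threshold are my own illustrative choices) of how an anomaly-detection system might flag “out of the norm” behavior. It learns a statistical baseline of one movement feature — how long people linger in view — and flags anyone who deviates too far from it:

```python
import statistics

def flag_anomalies(baseline, observations, z_threshold=3.0):
    """Return the observations that deviate from the 'normal' baseline.

    baseline: movement features (here, dwell times in seconds) recorded
    during a training period, defining what counts as 'normal.'
    observations: new measurements to score against that baseline.
    An observation is flagged when its z-score exceeds the threshold.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > z_threshold]

# A "normal" crowd: dwell times clustered around 30 seconds.
normal_dwell_times = [28, 31, 29, 30, 32, 27, 30, 31, 29, 30]

# One person lingers for ten minutes; only that observation is flagged.
print(flag_anomalies(normal_dwell_times, [30, 33, 600]))  # -> [600]
```

Note that the system has no idea *why* the flagged person lingered — waiting for a late friend looks the same as anything “suspicious.” That gap between statistical deviation and actual wrongdoing is exactly where the chilling effect lives.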
Ultimately we have to ask, where is this all going to lead? We are seeing more and more surveillance cameras installed everywhere, and increasingly they are being networked together. As artificial intelligence improves, video analytics may become capable of tracking increasingly complicated behavior. Ultimately, we need to confront the central question facing us: how are we going to handle the increasing capability of machines to monitor us in ways large and small, wide and deep? (I discussed some aspects of that question in my post on privacy invasions by humans vs. those by computers).