The Washington Post has an excellent, in-depth story today on the growing use of driver's license photo databases, combined with face recognition analytics, by police.
There are two ways to think about this. First, it is yet another long stride toward a surveillance society:
- These systems represent yet another technology of control "honed on the battlefields of Afghanistan and Iraq," as the Post puts it, now being brought back and applied to the American people.
- DMVs are being used to compile what is essentially a national identity system. The CEO of the company that makes most states' face recognition systems, MorphoTrust USA, tries to claim in the Washington Post's accompanying coverage that having one's photo taken at a DMV is "voluntary," but clearly that's not true: at this point everyone needs a driver's license or non-driver ID in order to participate fully in our society. That means virtually everybody's photo is going to reside in their state's DMV photo database, and the FBI would like to connect all those databases into a single distributed image database.
- Such a database could be put to truly Orwellian uses if it were, say, combined with pervasive video surveillance, a growing network of status and identity checks throughout society, and/or super-high-resolution photography of crowds at political events.
- Such databases can also be put to more prosaic, but still offensive uses. Take the Pinellas County deputy described by the Post (and shown in their video) stopping and photographing "suspicious" people on what appear to be thin pretexts and running their faces against the system. That is just the kind of petty instrument of control that we fear such a database would quickly become.
- There are still sharp limitations to face recognition technology when it comes to using photographs taken "in the wild" (by surveillance cameras, for example) as opposed to those taken in controlled environments where a well-lit subject faces a camera without sunglasses or a smile. But the technology's inaccuracy doesn't necessarily protect individuals; in fact, if the police put more faith in it than it deserves, that inaccuracy may pose a threat to innocent people mistaken for those wanted by the police.
On the other hand, are we prepared to say that the police, once they identified the images of the Boston Marathon bombing suspects, could not run them against DMV databases to see if they could identify the suspects? (They did, and it was no help, the Post reports, but that's more a reflection of the technology's limits than an answer to the policy question.)
We have always focused our criticisms of face recognition technology on proposed mass surveillance uses. We have never criticized the technology's use, for example, in helping the police match a suspect photo against a book of mug shots of previous convicts. Typically in such cases, the software's operator will set a desired tolerance level, and the program will collect all the faces that come within that range of similarity, and then a human will inspect them to look for matches. I don't see any problems with this use of the technology.
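To make that workflow concrete, here is a minimal sketch of threshold-based candidate retrieval. It assumes faces have already been converted into numeric embedding vectors and are compared by cosine similarity; the function names, parameters, and data below are hypothetical illustrations, not any vendor's actual system. The operator's tolerance setting is the `threshold` argument, and every gallery face scoring at or above it is queued for human review.

```python
import numpy as np

def candidate_matches(probe, gallery, threshold=0.6):
    """Return gallery entries whose embeddings fall within the
    operator-chosen similarity tolerance of the probe face.

    probe: 1-D embedding vector for the unknown face
    gallery: list of (label, embedding) pairs, e.g. a mug shot book
    threshold: cosine-similarity cutoff; candidates at or above it
        are passed along for a human reviewer to inspect
    """
    probe = probe / np.linalg.norm(probe)
    hits = []
    for label, emb in gallery:
        emb = emb / np.linalg.norm(emb)
        similarity = float(np.dot(probe, emb))
        if similarity >= threshold:
            hits.append((label, similarity))
    # Present the best-scoring candidates first for human inspection.
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Illustrative use with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
gallery = [(f"mugshot_{i}", rng.normal(size=128)) for i in range(1000)]
# The probe is a noisy copy of one gallery face, standing in for a new
# photo of someone who is already in the book.
probe = gallery[42][1] + 0.3 * rng.normal(size=128)
print(candidate_matches(probe, gallery, threshold=0.6))
```

Note that lowering the threshold widens the net: more look-alikes get swept in for human review, which is exactly where the risk of misidentifying innocent people grows when the gallery expands from a book of mug shots to an entire state's drivers.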
When you expand that technique beyond mug shot books to DMV image databases covering the entire population, however, you move into much more uneasy territory. Not only do such uses violate the privacy principle that information collected for one purpose should not be repurposed without a subject's permission, but they also subject each citizen to the possibility that he or she will fall under the microscope of the police due to some misidentification.
Perhaps those objections can be overcome in certain serious situations. But this much is clear: neither the police nor others should have access to DMV image databases without good reason and systematic checks and balances to enforce that policy. If there is a strong criminal predicate, that is one thing, but we mustn't allow these databases to become fuel for petty status and identity checks, and certainly not tools of mass suspicionless surveillance and tracking.