
Legal Responsibility As Computers Get More Unpredictable

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
June 28, 2012

There has been some discussion lately of whether the output of computer algorithms should be considered protected free speech, a question that Tim Wu raised and that my colleague Gabe Rottman addressed in a blog post in response.

As Gabe mentioned, the ACLU has no formal policy on this question. But it seems to me that the output of a computer algorithm should be treated under the law as the behavior of the entity that controls the computer. If that behavior is constitutionally protected, then so be it; if it is not, then neither is the computer's output. In other words, the fact that a computer is used as the instrument by which a corporation or individual acts should not enter into the calculation.

Several commentators have made essentially this point, including writers at Cato and Volokh, whose paper started the whole conversation. Levy of Public Citizen agrees, but also reminds us that some speech, including price-fixing agreements, is not constitutionally protected in the first place, and argues that Google's search results may, in fact, be subject to antitrust scrutiny (but not because they are computerized).

Along those lines, I am not convinced by Wu's suggestion that unless we divorce computer output from free speech rights, we will shut down important avenues of government oversight. He gives no examples other than antitrust regulation of Google, but as Levy points out, the "speech" quality of Google's computerized output does not automatically insulate it from oversight.

But one interesting and potentially countervailing aspect of the story here is that as computer programs get more complex, their output becomes increasingly unpredictable. Wu's critics understate that reality in stressing that computers are merely expressions of the intent of their human creators. Volokh writes, "The computer algorithms that produce search engine output are written by humans. Humans are the ones who decide how the algorithm should predict the likely usefulness of a Web page to the user."

But that link between human "deciding" and computer outcomes is sometimes tenuous, and it will only become more so. Modern machine-learning techniques work by abandoning attempts to consciously direct a computer's logic, turning instead to nonlinear, statistical, black-box approaches that can work successfully even when their programmers do not understand why. And over time, as computers are increasingly used to program computers, a machine's ultimate output may rest on a tower of computer code quite far removed from any conscious human guidance.
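
To make that concrete, here is a minimal sketch of that pattern (the data, the features, and the choice of model are hypothetical, and it assumes the scikit-learn library is available): the programmer supplies examples and an objective, and the decision logic itself is fit from the data rather than written out by hand.

# Minimal sketch with made-up data: the programmer specifies examples and an
# objective, not the decision rules. The fitted weights, not any hand-written
# logic, determine the output for a new case.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                       # 500 hypothetical "applicants," 10 features each
y = (X[:, 0] * X[:, 3] - X[:, 7] > 0).astype(int)    # an opaque target rule the model must infer

model = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X, y)                                      # the "logic" is learned, not written

applicant = rng.normal(size=(1, 10))                 # a new, hypothetical case
print(model.predict(applicant))                      # a decision nobody explicitly coded

Nothing in that code spells out the rule the model ends up using; inspecting the fitted weights afterward would tell you little about why a particular case came out the way it did.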

In fact, once a computer algorithm becomes sufficiently complex, its behavior may not be predictable even in principle. Here we rapidly enter the realm of complex mathematics and computer science, but as a taste of this world, suffice it to say that Alan Turing formally proved in 1936 that the behavior of some computer programs cannot be predicted. And Stephen Wolfram has applied the label "computational irreducibility" to the idea that it can be impossible to predict what a computer program will do without actually running it.
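
As a toy illustration of that idea (my own example, not one drawn from Turing's or Wolfram's work), consider the short program below: the rule fits in a single line, yet there is no known shortcut for predicting how many steps it will take from a given starting value other than actually running it.

# Illustrative toy: a one-line iterative rule whose long-run behavior has no
# known shortcut -- to learn how many steps a start value takes, you run it.
def steps_until_one(n: int) -> int:
    count = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1      # the Collatz rule
        count += 1
    return count

for start in (27, 97, 871):
    print(start, steps_until_one(start))             # e.g., 27 takes 111 steps

Mathematicians still cannot prove in general what this trivial rule will do for every starting value; for programs of real-world complexity, the gap between reading the code and knowing its behavior is far wider.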

Rigid computer "minds" have sometimes been touted as having certain advantages over our messy human minds: for example, their blindness to race, gender, and other characteristics that all too often bias and distract human monitors. Defenders of computerized airport body scanners, for example, have said to me, "At least they don't discriminate against Muslims!" That is true, and despite the privacy problems we have with body scanners, it is one of their advantages. And those same advantages might hold in other contexts, such as the use of data mining to predict individuals' behavior.

Ironically, however, as computers get smarter, they will also lose the very predictability that has been one of their advantages. They will come to exhibit quirks, lapses, and perhaps biases just like humans.

Over time, this could change our intuitions about how we should treat computer code. Perhaps new doctrines of law will evolve that we cannot anticipate at this time. But from where we stand now, my intuition is that this breach between conscious human intentionality and the behavior of our computer familiars will only increase the importance of the principle I stated up front: that those who deploy computers for real-world decisions and actions will still be responsible for their outcomes.

After all, the growing "intent-output divide" will have important implications as humans are increasingly judged by computer algorithms. Think of a bank or insurance company making life-altering decisions about consumers, a government deciding who is suspicious enough to get special treatment, and who-knows-what-else as increasingly smart computers are assigned ever more decisionmaking roles. Because of this growing divide, the decisions that computers make may become even more baffling and inscrutable to the subjects of those decisions than they often are today, and that has important implications for fairness and due process. The masters of these decisionmaking computers will need to ensure that people are not treated with bias, placed on watch lists, or otherwise disadvantaged unfairly. They will have to be responsible for the choices their institutions make, however those choices are made.
