I’ve been following with great interest the controversy over Facebook’s alleged censoring of conservative-oriented news items from its influential “trending news” section. It started with an allegation from a former employee that the social network overrode its own data and used discretion to place some stories in that section, while removing others, and that in so doing it had a bias against conservative material.
Obviously, as many have pointed out, this is a reminder of the enormous power that Facebook wields. One can make two complaints about Facebook. First, that it shouldn’t be biased. Personally, I’m ambivalent on that question. In the sense that the company is technically a publisher, it has the same right as any newspaper to pick and choose what it will publish, and to be liberal, conservative, or anything else. As a legal matter, Facebook almost certainly has such a right. Of course, insofar as it acts not as a news source but as a forum in which people communicate with each other, bias is more worrisome, if that means distorting the way that people connect and communicate. More broadly, that role as forum-host or platform is behind what I think is a larger public expectation that the company will be generally neutral and even-handed. But when push comes to shove, here it was operating a news feed not very different from what a newspaper does.
The second, more interesting complaint one could make about Facebook is that it has implicitly misled its readers into believing that they are seeing an “objective” measurement of mass interest in various stories when they are not: letting people believe that Facebook is not expressing its own judgments about what stories to present, but holding up a mirror to its user base to show everybody what they are collectively interested in. When I see a list on the side of a newspaper site that says “most read” or “most shared,” I assume that’s a relatively dumb algorithm that is simply counting up clicks. I don’t know that Facebook ever explicitly claimed that the news section in question, labeled simply “Trending,” was the equivalent of that (it’s a fairly loose word), but it’s a natural assumption to make in this age of data and algorithms. If people believe they are seeing a picture of what the world looks like via dumb data, but are actually seeing a curated source, that’s a problem.
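To make the contrast concrete, here is a minimal sketch, in Python with invented story names, of the kind of “dumb” most-read list I have in mind. It says nothing about Facebook’s actual system; it just shows what pure click counting, with no judgment anywhere in the loop, looks like.

```python
from collections import Counter

def most_read(click_log, top_n=10):
    # A "dumb" popularity list: count clicks per story and sort.
    # No editor, no model, no judgment about what deserves to rank.
    counts = Counter(click_log)
    return [story for story, _ in counts.most_common(top_n)]

# Invented data: the list reflects only what people happened to click.
clicks = ["story-a", "story-b", "story-a", "story-c", "story-a", "story-b"]
print(most_read(clicks, top_n=3))  # ['story-a', 'story-b', 'story-c']
```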
Last year in a blog post on “The Virtues of Dumbness” I wrote about how there are many situations where we actually want and expect decision-making processes to be dumb rather than smart. And as artificial intelligence creeps into more and more of the things around us, I argued that we will increasingly experience a condition I call “intelligence anxiety”: the worry that something that is supposed to be dumb and neutral actually has intelligence embedded within it, and is therefore capable of bias and manipulation. (If somebody has a better term, please let me know!) As I wrote, “the really bad things come when we think something is dumb but it’s not.” That appears to be exactly what has happened with Facebook’s “Trending” section. This would not be the political controversy it is if people didn’t think the story selection in that section was automatic and expect a certain kind of dumbness.
Of course, in that essay I was mainly talking about how computer intelligence is seeping into things. The intelligence here came from humans, and the dumbness (click counting) from computers, but it can go either way. Humans organized into bureaucracies can be dumb, and computers, as we see more every day, can be smart and subtle. There is no such thing as an unbiased algorithm, but there are very simple algorithms (both computer and bureaucratic), and as long as they are dumb we feel we can predict and count upon the ways that they will affect the world.
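As a small illustration of that last point, here is another hedged sketch, again in Python with made-up parameters rather than anything drawn from Facebook: even a simple counter embeds choices, such as a time window, a minimum-click cutoff, and a list length. The choices are visible, and the procedure stays predictable.

```python
from collections import Counter
import time

def trending(click_log, window_hours=24, min_clicks=5, top_n=10):
    # click_log is a list of (story, clicked_at) pairs, clicked_at in epoch seconds.
    # Still just counting, but every parameter is a choice someone made:
    # how far back to look, how many clicks count as "trending",
    # how long the list is. Simple enough to predict, yet not free of judgment.
    cutoff = time.time() - window_hours * 3600
    recent = [story for story, clicked_at in click_log if clicked_at >= cutoff]
    counts = Counter(recent)
    return [story for story, n in counts.most_common(top_n) if n >= min_clicks]
```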
The takeaway here is that as intelligence anxiety spreads, the pressure on companies like Facebook, as well as the government and everybody else, to be transparent will become greater than ever. Facebook was smart to react to this flap by trying to explain more fully how their page works. Even assuming organizations make no false claims about the dumbness of certain algorithms, increasingly they will need to proactively disclose what exactly is and is not going on underneath the hood.