There has been a lot of discussion lately about "fake news," which appears to have circulated with fierce velocity on social media throughout this past election season. This has prompted calls for the likes of Facebook and Google to fix the problem.
What are we to think of this from a free speech and civil liberties perspective?
With Facebook, which has been a particular subject of calls for reform, there are actually two issues that should be thought about separately. The first involves Facebook's "Trending News" section, which was the subject of a flap earlier this year when it emerged that it was actually edited by humans, rather than being generated by a dumb algorithm that simply counted up clicks. A former employee alleged that the human curators were biased against conservative material. In the wake of that controversy, Facebook took the humans out of the loop, making "Trending News" more of a simple mirror held up to the Facebook user base, showing them what is popular.
As I said in a blog post at the time, I'm ambivalent about this part of the fake news controversy. On the one hand, it can be valuable and interesting to see what pieces are gaining circulation on Facebook, independent of their merit. On the other hand, Facebook certainly has the right, acting like any publisher, to view the term "trending" loosely and publish a curated list of interesting material from among the pieces that are proving popular at a given time. One advantage of its doing so is that crazy stuff won't get amplified further through the validation of being declared "News" by Facebook. One result of the decision to take human editors out of the loop is that a number of demonstrably false stories have subsequently appeared in the "Trending News" list.
But Facebook serves a separate, far more significant function beyond its role as publisher of Trending News: it is the medium for a peer-to-peer communications network. I can roam anywhere on the Internet, get excited by some piece of material, brilliant or bogus, and post it on Facebook for my Friends to see. If some of them like it, they can in turn post it for their Friends to see.
The question is, do we want Facebook, in its role as administrator of this peer-to-peer communications network, to police the veracity of the material that users send each other? If I can't post something stupid on Facebook, I can telephone my friends to tell them about it, or text them the link, or tell them about it in a bar. Nobody is going to do anything to stop the spread of fake news through those channels. Facebook doesn't want to get into that business, and I don't think we want it to, either. Imagine the morass it would create. There will be easy, clear cases, such as a piece telling someone to drink Drano to lose weight, which is not only obviously false but also dangerous. But there would also be a thicket of hard-to-call cases. Is acupuncture effective? Are low-carb diets "fake"? Is barefoot running good for you? These are questions where an established medical consensus may once have been confidently dismissive, but which now are, at a minimum, clouded with controversy. How is Facebook to evaluate materials making various claims in such areas, inevitably made with highly varying degrees of nuance and care, let alone politically loaded claims about various officeholders? Like all mass censorship, it would inevitably lead the company into inconsistent and often silly decisions and troubling exercises of power. It might sound easy to get rid of "fake news," but each case will be a specific, individual judgment call, and often a difficult one.
The algorithm
It is true that in some ways Facebook already interposes itself between users and their Friends: unlike, say, the telephone system, it does not serve as a neutral medium for ideas and communications. If Facebook got out of the way and let every single posting and comment from every one of your Friends flow through your newsfeed, you would quickly be overwhelmed. So it uses "The Algorithm" to try to assess what you'll be most interested in, and place that in your feed. The company says this algorithm tries to assess whether content is substantive, whether you'll find it personally relevant based on your interests, and how interested you are in the Friend who posted it, based on how often you click on their posts. (Facebook actually assigns you a number for each of your Friends, a "stalking score" that indicates how interested you seem to be in each of them.)
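As a rough illustration of how a ranking like this can work, here is a minimal sketch: a per-friend affinity multiplier scales per-post content signals, so posts from Friends you interact with most float to the top. This is not Facebook's actual system, whose details are proprietary; the signal names, weights, and affinity values below are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str        # the Friend who posted it
    substance: float   # 0..1: how substantive the content appears
    relevance: float   # 0..1: predicted relevance to this user's interests

# Hypothetical per-friend affinity values (the "stalking score"),
# e.g. derived from how often the user clicks on each Friend's posts.
affinity = {"alice": 0.9, "bob": 0.2}

def feed_score(post: Post) -> float:
    # Toy ranking: weight per-post signals by affinity for the author.
    # The formula and weights are invented; the real model is proprietary.
    content = 0.4 * post.substance + 0.6 * post.relevance
    return affinity.get(post.author, 0.1) * content

posts = [Post("alice", 0.3, 0.5), Post("bob", 0.9, 0.9)]
for p in sorted(posts, key=feed_score, reverse=True):  # highest-scoring first
    print(p.author, round(feed_score(p), 3))
```

On this toy data, a mediocre post from a high-affinity Friend outranks a strong post from a low-affinity one, which is exactly the dynamic the "stalking score" description suggests.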
Facebook provides some details on how its algorithm works in its "News Feed FYI" blog. Some of those mechanisms already arguably constitute censorship of a sort. For example, the company heavily demotes items with headlines that it judges to be "clickbaity," based on a Bayesian algorithm (similar to those used to identify spam) trained on a body of such headlines. That means that if you write a story with a headline that fits that pattern, it is unlikely to be seen by many Facebook users because the company will hide it. Since January 2015, Facebook has also heavily demoted stories that it suspects are "hoaxes," based on their being flagged as such by users and frequently deleted by posters. (That would presumably cover something like the Drano example.)
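To make the "Bayesian algorithm" point concrete, here is a minimal naive Bayes headline classifier of the kind long used for spam filtering. Facebook has not published its implementation; the training headlines, add-one smoothing, and equal-priors assumption below are illustrative only.

```python
import math
from collections import Counter

# Tiny invented training set; a real classifier would be trained on
# many thousands of labeled headlines.
clickbait = ["you won't believe what happened next",
             "this one weird trick doctors hate"]
normal = ["senate passes budget bill",
          "school board elects new president"]

def counts(headlines):
    return Counter(w for h in headlines for w in h.lower().split())

cb, ok = counts(clickbait), counts(normal)
vocab = set(cb) | set(ok)

def log_likelihood(headline, c):
    # Sum of word log-probabilities with add-one (Laplace) smoothing,
    # so unseen words don't zero out the whole score.
    total = sum(c.values())
    return sum(math.log((c[w] + 1) / (total + len(vocab)))
               for w in headline.lower().split())

def looks_clickbaity(headline):
    # Equal class priors assumed; compare the two class likelihoods.
    return log_likelihood(headline, cb) > log_likelihood(headline, ok)

print(looks_clickbaity("you won't believe this one weird trick"))  # True
```

The same machinery works for any two classes, which is why the spam-filter comparison in the company's description is apt.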
Most of this interference with the neutral flow of information among Friends is aimed at making Facebook more fun and entertaining for its users. Though I'm uncomfortable with the power the company has, I don't have any specific reason to doubt that its algorithm is currently oriented toward that stated goal, especially since it aligns with the company's commercial incentives as a seller of advertising.
There are of course very real and serious questions about how Facebook's algorithmic pursuit of "fun" for its users contributes to the Filter Bubble, in which we tend to see only material that confirms our existing views. The difference between art and commerce has been defined as the difference between that which expands our horizons by getting us out of our comfort zone (that is, by making us uncomfortable) and that which lets us stay complacently where we already are, with pleasing and soothing confirmations of our existing views. In that sense, Facebook's News Feed is definitely commerce, not art. It does not pay to challenge people and make them uncomfortable.
But for Facebook to assume the burden of trying to solve the larger societal problem of fake news by tweaking these algorithms would likely just make the situation worse. To its current role as commercially motivated curator of things-that-will-please-its-users would be added a new role: guardian of the social good. And that role would rest on who-knows-what judgment of what that good might be at a given time. If the company had been around in the 1950s and 1960s, for example, how would it have handled information about Martin Luther King, Malcolm X, gay rights, and women's rights? A lot of material that is now seen as vital to social progress was then widely seen as beyond the pale. The company already has a frightening amount of power, and this would increase it dangerously. We wouldn't want the government doing this kind of censorship (it would almost certainly be unconstitutional), and many of the reasons it would be a bad idea also apply to Facebook, which is the government of its own vast realm. For one thing, once Facebook builds a giant apparatus for this kind of constant truth evaluation, we can't know in what direction it may be turned. What would Donald Trump's definition of "fake news" be?
The ACLU's ideal is that a forum for free expression as central to our national political conversations as Facebook has become would not feature any kind of censorship or other interference with the neutral flow of information. Facebook already engages in such interference in response to its commercial interest in tamping down the uglier sides of free speech, but giving it the role of national Guardian of Truth would exponentially increase the pitfalls that approach brings. The company does not need to interfere more heavily in Americans' communications. We would like to see Facebook go in the other direction: becoming more transparent with ordinary users about the operation of its algorithms, and giving them an ever-greater degree of control over how those algorithms work.
The real problem
At the end of the day, fake news is not a symptom of a problem with our social-communications sites; it is a symptom of a societal problem. Facebook and other sites are just the medium.
Writing in the New Yorker, Nicholas Lemann looks beyond information regulation by Facebook to another possible solution to the fake news problem: creating and bolstering public media like the BBC and NPR. But whatever the merits of public media may be, the problem today is not that there aren't good news outlets; the problem is that there is a large group of Americans who don't believe what those outlets say, and who have aggressively embraced an alternate, self-contained set of facts and sources of facts. This is not a problem that can be fixed either by Mark Zuckerberg or by turning PBS into another BBC.
There are two general (albeit overlapping) problems here. The first is simply that there are a lot of credulous people out there, who create a marketplace for mercenary purveyors of fake news, which can be about any topic. The timeless problem of gullible people has been exacerbated by the explosion of news sources and people's inability to evaluate their credibility. For much of the 20th century, most people got most of their news from three television networks and a hometown newspaper or two. If a guy was handing out a leaflet on a street corner, people knew to question its value. If he was working for their union or for the Red Cross, they might trust him. If he was a random stranger, they might not. The wonderful and generally healthy explosion of information sources made possible by the Internet has a downside: it has collapsed the distinctions between established newspapers and the online equivalent of people handing out material on street corners. The physical cues that signal whether or not to trust pamphleteers in the park are diminished online, and many people have not yet learned to read the cues that do exist there.
We can hope that someday the entire population will be well-educated enough to discriminate between legitimate and bogus sources online, or at least will adapt and learn to be as discriminating online as it is natural to be offline. But until that day arrives, gullibility will remain a problem.
The second problem is the existence of a specific political movement that rejects the "mainstream media" in favor of a group of ideological news outlets like Breitbart and Infowars: a movement of politically motivated people who eagerly swallow not just opinions but also facts that confirm their views and attitudes, and who aggressively reject anything that challenges those views. Left and right have always picked and chosen from among established facts to some extent, and constructed alternate narratives to explain the same facts. But what is new is a large number of Americans who have rejected the heretofore commonly accepted sources of the facts those narratives are built from. The defense mechanisms against intellectual challenge among those living in this world are robust. I have encountered this in my own social media debates when I try to correct factual errors. When I point posters to a news article in a source like the New York Times or the Washington Post, I am told that those "liberal mainstream media sources" can't be trusted. While these sources certainly make mistakes, and like everyone else are inevitably subject to all kinds of systemic biases in what they choose to publish and how they tell stories, they are guided by long-evolved professional and reputational standards and do not regularly get major facts wrong without being called to task. When I point people to the highly reputable fact-checking site Snopes, I am told that it is "funded by George Soros" and for that reason can apparently be dismissed. (This is itself a false fact; Snopes says it is entirely self-funded through advertising revenue.)
This phenomenon has been dubbed "epistemic closure." While the term was originally a charge levied at intellectuals at Washington think tanks, it aptly describes everyday readers of Breitbart and its ilk who close themselves off from alternate sources of information.
This is not a problem that can be fixed by Facebook; it is a social problem that exists at the current moment in our history. The problems with bogus material on Facebook and elsewhere (and their as-yet-undetermined role in the 2016 election) merely reflect these larger societal ills. Attempting to program those channels to somehow make judgments about and filter out certain material is the wrong approach.
Note: I participated in a panel discussion on this issue at the 92nd Street Y in New York City on Tuesday, which can be seen here.
Update (Dec. 16, 2016):
A follow-up blog post on changes announced by Facebook to its service has been posted here.