Only You Can Stop Facebook Hoaxes

Facebook is cracking down on the fake news stories that plague News Feeds everywhere by asking users to separate fact from fiction.

In 1770, a Hungarian inventor named Wolfgang von Kempelen presented a large box to the Empress of Austria. On top of the box was a chess board, and seated behind it was a mannequin dressed in Turkish robes and a turban. It was called “The Turk,” and von Kempelen said it was a miraculous automaton that could beat anyone at chess. In other words: It was a chess-playing robot. Before it burned 84 years later, it handily checkmated Napoleon and Benjamin Franklin.
It checkmated them because there was a chess master hiding inside it. The Turk was a hoax, a sham. It claimed to mechanically simulate a human process, but it really just concealed a human being doing the work.
Now, Amazon has a service called Mechanical Turk that lets users cheaply rent crowdsourced human labor. But this week The Turk has me thinking of another tech firm: Facebook.
On Tuesday, Facebook announced a new feature for its News Feed. Stories that Facebook believes to be untrue will be marked with a warning: “Many people on Facebook have reported that this story contains false information.”
True to its text, all the information that will trigger this warning comes from users. If you see a story you think is false on your friend’s wall, you can flag it as a bad post and then tell Facebook it’s a hoax. (Facebook will then ask if you want to privately message the poster to tell them about the untrue information.)
If enough people do this, a small warning will appear above the post. Eventually, an article or a publisher that many people have reported as hoax-y will show up in people’s News Feeds less frequently. Facebook also looks for other signifiers of hoax-yness—such as many users posting a certain story and then deleting it.
Notably, this entire process happens algorithmically.
“There are no human reviewers or editors involved,” a Facebook spokeswoman told me in an email. “We are not reviewing content and making a determination on its accuracy, and we are not taking down content reported as false.”
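To make that concrete, here is a minimal sketch, in Python, of how a purely signal-driven pipeline like the one Facebook describes might work. Every name, weight, and threshold below is a hypothetical stand-in, not Facebook’s actual code; the point is only that the warning and the feed demotion can be driven entirely by crowd signals (flag counts and post-then-delete behavior), with no editor in the loop.

```python
# A minimal sketch, not Facebook's actual system: all names, weights, and
# thresholds are hypothetical. It illustrates a purely signal-driven score
# built from user flags and post-then-delete behavior, with no human review.

from dataclasses import dataclass


@dataclass
class StorySignals:
    views: int            # how many people saw the story in their feeds
    hoax_flags: int       # users who reported it as a false news story
    shares: int           # users who posted the story
    deleted_shares: int   # users who posted it and later deleted their post


def hoax_score(s: StorySignals) -> float:
    """Combine crowd signals into a single score between 0 and 1."""
    if s.views == 0 or s.shares == 0:
        return 0.0
    flag_rate = s.hoax_flags / s.views          # how often viewers flag it
    delete_rate = s.deleted_shares / s.shares   # how often sharers delete it
    # Hypothetical weighting: flags matter most, deletions corroborate them.
    return min(1.0, 0.7 * flag_rate * 100 + 0.3 * delete_rate)


def show_warning(s: StorySignals, threshold: float = 0.5) -> bool:
    """If enough people flag a story, attach the 'false information' warning."""
    return hoax_score(s) >= threshold


def feed_rank_multiplier(s: StorySignals) -> float:
    """Demote, but never remove: flagged stories just appear less often."""
    return 1.0 - 0.8 * hoax_score(s)


if __name__ == "__main__":
    story = StorySignals(views=50_000, hoax_flags=400,
                         shares=2_000, deleted_shares=600)
    print(f"score={hoax_score(story):.2f}",
          f"warning={show_warning(story)}",
          f"rank multiplier={feed_rank_multiplier(story):.2f}")
```

Note that in this sketch nothing is ever taken down: a high score only attaches the warning and shrinks the story’s reach, which matches what the company says it is doing.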
The feature is one more step in Facebook’s long-running attempt to figure out just what relationship it should have with news. Sometimes it has hired journalists outright. In 2013, it began guiding users toward sources it judged “high-quality,” and the secondary effects of that decision are still being felt across all of media. And last summer, it began experimenting with a “Satire” tag that warned users a certain story in their News Feed was not strictly true.
That tag seems to have grown into this new feature. The “hoax” flag allows Facebook to avoid some of the knottier implications of the “Satire” tag, such as the suggestion that Facebook users can’t recognize a joke. “We’ve found from testing that people tend not to report satirical content intended to be humorous, or content that is clearly labeled as satire,” says the company’s announcement. In other words: The Onion will be just fine, while “satirical” sham-mongers like The Daily Currant and The National Report will not.
The warning is good for Facebook in a deeper way, I think. Facebook is the Media Company That Dare Not Call Itself a Media Company. By outsourcing hoax-hunting to users, it gets to effectively make editorial choices without having editorial values. It allows Facebook, in other words, to simulate the process of news judgment. But just as in Kempelen’s miraculous machine, it’s people who are in the machinery, making the decisions.
But I’m interested to see how the tool scales. The hoax button—the Hoaxamatic? the Hoaxatron 5,000?—is a type of flag, a content-moderation tool recently explored by MIT professor Kate Crawford and Cornell professor Tarleton Gillespie. Flags are often used in offensive or abusive content moderation, and they’re a way for decisions to be made with indirect user input.
They’re also often weaponized. Crawford and Gillespie refer to a 2012 episode in which conservative groups tagged gay-rights pages and content on Facebook as abusive. Those groups, however, claimed that they were only reacting to pro-gay-rights users who had tagged the conservatives’ own content as abusive or offensive.
Will that happen with this new, 100 percent algorithmic tool? It’s hard to say. After Gamergate, it’s easy to imagine concerted campaigns forming around marking announcements from victims as hoaxes. Perhaps, because Facebook’s algorithm looks to patterns of deletion and not just flagging as a useful metric, such campaigns won’t succeed in getting those stories marked. And Facebook, as it so often is, is grappling with a difficult problem that does not scale well. Content moderation of the entire Internet, or even much of it, would be impossible without some kind of crowdsourcing.