Facebook’s ridiculous plan to crowdsource news trustworthiness

Facebook has announced that it is going to start asking users which news sources they trust in an effort to improve the quality of the News Feed and crack down on so-called “Fake News”.

It is a ridiculous idea. Not only is it a crude instrument, but it’s either woefully naïve or deliberately shirking responsibility for the problems at hand.

“Too much sensationalism, misinformation and polarisation”

Facebook has been the subject of significant criticism over the last 18 months. The shock installation of Donald Trump as President is believed to have been caused, at least in part, by the systematic spread of Fake News across the social media platform.

The perception that Facebook failed to prevent the Fake News phenomenon, and is responsible for the election result, is bad for Facebook’s business. Facebook’s revenue streams are built on advertising; that, in turn, relies on people engaging with content on Facebook. If people don’t trust Facebook, they don’t use Facebook, and so Facebook’s CEO, Mark Zuckerberg, needs to maintain trust in the platform.

In a post on his Facebook page yesterday, Zuckerberg said:

“There’s too much sensationalism, misinformation and polarization in the world today. Social media enables people to spread information faster than ever before, and if we don’t specifically tackle these problems, then we end up amplifying them.”

And so he’s asked his product teams “to make sure we prioritise news that is trustworthy, informative, and local […] starting next week with trusted sources”.

Solving Fake News on Facebook

The solution to Fake News on Facebook is, in Zuckerberg’s view, to ensure that only trustworthy news sources appear on the platform. Facebook has apparently considered three ways to achieve this. They could:

  1. decide themselves, internally, which news sources are trustworthy
  2. ask external experts to decide
  3. ask users of Facebook directly which sources they trust

For Facebook, the first option isn’t “something we’re comfortable with” and the second “would likely not solve the objectivity problem”. So Facebook is going to start asking — through their regular “quality surveys” — which sources are trustworthy: they’re going to crowdsource trustworthiness.

In his own words, this is how the system will work:

“As part of our ongoing quality surveys, we will now ask people whether they’re familiar with a news source and, if so, whether they trust that source. The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don’t follow them directly. (We eliminate from the sample those who aren’t familiar with a source, so the output is a ratio of those who trust the source to those who are familiar with it.)”
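
Read literally, that description amounts to a simple conditional proportion: of the respondents who say they’re familiar with a source, what fraction say they trust it? Here’s a minimal sketch of that calculation, purely for illustration; the survey fields and function names are my own assumptions, not anything Facebook has published.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SurveyResponse:
    # Hypothetical survey answers: does the respondent recognise the
    # source, and (if familiar) do they say they trust it?
    familiar: bool
    trusts: bool


def trust_score(responses: list[SurveyResponse]) -> Optional[float]:
    """Ratio of respondents who trust a source to those familiar with it.

    Respondents who aren't familiar with the source are dropped from the
    sample, mirroring the description in Zuckerberg's post.
    """
    familiar = [r for r in responses if r.familiar]
    if not familiar:
        return None  # nobody recognises the source at all
    return sum(r.trusts for r in familiar) / len(familiar)


# Six respondents recognise the source and four of them trust it;
# three have never heard of it. The score is 4/6, roughly 0.67.
responses = (
    [SurveyResponse(familiar=True, trusts=True)] * 4
    + [SurveyResponse(familiar=True, trusts=False)] * 2
    + [SurveyResponse(familiar=False, trusts=False)] * 3
)
print(trust_score(responses))
```

Note what the score measures: how many people *say* they trust a source, among those who have heard of it. Nothing in that number speaks to whether the source is accurate.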

This is totally bananas. If it’s true that Facebook will rely entirely on user opinion, I would wager it will make the problem worse, not better.

Asking users isn’t a solution at all

Asking users how trustworthy they think a news source is won’t fix Facebook’s Fake News problems. Charitably, it’s a partial solution to a wider set of issues.

1. People trust news they agree with

A 2014 study from the Pew Research Center looked at trust in news, well before concerns about Fake News hit the mainstream. Whilst only one source, its findings neatly demonstrate the issue with asking people which news sources they trust or prefer.

The study found that respondents with consistently liberal views tended to trust a broader base of news organisations than those with consistently conservative views. Perhaps unsurprisingly, those with liberal views had greater levels of trust in traditionally liberal-leaning news organisations (like the Economist or the Huffington Post), and less trust in those on the conservative end of the media spectrum (like Fox News and Breitbart). The inverse was true of respondents with conservative views.

All this means that people like it when the news chimes with their own political views. Think about that carefully: people don’t trust news sources for any sense of accuracy or objective quality: people trust news sources they agree with. This highlights why Facebook’s plans are so baffling.

Asking users to rate the trustworthiness of news sources is like asking three-year-olds to choose their own meals: they’ll pick what they like, not what’s good for them. It’s not hard to imagine that Facebook’s new plan will do the exact opposite of what it intends, creating a more polarised news environment on its platform.

2. Trustworthiness isn’t the whole problem

Even if asking people whether they trust a news source were part of the solution, it doesn’t address the wider environment that makes Fake News so concerning to begin with.

Trustworthiness is only part of the problem. People just don’t assess and challenge the accuracy of news reporting. This isn’t some kind of ivory-tower, “I know better”, argument: we’re all guilty of it to some extent. When was the last time you read a news article containing an alleged fact or figure and thought, “I should go and check that for myself”?

I trust the BBC – but it’s not always accurate. Take the recent coverage of the Spectre and Meltdown bugs¹; in its early reporting, the BBC claimed the flaws only affected Intel chips. That was inaccurate. Either someone at the BBC didn’t understand what they were reporting, or they weren’t reporting all the facts. A quick look at a specialist technology news site would have shown that.

People don’t have time to fact-check news organisations, though, which is why they must be held to high standards by external sources, and why we, as consumers of news, have no choice but to assume that what we read is accurate most of the time.

Here’s the thing, though: inaccuracies happen all the time in news reporting. Asking people which news sources they trust, as Facebook suggests, doesn’t guarantee that a news organisation is reporting the facts accurately.

3. The options aren’t mutually exclusive

Zuckerberg also presents Facebook’s three options for fixing news as mutually exclusive. This is a false dilemma. Asking users what they think about the trustworthiness of a source doesn’t stop you from also employing external expertise to determine whether something is Fake News.

Indeed, there would be many benefits to using external expertise. Doing so would mean Facebook could ensure those validating news sources were representative of the whole political spectrum or the wider population. It could ask them to consider not only how trustworthy the source was but also how accurately they represent objective facts. Importantly, it could do this in addition to using user feedback — it doesn’t have to be instead of it.
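
To make that concrete, here is one crude way the two signals could be combined rather than treated as alternatives. The weighting and the inputs are entirely hypothetical; this is a sketch of the argument, not a proposal.

```python
def blended_score(crowd_trust: float, expert_accuracy: float,
                  crowd_weight: float = 0.5) -> float:
    """Blend a crowd-sourced trust ratio (0-1) with an expert accuracy
    rating (0-1). Both inputs and the 50/50 weighting are hypothetical;
    the point is only that the two signals can be combined."""
    return crowd_weight * crowd_trust + (1 - crowd_weight) * expert_accuracy


# A source the crowd trusts but experts rate as inaccurate ends up scoring
# lower than one that is both trusted and accurate.
print(blended_score(crowd_trust=0.9, expert_accuracy=0.3))  # about 0.6
print(blended_score(crowd_trust=0.8, expert_accuracy=0.8))  # 0.8
```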

Did Facebook misunderstand the problem?

Is Facebook shirking responsibility for tackling Fake News by shunting it onto users, or is it just naïve, having misunderstood the problem? I think it might be both. It’s expedient for Facebook to present these choices as mutually exclusive, collectively exhaustive silver bullets.

Mark Zuckerberg wants you to trust Facebook. By asking you to fix its Fake News problem, it’s positioning itself as a neutral platform for free speech. If the news shared on the platform turns out to be a cesspool of fire and fury, it’s not Facebook’s fault: it’s because its users want it to be. To make this pitch is to deny Facebook’s own corporate and social responsibility, and as one of the largest companies and destinations on the Internet, it has a duty to do better.

  1. If you don’t know what Spectre and Meltdown are, then here’s a good set of coverage about them. Make sure you update your computer, tablet and phone operating systems if you want to protect yourself. ↩︎