Yale researchers cannot feel the punch in Facebook fight to tag fake news


(Tech Xplore)—Earlier this year, Facebook put on its fake-news boxing gloves and announced it would fight the good fight: it would turn to third-party fact-checkers and display their findings below the original post in a battle against fake news.

How is it doing? Can such efforts stem the tide of false information on social media?

Two Yale University researchers won't be sending up balloons any time soon. Their study found that Facebook's fake-news labeling has not had much of an impact.

The researchers set out to determine the payoff from the fact-check program. They were mindful that tagging fake articles with "Disputed by 3rd party fact-checkers" warnings and making articles' sources more salient by adding publisher logos were the two approaches in play.

Findings: "With respect to disputed warnings, we find that tagging articles as disputed did significantly reduce their perceived accuracy relative to a control without tags, but only modestly (d=.20, 3.7 percentage point decrease in headlines judged as accurate)."

Alas, their survey of some 7,500 people showed little impact after assessing the effect of these interventions on perceptions of accuracy across seven experiments (total N=7,534).

Levi Sumagaysay, editor of SiliconBeat, also reported what they found: tagging news as "disputed by third-party fact-checkers" made participants only 3.7 percentage points more likely to judge headlines as false.

The study is from David Rand and Gordon Pennycook.

The authors stated in their abstract that "results suggest that the currently deployed approaches are not nearly enough to effectively undermine belief in fake news, and new (empirically supported) strategies are needed."

They wrote in their paper that "the results of the seven experiments presented here show that 'cosmetic' changes to the way headlines are presented on social media are not enough to effectively fight fake news. More fundamental solutions are needed."

Jason Schwartz, Politico, said the sheer volume of misinformation flooding the social media network made it impossible for fact-checking groups partnering with Facebook to address every story.

Shan Wang, Nieman Lab, also called attention to Facebook's announcement that it would start adding publishers' logos to articles shared on its platform. Pennycook and Rand ran an experiment around that intervention.

"With respect to source salience, we find no evidence that adding a banner with the logo of the headline's publisher had any impact on accuracy judgments whatsoever," they stated in their paper.

But wait, what is the big deal? All a person has to do is click off Facebook and turn to news sites.

Well, as of August, two-thirds (67%) of Americans reported they get at least some of their news on social media – with two-in-ten doing so often, according to information from Pew Research Center. The report added that "Facebook by far still leads every other social media site as a source of news. This is largely due to Facebook's large user base, compared with other platforms, and the fact that most of its users get news on the site."

Politico, meanwhile, carried remarks from a Facebook spokesperson, who said that "fact-checking is just one part of the company's efforts to combat fake news." Other efforts included disrupting financial incentives for spammers, "building new products and helping people make more informed choices about the news they read, trust and share."

At the time of this writing, the paper had not yet been peer reviewed.


More information: Assessing the Effect of 'Disputed' Warnings and Source Salience on Perceptions of Fake News Accuracy, papers.ssrn.com/sol3/papers.cf … ?abstract_id=3035384

Abstract
What are effective techniques for combating belief in fake news? Tagging fake articles with "Disputed by 3rd party fact-checkers" warnings and making articles' sources more salient by adding publisher logos are two approaches that have received large-scale rollouts on social media in recent months. Here we assess the effect of these interventions on perceptions of accuracy across seven experiments (total N=7,534). With respect to disputed warnings, we find that tagging articles as disputed did significantly reduce their perceived accuracy relative to a control without tags, but only modestly (d=.20, 3.7 percentage point decrease in headlines judged as accurate). Furthermore, we find a backfire effect – particularly among Trump supporters and those under 26 years of age – whereby untagged fake news stories are seen as more accurate than in the control. We also find a similar spillover effect for real news, whose perceived accuracy is increased by the presence of disputed tags on other headlines. With respect to source salience, we find no evidence that adding a banner with the logo of the headline's publisher had any impact on accuracy judgments whatsoever. Together, these results suggest that the currently deployed approaches are not nearly enough to effectively undermine belief in fake news, and new (empirically supported) strategies are needed.
