Why false news snowballs on social media

Systems scientists find clues to why false news snowballs on social media
MIT researchers built a theoretical model to study how news spreads on a Twitter-like social network and found that when a network is highly connected or when the views of its members are sharply polarized, false news will spread wider than news that is seen as more credible. Credit: Jose-Luis Olivares, MIT

The spread of misinformation on social media is a pressing societal problem that tech companies and policymakers continue to grapple with, yet those who study this issue still don't have a deep understanding of why and how false news spreads.

To shed some light on this murky topic, researchers at MIT developed a theoretical model of a Twitter-like social network to study how news is shared and explore situations where a non-credible news item will spread more widely than the truth. Agents in the model are driven by a desire to persuade others to take on their point of view: The key assumption in the model is that people bother to share something with their followers if they think it is persuasive and likely to move others closer to their mindset. Otherwise they won't share.

The researchers found that in such a setting, when a network is highly connected or the views of its members are sharply polarized, news that is likely to be false will spread more widely and travel deeper into the network than news with higher credibility.

This theoretical work could inform empirical studies of the relationship between news credibility and the size of its spread, which might help companies adapt networks to limit the spread of false information.

"We show that, even if people are rational in how they decide to share the news, this could still lead to the amplification of information with low credibility. With this persuasion motive, no matter how extreme my beliefs are—given that the more extreme they are the more I gain by moving others' opinions—there is always someone who would amplify [the information]," says senior author Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering and a core faculty member of the Institute for Data, Systems, and Society (IDSS) and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Joining Jadbabaie on the paper are first author Chin-Chia Hsu, a graduate student in the Social and Engineering Systems program in IDSS, and Amir Ajorlou, a LIDS research scientist. The research will be presented this week at the IEEE Conference on Decision and Control.

Pondering persuasion

This research draws on a 2018 study by Sinan Aral, the David Austin Professor of Management at the MIT Sloan School of Management; Deb Roy, an associate professor of media arts and sciences at the Media Lab; and former postdoc Soroush Vosoughi (now an assistant professor of computer science at Dartmouth College). Their empirical study of data from Twitter found that false news spreads wider, faster, and deeper than real news.

Jadbabaie and his collaborators wanted to drill down on why this occurs.

They hypothesized that persuasion might be a strong motive for sharing news—perhaps agents in the network want to persuade others to take on their point of view—and decided to build a theoretical model that would let them explore this possibility.

In their model, agents have some prior belief about a policy, and their goal is to persuade followers to move their beliefs closer to the agent's side of the spectrum.

A news item is initially released to a small, random subgroup of agents, each of whom must decide whether to share it with their followers. An agent weighs the newsworthiness and credibility of the item, and updates its belief based on how surprising or convincing the news is.
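In code, that setup might look something like the minimal Python sketch below. The belief scale, the update rule, and all the names here are illustrative assumptions rather than the paper's exact formulation; the point is only to make the ingredients concrete.

```python
# Minimal sketch of the model's ingredients, under assumed conventions:
# beliefs and news positions live on a [-1, 1] policy spectrum, and
# credibility is a number in [0, 1].
from dataclasses import dataclass
import random


@dataclass
class NewsItem:
    position: float     # where the item sits on the policy spectrum
    credibility: float  # how believable the item is, in [0, 1]


@dataclass
class Agent:
    belief: float                  # prior belief on the same spectrum
    followers: list["Agent"]

    def update_belief(self, news: NewsItem) -> None:
        # Move toward the item's position in proportion to its credibility;
        # a more surprising item (far from the prior) moves the belief more.
        surprise = news.position - self.belief
        self.belief += news.credibility * surprise


def seed_news(agents: list[Agent], news: NewsItem, k: int = 3) -> list[Agent]:
    # The item is initially released to a small random subgroup of agents.
    seeds = random.sample(agents, k)
    for agent in seeds:
        agent.update_belief(news)
    return seeds
```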

"They will make a cost-benefit analysis to see if, on average, this piece of news will move people closer to what they think or move them away. And we include a nominal cost for sharing. For instance, taking some action, if you are scrolling on social media, you have to stop to do that. Think of that as a cost. Or a reputation cost might come if I share something that is embarrassing. Everyone has this cost, so the more extreme and the more interesting the news is, the more you want to share it," Jadbabaie says.

If the news affirms the agent's perspective and has persuasive power that outweighs the nominal cost, the agent will always share the news. But if an agent thinks the news item is something others may have already seen, the agent is disincentivized to share it.

Since an agent's willingness to share news is a product of its perspective and how persuasive the news is, the more extreme an agent's perspective or the more surprising the news, the more likely the agent will share it.
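The sharing decision itself can be sketched in the same spirit: an agent shares only when the expected pull of the item on followers, toward the agent's own side of the spectrum, outweighs the nominal cost, discounted if followers have probably seen the item already. The utility below is one plausible reading of that cost-benefit test, not the paper's exact model, and it reuses the Agent and NewsItem sketch above.

```python
def expected_persuasion(agent: Agent, news: NewsItem) -> float:
    """Average shift of followers toward the agent's own belief if they see the item."""
    if not agent.followers:
        return 0.0
    total = 0.0
    for follower in agent.followers:
        shift = news.credibility * (news.position - follower.belief)
        direction_to_agent = agent.belief - follower.belief
        # Count the shift as a gain only when it moves the follower toward the agent.
        moves_toward_agent = direction_to_agent * shift > 0
        total += abs(shift) if moves_toward_agent else -abs(shift)
    return total / len(agent.followers)


def will_share(agent: Agent, news: NewsItem, cost: float = 0.05,
               prob_already_seen: float = 0.0) -> bool:
    # Discount the persuasion benefit if followers have likely seen the item,
    # then share only when the discounted benefit exceeds the nominal cost.
    benefit = (1.0 - prob_already_seen) * expected_persuasion(agent, news)
    return benefit > cost
```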

The researchers used this model to study how information spreads during a news cascade, which is an unbroken sharing chain that rapidly permeates the network.

Connectivity and polarization

The team found that when a network has high connectivity and the news is surprising, the credibility threshold for starting a news cascade is lower. High connectivity means that there are multiple connections between many users in the network.

Likewise, when the network is largely polarized, there are plenty of agents with extreme views who want to share the news item, starting a news cascade. In both these instances, news with low credibility creates the largest cascades.

"For any piece of news, there is a natural network speed limit, a range of connectivity, that facilitates good transmission of information where the size of the cascade is maximized by true news. But if you exceed that speed limit, you will get into situations where inaccurate news or news with low credibility has a larger cascade size," Jadbabaie says.

If the views of users in the network become more diverse, a news item with low credibility is less likely to spread more widely than the truth.
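To see these knobs in miniature, the sketched agents can be wired into a random follower network and the cascade size measured while connectivity and polarization are varied. The toy experiment below reuses the Agent, NewsItem, seed_news, and will_share sketches above; it is illustrative only and is not claimed to reproduce the paper's results.

```python
import random


def build_network(n: int, avg_followers: int, polarization: float) -> list[Agent]:
    # Beliefs drawn from two clusters; a larger `polarization` pushes them apart.
    agents = [Agent(belief=random.choice([-1.0, 1.0]) * polarization
                    + random.uniform(-0.2, 0.2),
                    followers=[])
              for _ in range(n)]
    for agent in agents:
        agent.followers = random.sample(agents, avg_followers)
    return agents


def cascade_size(agents: list[Agent], news: NewsItem) -> int:
    # Count how many agents end up sharing the item in an unbroken chain.
    shared, frontier = set(), seed_news(agents, news)
    while frontier:
        nxt = []
        for agent in frontier:
            if id(agent) in shared or not will_share(agent, news):
                continue
            shared.add(id(agent))
            for follower in agent.followers:
                follower.update_belief(news)
                nxt.append(follower)
        frontier = nxt
    return len(shared)


if __name__ == "__main__":
    random.seed(0)
    for avg_followers in (2, 10, 30):          # vary connectivity
        for credibility in (0.2, 0.9):         # low- vs. high-credibility item
            net = build_network(200, avg_followers, polarization=0.8)
            item = NewsItem(position=1.0, credibility=credibility)
            print(avg_followers, credibility, cascade_size(net, item))
```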

Jadbabaie and his colleagues designed the agents in the model to behave rationally, so the model would better capture actions real humans might take if they want to persuade others.

"Someone might say that is not why people share, and that is valid. Why people do certain things is a subject of intense debate in cognitive science, social psychology, neuroscience, economics, and political science," he says. "Depending on your assumptions, you end up getting different results. But I feel like this assumption of persuasion being the motive is a natural assumption."

Their model also shows how costs can be manipulated to reduce the spread of false information. Agents make a cost-benefit analysis and won't share news if the cost of doing so outweighs the benefit of sharing.

"We don't make any policy prescriptions, but one thing this work suggests is that, perhaps, having some cost associated with sharing news is not a bad idea. The reason you get lots of these cascades is because the cost of sharing the is actually very low," he says.

More information: Chin-Chia Hsu et al, Persuasion, News Sharing, and Cascades on Social Networks, SSRN Electronic Journal (2021). DOI: 10.2139/ssrn.3934010

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

