Wave of coronavirus misinformation as social media users focus on popularity, not accuracy

April 7, 2020 by Jon-Patrick Allem, The Conversation

Over the past few weeks, misinformation about the new coronavirus pandemic has been spreading across social media at an alarming rate. One video that went viral claimed breathing hot air from a hair dryer could treat COVID-19. A Twitter post touted injecting vitamin C into the bloodstream to treat the viral disease. Other threads hyped unfounded claims that vaping organic oregano oil is effective against the virus, as is using colloidal silver.

The sheer number of false and sometimes dangerous claims is worrying, as is the way people are unintentionally spreading them in ever wider circles.

In the face of this previously unknown virus, millions of people have been turning to social media in an attempt to stay informed about the latest developments and connected to friends and family. Twitter reported having about 12 million more daily users in the first three months of 2020 than in the last three of 2019. Facebook also has reported unprecedented surges in user activity.

What people see, follow, express and repost on social platforms are all communications that I study as the director of the Social Media Analytics Lab at the Keck School of Medicine of USC. My lab's goal is to harness publicly accessible data from Twitter, Instagram, Reddit, YouTube and others to better understand health-related attitudes and behaviors.

We have spotted some troubling trends as the coronavirus pandemic spreads.

Why do people perpetuate misinformation online?

Initial evidence suggests that many people are unintentionally sharing misinformation about COVID-19 because they fail to stop and think sufficiently about whether the content is accurate.

There are many reliable sources on social media, such as the Centers for Disease Control and Prevention and the World Health Organization, but most social media platforms aren't designed to prioritize the best information: They're designed to show content most likely to be engaged with first, whether accurate or not. Content that keeps users on the platform gets priority.

My team's research suggests that people's motivations for sharing might also be part of the problem. We have found that Twitter users tend to retweet to show approval, argue, gain attention and entertain. Truthfulness of a post or accuracy of a claim was not an identified motivation for retweeting. That means people might be paying more attention to whether a tweet is popular or exciting than whether its message is true.

Artificial intelligence isn't stopping it

Social media companies have been promising to combat misinformation on their platforms. However, they are relying on artificial intelligence more than ever to moderate content as concerns about coronavirus keep human reviewers at home, where they don't have the support necessary to review sensitive content safely. This approach increases the chances of mistakes, such as accurate content being accidentally flagged or problematic content going undetected.

Until misinformation can be identified in close to real time on social media platforms, everyone needs to be careful about where they get their news about coronavirus. Fact-checking organizations are available to help debunk false claims. But they, too, are getting overwhelmed battling the flood of coronavirus misinformation.

Even when the leading social media companies have plans of action to flag, curb and remove misinformation across their platforms, problematic content will slip through the cracks, exposing social media users to potentially dangerous information.

Social policing can backfire

Another troubling trend is a form of social policing on social media platforms that may have unintended consequences.

It is nothing new for social media users to try to shame people they don't agree with and condemn them on social media for violating perceived social norms. During the current pandemic, people on social media have shamed others for socializing and ignoring social distancing recommendations, such as posting images of college students in bars or on crowded beaches.

However, when social media users seek to persuade their followers to behave in accordance with existing norms, they need to be aware of how they do it and the subliminal messages they might be sending.

Posting, forwarding or lamenting over captured moments of people ignoring social distancing measures is not the most effective way to curb these behaviors. The reason is that the underlying message one could walk away with is that people are still being social. This impression could lead people to continue being social, negating the intended effect of such social policing.

Research has shown that public officials often try to mobilize action against disapproved conduct by depicting it as distressingly frequent. As a result, they install a counterproductive descriptive norm in the minds of their audiences. In the case of social distancing, examples abound, including posts of crowded parks or markets or churches or hiking trails or backyards.

Instead, social media users attempting to reduce such conduct should focus attention on approved behavior. This could take the form of posts showing people at home abiding by social distancing measures, without mentioning others who are ignoring them.

What's being done right?

Social media can be a powerful tool for behavior change when used wisely.

Intensive care unit doctors on the frontlines are sharing coronavirus information effectively on social media. They provide useful information on ways to protect ourselves and our families from this disease. Other leading physician scientists are taking to social media to debunk rumors.

Communication campaigns from public health officials could also start reinforcing normative behaviors by recommending healthy activities that can reduce the boredom or loneliness of social distancing measures. Social sharing and social policing are going to continue. How the public engages on social media could make a difference.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.