
Misinformation really does spread like a virus

Posted on: November 14, 2024

How misinformation circulates can be effectively described using mathematical models designed to simulate the spread of pathogens, write David Robert Grimes of the School of Medicine, Trinity College Dublin, and Sander van der Linden of the University of Cambridge.

We are increasingly aware of how misinformation can influence elections. About 73% of Americans report seeing misleading election news and about half struggle to discern what is true or false.

When it comes to disinformation, “going viral” seems to be more than just a catchphrase. Scientists have found a close analogy between the spread of misinformation and the spread of viruses. In fact, the way misinformation circulates can be effectively described using mathematical models designed to simulate the spread of pathogens.

Concerns about disinformation are widespread, with a recent UN survey suggesting that 85% of people worldwide are worried about it.

These concerns are well founded. Foreign disinformation has increased in sophistication and scale since the 2016 US election. The 2024 election cycle saw dangerous conspiracy theories about “weather manipulation” undermining proper hurricane management, fake news about immigrants eating pets inciting violence against the Haitian community, and misleading election conspiracy theories amplified by the world’s richest man, Elon Musk.

Recent studies have used mathematical models drawn from epidemiology (the study of how disease occurs in a population and why). These models were originally developed to study the spread of viruses, but they can be applied just as effectively to the spread of misinformation across social networks.

A class of epidemiological models that works for misinformation is the susceptible-infectious-recovered (SIR) model. These models simulate the dynamics between susceptible (S), infected (I), and recovered or resistant (R) individuals.

These models are built from a series of differential equations (which help mathematicians understand rates of change) and are readily applied to the spread of disinformation. On social networks, for example, false information is passed from individual to individual: some become infected, others remain immune, and still others act as asymptomatic vectors (carriers), spreading misinformation without knowing it or being adversely affected by it.

These models are incredibly useful because they allow us to predict and simulate population dynamics and come up with measures such as the basic reproduction number (R0) – the average number of cases generated by an “infected” individual.
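To make the mechanics concrete, the short sketch below simulates a standard SIR model in Python. The transmission and recovery rates are illustrative assumptions rather than values from any of the studies discussed; the point is simply to show how R0 emerges from the equations and why a value above 1 produces epidemic-like growth.

```python
# Minimal SIR sketch of misinformation spread; all parameter values are
# illustrative assumptions, not estimates from the studies discussed.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR equations: susceptible users become 'infected' on exposure,
    infected users eventually 'recover' (lose interest or are corrected)."""
    s, i, r = y
    ds = -beta * s * i              # new infections
    di = beta * s * i - gamma * i   # infections minus recoveries
    dr = gamma * i                  # recoveries
    return [ds, di, dr]

beta, gamma = 0.5, 0.1   # assumed transmission and recovery rates (per day)
r0 = beta / gamma        # basic reproduction number: above 1 means epidemic-like growth
y0 = [0.99, 0.01, 0.0]   # initial fractions: susceptible, infected, recovered

sol = solve_ivp(sir, (0, 60), y0, args=(beta, gamma))

print(f"R0 = {r0:.1f}")
print(f"Peak share of users repeating the false claim: {sol.y[1].max():.0%}")
```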

As a result, there has been growing interest in applying such epidemiological approaches to our information ecosystem. Most major social media platforms have an estimated R0 greater than 1, indicating that they have the potential to spread disinformation in an epidemic-like fashion.

Looking for solutions

Mathematical modeling usually involves what is called phenomenological research (where researchers describe observed patterns) or mechanistic work (which involves making predictions based on known relationships). These models are particularly useful because they allow us to explore how possible interventions can help reduce the spread of misinformation on social media.

We can illustrate this basic process with the simple model shown in the graphic below, which lets us explore how a system might behave under a variety of hypothetical assumptions that can then be tested.

Prominent social media personalities with large followings can become “superspreaders” of election disinformation, broadcasting falsehoods to potentially hundreds of millions of people. This reflects the current situation, in which election officials report being overwhelmed in their attempts to fact-check false claims.

In our model, if we conservatively assume that people have only a 10% chance of being infected after exposure, debunking misinformation has only a small effect, in line with what studies suggest. In this 10%-chance-of-infection scenario, the population infected by election disinformation grows rapidly (orange line, left panel).

Chart showing how debunking and prebunking affect the spread of misinformation, explained in the text above and below.
A “compartment” model of misinformation spread over a week in a cohort of users, where the misinformation has a 10% chance of infecting an uninoculated susceptible individual upon exposure. Debunking is assumed to be 5% effective. If prebunking is introduced and is roughly twice as effective as debunking, the dynamics of misinformation infection change significantly. Sander van der Linden / David Robert Grimes
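The figure reflects the authors’ own model; the sketch below is not their code but a plausible reconstruction built from the assumptions stated in the caption (a 10% chance of infection per exposure, debunking 5% effective, prebunking roughly twice as effective). The number of daily exposures and the compartment structure itself are additional assumptions made purely for illustration.

```python
# Compartment model comparing debunking alone with debunking plus prebunking.
# The 10% infection chance, 5% debunking rate and doubled prebunking rate come
# from the figure caption; the daily contact rate and the model structure are
# assumptions for illustration only.
from scipy.integrate import solve_ivp

def model(t, y, p_infect, contacts, debunk, prebunk):
    s, i, r = y                              # susceptible, infected, resistant fractions
    new_infections = contacts * p_infect * s * i
    ds = -new_infections - prebunk * s       # prebunking moves susceptibles straight to resistant
    di = new_infections - debunk * i         # debunking moves infected users to resistant
    dr = debunk * i + prebunk * s
    return [ds, di, dr]

y0 = [0.99, 0.01, 0.0]                       # initial fractions of the cohort
p_infect, contacts, debunk = 0.10, 10, 0.05  # caption values plus an assumed 10 exposures/day

for label, prebunk in [("debunking only ", 0.0), ("with prebunking", 0.10)]:
    sol = solve_ivp(model, (0, 7), y0, args=(p_infect, contacts, debunk, prebunk))
    print(f"{label}: {sol.y[1][-1]:.0%} of the cohort infected after one week")
```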

Psychological “vaccination”

The analogy with the viral spread of disinformation is apt precisely because it allows scientists to simulate ways of countering it. These interventions include an approach called “psychological inoculation”, also known as prebunking.

Here researchers preemptively introduce and then disprove a falsehood so that people gain immunity to misinformation in the future. It is similar to vaccination, where people are given a (weakened) dose of the virus to prepare their immune system for future exposure.

For example, a recent study used AI chatbots to devise prebunks against common election fraud myths. This involved warning people in advance that political actors might manipulate their opinion with sensational stories, such as the bogus claim that “massive overnight vote deposits are overturning elections”, along with key tips on how to spot such misleading rumours. These “inoculations” can be integrated into population models of the spread of disinformation.

You can see in our graph that if prebunking is not used, it takes much longer for people to become immune to misinformation (left panel, orange line). The right panel illustrates how, if prebunking is implemented at scale, it can contain the number of people who are misinformed (orange line).

The purpose of these models is not to make the issue sound scary or to suggest that people are gullible disease vectors. But there is clear evidence that some fake news spreads like a simple contagion, infecting users immediately.

Meanwhile, other stories behave more like a complex contagion, where people require repeated exposure to sources of misleading information before they become “infected.”
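To illustrate the distinction, the toy simulation below contrasts a simple contagion, where a single exposure can be enough, with a complex contagion, where a user adopts the claim only after seeing it from several contacts. The network size, adoption probability, and exposure thresholds are all assumed values chosen for demonstration.

```python
# Toy contrast between a simple and a complex contagion on a random network.
# Network size, degree, adoption probability and thresholds are all assumptions.
import random

random.seed(1)
N, K = 500, 8
# each user sees content from K randomly chosen accounts
feeds = {u: random.sample([v for v in range(N) if v != u], K) for u in range(N)}

def spread(threshold, p_adopt=0.5, steps=10, seeds=5):
    """threshold=1: one exposure can be enough (simple contagion);
    threshold=3: a user needs exposure from several distinct accounts (complex)."""
    infected = set(random.sample(range(N), seeds))
    for _ in range(steps):
        newly = set()
        for u in range(N):
            if u in infected:
                continue
            exposures = sum(1 for v in feeds[u] if v in infected)
            if exposures >= threshold and random.random() < p_adopt:
                newly.add(u)
        infected |= newly
    return len(infected) / N

print(f"simple contagion : {spread(threshold=1):.0%} of users reached")
print(f"complex contagion: {spread(threshold=3):.0%} of users reached")
```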

That individual susceptibility to misinformation can vary does not diminish the usefulness of approaches drawn from epidemiology. For example, the models can be adjusted according to how easy or hard it is for misinformation to “infect” different subpopulations.

While thinking of people this way might be psychologically uncomfortable for some, most misinformation is broadcast by a small number of influential super-spreaders, as happens with viruses.

Taking an epidemiological approach to the study of fake news allows us to anticipate its spread and to model the effectiveness of interventions such as prebunking.

Recent work has validated the viral approach using social media dynamics from the 2020 US presidential election. The study found that a combination of interventions can be effective in reducing the spread of misinformation.

Models are never perfect. But if we want to stop the spread of misinformation, we need to understand it to effectively counter its societal damage.

This article was written by David Robert Grimes, School of Medicine, Trinity College Dublin, and Sander van der Linden, University of Cambridge, and was originally published by The Conversation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.