
Are we misinformed about misinformation?

In June, Nature published a perspective suggesting that the harms of online misinformation have been misunderstood. The paper’s authors, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they characterize as three common misperceptions: that the average person’s exposure to false and inflammatory content is high, that algorithms drive that exposure, and that many wider problems in society are mainly caused by social media.

“People going on YouTube to watch baking videos and ending up on Nazi websites — that’s very, very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher at the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say marginal cases don’t matter, he and his colleagues wrote, but treating them as ordinary can contribute to misunderstandings — and distract from more pressing issues.

Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.

Undark: What motivated you and your co-authors to write this perspective?

David Rothschild: The five co-authors of this paper have all done a lot of different research in this space for years, trying to understand what’s going on in social media: what’s good, what’s bad, and most of all, understand how it differs from stories that we hear from the media and other researchers.

Specifically, we focused on these questions about the experience of a typical consumer — a typical person — versus a more extreme example. A lot of what I saw, and a lot of what was referenced in a lot of research, really described a pretty extreme scenario.

The second part of it is a lot of emphasis on algorithms, a lot of concern for algorithms. What we’re seeing is that a lot of harmful content isn’t coming from an algorithm that’s pushing it on people. In fact, it’s the exact opposite. The algorithm pulls you to the center.

And then there are these questions of causation and correlation. A lot of research, and especially the mainstream media, confuses the immediate cause of something with its root cause.

There are many people who say, “Oh, these yellow vest riots are happening in France. They were organized on Facebook.” Well, there have been riots in France for several hundred years. They find ways to organize themselves even without the existence of social media.

The proximate cause—the proximate way people organized around (January 6)—was definitely a lot online. But then comes the question, could these things have happened in an offline world? And these are difficult questions.

Writing a perspective here in Nature really allows us to reach stakeholders outside of academia and address the broader discussion, because there are real-world consequences. Research gets allocated, funding gets allocated, and platforms come under pressure to solve the problem that people are discussing.

UD: Can you talk about the example of the 2016 election: what did you find about it, and also the role that the media may have played in spreading information that was not entirely accurate?

DR: The bottom line is that what the Russians did in 2016 is certainly interesting and newsworthy. They invested quite a lot in creating Facebook organizations that posted viral content and then threw in a bunch of fake news toward the end. Definitely significant, and definitely something I can see why people were intrigued by. But ultimately what we wanted to say is, “How much of an impact could this plausibly have?”

“A lot of research, and especially the mainstream media, confuses the immediate cause of something with the root cause of it.”

The impact is really hard (to measure), but at least we can put people’s news diets into perspective and show that views of direct Russian disinformation were only a microscopic portion of people’s news consumption on Facebook — let alone their overall Facebook consumption, not to mention their consumption of news in general, of which Facebook is only a small part. Especially in 2016, the vast majority of people, even the youngest, were still consuming far more news on TV than on social media, let alone online.

While we agree that any fake news is probably not good, there is ample research to show that repeated interaction with content is indeed what drives causal understanding of the world, of narratives, however you want to describe it. Being hit with some fake news occasionally, and at very low numbers for the average consumer, is simply not the driving force.

UD: My impression after reading your Nature paper is that you found that journalists are spreading misinformation about the effects of misinformation. Is that correct? And if so, why do you think this happens?

DR: It makes for a good story, after all. And negativity sells — negative stories are popular.

UD: So what exactly is a good story?

DR: That social media is harming your children. That social media is the problem.

There is a general desire to cast things in a more negative light. To be sure, there is a long history of people panicking and ascribing all of society’s ills to new technology, whether it was the internet, or television, or radio, or music, or books. You can just go back in time and see all of these kinds of concerns.

Ultimately, there will be people who benefit from social media, there will be people who are harmed by social media, and there will be many people who move forward with it the way society continues to move forward with new technology. That’s not as interesting a story as “social media is causing these problems,” without any counterbalance.

“Social media is the problem and it’s actually the algorithms” offers a very simple and tractable solution, which is that you fix the algorithms. And it avoids the harder question—the one we generally don’t want to ask—about human nature.

A lot of the research we cite here, which I think makes people uncomfortable, shows that a certain segment of the population demands horrible things. They demand things that are racist, degrading, that incite violence. This demand can be satisfied on various social networks, just as it was previously satisfied in other forms of media — whether people were reading books, watching movies, or listening to the radio, wherever people got their information in the past.

Certainly, the various channels we have available change the ease of distribution and how content spreads. But the existence of these things is a matter of human nature that is beyond my ability to resolve as a researcher — far beyond the ability of many people, most people, everyone. I think it’s difficult, and it makes people uncomfortable. And I think that’s why a lot of journalists like to focus on “social media is bad, algorithms are the problem.”

UD: On the same day that Nature published your article, the journal also posted a comment titled “Disinformation is a bigger threat to democracy than you might think.” The authors suggest that “concern about the expected blizzard of election-related disinformation is warranted given the ability of false information to fuel polarization and undermine trust in electoral processes.” What would the average person make of these seemingly divergent opinions?

DR: We certainly don’t want to give the impression that we condone any misinformation or harmful content, or trivialize the impact it has, especially on those it affects. What we’re saying is that the harm is concentrated in extreme pockets, away from the average consumer, and addressing it takes a different approach and a different allocation of resources than the traditional research questions you see — questions about targeting the typical consumer and about mass impact.

I read that piece, and I don’t think it’s necessarily wrong, so much as I can’t see who exactly it’s arguing against. I don’t think it’s a huge move to say, rather than trivializing the problem, “Hey, we should fight it where it is — fight it where the problems actually are.” In a sense, I think the two pieces are in conversation with each other.

UD: You are an employee of Microsoft. How would you reassure potentially skeptical readers that your study is not an effort to minimize the negative effects of products that are profitable to the technology industry?

DR: This paper has four academic co-authors and went through an incredibly rigorous process. You may not notice it on the face of it: I submitted this paper on October 13, 2021, and it was finally accepted on April 11, 2024. I’ve had some crazy review processes in my time. This was intense.

We came up with the ideas based on our own academic research. We supplemented it with the latest research and continue to supplement it with incoming research, especially some research that ran contrary to our original conception.

The bottom line is that Microsoft Research is an extremely unique place. For those unfamiliar with it, it was founded on the Bell Laboratories model, where there is no review process for publications coming out of Microsoft Research, because they believe the integrity of the work rests on not censoring it as it comes out. The idea is to use this position to engage in discussion and understanding around the impact of things — whether they are close to the company or have nothing to do with it.

In this case, I think the topic is pretty far removed from the company. It’s a really wonderful place. A lot of the work is co-authored with academic collaborators, and it’s certainly always important to have very clear guidelines in the process and to ensure the academic integrity of the work being done.

UD: I also wanted to ask you about your team’s methods.

DR: It is obviously different from a traditional research paper. This one definitely started with conversations among the co-authors about the joint work and the separate work we had done, which we felt was still not hitting the right places. It really began with establishing some theories we had about the differences between our academic work, the general body of academic work, and what we were seeing in the public discussion. And then came an extremely thorough literature review.

As you’ll see, we’re somewhere above 150 citations — 154 citations. And through this incredibly long review process at Nature, we went line by line to make sure everything was backed up: either, where appropriate, by the academic literature, or, where appropriate, by what we could cite from the public record.

The idea was to create, hopefully, a comprehensive piece that lets people really see what we think is a really important discussion — and that’s why I’m so happy to talk to you today — about where the real harms are and where the push should be.

None of us are firm believers in staking out a position and holding on to it despite new evidence. The patterns of social media are changing. With what we have now — TikTok, Reels, and YouTube Shorts — it’s a very different experience than what mainstream social media consumption was a few years ago, with longer videos, or a few years before that, with news feeds. These will continue to be things you want to monitor and understand.

This article was originally published on Undark. Read the original article.
