
AI-generated faces proliferate as tools of political manipulation on X, study finds

AI-generated profile pictures are becoming a significant tool for coordinated manipulation on X, according to researchers in Germany, who identified almost 8,000 accounts using synthetic faces, primarily to amplify political messages and crypto schemes.

“Recent advances in generative artificial intelligence (AI) have blurred the lines between genuine and machine-generated content, making it nearly impossible for humans to distinguish between such media,” the study notes.

The research, conducted by teams from Ruhr University Bochum, the GESIS Leibniz Institute and the CISPA Helmholtz Center, found that more than half of these accounts were created in 2023, often as part of suspicious mass creation events.

“A significant portion of the accounts were created in bulk shortly before our data collection, which is a common pattern for accounts created for message amplification, disinformation campaigns, or similar disruptive activities,” the researchers explain.

This finding gains further context from a recent analysis of the platform by the Center for Countering Digital Hate (CCDH), which found that X owner Elon Musk’s political posts favoring Donald Trump received 17.1 billion views, more than double the views of all US political campaign ads combined during the same period.

“At least 87 of Musk’s posts this year promoted claims about the US election that fact-checkers rated as false or misleading, garnering 2 billion views. None of these posts contained a community note, X’s name for user-generated fact checks,” the CCDH report said.

The use of generative AI, whether to produce fake images or text, was relatively easy to spot, as accounts with synthetic faces displayed distinct patterns that set them apart from legitimate users. According to the study, “accounts with fake images have fewer followers (mean: 393.35, median: 60) compared to accounts with real images (mean: 5,086.38, median: 165).” The researchers also found that fake accounts tend to interact far less with other users, posting messages without replying to or engaging with other accounts.

The study also pointed out specific patterns that suggest coordinated activity: “We note that 1,996 fake image accounts (25.84%) have exactly 106 followers. Our content analysis shows that these accounts belong to a large group of fake accounts involved in inauthentic coordinated behavior.”
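To illustrate the kind of signal an identical follower count can provide, here is a minimal sketch, not the researchers’ actual pipeline, of how clusters of accounts sharing one exact follower number could be flagged. The account table, column names, and threshold are hypothetical.

```python
import pandas as pd

# Hypothetical data: one row per account flagged as using a synthetic face.
# Values are illustrative and not taken from the study.
accounts = pd.DataFrame({
    "account_id": ["a1", "a2", "a3", "a4", "a5", "a6"],
    "followers":  [106, 106, 106, 60, 12, 106],
})

# Count how many accounts share each exact follower number. An unusually
# large cluster at a single value (e.g., exactly 106 followers) hints at
# bulk-created accounts acting in coordination.
cluster_sizes = accounts["followers"].value_counts()
suspicious_clusters = cluster_sizes[cluster_sizes >= 3]
print(suspicious_clusters)
```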

The research team’s detection methods proved highly accurate, with the researchers reporting near-100% certainty in their findings.

The researchers also noted that many of the accounts are short-lived, with more than half suspended in less than a year.

Content analysis also revealed carefully orchestrated posting patterns across multiple languages. The study identified “large networks of accounts with fake images that were likely created automatically and participated in large-scale spam attacks.” English-language accounts focused heavily on controversial topics, with researchers finding that accounts preferred to address issues such as the war in Ukraine, the US election, and debates over COVID-19 and vaccination policies.

Outside of politics, many of these accounts also promoted crypto scams and sex-related content.

Looking ahead, the researchers plan to extend their detection capabilities to AI images generated by other model architectures, such as diffusion models rather than generative adversarial networks (GANs). They also want to refine their methodology to find more ways to spot what they label “coordinated inauthentic behavior on social platforms.”

Edited by Josh Quittner and Sebastian Sinclair
