
AI’s underwhelming impact on the 2024 election

Earlier this year, experts and technologists warned that artificial intelligence would wreak havoc on the 2024 US election by spreading misinformation through deepfakes and personalized political ad campaigns. Those fears have spilled over into the public: more than half of US adults are “extremely or very worried” about the negative impact of artificial intelligence on elections, according to a recent Pew poll.

However, with the election a week away, fears of the race being derailed or defined by AI appear to have been overblown. Political deepfakes were shared on social media, but they were only a small part of larger disinformation campaigns. The US intelligence community wrote in September that while foreign actors such as Russia were using generative artificial intelligence to “enhance and accelerate” attempts to sway voters, the tools had not “revolutionized such operations.”

Tech insiders admit that 2024 was not a breakthrough year for generative AI in politics. “There are a lot of campaigns and organizations that are using AI in one way or another. But in my view, it hasn’t had the level of impact that people anticipated or feared,” says Betsy Hoover, founder of Higher Ground Labs, a venture fund that invests in political technology.

At the same time, researchers caution that the impact of generative AI on this election cycle is not yet fully understood, in part because much of it may be unfolding on private messaging platforms. They also argue that even if AI’s impact on this campaign seems underwhelming, it is likely to grow in future elections as the technology improves and its use spreads among the general public and political operatives. “I’m sure the AI models will improve in a year or two,” says Sunny Gandhi, vice president of political affairs at Encode Justice. “So I’m quite concerned about what it’s going to look like in 2026 and certainly in 2028.”

The rise of political deepfakes

Generative AI has already had a clear impact on global politics. In countries across South Asia, candidates used artificial intelligence to flood the public with articles, images, and deepfake videos. In February, deepfake audio was released that falsely appeared to show London Mayor Sadiq Khan making inflammatory comments ahead of a major pro-Palestinian march. Khan said the audio clip nearly incited violent clashes between protesters and counter-protesters.

There were examples in the US as well. In February, New Hampshire residents received robocalls featuring deepfake audio of Joe Biden, in which the president appeared to discourage them from voting. The FCC promptly banned robocalls containing AI-generated voices, and the Democratic political consultant who created the calls was indicted on criminal charges, sending a strong warning to others who might try similar tactics.

Political deepfakes have also been embraced by politicians, including former President Donald Trump. In August, Trump posted AI-generated images of Taylor Swift endorsing him, as well as of Kamala Harris dressed in communist garb. In September, a video linked to a Russian disinformation campaign, which falsely claimed that Harris was involved in a hit-and-run accident, was viewed millions of times on social media.


Russia was a particular focus of concern for malicious uses of artificial intelligence, with state actors generating text, images, audio, and video that they deployed in the US, often to amplify fears about immigration. It is unclear whether these campaigns had much impact on voters. The Justice Department said it disrupted one such campaign, known as Doppelganger, in September. The US intelligence community wrote the same month that these foreign actors faced several challenges in spreading their videos, including the need to “overcome restrictions built into many AI tools.”

Independent researchers have also worked to track the spread and impact of AI creations. Earlier this year, a group of Purdue researchers created an incident database of political deepfakes, which has since recorded over 500 incidents. Surprisingly, most of these videos were created not to deceive people but for satire, education, or political commentary, says researcher Christina Walker. However, Walker says the meaning of these videos often changes for viewers as they spread through different political circles. “A person posts a deepfake and says, ‘This is a deepfake. I created it to show X, Y, and Z.’ Twenty retweets later, someone else is sharing it like it’s real,” says Walker.

Daniel Schiff, another researcher on the project, says many deepfakes were likely designed to reinforce the opinions of people already predisposed to believe their messages. Other research suggests that most forms of political persuasion have very small effects at best, and that voters actively dislike political messages that are personalized to them. That could render moot one of AI’s main strengths: cheaply creating targeted messages. In August, Meta reported that generative AI tactics provided “only incremental productivity gains and content generation” to influence campaigns. The company concluded that the tech industry’s strategies to neutralize their spread “appear effective at this point.”

Other researchers are less confident. Mia Hoffmann, a researcher at Georgetown’s Center for Security and Emerging Technology, says it’s difficult to determine AI’s influence on voters for several reasons. One is that big tech companies have limited the amount of data they share about posts. Twitter ended free access to its API, and Meta recently shut down CrowdTangle on Facebook and Instagram, making it harder for researchers to track hate speech and misinformation on those platforms. “We’re at the mercy of what these companies share with us,” says Hoffmann.

Hoffmann also worries that AI-generated misinformation is proliferating on closed messaging platforms such as WhatsApp, which are particularly popular among diaspora and immigrant communities in the US. Robust AI efforts may be deployed to sway voters in swing states, but we may not learn how effective they were until after the election, she adds. “As the electoral importance of these groups has grown, they are increasingly targeted with tailored influence campaigns that aim to suppress their votes and influence their views,” says Hoffmann. “And because of app encryption, misinformation is more hidden from fact-checking efforts.”

AI tools in political campaigns

Other political actors are trying to use generative AI tools in more mundane ways. Campaigns can use AI tools to scour the web to see how a candidate is perceived in different social and economic circles, conduct opposition research, summarize dozens of news articles, and write social media copy tailored to different audiences. Many campaigns are short-staffed, on tight budgets, and pressed for time. AI, the theory goes, could replace some of the low-level work usually done by interns.

A spokesperson for the Democratic National Committee told TIME that members of the organization were using generative artificial intelligence to make their work “more efficient while maintaining strong safeguards,” including by helping officials draft fundraising emails, write code, and identify unusual patterns of voter removals in public data records. A spokesperson for the Republican National Committee did not respond to a request for comment.

A variety of startups have begun offering AI tools for political campaigns. These include BattleGroundAI, which can write copy for hundreds of political ads “within minutes,” the company says, and Grow Progress, which runs a chatbot tool that helps people generate and refine persuasion tactics and messages for potential voters. Josh Berezin, a co-founder of Grow Progress, says dozens of campaigns have “experimented” with the chatbot this year to create ads.

But Berezin says adoption of those AI tools has been slow. Political campaigns are often risk-averse, and many strategists have been hesitant to jump in, especially given the public’s negative perception of generative AI in politics. The New York Times reported in August that only a handful of candidates were using artificial intelligence, and that the few who did wanted to hide it from the public. “If someone was saying, ‘This is the AI election,’ I didn’t really see that,” Berezin says. “We’ve seen some people enthusiastically exploring some of these new tools, but it’s not universal.”

However, the role of generative AI is only likely to expand in future elections. Improved technology will allow campaigns to create messages and raise funds faster and more cheaply. AI could also help with the bureaucracy of processing votes: automated signature verification, in which a mail-in voter’s signature is matched against their signature on file, was used in several counties in 2020, for example.

But improved AI technology will also produce more believable fake video and audio clips, likely fueling both the spread of misinformation and a growing distrust of all political messages and their veracity. “This is a threat that will grow,” says Hoffmann, the Georgetown researcher. “Dismantling and identifying these influence campaigns will become even more time- and resource-consuming.”