What Trump’s victory could mean for AI regulation

An exhausting electoral cycle has ended. Donald Trump will be the 47th president of the United States, and with Republicans in control of the Senate — and possibly the House — his allies are poised to make major changes at the highest levels of government.

The effects will be felt acutely in the AI industry, which has largely rallied against federal policymaking. Trump has repeatedly said he plans to dismantle Biden’s AI policy framework on “day one” and has aligned himself with kingmakers who have harshly criticized all but the lightest regulations.

Biden’s approach

Biden’s AI policy took effect by executive order: the AI executive order, signed in October 2023. Congressional inaction on AI regulation precipitated the executive order, whose provisions are voluntary rather than mandatory.

The AI EO addresses everything from promoting AI in healthcare to developing guidance to mitigate the risks of IP theft. But two of its most important provisions, which have drawn the ire of some Republicans, deal with AI’s security risks and its impact on real-world safety.

One provision directs companies that develop powerful AI models to report to the government how they train and secure those models and provide the results of tests designed to investigate model vulnerabilities. The other provision directs the Commerce Department’s National Institute of Standards and Technology (NIST) to create guidelines that help companies identify — and correct — design flaws, including bias.

The AI EO has achieved a lot. In the past year, the Commerce Department established the US AI Safety Institute (AISI), a body that studies risks in AI systems, including systems with defense applications. It has also released new software to help improve AI reliability and has tested major new AI models under agreements with OpenAI and Anthropic.

Trump-allied critics argue that the EO’s reporting requirements are onerous and effectively force companies to reveal their trade secrets. During a House hearing in March, Rep. Nancy Mace (R-SC) said it “could scare off future innovators and prevent more ChatGPT-type breakthroughs.”

At a Senate hearing in July, Trump’s running mate JD Vance expressed concern that “attempts at preemptive overregulation” would “entrench the tech operators we already have.” Vance has also voiced support for antitrust enforcement, including the efforts of FTC Chair Lina Khan, who is leading investigations into big tech companies’ acquisitions of AI startups.

Several Republicans have equated NIST’s work on artificial intelligence with censorship of conservative speech. They accuse the Biden administration of trying to steer AI development with liberal notions of misinformation and bias; Sen. Ted Cruz (R-TX) recently blasted NIST’s “woke” AI safety standards as a “speech control plan” based on “amorphous” social harms.

“When I’m re-elected,” Trump said at a rally in Cedar Rapids, Iowa, last December, “I will rescind Biden’s artificial intelligence executive order and ban the use of artificial intelligence to censor the speech of American citizens on Day 1.”

Replacing the AI EO

So what could replace Biden’s AI EO?

Little can be gleaned from the AI executive orders Trump signed during his first term as president, which established national AI research institutes and directed federal agencies to prioritize AI research and development. His EOs mandated that agencies “protect civil liberties, privacy, and American values” in applying AI, help workers acquire AI-relevant skills, and promote the use of “trusted” technologies.

During his campaign, Trump promised policies that would “support the development of AI based on free speech and human flourishing” — but declined to go into specifics.

Some Republicans have said they want NIST to focus on AI’s physical security risks, including its ability to help adversaries build biological weapons (which Biden’s EO also addresses). But they have also been wary of endorsing new restrictions on AI that could jeopardize portions of NIST’s guidance.

Indeed, the fate of AISI, which is housed in NIST, is murky. Although it has a budget, a director, and partnerships with AI research institutes around the world, AISI could be dissolved by a simple repeal of Biden’s EO.

In an open letter in October, a coalition of companies, nonprofits, and universities called on Congress to pass legislation codifying AISI before the end of the year.

Trump has acknowledged that AI is “very dangerous” and that it will require massive amounts of power to get up and running, suggesting a willingness to engage with AI’s growing risks.

That being said, Sarah Kreps, a political scientist who focuses on US defense policy, doesn’t expect any major AI regulations to come out of the White House in the next four years. “I don’t know that Trump’s views on AI regulation will rise to the level of antipathy that will lead him to repeal the Biden AI EO,” she told TechCrunch.

Commerce and state regulation

Dean Ball, a research fellow at George Mason University, agrees that Trump’s victory likely heralds a light regulatory regime — one that will rely on enforcing existing law rather than creating new laws. However, Ball predicts that this may encourage state governments, especially in Democratic strongholds like California, to try to fill the gap.

State-led efforts are well underway. In March, Tennessee passed a law protecting voice artists from AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI deployments. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, some of which require companies to publish details about their AI training.

State politicians have introduced nearly 700 pieces of AI legislation this year alone.

“It’s unclear how the federal government will respond to these challenges,” Ball said.

Hamid Ekbia, a professor at Syracuse University who studies public affairs, believes that Trump’s protectionist policies could have implications for the regulation of artificial intelligence. He expects the Trump administration to impose tighter export controls on China, for example, including controls on the technologies needed to develop AI.

The Biden administration already has a number of bans on the export of AI chips and chip designs in place. However, some Chinese firms are reportedly using loopholes to access those tools through cloud services.

“Global regulation of AI will suffer as a result (of the new controls), despite circumstances that call for greater global cooperation,” Ekbia said. “The political and geopolitical ramifications of this could be huge, enabling more authoritarian and oppressive uses of artificial intelligence around the globe.”

If Trump imposes tariffs on the technology needed to build AI, it could also squeeze the capital needed to fund AI research and development, says Matt Mittelsteadt, another researcher at George Mason University. During his campaign, Trump proposed a 10 percent tariff on all U.S. imports and a 60 percent tariff on products made in China.

“Perhaps the biggest impact will come from trade policies,” Mittelsteadt said. “Expect potential tariffs to have a massive economic impact on the AI sector.”

Of course, it’s early. And while Trump has mostly avoided addressing AI on the campaign trail, much of his platform — like his plan to restrict H-1B visas and embrace oil and gas — could have downstream repercussions for the AI industry.

Sandra Wachter, professor of data ethics at the Oxford Internet Institute, urged regulators, regardless of their political affiliation, not to lose sight of AI’s dangers in pursuit of its opportunities.

“These risks exist no matter where you are on the political spectrum,” she said. “These biases don’t respect geography and don’t care about party lines. I can only hope that AI governance doesn’t boil down to a partisan issue. It’s an issue that affects us all, everywhere. We all need to work together to find good global solutions.”