Biden’s AI national security memo calls for heavy lifting

President Joe Biden’s directive to all US national security agencies to integrate artificial intelligence technologies into their systems sets ambitious targets amid a volatile political environment.

That is the early assessment from tech experts after Biden on Oct. 24 directed a wide range of agencies to use AI responsibly, even as the technology advances rapidly.

“It’s like trying to assemble an airplane while you’re in the middle of flying it,” said Josh Wallin, a fellow in the defense program at the Center for a New American Security. “It’s a tough lift. This is a new area that a lot of agencies need to look at that they may not have necessarily paid attention to in the past, but I will also say that it’s definitely a critical one.”

Federal agencies will need to quickly hire experts, get them security clearances and tackle the tasks Biden sets out, while private companies add money and talent to advance their AI models, Wallin said.

The memo, which stems from the president’s executive order last year, asks the Pentagon; spy agencies; the departments of Justice, Homeland Security, Commerce, Energy, and Health and Human Services; and others to leverage AI technologies. The directive emphasizes the importance of national security systems “while protecting human rights, civil rights, civil liberties, privacy and safety in AI-enabled national security activities.”

Federal agencies have deadlines, some as short as 30 days, to complete tasks. Wallin and others said the deadlines are driven by the pace of technological advances.

The memo calls for the AI Safety Institute at the National Institute of Standards and Technology, by April, to “pursue voluntary preliminary testing of at least two frontier AI models prior to their implementation or public release to assess capabilities that could pose a threat to national security.”

Frontier models are large AI models, such as those powering ChatGPT, that can recognize speech and generate human-like text.

The testing is intended to ensure that the models do not allow rogue actors and adversaries to launch offensive cyber operations or “accelerate the development of biological and/or chemical weapons, autonomously perform malicious behavior, automate the development and deployment of other models.”

But the memo also adds an important caveat: the deadline to start testing AI models would be “subject to private sector cooperation.”

Meeting the testing deadline is realistic, said John Miller, senior vice president for policy at ITI, a trade group that represents top technology companies including Google, IBM, Intel, Meta and others.

Because the institute is “already working with model developers on model testing and evaluation, it is feasible that companies could complete or at least begin such testing within 180 days,” Miller said in an email. But the memo also requires the AI Safety Institute to issue guidance on testing the models within 180 days, and so “it seems reasonable to ask exactly how these two timelines will sync up,” he said.

By February, the National Security Agency “will develop the capability to conduct rapid systematic classified testing of the ability of AI models to detect, generate, and/or exacerbate offensive cyber threats. Such tests will assess the extent to which AI systems, if misused, could accelerate offensive cyber operations,” the memo said.

“Dangerous” order

With the presidential election just a week away, the outcome looms large for this directive.

The Republican Party platform says that, if elected, Donald Trump would repeal Biden’s “dangerous executive order that impedes AI innovation and imposes radical leftist ideas on the development of this technology.” In its place, Republicans “support the development of AI based on free speech and human flourishing.”

Since Biden’s memo stems from the executive order, a Trump administration would likely “simply pull the plug” and chart its own course on artificial intelligence, Daniel Castro, vice president of the Information Technology and Innovation Foundation, said in an interview.

Leadership of the federal agencies charged with carrying out the memo would also change significantly under Trump: as many as 4,000 federal jobs change hands with the arrival of a new administration.

However, people who follow the issue note that there is a broad bipartisan consensus that the adoption of AI technologies for national security purposes is too critical for partisan bickering to derail it.

The tasks and deadlines in the memo reflect in-depth discussions between the agencies that took place over several months, said Michael Horowitz, a professor at the University of Pennsylvania who was until recently a deputy assistant secretary of defense with a portfolio that included military uses of artificial intelligence and advanced technologies.

“I think the implementation of (the memo) regardless of who wins the election is going to be absolutely critical,” Horowitz said in an interview.

Wallin noted that the memo emphasizes the need for US agencies to understand the risks posed by advanced generative AI models, including the risks of chemical, biological and nuclear weapons. On threats like those to national security, there is agreement between the parties, he said in an interview.

Senate Intelligence Chairman Mark Warner, D-Va., said in a statement that he supports the Biden memo, but the administration should work “over the coming months with Congress to advance a clearer strategy for engaging the private sector on national security risks to AI systems along the supply chain.”

Immigration policy

The memo acknowledges the long-term need to attract talented people from around the world to the United States in fields such as semiconductor design, an issue that could be tied to larger questions about immigration. The Departments of Defense, State, and Homeland Security are directed to use available legal authorities to bring such workers to the United States.

“I think there is broad recognition of the unique importance of STEM talent in ensuring US technological leadership,” Horowitz said. “And AI is no exception.”

The memo also calls on the State Department, the US Mission to the United Nations and the US Agency for International Development to develop a strategy within four months to advance international governance norms for the use of AI in national security.

The US has already taken some steps to promote international cooperation in artificial intelligence, both for civilian and military use, Horowitz said. He cited the example of the US-led Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, which was endorsed by more than 50 countries.

“It demonstrates how the United States is already leading the way by setting strong norms for responsible behavior,” Horowitz said.

The push toward responsible use of technology must be seen in the context of the larger global debate about whether countries are moving toward authoritarian systems or leaning toward democracy and respect for human rights, Castro said. He noted that China is stepping up investment in Africa.

“If we want to get African nations to align with the US and Europe on AI policy, instead of going to China,” he said, “what do we actually do to get them on our side?”