The UK Government is introducing an AI self-assessment tool

The UK Government has launched a free self-assessment tool to help businesses responsibly manage their use of artificial intelligence.

The questionnaire can be used by any organization that develops, provides or uses services relying on AI as part of its standard operations, but it is aimed primarily at smaller companies and start-ups. The results will tell decision-makers about the strengths and weaknesses of their AI management systems.

How to use AI Management Essentials

The self-assessment, which is available now, is one of three parts of the so-called AI Management Essentials (AIME) tool. The other two parts are a rating system that provides an overview of how well the business manages its AI and a set of action points and recommendations for organizations to consider. Neither has been released yet.

AIME is based on the ISO/IEC 42001 standard, the NIST framework and the EU AI Act. The self-assessment questions cover how the company uses AI, manages its risks and stays transparent with its stakeholders.

SEE: Delaying UK AI rollout by five years could cost economy more than £150bn, Microsoft report says

“The tool is not designed to assess AI products or services per se, but rather to assess the organizational processes that are in place to enable the development and responsible use of these products,” according to a report from the Department for Science, Innovation and Technology.

Organizations completing the self-assessment should gather input from employees with both technical and broader business knowledge, such as a CTO or software engineer and an HR business manager.

The government wants to embed the self-assessment in its procurement policies and frameworks to integrate assurance into the private sector. It would also like to make it available to public sector buyers to help them make more informed decisions about AI.

On 6 November, the government opened a consultation inviting companies to provide feedback on the self-assessment, and the results will be used to refine it. The rating and recommendation parts of the AIME tool will be launched after the consultation closes on 29 January 2025.

The self-assessment is one of many government initiatives planned for AI assurance

In a paper published this week, the government said AIME will be one of many resources available on the “AI Assurance Platform” it is seeking to develop. These resources will help companies conduct impact assessments or evaluate AI data for bias.

The government is also creating a responsible AI terminology tool to define and standardize key AI assurance terms to improve cross-border communication and trade, particularly with the US.

“Over time, we will create an accessible toolkit to enable basic good practices for the responsible development and implementation of artificial intelligence,” the authors wrote.

The government says the UK’s AI assurance market, the sector that provides tools for developing or using AI safely and currently comprises 524 firms, will grow the economy by more than £6.5 billion over the next decade. This growth can be attributed in part to increased public trust in the technology.

The report adds that the government will collaborate with the AI Safety Institute, launched by former prime minister Rishi Sunak at the AI Safety Summit in November 2023, to promote AI assurance in the country. It will also allocate funds to expand the Systemic Safety Grant programme, which currently has up to £200,000 available for initiatives developing the AI safety ecosystem.

Legally binding AI safety legislation is coming next year

Meanwhile, at the Financial Times Future of AI Summit on Wednesday, Peter Kyle, the UK’s technology secretary, pledged to make the voluntary agreement on AI safety testing legally binding by introducing the AI Bill in the next year.

The AI Safety Summit in November saw AI companies, including OpenAI, Google DeepMind and Anthropic, voluntarily agree to allow governments to test the safety of their latest AI models before public release. Kyle was first reported to have told executives from prominent AI companies of his plans to legislate the voluntary agreements at a meeting in July.

SEE: OpenAI and Anthropic sign agreements with the US AI Safety Institute, handing over frontier models for testing

He also said the AI Bill will focus on the large ChatGPT-style foundation models created by a handful of companies, and will transform the AI Safety Institute from a DSIT directorate into an “arm’s length government body”. Kyle reiterated these points at this week’s Summit, according to the FT, stressing that he wants to give the Institute “the independence to act fully in the interests of British citizens”.

He also pledged to invest in advanced computing power to support the development of frontier AI models in the UK, responding to criticism of the government’s decision to scrap £800m of funding for a University of Edinburgh supercomputer in August.

SEE: UK government announces £32m for AI projects after ditching supercomputer funding

Kyle said that while the government cannot invest £100 billion alone, it will work with private investors to secure the funding needed for future initiatives.

A year of AI safety legislation and commitments in the UK

A raft of legislation and agreements committing the UK to developing and using AI responsibly has been published over the past year.

On 30 October 2023, the Group of Seven countries, including the UK, created a voluntary AI code of conduct comprising 11 principles that “promote safe, secure and trusted AI around the world”.

The AI Safety Summit, at which 28 countries pledged to ensure the safe and responsible development and deployment of AI, took place just days later. Later in November, the UK’s National Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and international agencies from 16 other countries released guidelines on how to ensure security when developing new AI models.

SEE: UK AI safety summit: Global powers make ‘landmark’ commitment to AI safety

In March, the G7 countries signed another agreement committing to explore how AI can improve public services and boost economic growth. The agreement also included joint development of an AI toolkit to ensure that the models used are safe and reliable. The following month, the then-Conservative government agreed to work with the US on developing tests for advanced AI models by signing a memorandum of understanding.

In May, the government released Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, reasoning ability and autonomous capabilities. It also co-hosted another AI safety summit in Seoul, where the UK agreed to work with other nations on AI safeguards and announced grants of up to £8.5m for research into protecting society from AI risks.
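
For readers curious what an Inspect evaluation looks like in practice, the sketch below shows a minimal task built with the open-source inspect-ai Python package. The dataset, question and model name are illustrative placeholders, and the exact API may differ between versions of the framework.

```python
# pip install inspect-ai
# Minimal sketch of an Inspect evaluation task (illustrative only).
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def basic_knowledge():
    # A single hypothetical sample; real evaluations use much larger datasets.
    return Task(
        dataset=[Sample(input="What is the capital of France?", target="Paris")],
        solver=generate(),  # ask the model to answer directly
        scorer=match(),     # score by matching the answer against the target
    )

# Run from the command line against a model of your choice, e.g.:
#   inspect eval basic_knowledge.py --model openai/gpt-4o
```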

Then, in September, Britain signed the world’s first international treaty on AI alongside the EU, the US and seven other countries, committing the signatories to adopt or maintain measures that ensure the use of AI is consistent with human rights, democracy and the law.

And it’s not over yet; alongside the AIME tool and report, the government announced a new AI safety partnership with Singapore through a memorandum of cooperation. It will also be represented at the first meeting of the international network of AI Safety Institutes in San Francisco later this month.

AI Safety Institute chair Ian Hogarth said: “An effective approach to AI safety requires global collaboration. That’s why we’re putting so much emphasis on the International Network of AI Safety Institutes, while strengthening our own research partnerships.”

However, the US has moved further away from AI collaboration with its recent directive limiting the sharing of AI technologies and mandating protections against foreign access to AI resources.