EU AI Act: Draft guidance for general purpose AI shows first steps for Big AI to comply

A first draft of a Code of Practice that will apply to providers of general-purpose AI models under the European Union’s AI Act has been published, alongside an invitation for feedback – open until 28 November – as the drafting process continues into next year, ahead of the formal compliance deadlines that will apply in the coming years.

The pan-EU law, which came into force this summer, regulates applications of artificial intelligence under a risk-based framework. But it also directs some measures at the more powerful foundational – or general-purpose – AI models (GPAIs). This is where the Code of Practice comes in.

Among those likely to be in the frame is OpenAI, maker of the GPT models that underpin the AI chatbot ChatGPT; Google with its Gemini GPAIs; Meta with Llama; Anthropic with Claude; and others, such as France’s Mistral. They will be expected to comply with the General-Purpose AI Code of Practice if they want to ensure they comply with the AI Act and thus avoid the risk of enforcement for non-compliance.

To be clear, the Code is intended to provide guidance for meeting the obligations of the EU AI Act. GPAI providers may choose to deviate from the best practice suggestions if they believe they can demonstrate compliance through other measures.

This first draft of the Code is 36 pages long, but it is likely to get longer – perhaps considerably – as its authors warn it is light on detail, being “a high-level drafting plan outlining our guiding principles and objectives for the Code”.

The draft is littered with boxes posing “open questions” that the working groups tasked with producing the Code have not yet resolved. The feedback sought – from industry and civil society – will clearly play a key role in shaping the substance of the specific sub-measures and key performance indicators (KPIs) that are yet to be included.

But the document gives an idea of what lies ahead, in terms of expectations, for GPAI makers once the relevant compliance deadlines apply.

The transparency requirements for GPAI producers are due to enter into force on 1 August 2025.

But for the most powerful GPAIs – those the law defines as having “systemic risk” – the expectation is that they must comply with risk assessment and mitigation requirements 36 months after entry into force (that is, by August 1, 2027).

There is an additional caveat: the draft Code was designed on the assumption that there will be only a “small number” of GPAI makers and GPAIs with systemic risk. “If this assumption proves wrong, future drafts may need to be significantly modified, for example by introducing a more detailed system of tiered measures, focusing primarily on those models that pose the greatest systemic risks,” the authors warn.

In terms of transparency, the Code will set out how GPAIs must comply with information provisions, including in the area of copyrighted material.

An example here is “Sub-Measure 5.2”, which currently obliges signatories to provide details of the names of all web crawlers used for GPAI development and their relevant robots.txt features “including at the time of crawling”.
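To illustrate the kind of record that sort of disclosure implies, here is a minimal sketch – not taken from the draft Code – of how a provider might log what a site’s robots.txt allowed its crawler at the time of crawling, using Python’s standard-library robots.txt parser. The crawler name and URLs are hypothetical.

# Hypothetical sketch: record what robots.txt allowed our crawler at crawl time.
from datetime import datetime, timezone
from urllib import robotparser

CRAWLER_USER_AGENT = "ExampleGPAIBot"  # hypothetical crawler name

def robots_snapshot(site: str, target_url: str) -> dict:
    """Fetch a site's robots.txt and record whether our crawler may fetch target_url."""
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # fetches and parses robots.txt over the network
    return {
        "crawler": CRAWLER_USER_AGENT,
        "robots_txt_url": f"{site}/robots.txt",
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "allowed": parser.can_fetch(CRAWLER_USER_AGENT, target_url),
    }

if __name__ == "__main__":
    print(robots_snapshot("https://example.com", "https://example.com/articles/1"))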

GPAI model makers continue to face questions over how they obtained the data used to train their models, with more lawsuits being filed by rights holders who allege that AI firms have unlawfully processed copyrighted information.

Another commitment set out in the draft Code requires GPAI providers to have a single point of contact and a complaints-handling process, so that rights holders can communicate grievances “directly and quickly”.

Other proposed copyright measures cover the documentation GPAI providers are expected to supply about the data sources used for “training, testing, and validation, and about permissions to access and use protected content for the development of general-purpose AI”.

Systemic risk

The most powerful GPAIs are also subject to EU AI Act rules aimed at mitigating so-called “systemic risk”. These systems are currently defined as models that have been trained using a total computing power of more than 10^25 FLOPs.
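For a sense of scale, here is a back-of-the-envelope sketch of that threshold using the common approximation that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs; the model sizes below are illustrative and not drawn from the draft Code.

# Rough check against the Act's 10^25 FLOP threshold (illustrative figures only).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Common approximation: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_tokens

for name, params, tokens in [
    ("70B params, 2T tokens", 70e9, 2e12),      # ~8.4e23 FLOPs: below threshold
    ("400B params, 15T tokens", 400e9, 15e12),  # ~3.6e25 FLOPs: above threshold
]:
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e25 threshold)")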

The Code contains a list of the types of risks that signatories will have to treat as systemic risks. These include:

  • Offensive cybersecurity risks (such as the discovery of vulnerabilities).
  • Chemical, biological, radiological and nuclear risks.
  • “Loss of control” (meaning here the inability to control a “powerful autonomous general purpose AI”) and the automated use of models for AI research and development.
  • Persuasion and manipulation, including widespread disinformation/misinformation that could pose risks to democratic processes or lead to a loss of trust in the media.
  • Widespread discrimination.

This version of the Code also suggests that GPAI producers may identify other types of systemic risks that are not explicitly listed, such as invasion of privacy and “large-scale” surveillance, or uses that could present public health risks. And one of the open questions the paper asks here is which risks should be prioritized to be added to the main taxonomy. Another is how the systemic risk taxonomy should address deepfake risks (related to AI-generated child sexual abuse material and non-consensual intimate images).

The Code also aims to provide guidance on identifying key attributes that could lead models to create systemic risks, such as “dangerous model capabilities” (e.g., cyber offence or “weapon acquisition or proliferation capabilities”) and “dangerous model propensities” (e.g., misalignment with human intent and/or values; a propensity to deceive).

Although many details remain to be filled in as the drafting process continues, the Code’s authors write that its measures, sub-measures and KPIs should be “proportionate”, with a particular focus on “adaptation to the size and capacity of a particular provider, especially SMEs and start-ups with less financial resources than those at the frontier of AI development”. Consideration should also be given to “different distribution strategies (e.g. open-sourcing), where appropriate, reflecting the principle of proportionality and taking into account both benefits and risks”, they add.

Many of the open questions raised by the project relate to how specific measures should be applied to open source models.

Safety and security in the frame

Another measure in the Code concerns a “Safety and Security Framework” (SSF). GPAI makers will be expected to detail their risk management policies and to identify, on an “ongoing and comprehensive” basis, systemic risks that could arise from their GPAI.

There is an interesting sub-measure here on “Risk Prediction”. This would commit signatories to include in the SSF a “best efforts estimate” of the timelines for when they expect to develop a model that triggers systemic risk indicators – such as the dangerous capabilities and propensities noted above. It could mean that, starting in 2027, we’ll see cutting-edge AI developers setting out time frames for when they expect model development to exceed certain risk thresholds.

Elsewhere, the draft Code focuses on GPAIs with systemic risk being evaluated using “best-in-class assessments” of their models’ capabilities and limitations, with “a range of appropriate methodologies” applied to do so. Examples listed include: question-and-answer sets, benchmarks, red-teaming and other adversarial testing methods, human uplift studies, model organisms, simulations, and proxy assessments for classified materials.

Another sub-measure, on “substantial systemic risk notification”, would require signatories to notify the AI Office – a supervisory and management body established under the Act – “if they have reasonable grounds to believe that substantial systemic risk could materialize”.

The code also sets out measures on “serious incident reporting”.

“Signatories commit to identifying and keeping track of serious incidents, to the extent they arise from their general purpose AI models with systemic risk, documenting and reporting without undue delay any relevant information and possible corrective actions to the AI Office and, as appropriate, to the competent national authorities,” it says – although an associated open question asks for information on “what constitutes a serious incident”. So it looks like there is more work to be done here to establish the definitions.

The draft Code includes additional questions on “possible corrective measures” that could be taken in response to serious incidents. It also asks “what serious incident response processes are appropriate for open-weight or open-source vendors?”, among other wording seeking feedback.

“This first draft of the Code is the result of a preliminary analysis of existing best practices by the four specialised working groups, input from the stakeholder consultation of almost 430 submissions, responses from the provider workshop, international approaches (including the G7 Code of Conduct, the Frontier AI Safety Commitments, the Bletchley Declaration, and relevant government and standard-setting body outputs), and, most importantly, the AI Act itself,” the authors go on to say in the conclusion.

“We emphasize that this is only a first draft and, accordingly, the suggestions in the draft Code are provisional and subject to change,” they add. “We therefore invite your constructive input as we further develop and update the content of the Code and work towards a more granular final form by 1 May 2025.”