Out-Law News

EU moves to clarify AI Act scope for gen-AI

[Image: Gen-AI depicted. Credit: Yuichiro Chino/Getty Images.]


EU policymakers are considering setting threshold measures of computational resources to help businesses determine whether – and to what extent – AI models they train or modify are subject to regulatory requirements under the EU AI Act.

The proposed thresholds were set out by the EU AI Office in a working document that outlines current, but not final or binding, thinking on matters relevant to the scope of rules applicable to ‘general-purpose’ AI (GPAI) models under the AI Act.

The proposals are the subject of a survey the European Commission has opened, through which it is seeking the views of industry and other stakeholders. The feedback is expected to help shape new guidelines that will clarify the scope of the GPAI regime, which is due to take effect on 2 August 2025. The survey closes on Thursday 22 May. The AI Act’s regime for GPAI models is distinct from the Act’s other requirements applicable to AI systems – AI models are considered to be components of AI systems, for the purposes of the Act.

Dr Nils Rauer of Pinsent Masons in Frankfurt, an expert in technology law and AI regulation, said: “As with all new laws, those who are affected by the new regulatory regime do require some initial steer on what the legislator means with the language chosen and enacted. The efforts made by the Commission as well as the AI Office are therefore generally welcome, even though the nature of the guidance is not binding.”

“Notably, in the absence of any court decisions, such guidance will help shaping the regulatory landscape. This is positive regardless of whether one agrees with the position taken or not. If the CJEU were to take a different position at a later stage, this would simply be an overruling clarification, which could be sought proactively in case of disagreement with the guidance provided now,” he said.

Under the AI Act, providers of GPAI models face a range of record-keeping and disclosure obligations, from documenting the model’s training and testing process, sharing information to help providers of AI systems integrate their systems with the model, and drawing up an EU law-compliant copyright policy, to publishing a sufficiently detailed summary about the content used for training of the model.

GPAI models ‘with systemic risk’ face further obligations, including around model evaluation, testing and risk mitigation, as well as around incident reporting and cybersecurity.

The Commission is in the final stages of developing a GPAI code of practice to set out in more detail each of the various requirements GPAI providers must meet under the AI Act. It was widely expected that the finalised code would be published on 2 May. However, a report by Politico, quoting a Commission spokesperson, suggests there will be a delay in publication. The AI Office’s working document suggests that publication of the code could instead coincide with the finalised guidelines “in May or June 2025”.

Adhering to the code will not be mandatory for GPAI providers, but the AI Office said in its working document that signatories to the code “will be transparent in their compliance with the AI Act and therefore benefit from increased trust by the Commission and other stakeholders”.

GPAI is defined under the AI Act as an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. AI models used for research, development or prototyping activities before they are placed on the market fall outside the scope of the definition.

If the AI Office’s proposals are adopted as currently drafted, AI models that can generate text or images would be presumed to be GPAI models, for the purposes of the AI Act, if their training compute is greater than 10²² floating point operations (FLOP). FLOP is a measure of computing power.

Other AI models could also be considered to constitute GPAI models under the AI Act if those models “have a level of generality and capabilities comparable” to in-scope text- and/or image-generating models, the AI Office said.

The AI Act also envisages that businesses which integrate models into their own AI systems can themselves be providers of GPAI models, for the purposes of the legislation. In this regard, the AI Office said that businesses that modify or ‘fine-tune’ GPAI models already placed on the market would be presumed to be providers of GPAI models if the amount of computational resources used to modify the model is greater than a third of 10²² FLOP. In that scenario, those downstream entities would only be responsible for compliance in relation to the modifications they make, not the model as a whole. For example, they would be required to update the technical documentation the original providers produced to reflect their adaptations.

In a question-and-answer publication it has issued, the Commission said: “Regardless of whether a downstream entity that incorporates a general-purpose AI model into an AI system is deemed to be a provider of the general-purpose AI model, that entity must comply with the relevant AI Act requirements and obligations for AI systems.”

Where modifications use more than a third of the compute required for a model to be classified as a GPAI model ‘with systemic risk’ – i.e. a third of 10²⁵ FLOP – then businesses responsible for those modifications would be presumed to be providers of such models, a status that carries additional compliance requirements. According to the AI Office’s assumptions, however, there are no examples of modifications to models in operation today that would cause a downstream modifier to fall within scope of the requirements applicable to GPAI ‘with systemic risk’. The threshold it has outlined, it said, is instead designed to be “forward-looking and in line with the risk-based approach of the AI Act”.

The presumptions that models meeting the various thresholds fall in-scope of the relevant requirements under the GPAI regime would be rebuttable, giving businesses an opportunity to present arguments as to why their models should be exempted.

As Out-Law explored recently, a threshold measure of FLOP is already stipulated in the AI Act – when more than 10²⁵ FLOP is used to train an AI model, it is presumed to be a GPAI model ‘with systemic risk’. The AI Act, however, does not set out a corresponding FLOP metric for determining what constitutes a GPAI model in the first place – or for determining when modified AI models constitute a GPAI model or elevate from a basic GPAI model to one ‘with systemic risk’. This appears to be something that policymakers are now looking to rectify via new guidance.

The legislators’ choice of FLOP as the metric for delineating regulatory obligations under the AI Act is, however, controversial. The AI Office itself said in its working document that “training compute is an imperfect proxy for generality and capabilities”. It said it is looking into whether there are potential alternative metrics that enable a model’s generality and capabilities to be assessed “with relative ease”.

Amsterdam-based technology law expert Wouter Seinen of Pinsent Masons said there are parallels between the tiered distinction the EU AI Act draws – between out-of-scope models, GPAI models and GPAI models ‘with systemic risk’ – and the distinction the EU’s Digital Services Act draws between online platforms and ‘very large’ online platforms.

Seinen said reliance on FLOP as a relevant metric for GPAI is flawed, and that it is unclear both how the Commission plans to audit the computing power used to train a model and how that computing power is informative of the model’s importance or risk level.

“The approach does not appear ‘technology neutral’, which has been the gold standard for tech laws in Europe up until now,” Seinen said. “The Commission subtly acknowledges this where it states that ‘training compute is an imperfect proxy for generality and capabilities’ – and that is an understatement.”

“The proposal prompts the question of how imminent developments such as the rise of AI agents and agentic AI will interplay with the concept of GPAI and the scope of its legal definition, let alone what will happen once AI models are trained using quantum computing, as the FLOP measurement will not make a lot of sense in that use case,” he said.

Standardised ways for businesses to determine the amount of computational resources they use to train or modify AI models are set out in the AI Office’s working document.

The working document also highlights that the Commission is considering a data-related carve-out to regulatory exemptions that apply to AI models made accessible under a free and open-source licence under the GPAI regime. Those exemptions would not apply, according to the proposals, where providers of those models collect personal data “from the use of the model or the accompanying services” – other than for the purpose of using that data “to improve the security, compatibility or interoperability of the software”.

Among the obligations that providers of GPAI models face under the AI Act are duties to put in place an EU law-compliant copyright policy and enable rightsholders to reserve their rights not to have their works used for training. In its working document, the AI Office outlined plans to reduce associated compliance burdens for providers, relating to disclosing the source of data used to train their models, where they place their GPAI models on the market before 2 August 2025.

“The AI Office recognises that in the months following the entry into application of the obligations of providers of general-purpose AI models in the AI Act on 2 August 2025, some providers may face various challenging situations to ensure timely compliance with their obligations under the AI Act,” it said. “Accordingly, the AI Office is dedicated to supporting providers in taking the necessary steps to comply with their obligations.”

“In particular: for general-purpose AI models that have been placed on the market before 2 August 2025, providers must take the necessary steps to comply with their obligations by 2 August 2027. This does not require re-training or unlearning of models already trained before 2 August 2025, where implementation of the measures for copyright compliance is not possible for actions performed in the past, where some of the information for the training data is not available, or where its retrieval would cause the provider disproportionate burden. Such instances must be clearly justified and disclosed in the copyright policy and the summary of the content used for training,” the AI Office added.

According to the AI Office, signatories to the new GPAI code of practice can expect their adherence to the code to be a focus of the Commission’s enforcement activities, once the GPAI regime takes effect. It added that commitments made in a code of practice could also be considered “a mitigating factor” when it is deciding what level of fine to impose for non-compliance.

For providers that elect not to adhere to the code, the AI Office said those businesses will be “expected to demonstrate how they comply with their obligations under the AI Act via other adequate, effective, and proportionate means”. It further suggested that those providers will be subject to “more requests for information and access to conduct model evaluations, since there may be less clarity regarding how they ensure compliance with their obligations under the AI Act”.
