Out-Law Analysis

EU AI Act rules on GPAI models under DeepSeek review


DeepSeek sprang to prominence in January. Justin Sullivan/Getty Images.


The mainstream emergence of Chinese AI app DeepSeek is causing EU policymakers to consider changes to the EU AI Act, Out-Law can reveal.

An update of a threshold measure of computing power specified in the regulation could follow, with potential implications for the regulation of other general-purpose AI (GPAI) models.

Below, we explore what the law provides in more detail and the specific changes under consideration, and reflect on what those changes reveal about how policymakers are responding to market developments, to help providers understand which category of regulation their GPAI models might fall into.

GPAI models and how ‘systemic risk’ is determined

GPAI models are AI models that can perform a wide range of tasks and often form the basis of other AI systems. Large language models (LLMs) are an example of GPAI models.

Rules specific to providers of GPAI models are set out in Chapter V of the AI Act and take effect on 2 August 2025.

The strictest rules under Chapter V apply to providers of GPAI models ‘with systemic risk’. Understanding whether a GPAI model will be classed as a GPAI model ‘with systemic risk’ is an essential first step for AI developers in understanding their obligations under the AI Act, but it is not a straightforward process.

The concept of ‘systemic risk’ is defined in the Act. It means “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the [EU] market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”.

Article 51(1) addresses how GPAI models ‘with systemic risk’ will be classified. There are two ways in which this can happen:

  • if the model “has high impact capabilities” – something which is “evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks”; or
  • if the European Commission decides the model has equivalent impact or capabilities – based on criteria set out in an annex to the Act, which includes factors such as the number of parameters of the model, the quality or size of the data set it was built on, the number of registered end-users, and amount of computation used for training the model.

The relevance of ‘FLOPS’ and how DeepSeek changes things

Floating-point operations, or FLOPS, are a measure of computing power. The Act defines a floating-point operation as “any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base”.

Article 51(2) states that a GPAI model will be presumed to have ‘high impact capabilities’ when more than 10²⁵ FLOPS is used to train the model.

Recitals in the AI Act make clear that providers of GPAI models should know when they have exceeded the FLOPS threshold in advance of the development of those models being completed. This is because, the text states, “training of general-purpose AI models takes considerable planning which includes the upfront allocation of compute resources”.
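To illustrate why an upfront assessment is feasible, the minimal Python sketch below applies the widely used rule of thumb that training a dense transformer model consumes roughly six floating-point operations per parameter per training token, and compares the resulting estimate with the 10²⁵ FLOPS presumption threshold. The heuristic and the model sizes shown are illustrative assumptions, not figures drawn from the Act or from any official guidance.

# Back-of-envelope estimate of training compute against the AI Act's
# 10^25 FLOP presumption threshold (Article 51(2)).
# The "6 * parameters * tokens" heuristic is a common rule of thumb for
# dense transformer training, not a method prescribed by the Act; the
# example figures below are purely illustrative.

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 floating-point operations per parameter per token."""
    return 6 * parameters * training_tokens

for name, params, tokens in [
    ("illustrative 70B-parameter model, 15T tokens", 70e9, 15e12),
    ("illustrative 400B-parameter model, 15T tokens", 400e9, 15e12),
]:
    flops = estimated_training_flops(params, tokens)
    presumed = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs -> {presumed} the 10^25 threshold")

On these assumptions, the smaller model would fall below the presumption threshold while the larger one would exceed it, which is the kind of planning-stage calculation the recitals envisage providers making before training begins.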

Providers are expected to notify the EU AI Office within two weeks of meeting the threshold or becoming aware of meeting the threshold. It is open to providers that meet the FLOPS threshold to argue that their models should nevertheless not be classed as GPAI models ‘with systemic risk’ – on the basis that the model “exceptionally does not present systemic risks”.

Under Article 51(3), the European Commission has a statutory duty to update the thresholds for classifying GPAI models ‘with systemic risk’ when necessary to ensure they continue to reflect the latest technology or industry practices – including in relation to FLOPS.

Out-Law can reveal that the mainstream emergence earlier this year of DeepSeek – the Chinese open-source model that the company behind it claims to have developed at a fraction of the cost of other LLMs on the market, and without access to the same computing power – has already spurred discussions within the Commission in this regard.

Commission spokesperson for tech sovereignty, Thomas Regnier, told Out-Law: “The Commission is always monitoring market developments – and technology developments in the wider sense – to assess potential impacts on the EU, its market and citizens. We are currently seeing two developments here – large numbers of models are likely trained with compute resources above the threshold, while DeepSeek has shown that frontier capabilities can also be reached with less compute. As foreseen by the AI Act, the threshold should be adjusted over time to reflect technological and industrial changes, and should be supplemented with benchmarks and indicators for model capability.”

“Additionally, the AI Office can designate models with systemic risk based on capabilities. The AI Office is engaging with the scientific community, industry, civil society, and other experts in assessing the situation and considering the appropriate course of action in each case,” he added.

In March, the European Commission said that the AI Office “intends to provide further clarifications on how general-purpose AI models will be classified as general-purpose AI models with systemic risk”. It said the AI Office would draw on “insights from the Commission’s Joint Research Centre which is currently working on a scientific research project addressing this and other questions”.

The AI Act itself envisages codes of practice being developed to “establish a risk taxonomy of the type and nature of the systemic risks at [EU] level, including their sources”.

What can GPAI model providers expect next?

While it remains to be seen how the Commission responds to the two developments that Thomas Regnier flagged in his statement, his comments need to be considered in the wider geopolitical context.

Last year the “additional regulatory requirements on general purpose AI models” included in the AI Act were cited as an example of the EU’s innovation-hindering precautionary regulation of technology companies. Those comments were included in a report prepared on behalf of the Commission by Mario Draghi, the former European Central Bank president, which flagged wider concerns about the EU’s competitiveness in the global marketplace.

The EU’s approach to tech regulation has been a particular gripe of multiple US administrations, but criticism of EU policy has intensified since Donald Trump returned to the White House in January. US vice president JD Vance has described the EU approach as restrictive and paralysing and as hindering AI development.

In response to Vance and the Draghi report, and in line with the global pivot from focusing on AI safety to AI adoption, Commission president Ursula von der Leyen has pledged to “cut red tape” in relation to AI, with proposals for “simplification” of the EU’s digital policy rulebook expected over the coming months.

In this context, an increase in the FLOPS threshold, to reduce the “large numbers of models” that Regnier suggested would currently be presumed to be GPAI models with systemic risk, would be entirely consistent with the apparent move to reduce regulatory burdens around AI within the EU.

On the other hand, reducing the FLOPS threshold would represent a recognition by the Commission of DeepSeek’s impact and the likely effect it will have on other developers as they explore how to reduce compute demands and thereby lower development costs.

As Regnier alluded to, however, the Commission has broad scope under the AI Act to designate GPAI models with equivalent impact or capabilities as those with ‘high impact capabilities’, so that they too are regulated as GPAI models with systemic risk. It seems highly likely that a mainstream app like DeepSeek would be considered for designation as a GPAI model with systemic risk, whether it meets the FLOPS threshold or not.

How the Commission decides to act will have repercussions not just for DeepSeek but for many other model providers too. They will be interested in the product of the AI Office’s work to clarify how GPAI models will be classified as GPAI models with systemic risk.

The distinction between GPAI models and GPAI models with systemic risk is important because of the additional regulatory requirements falling on the latter.

Two-tier regulation and the GPAI code of practice

Providers of all GPAI models face record-keeping, transparency and copyright-related obligations, subject to exceptions applicable to providers of certain GPAI models released under a free and open-source licence.

For example, they must:

  • draw up and maintain the technical documentation of the model, including its training and testing process and the results of its evaluation – and make it available to regulators on request;
  • draw up, maintain and make available information to help providers of AI systems integrate their systems with the model;
  • put in place an EU law-compliant copyright policy and enable rightsholders to reserve their rights not to have their works used for training;
  • publish a sufficiently detailed summary about the content used for training of the general-purpose AI model.

Where GPAI models are classified as GPAI models with systemic risk, their providers face additional obligations. The exceptions for open-source models outlined above do not apply where GPAI models are ‘with systemic risk’.

The additional obligations on providers of GPAI models with systemic risk include requirements to:

  • perform model evaluation, including through adversarial testing, with a view to identifying and mitigating systemic risks;
  • assess and mitigate possible systemic risks at EU level, including their sources, that may stem from the development, the placing on the market, or the use of GPAI models with systemic risk;
  • keep track of, document, and report, without undue delay, serious incidents and possible corrective measures to address them; and
  • ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.

An important tool to help providers comply with the AI Act’s GPAI models regime will be the GPAI code of practice.

The code, which sets out more detail on each of the various requirements for providers, is currently in the final stages of development: the third draft was published in March and the final version is due for publication in May. The AI Act provides that, while compliance with the code is not mandatory, it will help providers demonstrate compliance with the Act’s Chapter V requirements.

Publication of the finalised code is likely to be the catalyst for a major compliance exercise. GPAI model providers, whose compliance work should already be underway on the basis of the draft codes published to date, will be looking to the AI Office or Commission for clarifications around how GPAI models with systemic risk will be classified, to inform this activity.

Developers should be aware that GPAI models can be classed as ‘high-risk’ AI systems under the EU AI Act, which would further significantly extend their regulatory obligations.
