Out-Law Analysis

Paris AI summit signals pivot from safety to adoption

Mike Reid Photography/Getty Images


Commercial opportunities for businesses developing and deploying artificial intelligence (AI) systems, as well as for those building the infrastructure to underpin AI use, will emerge from the ‘pro AI adoption’ approach confirmed by policymakers at this week’s AI action summit in Paris.

The summit brought heads of state and other senior government figures from across the globe together with industry leaders and other stakeholders. The focus of – and outcomes from – the summit reflect the enormous potential of AI to deliver economic and social benefits and represent a change of emphasis from previous international AI summits – AI safety was the central theme of the so-called Bletchley summit hosted by the UK government in November 2023, for example.

For the summit’s host, French president Emmanuel Macron, the event was a chance to show off France’s strengths as a technology hub and as a place to invest in AI – ahead of the summit, Macron announced a €109bn investment package for France, predominantly for new data centres. A separate €200bn EU-wide AI investment initiative was announced by European Commission president Ursula von der Leyen, with part of the funds committed towards building four “AI gigafactories” which will be “specialised in training the most complex, very large, AI models”.

Both the French and EU announcements might be viewed as Europe’s response to the US’ Stargate project, under which $500bn of investment has been pledged to deliver new AI infrastructure.

Annabelle Richard

Partner

Some within industry continue to question whether enough is being done to attract private sector investment in AI

The summit provided a platform for businesses to showcase AI innovations and ask for more to be done to support the technology’s adoption on a wider scale. Companies from SAP and AWS to Spotify were there, making the case for why AI solutions can and should be integrated more into the everyday activities of people and organisations.

In support of that, 60 states and territories signed a statement that laid out core priorities such as making innovation in AI thrive and encouraging AI deployment – together with AI accessibility, sustainability, openness, inclusivity, safety and security. The statement further committed the signatories to work together on better coordinating international governance of AI.

China, Australia, Brazil, Korea, South Africa and the EU – as well as individual EU countries such as France, Germany, Spain, the Netherlands, Ireland and Luxembourg – all signed the statement, but the US and UK were notable omissions from the list of signatories.

The precise reasons for the US’ position are unclear, but parts of the statement do appear to run contrary to the direction in which the new Trump administration is heading on AI – immediately on taking office last month, Trump revoked his predecessor Joe Biden’s executive order on the safe, secure and trustworthy development and use of AI.

The US approach to AI development and regulation under Trump is likely to be shaped by a new AI action plan that is under development – US agencies recently issued a request for information, seeking input from industry and other stakeholders on what the plan should include. The action plan will, the Trump administration said, “define the priority policy actions needed to sustain and enhance America's AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation”.

US vice-president JD Vance used his appearance at the summit to relay concerns that policymakers elsewhere in the world are imposing just such unnecessarily burdensome requirements on AI, singling out the EU’s approach to tech regulation – regulation that some consider is designed to keep major US technology companies on a leash, at least in respect of their EU operations.

“We want to embark on the AI revolution before us with the spirit of openness and collaboration, but to create that kind of trust we need international regulatory regimes that foster creation,” Vance said, according to Politico. “To restrict its development now would not only unfairly benefit incumbents in the space, it would mean paralysing one of the most promising technologies we have seen in generations,” he added.

Sarah Cameron

Legal Director

The UK’s decision not to sign the Paris statement risks being viewed as damaging to its leading role in AI collaboration

The UK’s decision not to sign the statement might be best viewed in the wider geopolitical context. In pursuit of its growth mission, the new Labour government is simultaneously seeking to repair relations with EU countries post-Brexit and position the UK to benefit from a US trade policy reset under Trump.

The UK’s technology industry may consider that it is no bad thing for the UK government to be seen to be putting clear blue water between its approach to AI development and that of the EU. The UK has already adopted a regulatory approach to AI that differs from the EU’s, and its recent AI opportunities action plan – together with the growth mandates given to regulators and planning reforms aimed at supporting new infrastructure such as data centres – represents an attempt to shift the perception that developing, using and investing in AI in the UK is harder than it should be.

However, the UK’s decision not to sign the Paris statement risks being viewed as damaging to its leading role in AI collaboration.

The UK was the architect of the Bletchley declaration on AI safety and has been a pioneer of AI safety testing regimes and partnerships globally. Its leadership of the Bletchley safety summit helped spur a recent international AI safety report, which suggested that international collaboration is needed if the opportunities AI presents are to be seized, and which warned that the improving capabilities and widespread use of general-purpose AI (GPAI) models are creating emerging and unforeseen risks for the public, organisations and governments. In this light, some may view the failure to sign the Paris statement, without a clear explanation of why, as undermining any claim the UK might have to be the standard-bearer for trustworthy, safe and ethical AI use and development.

That said, with the Labour government having recently introduced a new AI cybersecurity code of practice, remaining committed to a sectoral approach to AI regulation, at least for now, and having further pledged to impose statutory guardrails on the most powerful AI systems, perhaps this is more about rhetoric and signalling to the market and trade partners than a major shift in policy. According to the BBC, a UK government spokesperson cited concerns about national security and global governance as reasons for the UK not signing the statement: “This isn't about the US, this is about our own national interest, ensuring the balance between opportunity and security,” the spokesperson said.

Diane Mullenex

Partner

In France … there is concern that a new finance law that has been passing through the country’s parliament will disincentivise AI-related entrepreneurship

For France and the wider EU, the summit left delegates with mixed perspectives. While the shift in focus towards facilitating AI adoption and the respective announcements of AI-related investment packages were welcomed, some within industry continue to question whether enough is being done to attract private sector investment in AI.

In France specifically, for example, there is concern that a new finance law passing through the country’s parliament will disincentivise AI-related entrepreneurship. Under the plans, entrepreneurs behind innovative AI start-ups could face heavy taxation on shares they have been gifted or have acquired, including on the gains they realise when they dispose of those shares on selling their stake in the business.

The broader regulatory framework for AI in the EU also continues to attract criticism, not only from US-based businesses and the Trump administration but from industry in Europe too. The need for change was highlighted in a report into EU competitiveness last September, in which Mario Draghi, the former European Central Bank president, cited the “additional regulatory requirements on general purpose AI models” included in the EU AI Act as an example of the EU’s “precautionary approach” to the regulation of technology companies. He said this regulatory stance “hinders innovation”.

At the time, European Commission president Ursula von der Leyen pledged to act on the report – clear signs of what that might mean for the development and use of AI in the EU in future emerged this week.

Dr. Nils Rauer, MJI

Rechtsanwalt, Partner

Businesses … should proceed with compliance efforts based on the AI Act’s rules already in effect and due to take effect in due course

Beyond the €200bn AI investment initiative announced, von der Leyen pledged to “cut red tape” in relation to AI. Her Commission’s new work programme for 2025, released on Tuesday, includes plans for “simplification” of the EU’s digital policy rulebook. The Commission has further pledged to reconsider the case for the introduction of a new AI liability directive – proposals it tabled to that effect in 2022 have stalled under scrutiny from EU lawmakers, during which time separate, revised EU product liability laws encompassing AI software have come into force.

For many businesses, the prospect of the EU AI Act being streamlined will be welcome, but those that have already undertaken significant work to achieve compliance will be concerned that at least some of that work may turn out to have been unnecessary.

The provisions of the EU AI Act, dubbed the world’s first law specific to AI, are coming into effect in phases. The first set of substantive rules took effect on 2 February 2025 and prohibits certain AI practices entirely. The Commission issued guidelines last week to help businesses better understand the new regime on prohibited AI practices.

Andre Walter

Legal Director

A good starting point for businesses is determining which of the solutions they use qualify as an ‘AI system’

While businesses will want to monitor closely what the Commission’s ‘simplification’ initiative will mean in practice for AI regulation in the EU, they should proceed with compliance efforts based on the AI Act’s rules already in effect and those due to take effect in due course – rules relevant to GPAI models are scheduled to take effect in August, for example.

In practical terms, a good starting point for businesses is determining which of the solutions they use qualify as an ‘AI system’ for the purposes of the AI Act; assessing whether they would be considered a provider, deployer, importer or distributor of those systems; and establishing whether those systems are prohibited – or would otherwise constitute, for example, a ‘high-risk’ AI system or a GPAI model ‘with systemic risk’, the categories subject to the strictest regulation under the legislation. A simple sketch of that triage follows.
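To make that triage concrete, the sketch below models the first-pass questions as a simple, structured checklist in Python. It is purely illustrative: the class names, fields and risk tiers are our own simplification for the purposes of this example, not an official taxonomy from the AI Act, and its output is no substitute for a proper legal assessment.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Role(Enum):
    """Roles the AI Act recognises for actors in the AI value chain."""
    PROVIDER = auto()
    DEPLOYER = auto()
    IMPORTER = auto()
    DISTRIBUTOR = auto()


class RiskTier(Enum):
    """Simplified risk buckets, for illustration only."""
    PROHIBITED = auto()     # banned practices (rules in force since 2 February 2025)
    GPAI_SYSTEMIC = auto()  # GPAI model 'with systemic risk'
    HIGH_RISK = auto()      # 'high-risk' AI system
    OTHER = auto()          # remaining transparency/minimal-risk obligations


@dataclass
class SolutionAssessment:
    """Illustrative record of the first-pass questions described above."""
    name: str
    is_ai_system: bool  # does the solution meet the Act's 'AI system' definition?
    roles: set[Role] = field(default_factory=set)
    prohibited_practice: bool = False
    gpai_with_systemic_risk: bool = False
    high_risk_use_case: bool = False

    def risk_tier(self) -> RiskTier | None:
        if not self.is_ai_system:
            return None  # outside the Act's AI-system rules altogether
        if self.prohibited_practice:
            return RiskTier.PROHIBITED
        if self.gpai_with_systemic_risk:
            return RiskTier.GPAI_SYSTEMIC
        if self.high_risk_use_case:
            return RiskTier.HIGH_RISK
        return RiskTier.OTHER


if __name__ == "__main__":
    # Hypothetical example: a firm deploying a third-party recruitment screening tool.
    tool = SolutionAssessment(
        name="cv-screening-tool",
        is_ai_system=True,
        roles={Role.DEPLOYER},
        high_risk_use_case=True,  # assumption for illustration only
    )
    print(tool.name, [r.name for r in tool.roles], tool.risk_tier())
```

Whether a given solution actually meets the Act’s definitions will of course turn on detailed legal analysis, but structuring the questions in this way helps ensure that every solution in an inventory is asked the same questions in the same order.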

In the absence of any case law under the new legislation, the Commission’s guidance on the definition of AI systems is instructive for businesses. While it emphasises that the understanding of what constitutes an AI system must be broad rather than narrow, it breaks down and elaborates on the various criteria relevant to the way an AI system is defined in the AI Act. Although the guidelines, along with the separate guidance on prohibited AI practices, are only in draft form, substantial changes are not expected during the formal adoption process.

There is, however, some apparent inconsistency between the AI Act and the Commission’s guidelines that businesses will want to consider.

Maureen Daly

Partner

The purpose of the guidelines was to provide clarity, but the Commission’s intervention causes some confusion

For example, under the guidelines, systems used to improve mathematical optimisation or to accelerate and approximate traditional, well-established optimisation methods fall outside the scope of the AI system definition. This is because, the Commission said, while those models have the capacity to infer, they do not transcend ‘basic data processing’, and “an indication that a system does not transcend basic data processing could be that it has been used in consolidated manner for many years”. This contradicts Recital 12 of the EU AI Act, which states that “the capacity of an AI system to infer transcends basic data processing”. In addition, a reading of the AI Act would suggest that the length of time a system has been used is irrelevant to whether it meets the definition of an AI system under the legislation.

Also excluded from the scope of the AI system definition, according to the guidelines, are simple prediction systems. Under the definition in the AI Act, however, complexity is not a factor in determining whether a system is an AI system.

The purpose of the guidelines was to provide clarity, but the Commission’s intervention causes some confusion. As the guidelines are not binding, businesses will hope that national regulators can do better in clarifying what an AI system is, to aid compliance efforts amid the changing regulatory landscape in which they operate.

Annabelle Richard and Diane Mullenex attended the AI action summit in Paris. Co-written by Nils Rauer, Andre Walter, Maureen Daly, Sarah Cameron and Mark Ferguson of Pinsent Masons.
