Out-Law Analysis
07 Feb 2025, 3:32 pm
The AI action summit in Paris will not yield the single global rulebook that businesses desire for deploying artificial intelligence (AI) systems – but it will help them better understand the different ways in which policymakers and regulators around the world view AI governance, and inform their approach to AI adoption in individual markets.
The Paris summit, on Monday 10 and Tuesday 11 February, is the latest in a series of events at which heads of state and other senior government figures from across the globe have gathered with industry leaders and other stakeholders to share views and best practice on AI issues and to try to build some form of consensus over the actions that should be taken in response.
The so-called ‘Bletchley declaration’, an international accord that recognises the need for AI development and use to be “human-centric, trustworthy and responsible”, was the first tangible evidence of work in this area. The declaration was brokered by the UK government at the inaugural global AI summit it hosted in autumn 2023. The US, China, UK, EU, Australia, France, Germany, India, Singapore, and the UAE were among the 29 signatories to the declaration.
The UK summit also spawned a deal between governments in 10 countries and leading technology companies – Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI – over pre-market AI testing. Thereafter, the US and UK formed their own AI safety testing partnership, which among other things provides for closer ties between the UK AI Safety Institute and its namesake in the US.
In May 2024, the second global AI summit was held in Seoul, South Korea. One of the outcomes from the summit was an expansion of the AI safety testing arrangements that had been agreed six months earlier, to address next-generation AI systems, known as frontier AI.
The Paris summit will bring together almost 100 countries and over 1,000 private sector and civil society representatives from across the world. Discussion will be focused around five core themes: public interest AI; the future of work; innovation and culture; trust in AI; and global AI governance.
Diane Mullenex and Annabelle Richard of Pinsent Masons will be participating in events at the AI action summit in Paris. Among other things, Richard will be moderating a panel discussion at an event focused on AI risks and safety hosted by Renaissance Numérique, while Mullenex will be attending the ‘Business Day’ event at Station F and a networking event hosted by Mozilla.
The Paris summit falls at an important moment, with recent political changes already resulting in apparent divergence over the way in which AI use will be governed in different jurisdictions over the coming years.
The early movers on regulating AI were EU policymakers and lawmakers. The world’s first AI law, the EU AI Act, entered into force last summer. The regulation not only prohibits some types and uses of AI altogether – rules that took effect earlier this month, with related guidelines issued at the time for businesses – but also imposes strict requirements on ‘high-risk’ AI systems as well as ‘general-purpose AI models’ deemed to pose ‘systemic risk’.
The AI Act forms part of a broader suite of legislation impacting technology companies – a package that has drawn criticism. In his report into EU competitiveness last September, for example, Mario Draghi, the former European Central Bank president, cited “additional regulatory requirements on general purpose AI models” included in the AI Act as an example of the EU’s “precautionary approach” to the regulation of technology companies. He said this regulatory stance “hinders innovation”.
So far, UK policymakers have resisted the temptation to introduce specific new legislation for AI. Instead, use of AI has been governed by existing legislation, such as data protection or consumer protection laws, and sectoral regulation. However, regulators must have due regard to five cross-sector principles when carrying out their functions as they relate to AI – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The UK’s approach, developed under the previous Conservative government, could change, however, in light of recent recommendations made to the new Labour government by a prominent tech entrepreneur. In his AI opportunities action plan, Matt Clifford said the UK should move to a more centralised system for the regulation of AI if sector regulators fail to sufficiently promote innovation in the way they carry out their duties. The government endorsed all 50 of Clifford’s recommendations and committed to a series of actions – mainly over the next 12 months – to implement them. It is also intent on bringing forward new legislation this year to specifically regulate frontier AI.
For its part, the US’ approach to AI regulation is already changing. Since returning to office last month, US president Donald Trump has, via an executive order, revoked the 2023 executive order issued by Joe Biden that promoted safe, secure, and trustworthy development and use of AI. With tech innovator Elon Musk entrenched in Trump’s new administration and Trump intent on delivering growth via an ‘America first’ trade and policy agenda, it remains to be seen what the US approach to AI governance will be over the next four years. Some clarity, on the federal approach at least, could come from the Paris summit, which US vice-president JD Vance is expected to attend.
The very different approaches to AI governance in the EU, US and UK are also reflected in the varying degrees of appetite for, and progress towards, AI regulation in other parts of the world. For example, while South Korea has followed the EU in adopting an AI law and others like Canada and China are developing rulebooks of their own, Australia and Hong Kong have preferred to apply existing regulatory regimes or guidance.
For global businesses either developing or deploying AI solutions, the evolving, patchwork nature of AI regulation across markets poses significant challenges. The Paris summit should help business leaders better understand where the global conversation around AI governance has reached amidst such recent political change and differing ideological views on what good governance looks like in relation to AI. That conversation will not end when the summit closes on Tuesday, but the discussions at the event, and the outcomes from it, should help inform the strategic actions and decisions businesses take at a local level on AI development and use.
Co-written by Mark Ferguson of Pinsent Masons.