Businesses will welcome the formation of a new international treaty on AI that sets a “global standard” for how AI-related risks and opportunities should be managed, an expert has said.
Maureen Daly of Pinsent Masons in Dublin was commenting after the EU, UK and US signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (AI Convention), which will govern how AI systems are developed and operated in signatory countries. It is the first treaty of its kind in the world.
The signatories, which also include Israel and Norway, have each committed to “ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law”, under the treaty.
Daly said: “While the EU AI Act sets out clear rules for the regulation of AI systems and is the world’s first comprehensive law to address AI risks, the new AI Convention creates a common framework for AI systems applicable to the US, the UK, the EU and the other signatories. This is a welcome development, as it sets a global standard for managing AI’s impact and innovation while safeguarding fundamental rights and values.”
“The Convention creates a legal framework that covers the entire lifecycle of AI systems. By applying obligations to the entirety of the lifecycle, it ensures that the Convention covers not only current but future risks, given the rapid and often unpredictable technological developments. The Convention is a significant step towards shaping AI without compromising the core principles of human rights, democracy and the rule of law which the signatories commit to protect,” she said.
While signatories have a wide degree of freedom over how they interpret and apply the AI Convention, the treaty sets out high-level requirements on issues such as transparency and oversight, accountability and responsibility, equality and non-discrimination, and privacy, as well as safe innovation.
Signatories are required to “adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention”. Those provisions include requirements to put in place “measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of artificial intelligence systems”, such as record keeping duties and complaints procedures.
Frankfurt-based Nils Rauer of Pinsent Masons said legislators around the world that have already enacted laws and passed administrative orders on AI have taken “demonstrably different paths”. As examples of the differences, he cited the Biden administration’s executive order of 30 October 2023, which focuses on the public sector; China’s interim measures on managing generative AI services, which took effect in August 2023; and the former UK government’s so-called “pro-innovation approach” to AI regulation, a principles-based, non-statutory and cross-sector framework. He said the most robust and comprehensive piece of legislation so far is the EU AI Act.
“It remains to be seen whether the Convention will lead to any visible convergence in the sphere of AI regulation,” said Rauer.
The signatory countries must ratify the AI Convention for it to take effect in their jurisdictions.
Shabana Mahmood, Lord Chancellor and justice secretary within the UK government, said: “Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth. However, we must not let AI shape us – we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”
The European Commission has said the AI Convention will be implemented in practice in the EU via the EU AI Act.