
Out-Law News

CIArb releases new guidelines for AI use in international arbitration


The Chartered Institute of Arbitrators (CIArb) has issued new guidelines on the use of artificial intelligence (AI) in international arbitration, highlighting the risks and compliance issues the technology poses for alternative dispute resolution practitioners.

As the use of AI tools grows in the legal industry, how to regulate AI in dispute resolution proceedings has become one of the most debated topics of recent years. Many jurisdictions have introduced rules on the use of AI in court proceedings, including, most recently, Singapore.

As of 2024, only two arbitration bodies had issued guidelines for the use of AI in international arbitration proceedings: the Silicon Valley Arbitration & Mediation Centre in April 2024 and the Stockholm Chamber of Commerce (SCC), whose guidance is limited to SCC arbitrations.

With these new guidelines (29-page / 4.02MB PDF), CIArb, a professional body for alternative dispute resolution practitioners, aims to provide all arbitration users, in both institutional and ad hoc proceedings, with a comprehensive framework for the effective and ethical use of AI tools.

Mohammed Talib, an expert in international arbitration at Pinsent Masons, said: “The guidelines highlight party autonomy as the key touchstone in arbitration: parties may agree on whether and how AI is used in arbitration, subject to applicable laws and regulations.”

“Arbitrators are encouraged to ascertain if parties have addressed AI use already,” he said.

“When there is no such agreement on AI, arbitrators are encouraged to invite parties to express their views and have a discussion around the use of AI.”

The guidelines cover the benefits of the technology and the associated risks, and include general recommendations on the use of AI in arbitration proceedings. They also offer guidance on the arbitrators’ powers to regulate the parties’ use of AI and recommendations regarding the arbitrators’ own use of AI.

Johanne Brocas, an international arbitration expert at Pinsent Masons, said: “Imposing a duty to disclose the use of AI is a thorny issue where jurisdictions have divergent approaches.”

“The CIArb guidelines provide a useful framework to assist arbitrators with the decision to impose a duty to disclose, in particular defining criteria for where a duty to disclose should be envisaged,” she said.

“The guidelines also give guidance on maintaining the duty to disclose throughout the proceedings and possibly imposing sanctions for failure to comply, including adverse inferences.”

The guidelines also anticipate the duties that may be imposed on arbitrators under the EU AI Act, most of whose provisions will apply from August 2026.

Talib said: “The guidelines underline that arbitrators have powers and responsibilities with regard to the use of AI, and that they may also use AI to enhance efficiency and decision-making quality.”

“However, the guidelines are clear that arbitrators must not relinquish decision-making powers to AI,” he said.

“Arbitrators should independently verify AI-generated information and maintain a critical perspective. They assume full responsibility for all aspects of an award, regardless of any AI assistance they may have obtained.”

The guidelines state that, to further transparency, arbitrators should consult the parties about their intended use of AI. If the parties disagree with arbitrators’ use of specific AI tools, arbitrators should refrain from using them, and arbitrators sitting on a tribunal should consult their fellow arbitrators about AI use. The guidelines also provide template documents in the appendices.

Brocas said: “As AI continues to evolve, these guidelines will serve as a central resource for parties, counsel and arbitrators alike, helping them to navigate the complexities of integrating AI into dispute resolution processes.”
