Concrete steps taken by arbitral institutions and tribunals to enable the use of AI must be underpinned by robust governance by businesses engaging in arbitration, if those businesses are to realise the benefits of the technology without exposing themselves to legal risks.

Arbitral institutions are operationalising AI through structured partnerships and training, reflecting a shift in user expectations around the speed and cost of resolving disputes and the handling of large evidential records.

For businesses, AI tools offer the promise of shortening the time it takes to undertake legal research and drafting and reducing friction in document-heavy cases. However, there are risks: if parties to arbitration proceedings, and their counsel, do not have appropriate checks and balances in place, they can inadvertently compromise confidentiality and privilege, import errors into submissions, or trigger due process challenges where an AI tool has effectively influenced reasoning without adequate transparency.

Below, we look at some recent AI-related developments in arbitration and identify actions businesses should take in response.

Institutions embrace AI

Major arbitration bodies across the globe have taken concrete, public steps to integrate AI tools into arbitration processes in recent months.

The London Court of International Arbitration (LCIA) announced a strategic collaboration with legal tech provider Jus Mundi which, among other things, will involve the bodies exploring “arbitration-specific AI-driven solutions that enhance legal research, case analysis, and arbitration processes”. The bodies said Jus Mundi’s AI tools will be used “to support LCIA workflows, focusing on practical applications that strengthen institutional operations and enhance the user experience”. Under the partnership, the LCIA and Jus Mundi will also collaborate over AI-related education, training and research.

The LCIA-Jus Mundi tie-up follows on from similar announcements made by other institutions.

For example, in June 2025, the Dubai International Arbitration Centre (DIAC) announced its own partnership with Jus Mundi. Among other things, the partnership gives the DIAC access to Jus Mundi’s AI-driven legal research and workflow solution, Jus AI, to help streamline case management processes. The partnership also enables AI-related training specific to arbitration practices and the publication of DIAC decisions on Jus Mundi’s platform.

In May, the Hong Kong International Arbitration Centre (HKIAC) unveiled ‘The Hub’, which is essentially a forum for connecting arbitrators with legal technology providers to encourage, accelerate and improve the adoption of AI in arbitration processes.

Also, at the beginning of 2025, the American Arbitration Association and its international arm (AAA-ICDR) confirmed a partnership with legal tech company Clearbrief, under which Clearbrief will provide AI-powered drafting and evidence handling tools to AAA-ICDR panel arbitrators and mediators. The move followed a six-month pilot, which AAA-ICDR said “demonstrated that Clearbrief enhanced efficiency and accuracy while reducing the time and costs associated with traditional dispute resolution workflows”, adding that “arbitrators and mediators could instantly generate timelines, search and summarise evidence, verify facts and laws, and hyperlink citations within draft awards”.

The direction of travel is clear: institutions are equipping tribunals and practitioners with AI to aid more effective document and case management. This should reduce cost and improve the quality of arbitral awards, but only if the profession pairs these tools with disciplined verification and sensible safeguards around confidentiality and procedural fairness.

The risk of hallucinations

The need for businesses engaged in arbitration to apply AI safeguards comes at a time of rapid growth in the use of both popular generative-AI tools for legal research and summarising, and specialist legal tech tools, such as CoCounsel, Harvey, Libra and Legora, for drafting, translation and document management. A number of cases that have come before the courts in recent times offer sharp lessons about the consequences of using AI without sufficient human oversight.

In 2023, in the early days of generative AI, a US court sanctioned two US lawyers and their law firm after they filed legal submissions containing citations to cases that did not exist, which were reportedly generated by ChatGPT.

This risk of ‘hallucinations’ – whereby false or misleading information is presented as fact – has materialised in other cases since, including, as we reported in November 2025, before California’s Court of Appeal. The court imposed a $10,000 sanction on a lawyer who filed two appellate briefs containing fabricated case citations generated by ChatGPT. The court declined to award legal fees or costs to opposing counsel, since they had neither detected the fake citations in the first place nor reported them to the court.

There have been other such examples globally.

In June 2025, the High Court in London said that freely available tools, such as ChatGPT, are “not capable of conducting reliable legal research” because, while they produce seemingly plausible responses to prompts, those responses may turn out to be incorrect as the AI may hallucinate case law. The court warned that the misuse of AI has “serious implications for the administration of justice and public confidence in the justice system”, and that those who use AI for legal research therefore have a duty to verify its output before relying on it in legal documentation or legal advice.

The judge, Dame Victoria Sharp, said that lawyers who refer to non-existent cases are likely to be in breach of their duty not to mislead the court. Punishment, she said, can range from public censure, imposition of adverse costs orders, and potential disbarment by regulators, to imprisonment “in the most egregious cases”.

In August 2025, an Australian criminal barrister apologised after filing submissions with false AI-generated content in a murder trial. According to ABC, the judge in the case said: “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified”.

In November 2025, the Qatar Financial Centre Civil and Commercial Court held a lawyer practising in Dubai in contempt of court and in breach of court rules for citing “fake cases” purporting to be decisions of the court in an application seeking an extension of time to serve a defence on behalf of their client.

The court in that case said there was “no reasonable excuse” for the lawyer’s behaviour, adding that referring to a case citation originating from an AI system without first checking the accuracy of that citation “ordinarily amounts to reckless conduct”. The case prompted the court to consult on a new practice direction governing the use of AI in cases before it.

More emphasis on guidelines

The risks AI use poses in the context of arbitration are not limited to hallucinations and the resulting threat that fake or misleading citations pose to procedural fairness and, if initially undetected, to the subsequent enforceability of arbitral awards. There is also a risk that personal data, confidential information, and even privileged information is inadvertently disclosed in submissions.

To this end, some arbitral bodies have decided to act.

For example, in March 2025, the Chartered Institute of Arbitrators (CIArb), a professional body for alternative dispute resolution practitioners, launched its guideline on the use of AI in arbitration – a document it subsequently updated in September. Among other things, the guideline provides a framework to assist arbitrators in deciding whether to impose a duty on parties to disclose the use of AI in the preparation of their case.

Also in March 2025, the AAA‑ICDR issued guidance for arbitrators’ use of AI in which it emphasised cross-verification of AI outputs; due process and confidentiality; human control over decision-making; transparency over AI use; and data protection.

The China International Economic and Trade Arbitration Commission (CIETAC) also became the first major arbitral institution in the Asia-Pacific region to publish AI guidelines, in 2025. Among other things, CIETAC said AI can be used to support, but must not replace, human decision-making, and confirmed that use of AI tools does not relieve parties of their responsibility to guarantee the authenticity and legality of the evidence and arbitration-related documents they submit.

CIETAC’s guidelines follow the publication of similar guidance by the Silicon Valley Arbitration & Mediation Center and the Stockholm Chamber of Commerce in 2024.

What the future holds and actions for businesses

Aside from case preparation, AI is being adopted as a forecasting tool – for example, to predict case outcomes and assess arbitrator tendencies. Whilst this area is still developing – and is limited compared with similar efforts in litigation, due to confidentiality and the smaller pool of publicly available awards – it is inevitable that AI will be used as a predictive tool.

The intersection of AI, smart contracts and arbitration is also gaining traction, though regulatory and enforceability challenges remain. Confidentiality risks linked to generative-AI use have also been examined in academic commentary, which has called for “confidentiality by design” in arbitration AI tools.

What all this shows is that, for businesses and state entities that arbitrate, for in-house counsel managing disputes budgets and disclosure risk, and for legal teams conducting document-heavy proceedings, AI literacy is becoming part of arbitration best practice. Parties that take a proactive approach, setting clear protocols and agreeing guardrails early, will capture the efficiency benefits while reducing the risk of satellite disputes about process, privilege or enforceability.

To do this, businesses need an AI protocol for disputes. They should agree protocols with internal and external counsel on which AI tools may be used, and ensure no confidential material is input into non-approved systems.

Businesses should also consider raising any guardrails with the tribunal early – for example, transparency about AI-assisted drafting, restrictions on uploading party materials to third-party tools, and safeguards to protect privilege and confidentiality – and should build in time for rigorous human verification, especially to guard against hallucinated authorities or mis-citations.

Taking these steps will help ensure that efficiency gains do not come at the cost of enforceability.

Co-written by Melissa McLaren of Pinsent Masons. 