Out-Law News

Guide to using AI-generated content in DIFC court proceedings issued


Parties involved in litigation before the Dubai International Financial Centre (DIFC) courts must obtain the agreement of the other parties to the use of “AI-generated content” in those cases, or obtain a court determination approving such use, according to new guidelines issued by the DIFC courts.

Dispute resolution specialists at Pinsent Masons welcomed publication of the guidance on the use of ‘large language models’ (LLMs) and ‘generative content generators’ (GCGs) in DIFC Court proceedings.

Melissa McLaren of Pinsent Masons said: “Emphasis has been placed on disclosure of the intention to use AI at an early stage in proceedings, along with a clear onus on the parties to be transparent with one another and the court when adopting such content, including its potential limitations.”

Amelia Cave of Pinsent Masons said: “The DIFC courts have a reputation for being innovative, forward-thinking and embracing new technologies. This new guidance attempts to strike a balance between allowing parties to benefit from appropriate use of AI-generated content in proceedings, whilst safeguarding them from the inevitable risks this technology represents.”

In its guidance, the DIFC courts said: “Parties should declare at the earliest possible opportunity if they have used or intend to use AI-generated content during any part of proceedings. Any issues or concerns expressed by either party in respect of the use of AI should be resolved no later than the Case Management Conference stage. Early disclosure of the use or intention to use AI gives all parties the opportunity to raise any concerns they might have or to provide their consent to such use. It also provides the Courts with the opportunity to provide any necessary case management orders on the reliance on AI-generated content during proceedings.”

The new guidance includes a series of best practices and principles that parties are invited to consider, with transparency over the use of AI-generated content in DIFC court proceedings being one of the issues promoted. In this respect, parties are advised to disclose not just their use of AI systems but also “the source of the AI-generated content, and any potential limitations or biases associated with the AI system”. The guidance further notes that parties should not wait until shortly before trial, or the trial itself, to reveal that they intend to use AI-generated content as this may lead to requests for adjournments and the loss of trial dates.

Another principle promoted by the DIFC courts in respect of the use of AI-generated content in proceedings is accuracy and reliability. According to the guidance, parties should evaluate the reliability of AI-generated evidence before it is submitted to the court. On this point, the DIFC courts said that “the AI's training data, algorithms, and potential for bias or inaccuracies” should be taken into account, and warned that they have the power to “reject any content generated through AI-systems”.

Parties have also been urged to avoid being “overly reliant” on LLMs and GCGs to produce documents for legal proceedings. The courts said that “technology should only be used to assist parties in putting forward submissions and not to replace the integral human decision-making that is required when preparing evidence or submissions to the Courts”.

The DIFC courts also advised legal practitioners to be mindful of their professional obligations when using AI-generated content in DIFC court proceedings, and further urged parties to ensure that their use of such content complies with data protection and intellectual property laws, among other legal requirements.

In an introductory statement to the new guidance, the DIFC courts recognised that the use of LLMs and GCGs is becoming “more commonplace across the legal industry and can significantly assist in the preparation and presentation of cases by saving time and costs”, but said that parties need to consider the “potential risks associated with the use of such technologies”.

Such risks were said to include: “providing misleading or incorrect information/evidence to the Courts and other parties; breaching client confidentiality; infringing intellectual property rights; and breaching relevant data protection legislation”.
