Artificial intelligence in legal limbo: Alberta’s courts are the latest to provide guidance on the use of generative AI in legal submissions

October 24, 2023 | Alicia York, Caitlin Smith

Introduction

The use of Artificial Intelligence (AI) is becoming increasingly prevalent in legal practice, with practitioners and self-represented litigants alike using it as a tool to assist with legal research and court submissions. However, the integration of AI into legal submissions has already produced some disastrous consequences, raising concerns about the reliability and accuracy of AI-generated information.

Presently, courts in Manitoba, Yukon, Quebec and, most recently, Alberta have issued Practice Directions and Notices to the Profession and Public addressing the use of Large Language Models (LLMs)[1] and Generative AI[2] in court submissions.

These courts have recognized the challenges posed by AI and have implemented rules and parameters around its use, marking a noteworthy step in safeguarding the integrity of the legal system in the digital age.

Recent cases of AI-generated court decisions making their way into legal submissions

Instances have recently emerged where fabricated decisions, originating from AI-generated responses, have found their way into court submissions. ChatGPT, a chatbot built on an LLM, has garnered particular attention for generating non-existent legal cases.

Earlier this year, two US lawyers and their law firm found themselves facing fines after it was discovered they had submitted a legal brief that included six entirely fictional legal cases, purportedly sourced from ChatGPT. In an Order to Show Cause, United States District Judge P. Kevin Castel stated that the submission was “replete with citations to non-existent cases” and that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”[3] The cases cited in the brief bore fictional names but used docket numbers taken from real, entirely unrelated cases.[4] The fictional decisions themselves also contained internal citations and quotations referring to other fictional decisions.[5]

This case highlights the growing concerns over the consequences of overreliance on AI and has prompted serious questions about the ethics and oversight of AI-generated content in legal submissions.

Although the Alberta courts have not yet released any decisions addressing the use of, or reliance on, AI-generated submissions, in a September 2023 decision of the Ontario Superior Court of Justice, Justice I.F. Leach placed no reliance on a document titled “Results of legal research carried out using artificial intelligence system ChatGPT (Chat Generative Pre-Trained Transformer).”[6] Justice Leach took judicial notice of the potential benefits and dangers of AI and remarked that, “while there may come a time when legal research and submissions generated by artificial intelligence will be recognized and accorded value in our courts, in my view that time has not yet arrived.”[7]

Directives from Canadian courts

Canadian courts have been responding to these growing concerns through Practice Directions and Notices to the Profession, which set boundaries for the use of AI in court submissions.

Manitoba’s Practice Direction

The Court of King’s Bench of Manitoba released the Practice Direction Re: Use of Artificial Intelligence in Court Submissions on June 23, 2023.[8]

Manitoba’s Practice Direction requires that “when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used [emphasis added].”[9]

Yukon’s Practice Direction

The Supreme Court of Yukon released Practice Direction General-29 (Use of Artificial Intelligence Tools) on June 26, 2023.[10]

Yukon’s Practice Direction requires that “if any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter and in any form before the Court, they must advise the Court of the tool used and for what purpose [emphasis added].”[11]

Quebec’s Notice to the Profession and Public

The Superior Court of Quebec released a Notice to the Profession and Public (Integrity of Court Submissions When Using Large Language Models) on September 28, 2023.[12]

Quebec’s Notice “urges practitioners and litigants to exercise caution when referencing legal authorities or analysis derived from large language models.”[13]

The Notice requires that, “for all references to case law, statutes or commentary in representations to this court, it is essential that parties rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers, or well-established public services such as CanLII and SOQUIJ, for instance.”[14]

It also requires that “any AI-generated submissions must be verified with meaningful human control.”[15]

Alberta’s Notice to the Profession and Public

Most recently, on October 6, 2023, Alberta’s Court of Appeal, Court of King’s Bench and Court of Justice jointly released a Notice to the Profession and Public (Ensuring the Integrity of Court Submissions When Using Large Language Models).[16]

Echoing Quebec’s Notice, Alberta’s courts advise practitioners and litigants to exercise caution when relying on legal authorities or analysis derived from AI in their submissions. In this regard, the courts direct that “any AI-generated submissions be verified with meaningful human control. Verification can be achieved through cross-referencing with reliable legal databases, ensuring that the citations and their content hold up to scrutiny.”[17]

Alberta’s courts urge parties to rely exclusively on authoritative sources (such as official court websites, commonly referenced commercial publishers, or well-established public services such as CanLII) for all references to case law, statutes or commentary in court submissions.[18]

Practical implications

Interestingly, the Notices released in Alberta and Quebec do not follow the approach of the Practice Directions released in Manitoba and Yukon. While Manitoba and Yukon require disclosure of the use of AI in the preparation of court materials, the courts in Alberta and Quebec require only that AI-generated submissions be verified with meaningful human control.

As such, at this time, there is no requirement in Alberta that parties advise the Court when or how AI was used in their legal submissions. Accordingly, Alberta courts will not necessarily be alerted when AI has been used.

Legal practitioners and self-represented litigants in Alberta should be mindful of this absence of a disclosure requirement when participating in court proceedings. Parties to an action will want to ensure that the submissions relied on by other parties are supported by true and accurate legal authorities drawn from credible primary sources. Moreover, even if the legal authorities relied on by opposing parties have been verified, there is a risk that any AI-generated analysis of those authorities may not accurately apply the legal principles to the facts at issue.

Takeaways

The use of AI brings both opportunities and challenges. As AI will undoubtedly continue to be used in legal proceedings, legal practitioners and self-represented litigants should proceed with caution when using LLMs and when relying upon or incorporating Generative AI in submissions before the Court, not only with respect to accuracy but also with respect to confidentiality and data privacy. This prudent approach is essential not only when presenting one’s own case but also when evaluating the merits of an opposing party’s claim. The potential for misuse of this new and evolving technology underscores the need for vigilance and critical assessment in every aspect of the legal process.

Should you have any questions or need assistance with court-related matters, Miller Thomson’s Commercial Litigation team is here to help.

_____

[1] See Court of Appeal of Alberta, Court of King’s Bench of Alberta, & Alberta Court of Justice, “Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions When Using Large Language Models” (6 October 2023) [Alberta’s Notice] which defines LLMs as a type of AI system capable of processing and generating human-like text based on vast amounts of training data.

[2] Generative AI refers to the creation of new and original content through algorithms that learn from and mimic patterns in existing data.

[3] Order to Show Cause in Roberto Mata v Avianca, Inc, Case 1:22-cv-01461-PKC, US Dist Ct (SDNY), granted 4 May 2023.

[4] Ibid.

[5] Ibid.

[6] Floryan v Luke et al, 2023 ONSC 5108 at paras 11-12.

[7] Ibid at para 12.

[8] Court of King’s Bench of Manitoba, “Practice Direction Re: Use of Artificial Intelligence in Court Submissions” (23 June 2023).

[9] Ibid.

[10] Supreme Court of Yukon, “Practice Direction: Use of Artificial Intelligence Tools” (26 June 2023).

[11] Ibid.

[12] Superior Court of Quebec, “Notice to the Profession and Public: Integrity of Court Submissions When Using Large Language Models” (28 September 2023).

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] Alberta’s Notice, supra note 1.

[17] Ibid.

[18] Ibid.

Disclaimer

This publication is provided as an information service and may include items reported from other sources. We do not warrant its accuracy. This information is not meant as legal opinion or advice.
