Generative AI in Legal Briefs: Proposed Rules and Practitioner Perspectives

By: Paul Coste

 

“[A]ny use of AI requires caution and humility,” cautioned U.S. Supreme Court Chief Justice John Roberts in a year-end report published December 31, 2023. In the report, Roberts mentioned the now-infamous instance where overreliance on artificial intelligence (“AI”) in a court filing led lawyers to cite non-existent cases. His comments also came shortly after court papers were unsealed revealing that Michael Cohen, Donald Trump’s former attorney, admitted to a court that he had mistakenly given his attorney fake case citations generated by an AI program. In November of last year, the U.S. Court of Appeals for the Fifth Circuit, based in New Orleans, put forth the first proposed rule by any of the U.S. appeals courts aiming to regulate the use of generative AI tools in court. The proposed rule would require lawyers to certify either that they did not rely on AI programs to draft briefs or that a human reviewed their briefs for the accuracy of any text generated by AI. On January 29 of this year, several prominent lawyers practicing in the Fifth Circuit published letters written to the court commenting on the proposed rule. None were outright supportive of the rule as written; most thought the rule went too far, while a minority thought it did not go far enough.

 

The proposed rule as written would require lawyers, and litigants appearing before the court without counsel, to certify either that they did not rely on AI programs to draft briefs or that, to the extent an AI program was used to generate a filing, citations and legal analysis were reviewed for accuracy by a human. Failure to comply with the rule could result in the filing being stricken as well as sanctions against the filing attorney.

 

Much of the discourse supported the spirit of the rule: attorneys should bear the responsibility to review and verify the accuracy of their legal and factual assertions. However, many authors thought the existing rules are sufficient, pointing to Fed. R. Civ. P. 11 and arguing that the proposed rule is duplicative of it. Rule 11 addresses certifications by counsel and representations to the court. One of the letter writers also added that the court should already have the power under Rule 38 to impose sanctions for misstatements of law filed in a brief.

 

Moreover, many who felt that the existing rules were sufficient to guard against the risks of AI voiced concerns over how the proposed rule was written and the additional issues it would create. Among those concerns were that the rule was too vague or broad, needlessly singled out technology, and invaded attorney work product privilege.

 

Vagueness:

There were multiple issues of vagueness raised in the published letters by various attorneys. The first was that the proposed rule does not adequately define “generative” AI. Several writers posed the question of whether the use of Westlaw, Lexis, Microsoft, or Google must now be disclosed, since all have incorporated some form of generative AI into their search or editing functions. Another common issue was the court’s use of the word “human” in the proposed rule. A few writers favored replacing “human” with some form of “the attorney signing the brief” or, at the very least, specifying that an attorney reviewed the accuracy of legal citations and analysis. Many also had questions about what level of use of AI tools would require disclosure. For example, if AI were used to wordsmith a handful of sentences, would that necessitate disclosure under the proposed rule?

 

Singling out technology:

“The proposed rule unfairly targets AI-generated research even though the problem of inaccurate citation long predates AI[,]” wrote one author. This was a commonly cited issue among the letter writers, many of whom compared copying and pasting from other briefs or sources found online without checking the work to AI “hallucination” of citations and cases. As one letter reasoned, “It seems, then, that the rule addresses a concern that the general certifications by counsel are inadequate whenever generative AI is involved – in other words, that generative AI, in and of itself, is uniquely prone to inaccuracy, and thus requires a special rule.” One author wrote that singling out this technology with the proposed rule “[i]gnores widespread use of other, arguably more impactful technologies in legal practice” and that “[i]t risks creating a precedent for discriminatory regulation against future technological advancements.” Another took the stigmatization argument a step further, arguing that the proposed rule “[u]nfairly stigmatizes the use of generative AI, and by extension, the legal practitioners who employ it.”

 

Work product privilege:

A couple of authors raised the issue of work product privilege. One wrote, “Requiring a lawyer to disclose to the opposition whether they have used AI in drafting a brief is a serious invasion of the work-product privilege” and that “[w]hat processes a lawyer uses to write a brief should be protected by that privilege.” Another wrote that “[t]he combination of research tools that I use for my briefs and filings are a proprietary matter between my clients and me, and not a topic I feel comfortable broadcasting in a public court disclosure.” The specific concern was that if courts can require disclosure of the use of AI, it could lead to compelled disclosure of search prompts and strategies – “activities which indisputably fall within work product privilege” – and ultimately a chilling effect on attorneys’ use of AI-powered research tools, cutting off all the potential benefits.

 

Disclosure is not enough:

A small minority of the letter writers were critical of the rule because they felt it did not go far enough. One author voiced strong opposition to any AI tool being used in court filings, reasoning that the rule did not go far enough to address the “fundamental dangers of using generative AI for any legal analysis,” and urged the court to require lawyers to certify that they did not use the technology at all. The same author compared generative AI’s capacity to “hallucinate” to the iPhone’s autocorrect feature, arguing that autocorrect is completely useless to him because of its inability to predict the messages he types. He further posed the question, “[d]o we really want parties to submit legal work product to courts drafted by a robot that can’t think?”, and then argued, “Fifth Circuit judges will use the parties’ briefs to decide the best legal resolution and approach to very important matters that have found their way up to the court and to write opinions that not only resolve the dispute between these lazy parties, but that will be binding on everyone in the Fifth Circuit and persuasive authority for the whole world. What they do is very important, consequential, and hard to do. They deserve to be given the best possible work product by the parties’ legal counsel, not some ‘app’.”

 

Overall, the majority of the published letters expressed a belief that the best approach was not to have any additional rules to regulate the use of generative AI in court filings. I agree. The proposed rule seems needlessly redundant and does more harm than good, mainly because of the confusion it would cause. There are well-established risks associated with generative AI “hallucinations,” but these risks, as pointed out in the published letters, are nothing new. Lawyers should still be responsible for what they put before the court. I do not believe the risk of generative AI “hallucinations” outweighs the potential benefits the technology may yield for lawyers, clients, and the court system alike. There are a variety of ways the technology can be used to streamline the filing process, saving time for lawyers and courts and money for clients. Both the rule as written and the blanket opposition to any use of generative AI overreact to a handful of extreme examples of what can go wrong. Most lawyers are not using generative AI by simply asking it to write a brief on X for the purpose of Y and blindly submitting the result to the court without reviewing or verifying its accuracy. Many of the questions posed in the published letters ask to what extent lawyers will now have to disclose their use of generative AI. Will using AI to make a portion of a filing clearer (a benefit to the court) require a disclosure? Will using it to outline an issue or strategize research? Will simply using Westlaw, Microsoft Word, or the many other platforms incorporating generative AI into their software need to be flagged by the attorney? Is it worth creating a new rule if it is likely that, with time, every filing will include a disclosure from the filer that they used AI?

 

Suppose something must be done to address fears of generative AI “hallucinations” slipping past filing attorneys and finding their way into courts. In that case, I agree with what another of the letter authors suggested – guidelines instead of rules. As frequently noted in opposition to the proposed rule, the existing rules already do the heavy lifting in emphasizing that attorneys bear responsibility for the accuracy of their filings’ legal analysis and facts. However, because the technology has the capacity to yield many benefits for lawyers as well as the capacity to produce error, it seems like a prime candidate for guidelines on proper usage, which would “provide definitions, establish best practices, and even reference acceptable models of use.”

 

One letter writer stated, “[o]nly data – and not anecdotes – should drive a rule change that affects such an important court as the Fifth Circuit.” I agree. The proposed rule seems like a hasty response to a technology that is rapidly growing in use, still developing, and already being integrated into existing legal tech. If we jump the gun on producing new “rules” for every technological advancement that comes into play in the legal field, the rules may quickly become outdated or be rendered irrelevant by the next big thing. As stated in one of the letters published to the court, “Rule 11(b) does the heavy lifting. GPT does not change this.”

 

Student Bio: Paul Coste is a second-year law student at Suffolk University Law School and staff writer on the Journal of High Technology Law. Paul received a Bachelor of Science in Economics from Northeastern University.

 

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
