The Chief Justice of the United States advises caution as artificial intelligence continues to impact the legal industry.

According to U.S. Supreme Court Chief Justice John Roberts, artificial intelligence has both positive and negative impacts on the legal industry. In a year-end report released on Sunday, he advised caution and humility as this advancing technology changes the way judges and lawyers approach their tasks.

In his 13-page report, Roberts expressed mixed feelings about AI. He acknowledged its potential to improve access to justice for disadvantaged litigants, revolutionize legal research, and help courts resolve cases faster and more cheaply. At the same time, he raised concerns about privacy and the technology's current inability to replicate human judgment.

Roberts said he expects human judges to be around for some time, but he predicted with equal confidence that AI will significantly affect judicial work, particularly at the trial level.

The remarks are the chief justice's most significant comments yet on AI's impact on the legal system. They come as several lower courts grapple with how to adapt to a new technology capable of passing the bar exam but also prone to generating fictitious content, known as “hallucinations.”

Roberts urged caution and humility in using AI. He noted that AI hallucinations had led lawyers to cite non-existent cases in court filings, which he said is always a bad idea. Roberts did not elaborate, mentioning only that the phenomenon had made headlines this year.

One example involved Michael Cohen, the onetime fixer and lawyer for Donald Trump, who said in recently released court documents that he unintentionally gave his attorney fabricated case citations generated by an AI program, which were then included in an official court filing. Other instances of lawyers including AI-generated citations in legal briefs have also been reported.

Last month, a federal appeals court in New Orleans drew attention by unveiling what appeared to be the first proposed rule among the 13 U.S. appeals courts governing the use of generative AI tools, such as OpenAI’s ChatGPT, by lawyers appearing before the court.

The 5th U.S. Circuit Court of Appeals proposed a rule that would require lawyers to certify either that they did not rely on artificial intelligence programs to draft briefs, or that a human reviewed the accuracy of any AI-generated text in their court filings.