AI is creating fake legal cases, and they are making their way into real courts.


We’ve seen deepfake images of celebrities created by artificial intelligence (AI). AI has also played a role in creating music, driving autonomous race cars and spreading misinformation, among other things.

It is hardly surprising, then, that AI is also having a profound impact on our legal systems.

Courts must decide disputes on the basis of the law, which is presented to the court by lawyers as part of a client’s case. It is therefore deeply concerning that fake law, invented by AI, is being used in legal disputes.

This not only raises issues of legality and ethics, but also threatens to undermine trust and confidence in international legal systems.

How is fake law created?

There is no doubt that generative AI is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to apply professional knowledge and experience carefully, and are generally not big risk-takers. However, some unwary lawyers (and self-represented litigants) have been caught out by artificial intelligence.

AI models are trained on massive data sets. When prompted by the user, they can create new content (both text and audio-visual).

Although content produced in this way looks very convincing, it can also be misleading. This is the result of an AI model trying to “fill in the gaps” when its training data is insufficient or faulty, and is commonly referred to as a “hallucination”.

In some contexts, generative AI hallucinations are not a problem. Indeed, they can be seen as an example of creativity.

But if AI hallucinates or creates false content that is then used in legal processes, that is a problem – especially when combined with time pressures on lawyers and a lack of access to legal services for many.

This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational problems for the legal profession and a loss of public confidence in the administration of justice.

This is already happening.

The best-known generative AI “fake case” is the 2023 US case Mata v. Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite the negative publicity, examples of other fake cases continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated with Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would check them (he did not). His lawyer included those cases in a brief filed in US federal court.

Fake cases have also come to light in recent cases in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure that careless use of generative AI does not undermine public confidence in the legal system? Continued failure by lawyers to exercise due care when using these tools has the potential to mislead courts, harm clients’ interests, and undermine the rule of law more generally.

What is being done about it?

Around the world, legal regulators and courts have responded in different ways.

A number of US state bars and courts have issued guidance, opinions, or orders on the use of generative AI, ranging from responsible adoption to outright bans.

Law societies in the United Kingdom and British Columbia and courts in New Zealand have also developed guidelines.

In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have published guidance on responsible use in line with the solicitors’ conduct rules.

Many lawyers and judges, like the general public, will have some understanding of generative AI and can recognize both its limitations and its benefits. But there are others who may not be as familiar with it. Guidance certainly helps.

But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat them as a substitute for their own judgment and diligence, and must verify the accuracy and reliability of the information they produce.

In Australia, courts should adopt practice notes or rules that set out expectations for the use of generative AI in litigation. Court rules can also guide self-represented litigants, and would signal to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At a minimum, technology competence should become a requirement of lawyers’ continuing legal education in Australia.

Setting clear requirements for the responsible and ethical use of generative AI by lawyers in Australia will encourage appropriate adoption and increase public confidence in our lawyers, our courts and the overall administration of justice in this country.

(Authors: Michael Legg, Professor of Law, UNSW Sydney and Vicki McNamara, Senior Research Associate, Centre for the Future of the Legal Profession, UNSW Sydney)

(Disclosure Statement: Vicki McNamara is affiliated with the Law Society of NSW (as a member). Michael Legg does not work for, consult with, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Other than the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
