
More AI issues - with the law!

Mock Turtle

Oh my, did I say that!
Premium Member
AI is creating fake legal cases and making its way into real courtrooms, with disastrous results

We've seen deepfaked, explicit images of celebrities created by artificial intelligence (AI). AI has also had a hand in creating music, driving driverless race cars and spreading misinformation, among other things. It's hardly surprising, then, that AI also has a strong impact on our legal systems. Courts must decide disputes based on the law, which lawyers present to the court as part of a client's case. It's therefore highly concerning that fake law, invented by AI, is being used in legal disputes. Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in legal systems worldwide.

The best known generative AI "fake case" is the 2023 US case Mata v Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT. The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous: once the error was uncovered, the court dismissed their client's case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite the adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump's former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact-check them (he did not). His lawyer included the cases in a brief filed with the US Federal Court. Fake cases have also surfaced in recent matters in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure that careless use of generative AI does not undermine the public's trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients' interests, and generally undermine the rule of law.

Not good, so can the legal system be kept clean from such fakery? :eek:
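
One partial safeguard would be purely mechanical: before a brief is filed, check every cited case against an authoritative legal database and flag anything that can't be confirmed for a human to review. Here's a minimal sketch of that idea in Python; the regex, the hard-coded allow-list and the case names (Smith v. Jones and so on) are all made-up stand-ins for illustration, not real citations or a real database.

    import re

    # Placeholder "database" of confirmed cases. A real workflow would query
    # an authoritative legal source here, not a hard-coded set; these names
    # are invented examples, not real citations.
    VERIFIED_CASES = {"Smith v. Jones", "Doe v. Roe"}

    def extract_case_names(brief_text):
        """Crudely pull 'X v. Y' style case references out of a draft brief."""
        return re.findall(r"\b[A-Z]\w+ v\. [A-Z]\w+\b", brief_text)

    def flag_unverified(brief_text):
        """Return every cited case that cannot be confirmed to exist."""
        return [case for case in extract_case_names(brief_text)
                if case not in VERIFIED_CASES]

    if __name__ == "__main__":
        draft = ("As held in Smith v. Jones, liability attaches; "
                 "see also Brown v. Carter, which no database lists.")
        for case in flag_unverified(draft):
            print("WARNING: could not verify", repr(case), "- check before filing")

A check like this wouldn't catch a fabricated quote inside a genuine case, but it would have flagged the non-existent cases cited in the Mata v Avianca brief before they ever reached a judge.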

ADigitalArtist

Veteran Member
Staff member
Premium Member
Well, I think lawyers who do this should face legal sanctions (or such).
You'd be amazed at just how many people, across just about every career path, put real data into chatbots to generate letterheads, resumes, cover letters and email templates. That means all that real data can get mixed algorithmically with all sorts of fake output, which then ends up in someone else's templates, citations and documents.

So many IT tickets trace back to this, especially when devs have used chatbots to generate code, which carries the same bits of algorithmically crossed data.

It's an epidemic.