In a startling revelation that blurs the line between technology and legal proceedings, court documents have exposed a controversial development: Google Bard, Google's AI chatbot, was used to fabricate legal citations in a case involving Michael Cohen, Donald Trump's former fixer.
The disclosure raises ethical concerns about the use of AI in legal contexts and underscores the risks of generative technology becoming entangled with the judicial system. As the intersection of AI and law unfolds, the Google Bard saga prompts a critical examination of the boundaries within which emerging technologies must operate in legal proceedings.
Michael Cohen, a former fixer for former President Donald J. Trump, unknowingly used artificial intelligence to generate legal citations in a motion to end his court supervision early, according to court documents published on Friday (via The New York Times).
The surprising disclosure calls Cohen’s credibility into question, given that he pleaded guilty to campaign finance violations in 2018 and served time in prison.
Cohen Enlisted Google Bard to Make Bogus Legal Citations
In a declaration under oath, Cohen revealed that he had used Google Bard, a tool that utilizes artificial intelligence, to generate bogus legal citations.
His attorney, David M. Schwartz, included these citations in a motion submitted to a federal judge in Manhattan; Cohen says they were inserted without his knowledge. Cohen, who is out of prison but remains subject to the conditions of his release, had asked to be relieved of the court’s continued supervision.
Reports indicate that the problem stems primarily from Cohen’s unfamiliarity with how Google Bard works. He stated that he had failed to keep up with “emerging trends (and related risks) in legal technology” and did not realize that Google Bard, like ChatGPT, could generate citations that appeared authentic but were, in fact, false.
The New York Times observes that the potential fallout from this revelation is enormous, particularly because Cohen is expected to provide critical testimony in the Manhattan criminal case against Donald Trump.
Cohen has been depicted as unreliable by Trump’s legal team on multiple occasions, and this episode provides more ammunition to further discredit him.
Read More: AI-Generated Customized Tales Revolutionize Bedtime and Parenting
Lawyers Caught Tapping AI in Courts
On December 12, Judge Jesse M. Furman, who was presiding over the case, issued an order stating that he was having difficulties identifying the three rulings that Schwartz had mentioned.
As a result, Schwartz is now obligated to provide the court with copies of the rulings or a full explanation for the cases that do not exist. The judge also warned that Schwartz could face sanctions for presenting false cases to the court during the proceeding.
This episode is not an isolated incident: it is the second time this year that attorneys in Manhattan federal court have relied on artificial intelligence tools that cited non-existent legal rulings.
The use of AI in this case, Google Bard, is reminiscent of an incident earlier this year in which attorney Steven Schwartz cited fabricated cases generated by ChatGPT in a civil action.
The broader implications of lawyers depending on AI-generated content in judicial proceedings raise concerns about the technology’s potential for misuse.
Furthermore, the Cohen episode underscores the importance of vigilance and verification in the legal profession, even though artificial intelligence can be a genuine aid to legal research.
Meanwhile, in September, a senior appeals court judge in the United Kingdom made history by using ChatGPT to describe a legal matter.
Lord Justice Birss, an authority on intellectual property law, called the AI-powered chatbot “jolly useful” after it provided him with a comprehensive summary of an area of the law.
“I think what is of most interest is that you can ask these large language models to summarize information,” he told the press. “It is useful, and it will be used, and I can tell you that I have used it.”
Read More: The Most Incredible AI Breakthroughs in 2024