England, Wales caution judges over AI: What the rules say, what is the situation in India

Judges in England and Wales can now use generative Artificial Intelligence (AI) systems such as ChatGPT for basic tasks like summarising large bodies of text, making presentations, or composing emails, according to guidance issued by the UK judiciary on Tuesday (December 12). Generative AI is a form of AI that can generate new content, including text, images, sounds, and computer code, the guidance said.

The document, titled “Guidance for Judicial Office Holders”, cautioned judges against using chatbots to conduct legal research or carry out legal analysis, and warned them to watch for signs that legal arguments may have been prepared with AI chatbots, as has happened in recent cases in the US and Britain.

What does the guidance state?

The guidance states that any use of AI by or on behalf of the judiciary “must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice.”

It cautions judges about indications that a piece of work may have been produced by AI, such as references to cases that sound unfamiliar or have foreign citations. Another sign is parties citing different bodies of case law for the same legal issues, or making submissions that do not conform with a judge’s general understanding of the law in an area.

Other indicators include the use of American spellings or references to overseas cases and content that appears to be persuasive and well-written but, on closer inspection, reveals obvious substantive errors.

Besides these red flags, the guidelines also set out the limitations and potential risks of using AI that judges should be aware of.

What are these potential risks?

The guidance said that public AI chatbots, like ChatGPT, Google Bard, and Bing Chat, do not provide answers from authoritative or credible databases.

It also explained that such programs generate new text using algorithms, based on the prompts they receive and the data they have been trained on. This means that the output an AI chatbot generates is what the model predicts to be the most likely combination of words, drawing on the documents and data it holds as “source information.”
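
In plain terms, this is next-word prediction. The toy Python sketch below is purely illustrative (it is not drawn from the guidance or from any real chatbot): it counts which word most often follows another in a scrap of training text, and then “predicts” the most frequent continuation, whether or not that continuation is accurate.

```python
# Toy illustration of next-word prediction (not any real chatbot's code).
# The model simply picks the statistically most likely next word from its
# training text; it does not consult an authoritative source.
from collections import Counter, defaultdict

training_text = "the court held that the court may grant bail the court held"
words = training_text.split()

# Count which word follows each word, and how often, in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently seen next word, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "held" is the most common continuation in training -- likely, not verified.
print(predict_next("court"))
```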

As with any other information available on the internet in general, AI tools may be useful for finding material one would recognise as correct, but they are a poor way of conducting research to find new information one cannot verify, the guidance adds.

The guideline also says that the quality of answers received depends on how one engages with the relevant AI tools, including the nature of the prompts entered. However, even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased. For instance, the currently available AI models that predict text have been trained on existing material on the internet, and their “view” of the law is often heavily based on US law, it stated.

Judges have also been warned of privacy risks attached to the use of AI, and have been asked not to enter into a public AI chatbot any private or confidential information that is not already in the public domain. “Any information that you input into a public AI chatbot should be seen as being published to all the world. The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users,” the guidance says.

The guidance also suggests measures such as disabling chat history in AI chatbots or preventing AI platforms from accessing information. Additionally, it says that a judge who unintentionally discloses private information must contact the Judicial Office and report the incident.

Finally, the advisory urges judges to be aware that litigants or lawyers may themselves have used AI tools. For instance, parties approaching the courts without lawyers may rely on such tools without knowing that their output is prone to error, or that AI can produce fake text, images, and videos. “Judges should be aware of this new possibility and potential challenges posed by deepfake technology,” the guidance adds.

What were the UK and US cases?

On December 4, a court rejected the appeal of a woman who had cited nine “fabricated” cases generated by ChatGPT in challenging a penalty over capital gains tax, the Independent reported.

Similarly, in June, a US judge imposed sanctions on two New York lawyers for submitting a legal brief that included six fictitious case citations generated by ChatGPT.

How have Indian courts reacted to the use of AI?

In March, Justice Anoop Chitkara of the Punjab & Haryana HC sought ChatGPT’s response while hearing the bail plea of an accused arrested in 2020 for rioting, criminal intimidation, murder, and criminal conspiracy. The court, however, made it clear that “any reference to ChatGPT and any observation made… is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.”

The Supreme Court of India, for its part, has developed the Supreme Court Vidhik Anuvaad Software (SUVAS), an AI-trained machine-assisted translation tool that can translate English judicial documents into eleven vernacular languages, to promote the use of regional languages in judicial proceedings.

In April 2021, the then-CJI SA Bobde launched the Supreme Court Portal for Assistance in Court’s Efficiency (SUPACE), a tool that collects relevant facts and laws and makes them available to a judge.

