International Business Times UK
Callum Conway-Shaw

Judges In England And Wales Are Issued Official Guidance On AI

Judges will be able to use ChatGPT to help write legal rulings despite warnings that AI can invent cases that never happened. (Credit: AFP News)

Judges in England and Wales were issued official guidance on AI use for the first time yesterday.

The judiciary declared judges would be allowed to use generative artificial intelligence systems like OpenAI's ChatGPT for basic work tasks but must not use chatbots to conduct legal research or undertake legal analysis.

The guidance for magistrates, tribunal panel members and judges highlighted the risk that AI tools could make factual errors or draw on laws from foreign jurisdictions if asked to help with cases.

Judges were also warned about signs that legal arguments may have been prepared by an AI chatbot, as has already happened in the United States and recently in Britain.

Geoffrey Vos, the Head of Civil Justice in England and Wales, said the guidance was the first of its kind in the jurisdiction.

He told reporters at a briefing before the guidance was published that AI "provides great opportunities for the justice system".

"But, because it is so new, we need to make sure that judges at all levels understand what it does, how it does it and what it cannot do," he added.

Vos said judges were well equipped to distinguish between genuine legal arguments and those prepared using AI, as well as the potential use of so-called deepfakes as evidence.

"Judges are trained to decide what is true and what is false and they are going to have to do that in the modern world of AI just as much as they had to do that before," he said.

Before this week, the use of the technology by the judiciary in England and Wales had attracted little attention, in part because judges are not required to describe preparatory work they may have undertaken to produce a judgment.

Although the guidance accepts that judges might find AI useful for some administrative or repetitive tasks, the use of AI for legal research was "not recommended", except to remind judges of material with which they were already familiar.

"Information provided by AI tools may be inaccurate, incomplete, misleading or out of date," the guidance said, noting it was often based heavily on law in the US. "Even if it purports to represent English law, it may not do so."

The guidance issued on Tuesday also warned of privacy risks. Judges were told that any information entered into a public AI chatbot "should be seen as being published to all the world".

Vos, the Master of the Rolls, said there was no suggestion that any judicial officeholder had asked a chatbot about sensitive case-specific information and the guidance was issued for the avoidance of doubt.

In the long term AI offered "significant opportunities in developing a better, quicker and more cost-effective digital justice system", he added.

The legal sector is the latest to take an interest in regulating the use of AI after the Bank of England (BoE) carried out a full risk assessment of the technology last week.

In its twice-yearly Financial Stability Report, the Bank announced it would launch a fresh review into the impact of AI, amid fears the rapidly developing sector could pose serious risks to the UK's financial stability.

The BoE also said it was currently taking advice on the potential implications of the adoption of AI and machine learning (ML) in the financial services sector, which accounts for around eight per cent of the British economy and has deep-rooted global connections.

The European Union is currently trying to finalise its AI Act, intended to be the world's first comprehensive AI law, which will aim to regulate systems based on the level of risk they pose.

Negotiations on the final legal text began in June, but a fierce debate in recent weeks over how to regulate general-purpose AI like ChatGPT and Google's Bard chatbot threatened to derail the talks at the last minute.

With this in mind, world leaders and tech specialists gathered as UK Prime Minister Rishi Sunak hosted the world's first AI Safety Summit at Bletchley Park, Buckinghamshire, last month.

In the build-up to the conference, Sunak announced the establishment of a 'world first' UK AI safety institute.

The summit concluded with the signature of the Bletchley Declaration – the agreement of countries including the UK, United States and China on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".
