
Senior judges in the United Kingdom have published artificial intelligence (AI) guidelines for the judiciary, advising on its use and warning of the potential risks the technology poses in cases.

Four senior judges in the U.K. have issued guidance to the judiciary in England and Wales advising them to restrict their use of AI in conducting legal research and avoid disclosing confidential or private information to AI chatbots.

“The use of Artificial Intelligence (‘AI’) throughout society continues to increase, and so does its relevance to the court and tribunal system. All judicial office holders must be alive to the potential risks,” said judges Baroness Carr of Walton-on-the-Hill (Lady Chief Justice of England & Wales), Sir Geoffrey Vos (Master of the Rolls), Sir Keith Lindblom (Senior President of Tribunals) and Lord Justice Colin Birss (Deputy Head of Civil Justice).

“Of particular importance, as the guidance document emphasizes, is the need to be aware that the public versions of these tools are open in nature and therefore that no private or confidential information should be entered into them.”

Published on December 12, the guidance is directed toward magistrates, tribunal panel members, and judges in England and Wales and highlights the risks that information provided by AI tools can be inaccurate, incomplete, misleading, or out of date.

Information gleaned from AI systems may also have undue influence from United States laws, said the guidance: “Even if it purports to represent English law, it may not do so.”

Despite its warnings, the guidance also pointed to potentially beneficial uses of AI, such as in administrative or repetitive tasks, for which its use is permitted. However, its use for legal research was “not recommended” except to remind judges of material they were already familiar with.

England’s second most senior judge, Sir Geoffrey Vos, said AI provides “great opportunities for the justice system, but because it’s so new, we need to make sure that judges at all levels understand [it properly].”

According to Vos and the other issuing judges, the guidance is the first step in a proposed “suite of future work to support the judiciary in their interactions with AI.”

This judicial guidance comes shortly after the U.K. held its inaugural AI Safety Summit at the beginning of November at the iconic Bletchley Park.

At the summit, 28 countries signed a joint agreement, titled the “Bletchley Declaration on AI safety,” emphasizing “the urgent need to understand and collectively manage potential risks” of AI.

“The declaration is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI—helping ensure the long-term future of our children and grandchildren,” said U.K. Prime Minister Rishi Sunak.

The event gathered leaders from across the globe, including from the United States, China, the European Union, Africa, Asia, and South America, as well as thought leaders and heads of tech companies developing AI, to discuss the future of the technology and the potential risks involved.

This was followed in December by the EU reaching a landmark agreement on regulating AI.

Laying down a marker for other jurisdictions, as it often does regarding regulation, the European Parliament and Council reached a provisional agreement on the Artificial Intelligence Act on December 8. The regulation aims to ensure that “fundamental rights, democracy, the rule of law, and environmental sustainability are protected from high-risk AI while boosting innovation and making Europe a leader in the field.”

Although likely not implemented until 2025 at the earliest, the rules would establish obligations for AI based on its potential risks and level of impact.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?
