Senior judges in the United Kingdom have published artificial intelligence (AI) guidelines for the judiciary, advising on how the technology may be used and warning of its potential risks in cases.

Four senior judges in the U.K. have issued guidance to the judiciary in England and Wales advising them to restrict their use of AI in conducting legal research and avoid disclosing confidential or private information to AI chatbots.

“The use of Artificial Intelligence (‘AI’) throughout society continues to increase, and so does its relevance to the court and tribunal system. All judicial office holders must be alive to the potential risks,” said judges Baroness Carr of Walton-on-the-Hill (Lady Chief Justice of England & Wales), Sir Geoffrey Vos (Master of the Rolls), Sir Keith Lindblom (Senior President of Tribunals) and Lord Justice Colin Birss (Deputy Head of Civil Justice).

“Of particular importance, as the guidance document emphasizes, is the need to be aware that the public versions of these tools are open in nature and therefore that no private or confidential information should be entered into them.”

Published on December 12, the guidance is directed toward magistrates, tribunal panel members, and judges in England and Wales and highlights the risks that information provided by AI tools can be inaccurate, incomplete, misleading, or out of date.

Information gleaned from AI systems may also be unduly influenced by United States law, the guidance said: “Even if it purports to represent English law, it may not do so.”

Despite its warnings, the guidance also pointed to potentially beneficial uses of AI, such as in administrative or repetitive tasks, for which its use is permitted. However, its use for legal research was “not recommended” except to remind judges of material they were already familiar with.

England’s second most senior judge, Sir Geoffrey Vos, said AI provides “great opportunities for the justice system, but because it’s so new, we need to make sure that judges at all levels understand [it properly].”

According to Vos and the other issuing judges, the guidance is the first step in a proposed “suite of future work to support the judiciary in their interactions with AI.”

This judicial guidance comes shortly after the U.K. held its inaugural AI Safety Summit at the beginning of November at the iconic Bletchley Park.

At the summit, 28 countries signed a joint agreement, the “Bletchley Declaration” on AI safety, emphasizing “the urgent need to understand and collectively manage potential risks” of AI.

“The declaration is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI—helping ensure the long-term future of our children and grandchildren,” said U.K. Prime Minister Rishi Sunak.

The event gathered leaders from across the globe, including from the United States, China, the European Union, Africa, Asia, and South America, as well as thought leaders and heads of tech companies developing AI, to discuss the future of the technology and the potential risks involved.

This was followed in December by the EU reaching a landmark agreement on regulating AI.

Laying down a marker for other jurisdictions, as it often does regarding regulation, the European Parliament and Council reached a provisional agreement on the Artificial Intelligence Act on December 8. The regulation aims to ensure that “fundamental rights, democracy, the rule of law, and environmental sustainability are protected from high-risk AI while boosting innovation and making Europe a leader in the field.”

Although likely not implemented until 2025 at the earliest, the rules would establish obligations for AI based on its potential risks and level of impact.

For artificial intelligence (AI) to work correctly within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?
