
Top universities in the United Kingdom have drafted guiding principles to foster the development of generative artificial intelligence (AI).

As AI adoption surges, higher education institutions are scrambling to make their students AI-literate and competitive in the space. That push must strike a balance, however, with some schools concerned that students are using AI to cheat on tests.

U.K. universities want to be on the front foot, and as local media reports, two dozen top schools are now partnering on AI development and adoption.

The Russell Group of universities—which comprises the top U.K. schools, including Oxford, Cambridge, and Imperial College London—has drawn up guiding principles to push AI integration and development, reports the Guardian. The 24 schools believe the move will protect academic integrity while allowing them to capitalize on AI opportunities.

Under the new principles, the schools will teach AI to students while making them aware of its dangers. This includes guiding them to balance AI-assisted research against the risk of plagiarism. They will also educate students on the dangers of AI bias, one of the most significant criticisms of the technology.

The 24 schools will also train their staff on AI, allowing them to guide the students and detect cases of AI-assisted cheating.

“These policies make it clear to students and staff where the use of generative AI is inappropriate and are intended to support them in making informed decisions and to empower them to use these tools appropriately and acknowledge their use where necessary,” the document says.

AI in higher education has become a concern in recent months. Initially, many universities banned the use of ChatGPT and its peers, arguing they were being used to cheat.

In a March academic paper, a group of university professors raised concerns that AI would make it nearly impossible to detect plagiarism.

“The technology is improving very fast, and it’s going to be difficult for universities to outrun it,” the peer-reviewed paper stated.

In an ironic twist, the authors later revealed that the paper had been entirely written by ChatGPT.

With the new guiding principles, the 24 universities are pledging to be at the forefront of this emerging technology and to work with their students to leverage it, says Russell Group CEO Tim Bradshaw.

University of Manchester’s Andrew Brass added, “We know that students are already utilizing this technology, so the question for us as educators is how do you best prepare them for this, and what are the skills they need to have to know how to engage with generative AI sensibly?”

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI Summit PH 2023: Philippines is ripe to start using artificial intelligence
