Carnegie Mellon University has confirmed receipt of a $20 million grant to establish an artificial intelligence (AI) institute aimed at keeping pace with the rising tide of technology development.

The new AI facility will revolve around data-driven decision-making. Named the AI Institute for Societal Decision Making, the proposed facility aims to reduce the chances of government officials making errors in decisions on public health and natural disasters.

“We need to develop AI technology that works for the people,” said Aarti Singh, a Machine Learning professor tapped to be the institute’s first director. “It’s actually built on data that is vetted, algorithms that are vetted, with feedback from all the stakeholders and participatory design.”

Singh disclosed that the technology would require input from public health officials, behavioral science experts, and emergency workers to train realistic AI models. Singh is also pushing for ethical use of the technology to guard against misuse while moral guidelines for the industry remain in a state of flux.

“I think one of the key things is making sure that we are engaging with AI in an ethical way so that it is deployed when it’s needed,” said Singh.

Last week, Romania picked up the gauntlet with the launch of an AI chatbot designed to sift through comments on social media and assist the government in making decisions. Although widely hailed as innovative, the chatbot drew criticism for its tendency to amplify certain opinions based on the popularity of the social media accounts behind them.

Amid reports of abuse of AI technology, the U.K. government created a new task force to ensure the safe use of the technology. Dubbed the Foundation Model Taskforce, the body has received $124.8 million from the government to assist in transforming the U.K. into an AI superpower.

Aware of the threats posed by AI

Despite its ability to improve productivity, some organizations are taking a defensive stance on AI over security concerns. Consumer electronics giant Samsung (NASDAQ: SSNLF) became the latest firm to ban staff from using generative AI platforms like ChatGPT and Bard following the leak of internal source code.

Goldman Sachs (NASDAQ: GS), Wells Fargo (NASDAQ: WFC), and Citi (NASDAQ: C) are among the firms that have restricted employees' use of AI technology amid fears of leaking clients' financial data. The digital asset industry has had its own share of AI-related chaos, ranging from concerns about the elimination of smart contract audit jobs to outright rug pulls and scams.

For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch CoinGeek Roundtable: AI, ChatGPT & Blockchain
