
Carnegie Mellon University has confirmed receipt of a $20 million grant to establish an artificial intelligence (AI) institute aimed at keeping pace with the tide of technology development.

The new facility will focus on data-driven decision-making. Named the AI Institute for Societal Decision Making, it aims to reduce errors by government officials in decisions concerning public health and natural disasters.

“We need to develop AI technology that works for the people,” said Aarti Singh, a Machine Learning professor tapped to be the institute’s first director. “It’s actually built on data that is vetted, algorithms that are vetted, with feedback from all the stakeholders and participatory design.”

Singh disclosed that training a realistic model will require input from public health officials, behavioral science experts, and emergency workers. Singh is also pushing for ethical use of the technology to guard against the erosion of moral guidelines while the industry remains in a state of flux.

“I think one of the key things is making sure that we are engaging with AI in an ethical way so that it is deployed when it’s needed,” said Singh.

Last week, Romania picked up the gauntlet with the launch of an AI chatbot designed to sift through comments on social media and assist the government with making decisions. Although widely hailed for its innovativeness, the AI chatbot drew criticism for its tendency to amplify certain opinions based on the popularity of social media accounts.

Amid reports of abuse of AI technology, the U.K. government created a new task force to ensure the safe use of the technology. Dubbed the Foundation Model Taskforce, the body has received $124.8 million from the government to assist in transforming the U.K. into an AI superpower.

Aware of the threats posed by AI

Despite its ability to improve productivity, some organizations are taking a defensive stance on AI over security concerns. Consumer electronics giant Samsung (NASDAQ: SSNLF) became the latest firm to ban staff from using generative AI platforms like ChatGPT and Bard following the leak of internal source code.

Goldman Sachs (NASDAQ: GS), Wells Fargo (NASDAQ: WFC), and Citi (NASDAQ: C) are among the firms that have restricted employees' use of AI tools amid fears of leaking clients' financial data. The digital asset industry has had its own share of AI-related chaos, ranging from concerns about the elimination of smart contract audit jobs to outright rug pulls and scams.

For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch CoinGeek Roundtable: AI, ChatGPT & Blockchain
