
The maker of the generative artificial intelligence (AI) platform ChatGPT has confirmed that it will create a new team to solve the challenge of controlling superintelligent AI.

OpenAI predicts that AI systems will achieve superintelligence before the end of the decade, which may pose significant risks to humanity. OpenAI hopes to make enough technological breakthroughs within four years to “steer and control AI systems much smarter than us.”

“But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” OpenAI said. “While superintelligence seems far off now, we believe it could arrive this decade.”

To undertake the daunting task, OpenAI announced that it is hiring machine learning experts to join its superintelligence alignment team. The team will be led by OpenAI co-founder Ilya Sutskever and Head of Alignment Jan Leike, with researchers from other OpenAI units rounding out its ranks.

OpenAI says it will earmark 20% of its compute resources for the new team and will leverage its previous studies to get a head start. The firm is adopting a three-pronged strategy to create a “human-level automated alignment researcher” that can “iteratively align superintelligence.”

OpenAI stated that it would achieve this by developing a scalable training method, validating the resulting model, and using adversarial methods to stress-test the entire alignment pipeline. Although the plan looks feasible on paper, OpenAI acknowledged that success is far from certain, but maintained that the effort is a risk worth taking.

“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” OpenAI remarked.

In June, OpenAI launched a $1 million grant program to support researchers building projects at the intersection of cybersecurity and AI. The fund will focus on “attack-minded” projects, with successful applicants receiving up to $10,000 in direct funding.

OpenAI faces increasing regulatory scrutiny

Following the launch of ChatGPT and the subsequent release of GPT-4, OpenAI faced scathing opposition from regulators in the EU, coming within a hair’s breadth of a permanent ban in Italy. Consumer groups and critics pointed out the risks the generative AI platform poses to the finance, Web3, security, news, and education sectors.

In the U.S., the company is facing a class action lawsuit over the alleged illegal scraping of the personal data of millions of individuals used in training its AI models. The plaintiffs allege that OpenAI breached privacy and copyright laws by failing to seek the consent of the individuals concerned.

To smooth strained relations with regulators, OpenAI CEO Sam Altman met with EU authorities in Brussels to speak on the downsides of overregulation. Altman has since toured over 16 cities across three continents as the firm navigates the minefield of regulatory uncertainty.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch AI Summit PH 2023: Philippines is ripe to start using artificial intelligence
