The maker of the generative artificial intelligence (AI) platform ChatGPT has confirmed that it will create a new team to solve the challenge of controlling superintelligent AI.
OpenAI predicts that AI systems will achieve superintelligence before the end of the decade, which may pose significant risks to humanity. OpenAI hopes to make enough technological breakthroughs within four years to “steer and control AI systems much smarter than us.”
“But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” OpenAI said. “While superintelligence seems far off now, we believe it could arrive this decade.”
To undertake this daunting task, OpenAI announced that it is hiring machine learning researchers and engineers for its new superintelligence alignment team. The team will be co-led by OpenAI co-founder Ilya Sutskever and Head of Alignment Jan Leike, with researchers from other OpenAI units rounding it out.
OpenAI says it will be earmarking 20% of its compute resources for the new team and will build on its previous alignment research to get a head start. The firm is adopting a three-pronged strategy to create a “human-level automated alignment researcher” that can “iteratively align superintelligence.”
OpenAI stated that it would achieve this by developing a scalable training method, validating the resulting model, and stress testing its entire alignment pipeline with adversarial methods. Although the plan looks feasible on paper, OpenAI concedes that success is far from certain, but considers it a risk worth taking.
“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” OpenAI remarked.
In June, OpenAI launched a $1 million grant program to support researchers building projects at the intersection of cybersecurity and AI. The fund is focused on “attack-minded” projects, with successful applicants receiving up to $10,000 in direct funding.
OpenAI faces increasing regulatory scrutiny
Following the launch of ChatGPT and its successor model GPT-4, OpenAI faced stiff opposition from regulators in the EU and was temporarily banned in Italy. Consumer groups and critics pointed to the risks the generative AI platform poses to the finance, Web3, security, news, and education sectors.
In the U.S., the company faces a class action lawsuit over the alleged illegal scraping of millions of individuals’ personal data to train its AI models. The plaintiffs allege that OpenAI breached privacy and copyright laws by failing to obtain those individuals’ consent.
To smooth strained relations with regulators, OpenAI CEO Sam Altman met with EU authorities in Brussels to speak on the downsides of overregulation. Altman has since toured over 16 cities across three continents as the firm navigates the minefield of regulatory uncertainty.