
OpenAI's new team will tackle AI risks amid growing concerns of misuse

OpenAI has confirmed plans to launch a new team dedicated to countering the risks posed by artificial intelligence (AI) systems.

The “Preparedness” unit will focus on mitigating the downsides associated with “frontier” AI models. In a company blog post, OpenAI said it believes future models will exceed the capabilities of present systems in the coming years, opening up new challenges for society.

Led by Aleksander Madry, OpenAI’s new Preparedness team will develop guardrails against risks spanning individualized persuasion, cybersecurity, autonomous replication and adaptation, and chemical, biological, radiological, and nuclear (CBRN) threats.

“We take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence,” OpenAI said. “To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness.”

To achieve its goal, the team will tackle several burning questions, including how bad actors could exploit stolen frontier model weights and how to roll out a “robust framework for monitoring.” OpenAI says the new outfit will begin internal red teaming for frontier models while conducting capability assessments and evaluations.

Going forward, the Preparedness team will create a Risk-Informed Development Policy (RDP) to be consistently updated in line with industry requirements. The RDP will build upon OpenAI's previous risk mitigation processes to create a governance structure to promote “accountability and oversight” throughout the process.

OpenAI says it is currently recruiting to fill the ranks of its new team, seeking individuals with diverse technical backgrounds. Alongside the job listings, the AI developer has launched a Preparedness Challenge to identify less obvious areas of concern around AI misuse.

“We will offer $25,000 in API credits to up to 10 top submissions, publish novel ideas and entries, and look for candidates for Preparedness from among the top contenders in this challenge,” OpenAI stated.

In July, OpenAI announced a research team to address the challenge of controlling superintelligent AI systems, predicting the arrival of highly advanced AI before the end of the decade.

A dash for proper guardrails

Alongside the rush for technical innovation, leading AI firms are pushing to establish proper safeguards for responsible usage. In July, a group of technology firms made several voluntary commitments for responsible AI use focused on trust, security, and safety.

OpenAI pledged to support the deployment of AI in cybersecurity with a $1 million grant, while Google (NASDAQ: GOOGL) invested $20 million in research efforts to promote debate on “public policy solutions” for AI. Global regulators are also stepping up attempts to roll out frameworks for the control of AI and other emerging technologies, aiming to keep pace with innovation.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.