
OpenAI, Meta, Google commit to voluntary AI safeguards

Artificial intelligence (AI) trailblazers, including OpenAI, Google (NASDAQ: GOOGL), and Meta (NASDAQ: META), have committed to the voluntary AI guardrails proposed by the Biden administration.

The White House recently confirmed that the companies had pledged to develop AI within the safety, security, and trust guardrails.

Anthropic, Inflection, Amazon (NASDAQ: AMZN), and Microsoft (NASDAQ: MSFT) also joined the pledge, which the White House says will encourage the companies to uphold the highest standards in AI development and to ensure that rapid innovation doesn’t come at the expense of Americans’ rights and safety.

A key pledge is that the companies will ensure their AI products are safe before introducing them to the public. They must commit to internal and external security testing of their systems before release, and to sharing information across the industry and with authorities.

This commitment will prove challenging to implement. In pursuit of first-mover advantage, AI companies have raced to get their products to market first, sometimes at the expense of user safety. For instance, OpenAI co-founder and CEO Sam Altman released ChatGPT to the public despite his engineering team’s concerns about its safety.

The companies also pledged to build systems that put security first and earn the public’s trust. They must “commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.” They must also invest in reducing bias, protecting privacy, and publicly reporting their systems’ capabilities and limitations.

In his remarks, President Joe Biden said his government is committed to protecting users while promoting innovation.

“These commitments are real, and they’re concrete. They’re going to help the industry fulfill its fundamental obligation to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our shared values,” he stated.

All for show

The pledges will likely do little to shape the future of AI development. First, they are voluntary and can’t be enforced by any authority. Second, the seven companies have already made these commitments, or a version of them, in their own stated company practices.

OpenAI, for instance, claims to be committed to transparency on how it collects and utilizes data to train its AI models. However, it’s facing a class-action lawsuit in California for violating the copyrights and privacy of millions of people by scraping the internet for their data.

OpenAI founder Sam Altman is a vocal critic of increased regulations on AI. He even threatened to exit Europe over proposed new rules, a threat he later retracted.

However, OpenAI voiced its support for the new commitments. Anna Makanju, the company’s vice president of global affairs, described the pledge as “part of our ongoing collaboration with governments, civil society organizations, and others around the world to advance AI governance.”

Meta also supports the voluntary commitments, according to its president of global affairs, Nick Clegg. They “are an important first step in ensuring responsible guardrails are established for AI,” he said.

But according to Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University, it’s all for show.

“The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.,” he told the New York Times.

CoinGeek Weekly Livestream: The future of AI Generated Art on Aym
