
California waters down AI safety bill to appease industry opposition


California lawmakers have amended a bill that would hold artificial intelligence (AI) firms accountable for the harm caused by their products after the original draft received significant opposition from the industry, including AI firm Anthropic.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047) seeks to protect whistleblowers and to empower the state of California to intervene if it has reason to believe an AI-related catastrophe is going to occur.

However, California State Senator Scott Wiener (D-CA), the bill’s sponsor, acknowledged that changes had been made, citing input from San Francisco-based AI safety and research company Anthropic.

“While the amendments do not reflect 100 percent of the changes requested by Anthropic – a world leader on both innovation and safety – we accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” said Wiener, in an August 15 statement.

Originally, the bill would have allowed the state to sue firms for negligence over inadequate safety practices, even if the violations didn’t result in a “catastrophic event.” It would have also created a government oversight board responsible for implementing and enforcing safety practices.

After negative feedback from the tech industry, including a comprehensive list of suggestions from Anthropic, Wiener claimed his office had now found a happy medium:

“These amendments build on significant changes to SB-1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.”

Two of the provisions that Anthropic took particular issue with were that AI companies could be sued prior to the establishment of harm and the creation of a new “Frontier Model Division” to police state-of-the-art AI models.

But it wasn’t just the industry that voiced concerns. Congressional Representative Zoe Lofgren (D-CA) wrote to Wiener on August 7, warning, “there is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California.”

This would be a major blow to a state currently home to 35 of the top 50 AI companies in the world, according to Governor Gavin Newsom’s (D-CA) executive order last September, which called for studying AI technology’s development, use and risks.

In the end, after yielding to pressure, the changes to SB-1047 scale back enforcement on several fronts. Penalties have been limited, such as the injunctive option to require the deletion of models; criminal perjury provisions for lying about models were dropped, on the grounds that existing law against lying to the government is adequate; the language that would have created a Frontier Model Division is gone, though some of its proposed responsibilities will be handed to other government bodies; and the legal standard by which developers must attest to compliance has been lowered from “reasonable assurance” to “reasonable care.”

Despite these compromises, SB-1047 would still allow the state to hold any AI developer responsible for harm caused by their products, specifically “mass casualties or at least five hundred million dollars ($500,000,000) of damage.”

It’s extremely difficult to predict all the ways an AI model may cause harm, but with the threshold set that high, it seems hard to argue that a developer whose AI results in such severe harm shouldn’t be, at least, among those held answerable.

The bill now goes to a final Assembly vote, to be held before August 31. If it passes and the governor does not veto it, technology firms operating in California will face a new regulatory environment.

For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

