
California Governor Gavin Newsom has signed into law a bill that aims to install “commonsense guardrails” on the development of frontier artificial intelligence (AI) models, including increased transparency and protection for whistleblowers.

Senate Bill 53, also known as the “Transparency in Frontier Artificial Intelligence Act (TFAIA),” was introduced on January 7 by California State Senator Scott Wiener (D-San Francisco) to promote the “responsible development” of large-scale AI systems.

According to Wiener, the bill aims to address the “substantial risks” posed by advanced AI, while also supporting California’s world-leading AI development sector by providing low-cost computing resources to researchers and start-ups.

After several rounds of debate and amendments, Senate Bill 53 passed the state Senate in May, followed by the Assembly in September, after which it was sent to Governor Newsom’s desk.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance,” said Newsom. “AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”

The legislation was free to move forward thanks to the U.S. Senate's 99-1 vote in July to strip provisions from President Trump's "Big Beautiful Bill" that would have prevented states from enacting AI regulations.

“The Senate came together tonight to say that we can’t just run over good state consumer protection laws,” Sen. Maria Cantwell (D-WA) said at the time. “States can fight robocalls, deepfakes and provide safe autonomous vehicle laws. This also allows us to work together nationally to provide a new federal framework on Artificial Intelligence that accelerates U.S. leadership in AI while still protecting consumers.”

What does SB 53 do?

In terms of safeguards, SB 53 establishes new requirements for frontier AI developers around transparency, accountability, and responsiveness.

Specifically, it requires large frontier developers to publish a framework on their websites describing how they have incorporated national standards, international standards, and industry-consensus best practices; it creates a new mechanism for companies and the public to report potential critical safety incidents to California's Office of Emergency Services; it protects whistleblowers who disclose significant health and safety risks posed by frontier models; and it creates a civil penalty for noncompliance.

Additionally, the bill directs the California Department of Technology to recommend updates to the law annually, based on stakeholder input, technological developments, and international standards.

When it comes to supporting innovation, SB 53 also establishes a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster.

Newsom’s office stated that the consortium, known as ‘CalCompute,’ would “advance the development and deployment of AI that is safe, ethical, equitable, and sustainable by fostering research and innovation.”


California’s AI balancing act

In 2024, Governor Newsom vetoed a previous attempt at AI legislation, SB 1047, also authored by Wiener, which would have implemented extensive safety protocols for powerful AI systems.

In a statement, Newsom wrote that, while the bill was “well-intentioned,” it was too focused on the largest models and overlooked the risks posed by smaller models or systems deployed in particularly risky environments.

“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” said Newsom. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”

Since then, calls for AI safeguards have only grown louder, adding to the pressure on the state to regulate the technology.

On September 22, over 200 prominent politicians, public figures, and scientists released a letter calling for urgent binding international “red lines” to prevent dangerous AI use. They warned that “AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world.”

However, California lawmakers have also had to balance these calls for caution and guardrails with not wanting to harm one of the state’s and country’s golden geese.

California is home to four of the five largest companies active in the sector by market capitalization: Nvidia (NASDAQ: NVDA), Apple (NASDAQ: AAPL), Alphabet (Google) (NASDAQ: GOOGL), and Meta (NASDAQ: META). It also boasts the headquarters of OpenAI, Anthropic, and numerous smaller developers; according to a report published by Forbes in April, the state hosts 32 of the top 50 AI companies worldwide.

To meet this challenge of protecting the public without stifling innovation, Governor Newsom convened a group of leading AI academics and experts earlier this year to study the topic. The result was a "first-in-the-nation report" on sensible AI guardrails, "based on an empirical, science-based analysis of the capabilities and attendant risks of frontier models."

The report included recommendations on ensuring evidence-based policymaking and argued for balancing the need for increased transparency with considerations such as security risks.

According to the Governor’s office: “SB 53 is responsive to the recommendations in the report — and will help ensure California’s position as an AI leader.”

It added that the state is “balancing its work to advance AI with commonsense laws to protect the public, embracing the technology to make our lives easier and make government more efficient, effective, and transparent.”

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of that data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
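For illustration only, here is a minimal sketch of the tamper-evidence idea behind that immutability claim: an append-only log in which each entry's hash chains to the previous one, so any later edit to a record is detectable. This is not any particular vendor's implementation, and the record fields are assumptions made up for the example.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash,
    chaining entries so a later edit breaks verification."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append(log: list, record: dict) -> None:
    # The first entry chains to a fixed all-zero "genesis" hash.
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"record": record, "hash": record_hash(record, prev)})

def verify(log: list) -> bool:
    # Recompute every hash in order; any mismatch means tampering.
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"source": "dataset-a", "rows": 1000})  # hypothetical data inputs
append(log, {"source": "dataset-b", "rows": 250})
assert verify(log)

log[0]["record"]["rows"] = 999  # tamper with an earlier entry
assert not verify(log)
```

A public blockchain extends this same chaining idea across many independent nodes, which is what makes the log practically immutable rather than merely tamper-evident on one machine.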


Watch: Adding the human touch behind AI
