A bipartisan bill aimed at supporting U.S. innovation in artificial intelligence (AI) technology was introduced in the Senate last week. The ‘Future of AI Innovation Act’ proposes a number of measures, including setting up the AI Safety Institute to develop voluntary guidelines and standards and requiring federal science agencies to make datasets publicly available.
Last Thursday, United States Senators Todd Young (R-Ind.), Maria Cantwell (D-Wash.), Marsha Blackburn (R-Tenn.), and John Hickenlooper (D-Colo.), all members of the Commerce Committee, introduced the bipartisan ‘Future of AI Innovation Act,’ which aims to consolidate U.S. leadership in AI and other emerging technologies through enhanced private sector collaboration.
The legislation’s primary focus is strengthening partnerships between government, business, civil society, and academia “to promote robust long-term innovation in AI.”
To achieve this, the bill proposes a number of measures, including establishing the U.S. AI Safety Institute, to be housed within the National Institute of Standards and Technology (NIST), and authorizing it to develop voluntary guidelines and standards in cooperation with the private sector and federal agencies.
“The Future of AI Innovation Act is critical to maintaining American leadership in the global race to advance AI,” Young said. “This bipartisan bill will create important partnerships between government, the private sector, and academia to establish voluntary standards and best practices that will ensure a fertile environment for AI innovation while accounting for potential risks.”
The legislation builds upon Young and Cantwell’s original FUTURE of AI Act, which created the National AI Advisory Committee (NAIAC), a committee of outside experts who make recommendations to the government on AI.
The Future of AI Innovation Act was drafted based on recommendations from NAIAC reports. Alongside the establishment of the AI Safety Institute, other key proposals in the bill include:
- directing federal science agencies to make curated datasets available for public use, to accelerate new advancements in AI applications.
- creating grand challenge prize competitions to spur private sector AI solutions and innovation.
- creating “testbed programs” between NIST, the National Science Foundation (NSF), the Department of Energy (DOE), and the private sector to develop security risk tools and testing environments for companies to evaluate their systems.
- and forming a coalition with U.S. allies to cooperate on global standards and create a multilateral research collaboration between scientific and academic institutions worldwide.
“The NIST AI Safety Institute, testbeds at our national labs, and the grand challenge prizes will bring together private sector and government experts to develop standards, create new assessment tools, and overcome existing barriers,” Cantwell said. “It will lay a strong foundation for America’s evolving AI tech economy for years to come.”
This sentiment was echoed by Blackburn, who added that the bill “would also require the identification of regulatory barriers to AI innovation and strengthens our national posture in standard setting bodies – making sure the government helps, rather than hinders, technological advancement.”
This emphasis on ensuring the government doesn’t stand in the way of innovation is perhaps an allusion to the tentative steps already taken to regulate AI in the United States.
US increases focus on AI
In October 2023, President Joe Biden signed an executive order to regulate AI development. It mandated that AI system developers report to the federal government on their progress and introduced new standards for testing before public release.
To help identify potential threats, the order also required that several government agencies submit risk assessments regarding the use of AI in their respective domains. These assessments came from nine federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, and were to be submitted to the Department of Homeland Security (DHS).
These measures were focused more on security concerns around AI and were intended to somewhat rein in developers. However, the executive order also birthed the National Artificial Intelligence Research Resource (NAIRR) pilot program, which officially got up and running in January 2024.
The pilot is a concerted effort to democratize access to advanced AI resources and foster innovation across various sectors by lowering the barrier to entry for advanced AI systems.
NAIRR aims to connect U.S. researchers and educators to the computational, data, and training resources needed to advance AI research. The initiative is spearheaded by the NSF in partnership with 10 other federal agencies and numerous non-governmental partners, including Nvidia (NASDAQ: NVDA), Google (NASDAQ: GOOGL), and Microsoft (NASDAQ: MSFT).
As part of this effort, Biden’s executive order also tasked various federal departments and agencies with a series of actions to increase AI safety, security, and privacy while catalyzing and democratizing innovation, education, and equality in the industry.
Examples of actions included the White House Chief of Staff’s Office convening an AI and tech talent task force; the Technology Modernization Board evaluating ways to prioritize agencies’ adoption of AI; and the Department of Justice (DOJ) convening federal agencies’ civil rights offices to discuss the intersection of AI and civil rights.
The various departments and agencies were given 90 days to complete the actions mandated by the order, and in February, it was announced that they had all complied.
In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Artificial intelligence needs blockchain