White House unveils AI blueprint for US federal agencies

The White House has released a new memorandum for federal agencies with mandates around the implementation and risk management of artificial intelligence (AI) in United States government offices.

The memorandum, titled “M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” was released on March 28, 2024, by the Executive Office of the President through the Office of Management and Budget.

Like many AI-related documents released by government bodies and global organizations in recent years, this new memorandum provides a framework for AI governance, innovation, and risk management across U.S. federal agencies, with the hope that some of these practices will be adopted around the world.

The memorandum is designed to ensure that federal agencies adopt AI technologies in a way that mitigates risk, encourages innovation, and accounts for ethical concerns.

Here are the key components and directives of the memorandum that federal agencies are urged to follow and implement by December 1, 2024.

The government’s AI initiatives

AI Governance: The memorandum mandates the appointment of a Chief AI Officer (CAIO) within each federal agency to oversee AI governance, innovation, and risk management. CAIOs will be tasked with coordinating AI initiatives within their respective agencies and ensuring that AI technologies are developed and used in ways that align with federal policies and principles. In addition, CAIOs are responsible for developing strategies for AI use, maintaining an inventory of AI use cases, and working closely with existing officials on AI-related issues that affect their departments.

Responsible AI implementation: The memorandum encourages federal agencies to take ethics into account as each department builds out and deploys its AI capabilities, particularly when it comes to enhancing IT infrastructure, data management, cybersecurity, and AI-related workforce development. The memorandum also stresses the importance of sharing AI resources, such as models, code, and data, across the federal government to foster collaboration and catalyze innovation.

Risk management for AI: A significant portion of the 34-page memorandum is dedicated to risk management practices for AI, especially for AI applications that could impact public rights and safety. To that end, the memorandum requires federal agencies to implement “minimum risk management practices,” including testing for bias and fairness, ongoing monitoring of AI systems, and regular evaluations to ensure AI applications do not undermine public trust or safety.

Global AI policy surge: The challenge of enforcement

Recently, a wave of AI policies and regulations has either been approved or is making its way through legislative bodies around the world without much resistance.

We have seen the EU pass the AI Act, the United Nations adopt a draft resolution on AI, and the Biden administration release an AI executive order, and now we are seeing the White House release its latest AI memorandum.

It is clear that each country is trying to show the world that it is a serious player in the AI race, hoping that the AI policy and regulation it creates will either become a global standard or, at the very least, inspire the AI policies and regulations that follow. But one thing these policies do a very poor job of is explaining how they will be enforced.

Most AI policies and regulations fall short because they assume industry participants are willing to disclose information about their AI models and their current and future use cases. Unless a highly public-facing company blatantly violates one of the laws or mandates, it will be difficult to pinpoint non-compliant companies.

What doesn’t help is the secretive nature of this highly technical industry. If a startup or company truly has an innovative AI model, it most likely will not want to disclose the nature of that model or its use cases, since doing so could put the company at a disadvantage and allow competitors to effectively copy its technology. For these reasons, among many others, the comprehensive AI laws and regulations now coming to fruition are toothless in many cases.

In other, more specific instances, AI laws are useful, especially those that impose penalties on individuals using AI to cause harm. Earlier this year, the FCC issued a ruling that made AI-generated robocalls illegal under the Telephone Consumer Protection Act. This came shortly after many New Hampshire residents received an AI-generated audio message that mimicked President Joe Biden’s voice and encouraged them not to vote. Thanks to the new FCC ruling, individuals who use generative AI to manipulate, target, or misinform others via robocalls can face regulatory fines of more than $23,000 per call.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: How blockchain will keep AI honest

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.