
US Space Force bans generative AI tools to prevent data leaks

The U.S. Space Force has expressed concerns over data risks associated with using generative artificial intelligence products in its operations, effectively banning employees from using the tools.

The agency shared its new stance via an internal memo, citing concerns that proprietary data may fall into the wrong hands through reliance on generative AI platforms. According to a Reuters report, the ban is intended as a temporary measure while the agency explores new methods to mitigate “AI aggregation risks.”

Going forward, U.S. Space Force employees will be barred from using OpenAI’s ChatGPT, Google’s Bard, and other large language models (LLMs) on official computers. However, there is no restriction on employees using the technology for personal purposes.

“A strategic pause on the use of Generative AI and Large Language Models within the U.S. Space Force has been implemented as we determine the best path forward to integrate these capabilities into Guardians’ roles and the USSF mission,” said the agency in a statement.

The U.S. Space Force acknowledged that generative AI has great potential to revolutionize the global workforce, enhancing employees’ “ability to operate at speed.” The technology has already found use cases in finance, Web3 auditing, healthcare, education, and artistic endeavors.

Aware of the potential of generative AI, the Space Force’s memo creates room for a handful of employees to continue relying on the tools with the consent of the agency’s Chief Technology and Innovation Office.

In the coming weeks, the Space Force said it will roll out comprehensive guidance for generative AI usage. Pentagon offices, including the Space Force, have formed an AI task force to probe ways to ensure the technology’s safe, responsible, and strategic use.

Concerns over the data handling practices of generative AI platforms have soared to new highs, with regulatory agencies scrutinizing their activities. Poland’s Personal Data Protection Office (UODO) announced the start of an investigation into OpenAI over data processing in “an unlawful and unreliable manner.”

“The development of new technologies has to respect the rights of individuals under, inter alia, the GDPR. It is the task of the European data protection authorities to protect EU citizens from the negative effects of information processing technologies,” said UODO Deputy President Jakub Groszkowski.

A growing trend for enterprises

While the U.S. Space Force is the first government agency to impose restrictions on the use of generative AI, private enterprises have taken similar steps to curtail its use. Samsung banned employees from using ChatGPT over fears of a data leak after a staff member entered source code on the platform.

Following Samsung’s lead, technology and financial firms, including Apple, Goldman Sachs, and Amazon, have prohibited staff from using generative AI platforms. Conversely, professional services firm KPMG has taken proactive steps to integrate AI across its verticals, investing $2 billion in Microsoft’s AI research.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI, ChatGPT, and Blockchain | CoinGeek Roundtable with Joshua Henslee


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.