
AI vs. the United States Copyright Office

There’s tension brewing between artificial intelligence (AI) companies and the U.S. Copyright Office. This week, the Trump administration abruptly fired Shira Perlmutter, head of the U.S. Copyright Office. The firing came just days after the Office released a report that took a stance against using copyrighted material in AI training models.

For years, AI companies have relied on the “fair use” doctrine to justify scraping copyrighted data to train their large language models. The argument has been that LLMs essentially do what a student does when studying from books: learning, not copying content. But the Copyright Office isn’t buying it. The report stated that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

This is a problem for AI developers whose models are built on top of massive datasets that include copyrighted books, articles, music, and more. The Office’s stance essentially shoots down the industry’s “student learning” analogy, making it harder to justify training future models on scraped content.

While no official reason was given for Perlmutter’s firing, the timing raised suspicions of AI industry influence. It’s no secret that tech giants have the president’s ear. Elon Musk, for instance, has an active relationship with the White House and is the CEO of xAI, a company that would directly benefit from looser copyright restrictions. Her dismissal has led to speculation that the move was less about internal performance and more about appeasing influential figures in the AI space.

If that’s the case, expect the Copyright Office’s tone to shift in the coming weeks, whether through a new statement walking back its earlier stance or a quiet retraction of its enforcement ambitions, as the Office falls in line with the administration’s approach to technology.

Is OpenAI going to IPO?

OpenAI is in talks with its largest investor to renegotiate their existing partnership in a way that clears the path for an Initial Public Offering (IPO).

OpenAI, a nonprofit that owns and operates a capped-profit entity, is facing pressure to become more investor-friendly after recently announcing that it would no longer convert its nonprofit into a for-profit company. The problem is that raising billions of dollars while maintaining a nonprofit mission and a profit cap isn’t attractive to investors. As one source told the Financial Times, the need to shift toward a more traditional corporate structure is “a high-level recognition of what’s required to raise this amount of money,” adding that raising “$40 billion under a capped profit structure is not achievable.”

Right now, OpenAI’s nonprofit parent owns a for-profit subsidiary (a capped-profit LLC), which limits how much money investors can make. While capped returns may have made sense when AI was still in its experimental phase, that structure won’t work now that OpenAI is a global company with high operating costs and expansion goals.

Investors are increasingly asking when they will see a return on their investment, and with more capital needed to fuel OpenAI’s growth, an IPO looks like the optimal path to liquidity for those on the cap table. For that to happen, however, Microsoft (NASDAQ: MSFT), a key stakeholder in OpenAI, has to be on board with the restructuring. The two are reportedly reworking their revenue-sharing and technology-access arrangements, which could produce a structure Microsoft is comfortable signing off on and give OpenAI a path forward that pleases its investors.

Elon Musk’s Grok under fire for South Africa controversy

This week, Musk’s generative AI chatbot Grok came under fire when it began replying to a series of tagged posts and unexpectedly brought up “white genocide” in South Africa.

Grok’s integration with X (formerly Twitter) lets users summon the bot by tagging @grok in post replies; it offers context, verifies facts, and answers follow-up questions. But this week, it went rogue. Instead of staying within the boundaries of user prompts, Grok volunteered unprompted commentary about race relations in South Africa.
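For readers unfamiliar with the pattern, here is a minimal sketch of how a tag-to-summon bot like this typically works. Everything below is a hypothetical illustration: the function names and the guard that keeps replies scoped to the user’s question are assumptions, not X’s or xAI’s actual API.

```python
# Hypothetical sketch of the tag-to-summon pattern. The names here are
# illustrative assumptions, not X's or xAI's real API.

def generate_reply(prompt: str, context: str) -> str:
    # Placeholder for the model call a real bot would make here.
    return f"(model answer to {prompt!r}, given the thread context)"

def handle_mention(post_text: str, thread_context: str) -> str | None:
    """Reply only when summoned, and only to what the user actually asked."""
    prompt = post_text.replace("@grok", "").strip()
    if not prompt:
        return None  # tagged with no question: stay silent, don't volunteer
    return generate_reply(prompt, thread_context)

print(handle_mention("@grok is this claim accurate?", "<thread text>"))
```

The design point is the guard: the bot answers only the question it was tagged with. What made this week’s incident notable is that Grok broke exactly that expectation.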

The issue only lasted a few hours before engineers quietly corrected it, but it went on long enough for screenshots to go viral. Even if we call this a minor issue, the fact that it occurred at all damages the chatbot’s reputation. Any widely noticed mistake calls the entire model into question: people begin to wonder what other mistakes it has made, what mistakes it is making right now, what data it was trained on to produce that output, and whether it will happen again.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
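As a rough illustration of that idea, the sketch below fingerprints training records and anchors the hashes in an append-only ledger so their integrity and ownership metadata can be verified before training. The `InMemoryLedger` class is a stand-in for an actual enterprise blockchain; every name in it is hypothetical.

```python
import hashlib
import json
import time

def fingerprint_record(record: dict) -> str:
    """Deterministically hash a training record (content plus ownership metadata)."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

class InMemoryLedger:
    """Stand-in for an enterprise blockchain: append-only, timestamped digests."""
    def __init__(self):
        self._entries = []

    def anchor(self, digest: str) -> int:
        self._entries.append({"digest": digest, "ts": time.time()})
        return len(self._entries) - 1  # index standing in for a transaction ID

    def verify(self, digest: str) -> bool:
        return any(e["digest"] == digest for e in self._entries)

ledger = InMemoryLedger()
record = {"owner": "Example Corp", "source": "licensed-feed", "text": "..."}
tx_id = ledger.anchor(fingerprint_record(record))

# Before training: re-hash the record and compare against the anchored digest.
# Any tampering changes the hash, so verification fails.
assert ledger.verify(fingerprint_record(record))
```

In a real deployment, the anchor call would submit the digest in a blockchain transaction; the tamper-evidence comes from comparing fresh hashes against the previously anchored ones.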

Watch: How AI transforms social networks with Dmitriy Fabrikant
