
DeepSeek’s unconventional approach to fundraising
In a surprising and uncommon move within the artificial intelligence (AI) industry, DeepSeek, the AI company that made a splash for allegedly offering a highly capable model at a fraction of the cost of its United States competitors, has announced that it isn’t actively seeking venture capital funding.
DeepSeek cites three primary reasons for avoiding venture capital (VC) money. First, it doesn't want to dilute ownership or cede control of the company. Second, it fears that investment from Chinese firms could make potential global customers even more skeptical about the platform's data privacy and security. Third, it claims it simply hasn't needed to raise external funding; so far, DeepSeek says it has relied on profits from its time as a quantitative hedge fund, High-Flyer, before pivoting to AI.
While it's not unusual for a company to bootstrap or minimize outside investment, DeepSeek's approach seems illogical given the financial demands of AI development, especially considering that DeepSeek's main product is open source. Running an AI operation is extremely expensive, and even the biggest tech giants in the U.S. struggle to make a profit from their AI operations. This raises the question of how DeepSeek keeps its operations afloat. If it keeps improving its models, costs will only rise; serving more users will drive up expenses further still. Without turning a profit or securing outside capital, the company is effectively racing to the bottom of its bank account unless it has alternative funding strategies that don't require giving up ownership.
Time will tell whether this gamble pays off. Either DeepSeek will prove that it can sustain itself through unconventional funding methods, or it will run out of cash and be forced to reconsider its stance on outside investment.
A surge in AI legislation
Calls for more regulation have been rare this year, especially in high-growth industries. Yet despite the federal government's deregulatory stance so far, AI legislation at the state level is surging: in just the first three months of this year, 838 AI-related bills have been introduced, surpassing the 742 proposed regulations introduced throughout all of 2024.
So why has there been a sudden increase? I’d say there are two key factors behind this. First, AI’s presence in daily life has become impossible to ignore. While AI has existed for decades, it’s now more visible and accessible to consumers than ever before. This level of adoption naturally calls for legislative updates to address consumer privacy, data security, and ethical concerns.
Second, AI regulation presents a political opportunity. Legislators who position themselves as early experts in AI policy stand to gain prestige and career advantages. Given how new the consumer-facing AI market is, those who lead the conversation now have a chance to shape future policies and secure key positions on technology committees.
It's worth noting that most of this legislation is happening at the state level. However, the Trump administration has shown little interest in heavy-handed regulation, particularly if it could slow down AI innovation. If federal lawmakers decide to act, it could take the form of overriding state-level rules to prevent what industry leaders see as patchwork regulation that stifles growth. The White House's stance on AI policy remains unclear, but with legislative momentum building at the state level, pressure for federal action is only increasing.
US AI policy proposals from Google and OpenAI
As the March 15 deadline for submitting AI policy recommendations to the Office of Science and Technology Policy (OSTP) approaches, major tech players like Google (NASDAQ: GOOGL) and OpenAI have proposed ideas for shaping the U.S. AI landscape.
Both companies emphasized the need for government support in several areas. Here are a few of the highlights of each proposal.
On AI infrastructure investment, Google and OpenAI argue that the U.S. must expand infrastructure and energy resources to support AI model development. Additionally, both recommend that the government begin adopting AI-powered solutions within federal agencies.
Both proposals also touch on regulation; OpenAI, in particular, warns against a fragmented, state-by-state regulatory landscape. The company advocates for federal regulation that overrides state-level AI laws, arguing that inconsistent state policies could slow AI innovation.
OpenAI is also pushing for AI models to be legally permitted to train on copyrighted material under the fair use doctrine. This has been a contentious issue for them, with OpenAI already facing lawsuits from publications like The New York Times over unauthorized data usage.
Finally, both companies stressed that the U.S. must actively promote its approach to AI on the global stage. They call for policies that support U.S. AI firms in international markets while balancing export controls to prevent rivals like China from gaining an edge.
Google submitted a 12-page proposal, while OpenAI provided a 15-page document. Whether the Trump administration will ultimately adopt their recommendations remains to be seen. However, given how the administration has operated so far, placing a high value on its relationships with the tech industry, there’s a strong possibility that big tech will have significant influence over the final AI Action Plan.
In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Adding the human touch behind AI