ByteDance, the parent company of the popular social media app TikTok, recently found itself at the center of controversy over its use of OpenAI's technology.

While creating its large language model (LLM), ByteDance violated OpenAI's terms of service, leading to its suspension from OpenAI's API.

“All API customers must adhere to our usage policies to ensure that our technology is used for good. While ByteDance’s use of our API was minimal, we have suspended their account while we further investigate. If we discover that their usage doesn’t follow these policies, we will ask them to make necessary changes or terminate their account,” said Niko Felix, a spokesperson for OpenAI.

ByteDance used OpenAI’s model outputs as a foundational element for its own AI project, codenamed Project Seed, which aims to develop an advanced LLM, potentially rivaling OpenAI’s offerings.

The intended applications ranged from enhancing user experiences on platforms like TikTok to developing sophisticated chatbot technologies for various consumer and business applications.

Although building on infrastructure created by a market leader is not uncommon among smaller startups, it does not go unnoticed when a company as large as ByteDance does it. OpenAI quickly took action and suspended ByteDance's access to its APIs.

ByteDance’s AI development path

ByteDance’s ban from using OpenAI’s API has significant implications. First, it highlights the ethical boundaries in AI development, emphasizing the importance of adhering to terms of service agreements.

Second, it raises questions about the future direction of Project Seed and ByteDance's AI aspirations. Without access to OpenAI's resources, ByteDance may need to seek alternative routes or develop its technology in-house, potentially delaying its AI initiatives.

Jodi Seth, the Global Head of Corporate & Policy Communications at ByteDance, has released a statement that downplays the impact of the ban on ByteDance’s AI ambitions.

The initial stages of Project Seed involved using GPT-generated data to annotate the model, Seth said, noting that this data was phased out from ByteDance’s training datasets as early as mid-2023.
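
To make Seth's description concrete: using GPT-generated outputs to annotate data for another model typically means sending raw examples to the API and storing the returned labels as training targets. The sketch below is a generic, hypothetical illustration of that pattern, not ByteDance's actual Project Seed pipeline; the model name, prompt, and label set are assumptions made for the example.

```python
# Illustrative sketch only, not ByteDance's Project Seed pipeline:
# a generic pattern for asking a GPT chat model to label raw text so the
# labels can later serve as training data for a separate model.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LABELS = ("positive", "negative", "neutral")  # hypothetical label set

def annotate(texts):
    """Return (text, label) pairs, with labels produced by a GPT model."""
    annotated = []
    for text in texts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[
                {
                    "role": "system",
                    "content": "Classify the user's text as one of: "
                               + ", ".join(LABELS)
                               + ". Reply with the label only.",
                },
                {"role": "user", "content": text},
            ],
        )
        annotated.append((text, response.choices[0].message.content.strip()))
    return annotated

if __name__ == "__main__":
    print(annotate(["The new update is fantastic!", "The app keeps crashing."]))
```

Generating labels or synthetic training data in bulk this way is the kind of use OpenAI's terms restrict when the downstream goal is a model that competes with OpenAI's own.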

“ByteDance is licensed by Microsoft (NASDAQ: MSFT) to use the GPT APIs,” Seth remarked, highlighting the company’s legitimate use of OpenAI’s technology in certain areas. “We use GPT to power products and features in non-China markets, but use our self-developed model to power Doubao (an AI chatbot), which is available only in China.”

Geopolitical tensions in AI development

This ByteDance incident underscores a crucial lesson for large, globally recognized companies. The approach to innovation must be more measured and responsible than the “move fast and break things” ethos often adopted by startups.

For conglomerates like ByteDance, the stakes are higher, and competitors and policymakers around the globe closely watch their actions.

On top of that, giants like ByteDance need to navigate more than the technical and ethical complexities of the industry; they also need to consider the geopolitical nuances of developing revolutionary technology.

The ByteDance ban from OpenAI reflects a growing trend where competition and national interests intersect in the tech world. Many American companies are wary of foreign firms, particularly those based in China, driven by concerns over market dominance and geopolitical rivalry.

This sentiment, often resulting in more stringent scrutiny and quicker punitive measures, may explain why OpenAI quickly suspended ByteDance from accessing its API.

As the AI industry continues to expand, incidents like ByteDance's breach of OpenAI's terms of service and subsequent ban serve as a reminder of the intricate balance between innovation, ethics, and global competition, a balance that policymakers around the globe are trying to strike and that companies in the industry are attempting to navigate.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Fixing social media with the BSV blockchain
