OpenAI has developed a tool that can detect ChatGPT-generated content with 99.9% accuracy, but the company has no plans to release it to the public.

The California company has long hinted that it was researching technology that could detect artificial intelligence (AI) content, while leading its clients to believe that such technology was years away. However, according to insiders who spoke to the Wall Street Journal (WSJ), the tool has been ready for months, but the company worries that it could make its products less appealing.

Detecting AI-generated content has become a significant challenge as adoption has soared. Legislators have drafted laws that would require AI developers to include watermarks and other distinctive features in such content, but none has taken hold.

The challenge is especially acute in some fields, such as education, where a recent study found that 60% of middle- and high-school students use AI to help with schoolwork.

According to OpenAI insiders, this challenge was solved over a year ago, but the company doesn’t plan on releasing the tool to the public.

“It’s just a matter of pressing a button,” said one of the sources.

OpenAI says the delay is necessary to protect users, as the tool presents “important risks.”

“We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI,” a company spokesperson told WSJ.

The firm also claimed that if the technology were made available to everyone, bad actors could decipher the technique and develop workarounds.

However, sources say that the real motive is user retention. A company survey last year found that 70% of ChatGPT users were not in favor of the new tool, with one in three saying they would quit the chatbot and turn to its rivals.

Since then, senior executives have held the tool back, claiming it wasn’t ready for a public launch. In a meeting two months ago, the top brass stated that the tool, which relies on watermarking outputs, was too controversial and that the company should explore other options.
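Neither OpenAI nor the WSJ report describes the exact technique, but statistical watermarking of language-model output is well documented in public research: at generation time, the sampler is nudged toward a pseudorandom “green list” of tokens derived from the preceding context, and a detector later counts how many tokens land on that list and scores the excess. The Python sketch below is purely illustrative of that general idea, assuming a toy whitespace tokenizer and a hash-based green list; it is not OpenAI’s or Google’s implementation.

```python
import hashlib
import math

# Fraction of the vocabulary assumed to be "green" at each step
# (an assumption of this sketch, not a published OpenAI parameter).
GREEN_FRACTION = 0.5


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly decide whether `token` is on the green list for this step.

    A production scheme would partition the model's real vocabulary with a keyed
    pseudorandom function; this toy version hashes the (previous token, token) pair.
    """
    digest = hashlib.sha256(f"{prev_token}\x00{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION


def watermark_z_score(text: str) -> float:
    """Count green tokens and return a z-score against the unwatermarked baseline.

    Ordinary human text should land near z = 0; text generated with a green-list
    bias drifts toward a large positive z.
    """
    tokens = text.split()  # toy whitespace tokenizer; real detectors reuse the model's tokenizer
    n = len(tokens) - 1  # number of (previous token, token) pairs scored
    if n < 1:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev


if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog"
    print(f"watermark z-score: {watermark_z_score(sample):.2f}")  # near 0 for human text
```

In schemes of this kind, unmarked text scores near zero while watermarked output produces a z-score far above the threshold, which is what allows detection accuracy in the range OpenAI describes.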

OpenAI’s rivals, led by Google (NASDAQ: GOOGL), have not fared any better. The search engine giant, whose Gemini LLM is one of the industry leaders, has developed a similar tool, dubbed SynthID, but has yet to launch it publicly.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Transformative AI applications are coming
