
OpenAI, maker of the generative AI platform ChatGPT, has been dragged to court in California over allegations that it illegally uses private information to train its AI model.

The lawsuit, filed on June 28, alleges that OpenAI used the private data of millions of individuals to train ChatGPT without seeking their express consent. According to the court filing, the plaintiffs claim that the data in question were gleaned from blog posts, social media comments, and even meal recipes posted on the internet.

“By collecting previously obscure and personal data of millions and permanently entangling it with the Products, Defendants knowingly put Plaintiffs and the Classes in a zone of risk that is incalculable — but unacceptable by any measure of responsible data protection and use,” the filing read.

Clarkson Law Firm is handling the class action lawsuit and mentions five OpenAI-related entities as defendants. Microsoft Corporation (NASDAQ: MSFT), an early investor in OpenAI, was also mentioned as a defendant in a case where the plaintiffs demand a jury trial.

The plaintiffs allege that OpenAI’s misuse of data breached the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, and Illinois’ Biometric Information Privacy Act, among others.

The class-action suit further claims that OpenAI’s actions amounted to negligence, invasion of privacy, unjust enrichment, failure to warn, and conversion. Aside from obtaining data without consent, the plaintiffs argue that ChatGPT is inappropriate for children and that OpenAI deceptively tracked children without their consent.

“While holding themselves out publicly as respecting privacy rights, Defendants tracked the information, behaviors, and preferences of vulnerable children solely for financial gain in violation of well-established privacy protections, societal norms, and the laws encapsulating those protections,” the filing states.

In early June, Japanese lawmaker Takashi Kii predicted an avalanche of copyright cases stemming from AI platforms’ rogue data collection methods.

Hurtling toward AI regulations

The surge in AI adoption has forced governments worldwide to scramble for new regulations to ensure the safe usage of the technology. Currently, the European Union (EU) is finalizing its AI Act, while governments elsewhere are holding public consultations to decide on appropriate approaches.

Amid the regulatory scramble, consumer groups are urging governments to step up the pace in the face of graver risks posed by AI in health, finance, mass media, and Web3. Others are calling for a moratorium on AI development until necessary safeguards have been put in place by regulators.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

CoinGeek Conversations with Jerry Chan: Does AI know what it’s doing?
