
OpenAI says it has disrupted the operations of five state-affiliated threat actors that leveraged ChatGPT to find vulnerabilities in systems for criminal purposes.

In a company blog post, OpenAI disclosed a partnership with Microsoft (NASDAQ: MSFT) Threat Intelligence to crack down on state-backed hacking organizations, including China-affiliated threat actors Charcoal Typhoon and Salmon Typhoon, Iran’s Crimson Sandstorm, and Russia-affiliated Forest Blizzard.

The bad actors reportedly relied on ChatGPT and other OpenAI services to spot coding errors in enterprise systems before installing malware and other trojans, with financial, health, and educational institutions as their key targets.

The Chinese-backed entities allegedly used OpenAI to translate technical papers, debug code, and gain insights into the cybersecurity tools employed by financial institutions. OpenAI’s report indicated a trend of using generative AI tools to create phishing campaigns and keep up with the activities of regional security agencies.

North Korea-affiliated group Emerald Sleet reportedly used OpenAI’s services for scripting tasks and phishing campaigns focused on the Asia-Pacific region. OpenAI also disclosed that Forest Blizzard turned to ChatGPT for research into radar imaging technology and satellite communication.

OpenAI has since taken action against accounts affiliated with the state-backed hacking syndicates interacting with its platforms, including blacklisting and outright bans. However, it downplayed the value of its services to state-backed hacking entities, likening their capabilities to those of pre-existing non-AI-powered tools.

“We terminated accounts associated with state-affiliated threat actors,” read the report. “Our findings show our model offers only limited, incremental capabilities for malicious cybersecurity tasks.”

OpenAI’s statement pointed to public transparency and industry collaboration as key to preventing misuse by bad actors while learning from recent incidents of abuse. The company says it is investing heavily in its safety teams to “pursue leads” and spot adversarial use of its platforms.

Wreaking havoc in Web3

State-backed actors have left a trail of security breaches in Web3, raking in over $1 billion in 2022 from their malware attacks. North Korea’s Lazarus Group became infamous following its attacks on Harmony, CoinEx, and Atomic Wallet, netting millions of dollars in digital assets.

Security experts note that the loot from these attacks is used to fund North Korea’s nuclear weapons program in the face of stifling economic sanctions. To limit their operations, security agencies are promoting cross-border collaboration and information sharing while blacklisting known members of the hacking organizations.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI truly is not generative, it’s synthetic
