
OpenAI, maker of ChatGPT and DALL-E, has announced the launch of a grant program designed to support cybersecurity projects powered by artificial intelligence (AI).

The program will see OpenAI dole out $1 million to cybersecurity defenders around the globe in a three-pronged strategy. OpenAI noted via a blog post that it seeks to quantify the cybersecurity capabilities of AI models, empower defenders and elevate the discourse in the industry.

Details from the post suggest that OpenAI will place a premium on defensive-security projects, hinged on the maxim that “defense must be correct 100% of the time.” The firm clarified that attack-minded projects may be funded later, and that successful projects will receive up to $10,000 in direct funding and API credits.

“While it may be true that attackers face fewer constraints and take advantage of their flexibility, defenders have something more valuable—coordination towards a common goal of keeping people safe,” OpenAI said.

Projects revolving around detecting and mitigating social engineering tactics and spotting security flaws in source code are more likely to receive OpenAI’s funding. Other areas of interest for the firm include the creation of deception technology to misdirect bad actors, assistance in developing threat models, and optimization of patch management processes.

“All projects should be intended to be licensed or distributed for maximal public benefit and sharing, and we will prioritize applications that have a clear plan for this,” the blog post read.

OpenAI’s initiative comes as the company weathers a storm over the safety of using its generative AI platform ChatGPT. Samsung (NASDAQ: SSNLF) revealed that it suffered a data leak after an employee inputted source code into ChatGPT, leading to a ban on platform usage by employees.

Many firms have restricted ChatGPT usage for employees, including Apple (NASDAQ: AAPL), Amazon (NASDAQ: AMZN), Verizon (NASDAQ: VZ), and Accenture (NASDAQ: ACN).

Rogue AI usage fuelling increased scrutiny

The rise of digital asset scams, fake news, copyright infringement, and deepfakes has forced the hands of regulators worldwide to increase scrutiny over the industry. The U.K. government splurged over $125 million to fund a new task force, comprising stakeholders and academics, to ensure the safe usage and development of AI in the country.

Australian and European Union regulators are treading a similar path in regulating AI amid surging adoption metrics. On the other hand, China imposed a nationwide ban on ChatGPT in favor of local iterations under strict government supervision.

Watch: AI and blockchain
