
OpenAI, maker of ChatGPT and DALL-E, has announced the launch of a grant program designed to support cybersecurity initiatives powered by artificial intelligence (AI).

Under the program, OpenAI will dole out $1 million to cybersecurity defenders around the globe as part of a three-pronged strategy. In a blog post, the company said it seeks to quantify the cybersecurity capabilities of AI models, empower defenders, and elevate the discourse in the industry.

Details from the post suggest that OpenAI will place a premium on defensive security projects, premised on the maxim that “defense must be correct 100% of the time.” The firm clarified that attack-minded projects may be funded at a later stage, with successful projects receiving up to $10,000 in direct funding and API credits.

“While it may be true that attackers face fewer constraints and take advantage of their flexibility, defenders have something more valuable—coordination towards a common goal of keeping people safe,” OpenAI said.

Projects focused on detecting and mitigating social engineering tactics and spotting security flaws in source code are more likely to receive OpenAI’s funding. Other areas of interest for the firm include the creation of deception technology to misdirect bad actors, assistance in developing threat models, and the optimization of patch management processes.

“All projects should be intended to be licensed or distributed for maximal public benefit and sharing, and we will prioritize applications that have a clear plan for this,” the blog post read.

OpenAI’s initiative comes as the company finds itself in the middle of a storm over the safety of its generative AI platform, ChatGPT. Samsung (NASDAQ: SSNLF) revealed that it suffered a data leak after an employee input source code into ChatGPT, leading the company to ban employees from using the platform.

Many firms have restricted ChatGPT usage for employees, including Apple (NASDAQ: AAPL), Amazon (NASDAQ: AMZN), Verizon (NASDAQ: VZ), and Accenture (NASDAQ: ACN).

Rogue AI usage fueling increased scrutiny

The rise of digital asset scams, fake news, copyright infringement, and deepfakes has forced regulators worldwide to increase their scrutiny of the industry. The U.K. government has committed over $125 million to fund a new task force, comprising industry stakeholders and academics, to ensure the safe use and development of AI in the country.

Australian and European Union regulators are treading a similar path in regulating AI amid surging adoption. China, on the other hand, has imposed a nationwide ban on ChatGPT in favor of local alternatives operating under strict government supervision.

Watch: AI and blockchain
