BSV
$70.59
Vol 72.6m
0.39%
BTC
$98920
Vol 61765.26m
0.93%
BCH
$527.73
Vol 813.47m
4.3%
LTC
$98.7
Vol 1177.64m
-0.52%
DOGE
$0.43
Vol 12338.72m
1.58%
Getting your Trinity Audio player ready...

Tech giant Microsoft (NASDAQ: MSFT) has unveiled new offerings to improve the safety of its artificial intelligence (AI) products for customers amid heightened regulatory scrutiny.

In its blog post, the new array of tools is focused on stifling the activities of bad actors seeking to trick AI systems via prompts. Microsoft’s Research has noted a significant uptick in Indirect Attacks (also known as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks) in recent months, fuelling attempts to halt the trend in its tracks.

Indirect Attacks occur when generative AI systems process information not originally authored by the developer, allowing for the transfer of external data akin to Cross-Site Scripting (XSS).

To protect users of its AI systems, Microsoft rolled out Prompt Shields as a comprehensive solution for both indirect and direct attacks on systems. Comprising three major components, Microsoft says the feature will play an advanced role in “reducing the risk” of attacks.

“By leveraging advanced machine learning algorithms and natural language processing, Prompt Shields effectively identify and neutralizes potential threats in user prompts and third-party data,” read the blog post.

“This cutting-edge capability will support the security and integrity of your AI applications, safeguarding your systems against malicious attempts at manipulation or exploitation,” it added.

The new security solution relies on Spotlighting, building on Microsoft’s existing Jailbreak Risk Detection feature to assist large language models (LLMs) identify shady external inputs from legitimate instructions.

Microsoft’s security measures leans on delimiters to “help mitigate indirect attacks” while turning to datamarkers to determine the entirety of content blocks.

Safe AI systems at the top of the pyramid

Microsoft is doubling down on AI, hiring ex-Google DeepMind Co-Founder Mustafa Suleyman to lead its resurgence into commercial AI offerings and other emerging technologies. Since the start of the year, Microsoft’s AI unit has been a beehive of activities, marked by the addition of an AI Copilot key, the proposed launch of an AI-enhanced computer, and plans to dabble with in-house chip production.

For all its expansionist goals, the technology company says consumer safety remains a major goal amid mounting regulatory concerns. Microsoft has since announced support for the voluntary commitments designed by the White House to ensure the safe and responsible design of AI systems and their use.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more why Enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain

Recommended for you

Digital ID, biometrics in the pipeline for seamless travel
Suppliers of airport biometrics, SITA and IDEMIA, are working on a project centered on a decentralized trust network to make...
November 25, 2024
Lido DAO members liable for their actions, California judge rules
In a ruling that has sparked outrage among ‘Crypto Bros,’ the California judge said that Andreessen Horowitz and cronies are...
November 22, 2024
Advertisement
Advertisement
Advertisement