View of people walking by modern district of city near Microsoft Office

Microsoft ‘Prompt Shields’ for Azure AI protects apps from jailbreak, indirect attacks

Tech giant Microsoft (NASDAQ: MSFT) has unveiled new offerings to improve the safety of its artificial intelligence (AI) products for customers amid heightened regulatory scrutiny.

In its blog post, the new array of tools is focused on stifling the activities of bad actors seeking to trick AI systems via prompts. Microsoft’s Research has noted a significant uptick in Indirect Attacks (also known as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks) in recent months, fuelling attempts to halt the trend in its tracks.

Indirect Attacks occur when generative AI systems process information not originally authored by the developer, allowing for the transfer of external data akin to Cross-Site Scripting (XSS).

To protect users of its AI systems, Microsoft rolled out Prompt Shields as a comprehensive solution for both indirect and direct attacks on systems. Comprising three major components, Microsoft says the feature will play an advanced role in “reducing the risk” of attacks.

“By leveraging advanced machine learning algorithms and natural language processing, Prompt Shields effectively identify and neutralizes potential threats in user prompts and third-party data,” read the blog post.

“This cutting-edge capability will support the security and integrity of your AI applications, safeguarding your systems against malicious attempts at manipulation or exploitation,” it added.

The new security solution relies on Spotlighting, building on Microsoft’s existing Jailbreak Risk Detection feature to assist large language models (LLMs) identify shady external inputs from legitimate instructions.

Microsoft’s security measures leans on delimiters to “help mitigate indirect attacks” while turning to datamarkers to determine the entirety of content blocks.

Safe AI systems at the top of the pyramid

Microsoft is doubling down on AI, hiring ex-Google DeepMind Co-Founder Mustafa Suleyman to lead its resurgence into commercial AI offerings and other emerging technologies. Since the start of the year, Microsoft’s AI unit has been a beehive of activities, marked by the addition of an AI Copilot key, the proposed launch of an AI-enhanced computer, and plans to dabble with in-house chip production.

For all its expansionist goals, the technology company says consumer safety remains a major goal amid mounting regulatory concerns. Microsoft has since announced support for the voluntary commitments designed by the White House to ensure the safe and responsible design of AI systems and their use.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more why Enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain

YouTube video

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.