Microsoft launches new Azure AI tools to cut out LLM safety and reliability risks

by | Mar 28, 2024 | Technology

Join us in Atlanta on April 10th and explore the landscape of security workforce. We will explore the vision, benefits, and use cases of AI for security teams. Request an invite here.

As the demand for generative AI continues to grow, concerns about its safe and reliable deployment have become more prominent than ever. Enterprises want to ensure that the large language model (LLM) applications being developed for internal or external use deliver outputs of the highest quality without veering into unknown territories.

Recognizing these concerns, Microsoft today announced the launch of new Azure AI tools that allow developers to address not only the problem of automatic hallucinations (a very common problem associated with gen AI) but also security vulnerabilities such as prompt injection, where the model is tricked into generating personal or harmful content — like the Taylor Swift deepfakes generated from Microsoft’s own AI image creator.

The offerings are currently being previewed and are expected to become broadly available in the coming months. However, Microsoft has not shared a specific timeline yet.

With the rise of LLMs, prompt injection attacks have become more prominent. Essentially, an attacker can change the input prompt of the model in such a way as to bypass the model’s normal operations, including safety controls, and manipulate it to reveal personal or harmful content, compromising security or privacy. These attacks can be carried out in two ways: directly, where the attacker directly interacts with the LLM, or indirectly, which involves the use of a third-party data source like a malicious webpage.

VB Event
The AI Impact Tour – Atlanta

Continuing our tour, we’re headed to Atlanta for the AI Impact Tour stop on April 10th. This exclusive, invite-only event, in partnership with Microsoft, will feature discussions on how generative AI is transforming the security work …

Article Attribution | Read More at Article Source

[mwai_chat context=”Let’s have a discussion about this article:nn
Join us in Atlanta on April 10th and explore the landscape of security workforce. We will explore the vision, benefits, and use cases of AI for security teams. Request an invite here.

As the demand for generative AI continues to grow, concerns about its safe and reliable deployment have become more prominent than ever. Enterprises want to ensure that the large language model (LLM) applications being developed for internal or external use deliver outputs of the highest quality without veering into unknown territories.

Recognizing these concerns, Microsoft today announced the launch of new Azure AI tools that allow developers to address not only the problem of automatic hallucinations (a very common problem associated with gen AI) but also security vulnerabilities such as prompt injection, where the model is tricked into generating personal or harmful content — like the Taylor Swift deepfakes generated from Microsoft’s own AI image creator.

The offerings are currently being previewed and are expected to become broadly available in the coming months. However, Microsoft has not shared a specific timeline yet.

With the rise of LLMs, prompt injection attacks have become more prominent. Essentially, an attacker can change the input prompt of the model in such a way as to bypass the model’s normal operations, including safety controls, and manipulate it to reveal personal or harmful content, compromising security or privacy. These attacks can be carried out in two ways: directly, where the attacker directly interacts with the LLM, or indirectly, which involves the use of a third-party data source like a malicious webpage.

VB Event
The AI Impact Tour – Atlanta

Continuing our tour, we’re headed to Atlanta for the AI Impact Tour stop on April 10th. This exclusive, invite-only event, in partnership with Microsoft, will feature discussions on how generative AI is transforming the security work …nnDiscussion:nn” ai_name=”RocketNews AI: ” start_sentence=”Can I tell you more about this article?” text_input_placeholder=”Type ‘Yes'”]

Share This