With little urging, Grok will detail how to make bombs, concoct drugs (and much, much worse)

Apr 4, 2024 | Technology

Much like Elon Musk, founder of its maker xAI, Grok doesn't do much holding back.

With just a little workaround, the chatbot will instruct users on criminal activities including bomb-making, hotwiring a car and even seducing children. 

Researchers at Adversa AI came to this conclusion after testing Grok and six other leading chatbots for safety. The Adversa red teamers, who revealed the world's first jailbreak for GPT-4 just two hours after its launch, used common jailbreak techniques on OpenAI's ChatGPT models, Anthropic's Claude, Mistral's Le Chat, Meta's LLaMA, Google's Gemini and Microsoft's Bing.

By far, the researchers report, Grok performed the worst across three categories. Mistral was a close second, and all but one of the others were susceptible to at least one jailbreak attempt. Interestingly, LLaMA could not be broken (at least in this research instance).

“Grok doesn’t have most of the filters for the requests that are usually inappropriate,” Adversa AI co-founder Alex Polyakov told VentureBeat. “At the same time, its filters for extremely inappropriate requests such as seducing kids were easily bypassed using multiple jailbreaks, and Grok provided shocking details.” 

Defining the most common jailbreak methods

Jailbreaks are cunningly crafted instructions that attempt to work around …

Article Attribution | Read More at Article Source
