Patronus AI secures $17M to tackle AI hallucinations and copyright violations, fuel enterprise adoption

by | May 22, 2024 | Technology

Join us in NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models for bias, performance, and ethical compliance across diverse organizations. Find out how you can attend here.

As companies race to implement generative AI, concerns about the accuracy and safety of large language models (LLMs) threaten to derail widespread enterprise adoption. Stepping into the fray is Patronus AI, a San Francisco startup that just raised $17 million in Series A funding to automatically detect costly — and potentially dangerous — LLM mistakes at scale.

The round, which brings Patronus AI’s total funding to $20 million, was led by Glenn Solomon at Notable Capital, with participation from Lightspeed Venture Partners, former DoorDash executive Gokul Rajaram, Factorial Capital, Datadog, and several unnamed tech executives. 

Founded by former Meta machine learning (ML) experts Anand Kannappan and Rebecca Qian, Patronus AI has developed a first-of-its-kind automated evaluation platform that promises to identify errors like hallucinations, copyright infringement and safety violations in LLM outputs. Using proprietary AI, the system scores model performance, stress-tests models with adversarial examples and enables granular benchmarking — all without the manual effort required by most enterprises today.

Exposing the dark side of generative AI: hallucinations, copyright violations and safety risks

“There’s a range of things that our product is actually really good at being able to catch, in terms of mistakes,” said Kannappan, CEO of Patronus AI, in an interview with VentureBeat. “It includes things like hallucinations, and copyright and safety related risks, as well as a lot of enterprise-specific capabilities around things like style and tone of voice of the brand.”

