Verizon exec reveals responsible AI strategy amid ‘Wild West’ landscape

Jun 12, 2024 | Technology

Verizon is using generative AI applications to enhance customer support and experience for its more than 100 million phone customers, and is expanding its responsible AI team to mitigate the technology's risks.

Michael Raj, a vice president overseeing AI for Verizon’s network enablement, said the company is implementing several measures as part of this initiative. These include requiring data scientists to register AI models with a central data team to ensure security reviews, and increasing scrutiny of the types of large language models (LLMs) used in Verizon’s applications to minimize bias and prevent toxic language.
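
The article does not detail Verizon's internal tooling, but a centralized model-registration gate of the kind Raj describes could look something like the following minimal Python sketch. The registry class, record fields, and review flags here are illustrative assumptions, not Verizon's actual system.

```python
# Minimal sketch of a central model registry with review gating (hypothetical;
# not Verizon's actual system). Data scientists register each model with a
# central team, and deployment is blocked until security and bias reviews pass.
from dataclasses import dataclass


@dataclass
class ModelRecord:
    name: str                      # e.g. "support-chat-summarizer" (hypothetical)
    owner: str                     # registering data scientist or team
    llm_provider: str              # which LLM family the application relies on
    use_case: str                  # customer-facing purpose
    security_review_passed: bool = False
    bias_review_passed: bool = False


class ModelRegistry:
    """Central registry controlled by the data team; deployment is gated on reviews."""

    def __init__(self) -> None:
        self._records: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        self._records[record.name] = record

    def approve(self, name: str, *, security: bool, bias: bool) -> None:
        record = self._records[name]
        record.security_review_passed = security
        record.bias_review_passed = bias

    def can_deploy(self, name: str) -> bool:
        record = self._records.get(name)
        return bool(record and record.security_review_passed
                    and record.bias_review_passed)


# Usage: registration alone is not enough; both reviews must pass first.
registry = ModelRegistry()
registry.register(ModelRecord("support-chat-summarizer", "cx-ml-team",
                              "example-llm", "customer support"))
print(registry.can_deploy("support-chat-summarizer"))  # False until reviews pass
registry.approve("support-chat-summarizer", security=True, bias=True)
print(registry.can_deploy("support-chat-summarizer"))  # True
```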

AI auditing is like the “Wild West”

Raj spoke during the VentureBeat AI Impact event in New York City last week, which focused on how to audit generative AI applications, whose underlying LLMs can be notoriously unpredictable. He and other speakers agreed that the field of AI auditing is still in its early stages and that companies need to accelerate their efforts in this area, given that regulators have not yet set specific guidelines.

The steady drumbeat of high-profile mistakes by customer support AI agents, from big names like Chevy, Air Canada, and even New York City, and by leading LLM providers like Google, whose image generator depicted Black Nazis, has brought renewed focus to the need for greater reliability.
