The dangers of voice fraud: We can’t detect what we can’t see

Jun 30, 2024 | Technology

It’s hard to believe that deepfakes have been with us long enough that we no longer blink at a new case of identity manipulation. But they haven’t been around so long that we’ve forgotten the early shocks.

In 2018, a deepfake showing Barack Obama saying words he never uttered set the internet ablaze and prompted concern among U.S. lawmakers. They warned of a future where AI could disrupt elections or spread misinformation.

In 2019, a famous manipulated video of Nancy Pelosi spread like wildfire across social media. The video was subtly altered to make her speech seem slurred and her movements sluggish, implying her incapacity or intoxication during an official speech.

In 2020, deepfake videos were used to heighten political tension between China and India.

And I won’t even get into the hundreds, if not thousands, of celebrity deepfakes that have circulated on the internet in the last few years, from the Taylor Swift pornography scandal to Mark Zuckerberg’s sinister speech about Facebook’s power.

Yet despite these concerns, a subtler and potentially more deceptive threat looms: voice fraud. At the risk of sounding like a doomer, it could well prove to be the final nail in the coffin.

The invisible problem

Unlike high-definition video, the typical transmission quality of audio, especially in phone calls, is markedly low. 

By now, we are desensitized to low-fidelity audio, from poor signal to background static to distortion, which makes a genuine anomaly incredibly difficult to spot.

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious.
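To make that concrete, here is a minimal sketch in Python (my illustration, not from the article; it assumes numpy and scipy are installed and uses a synthetic tone in place of real speech) of what a narrowband phone channel does to a signal: everything above roughly 3.4 kHz, the region where subtle synthesis artifacts can live, simply never reaches the listener.

```python
# Sketch only: simulates how telephone-band transmission strips the
# high-frequency detail that audio forensics often relies on.
# The "voice" below is synthetic; the 6 kHz component stands in for
# a hypothetical artifact left behind by a voice-cloning model.
import numpy as np
from scipy.signal import butter, sosfilt, resample_poly

SR_STUDIO = 48_000  # original sample rate (Hz)
SR_PHONE = 8_000    # narrowband telephony sample rate (Hz)

t = np.arange(SR_STUDIO) / SR_STUDIO  # one second of audio
voice = np.sin(2 * np.pi * 220 * t) + 0.05 * np.sin(2 * np.pi * 6_000 * t)

# Telephone channel: band-pass roughly 300-3400 Hz, then resample to 8 kHz.
sos = butter(4, [300, 3_400], btype="bandpass", fs=SR_STUDIO, output="sos")
phone = resample_poly(sosfilt(sos, voice), SR_PHONE, SR_STUDIO)

def energy_above(signal, sr, cutoff_hz):
    """Fraction of the signal's spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    return spectrum[freqs > cutoff_hz].sum() / spectrum.sum()

print(f"energy above 3.5 kHz, original:   {energy_above(voice, SR_STUDIO, 3_500):.4f}")
print(f"energy above 3.5 kHz, phone-band: {energy_above(phone, SR_PHONE, 3_500):.4f}")
# The phone-band copy has essentially no energy above ~3.4 kHz: any
# telltale artifact living up there is simply not transmitted.
```

Whatever detection cues survive in studio-quality recordings, a phone call discards much of them by design, which is part of why the problem stays invisible.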

Imagine receiving a phone call from a loved one’s number telling you th …
