Much can be gleaned from the tone of someone's voice: it is a conduit for emotion, and recognizing emotion has a range of applications. It can aid in health monitoring by detecting early signs of certain medical conditions, and it has the potential to make conversational AI systems more engaging and responsive. Someday, it might even provide implicit feedback that helps voice assistants like Google Assistant, Apple's Siri, and Amazon's Alexa learn from their mistakes.

Emotion-classifying AI isn't anything new, but traditional approaches are supervised, meaning that they ingest training data labeled according to speakers' emotional states. Scientists at Amazon recently took a different approach, which they describe in a paper scheduled to be presented at the International Conference on Acoustics, Speech, and Signal Processing. Rather than relying on hand-labeled examples alone, their method reduces the need for annotated emotional speech.
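As a rough illustration of the traditional supervised setup the article contrasts against, the sketch below fits a classifier on per-utterance acoustic features paired with human-annotated emotion labels. The features, labels, and data here are hypothetical placeholders, and none of this reflects Amazon's actual system.

```python
# Minimal sketch of supervised emotion classification: a model trained on
# acoustic features with a human-assigned emotion label per utterance.
# All data here is synthetic; real systems would extract features from audio.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-utterance acoustic features (e.g., pitch, energy,
# MFCC statistics) -- purely illustrative random values.
X = rng.normal(size=(500, 40))

# The defining ingredient of the supervised approach: an emotion label
# annotated by a human for every training utterance.
EMOTIONS = ["neutral", "happy", "angry", "sad"]
y = rng.integers(0, len(EMOTIONS), size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The cost of this paradigm is the labels themselves: every training utterance must be annotated by people, which is exactly the bottleneck a less supervised approach tries to sidestep.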
