Google AI researchers today shared details about Project Euphonia, a speech-to-text transcription service for people with speech impairments. They also say their approach can improve automatic speech recognition for people with non-native English accents.

People with amyotrophic lateral sclerosis (ALS) often have slurred speech, but existing AI systems are typically trained on voice data from people without speech impairments or accents.

The new approach succeeds primarily by introducing small amounts of training data that represent people with accents and people with ALS.

“We show that 71% of the improvement comes from only 5 minutes of training data,” according to a paper titled “Personalizing ASR for Dysarthric and Accented Speech with Limited Data.”

Personalized models were able to achieve 62% and 35% relative word error rate improvements for dysarthric and accented speech, respectively.
