Visual Speech Recognition: Improving Speech Perception in Noise Through Artificial Intelligence

Publication/Presentation Date

10-1-2020

Abstract

OBJECTIVES: To compare speech perception (SP) in noise for normal-hearing (NH) individuals and individuals with hearing loss (IWHL) and to demonstrate improvements in SP with use of a visual speech recognition program (VSRP).

STUDY DESIGN: Single-institution prospective study.

SETTING: Tertiary referral center.

SUBJECTS AND METHODS: Eleven NH and 9 IWHL participants were seated in a sound-isolated booth, facing a speaker through a window. In non-VSRP conditions, SP was evaluated on 40 Bamford-Kowal-Bench speech-in-noise test (BKB-SIN) sentences presented by the speaker at 50 A-weighted decibels (dBA), with multiperson babble noise presented at 50 to 75 dBA. SP was defined as the percentage of words correctly identified. In VSRP conditions, an infrared camera tracked 35 points around the speaker's lips in real time during speech. Lip movement data were translated into text by an in-house neural network-based VSRP. SP was then evaluated similarly on 42 BKB-SIN sentences, with the VSRP output additionally presented to the listener on a screen.
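The abstract names only the pipeline (infrared tracking of 35 lip landmarks, then a neural network mapping lip movement to text) and does not specify the architecture. As a hedged illustration of the idea, the sketch below assumes the landmarks arrive as per-frame (x, y) coordinates and feeds them to a small bidirectional LSTM trained with a CTC loss, a common design for landmark-based lipreading-to-text models. The layer sizes, character set, and CTC choice are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a landmark-based visual speech recognition model of the
# kind the abstract describes. ASSUMPTIONS (not from the paper): per-frame
# (x, y) coordinates of 35 lip points, a 2-layer bidirectional LSTM encoder,
# and character-level CTC decoding.
import torch
import torch.nn as nn

CHARSET = " abcdefghijklmnopqrstuvwxyz'"   # assumed output alphabet
NUM_CLASSES = len(CHARSET) + 1             # +1 for the CTC blank symbol
NUM_LANDMARKS = 35                         # tracked points around the lips
FEATURES_PER_FRAME = NUM_LANDMARKS * 2     # (x, y) per landmark

class LipReaderSketch(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.encoder = nn.LSTM(
            input_size=FEATURES_PER_FRAME,
            hidden_size=hidden,
            num_layers=2,
            bidirectional=True,
            batch_first=True,
        )
        self.classifier = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        # landmarks: (batch, frames, 70) flattened lip coordinates
        encoded, _ = self.encoder(landmarks)
        # per-frame log-probabilities over characters, as CTC expects
        return self.classifier(encoded).log_softmax(dim=-1)

# Example: one 2-second clip at an assumed 30 fps -> 60 frames.
model = LipReaderSketch()
clip = torch.randn(1, 60, FEATURES_PER_FRAME)
log_probs = model(clip)                             # (1, 60, NUM_CLASSES)

# CTC training step against a toy transcript ("hello").
targets = torch.tensor([[CHARSET.index(c) for c in "hello"]])
ctc = nn.CTCLoss(blank=NUM_CLASSES - 1)
loss = ctc(log_probs.transpose(0, 1),               # CTC wants (T, N, C)
           targets, torch.tensor([60]), torch.tensor([5]))
```

At inference, a greedy or beam-search CTC decoder would collapse the per-frame character probabilities into the running text shown to the listener.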

RESULTS: In high-noise conditions (70-75 dBA) without VSRP, NH listeners achieved significantly higher speech perception than IWHL listeners (38.7% vs 25.0%).

CONCLUSIONS: The VSRP significantly increased speech perception in high-noise conditions for NH and IWHL participants and eliminated the difference in SP accuracy between NH and IWHL listeners.

Volume

163

Issue

4

First Page

771

Last Page

777

ISSN

1097-6817

Disciplines

Medicine and Health Sciences

PubMedID

32453650

Department(s)

Department of Surgery, Division of Otolaryngology

Document Type

Article
