Visual Speech Recognition: Improving Speech Perception in Noise through Artificial Intelligence.
OBJECTIVES: To compare speech perception (SP) in noise for normal-hearing (NH) individuals and individuals with hearing loss (IWHL) and to demonstrate improvements in SP with use of a visual speech recognition program (VSRP).
STUDY DESIGN: Single-institution prospective study.
SETTING: Tertiary referral center.
SUBJECTS AND METHODS: Eleven NH and 9 IWHL participants were seated in a sound-isolated booth facing a speaker through a window. In non-VSRP conditions, SP was evaluated on 40 Bamford-Kowal-Bench speech-in-noise test (BKB-SIN) sentences presented by the speaker at 50 A-weighted decibels (dBA) with multiperson babble noise presented from 50 to 75 dBA. SP was defined as the percentage of words correctly identified. In VSRP conditions, an infrared camera tracked 35 points around the speaker's lips in real time during speech. Lip movement data were translated into text via an in-house neural network-based VSRP. SP was evaluated similarly in the VSRP condition on 42 BKB-SIN sentences, with the VSRP output additionally presented on a screen to the listener.
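The abstract gives no architectural details of the in-house VSRP beyond "35 tracked lip points in, text out," so the following is a purely illustrative sketch of that data flow, not the authors' implementation: a stand-in linear model scores a toy vocabulary per frame, and repeats/blanks are collapsed CTC-style. All shapes, layers, and vocabulary here are assumptions.

```python
# Hypothetical sketch of a VSRP-style pipeline. The paper's network is
# not described; this toy model only illustrates the landmark-to-text
# data flow (35 lip points per frame -> word sequence).
import numpy as np

rng = np.random.default_rng(0)

N_POINTS = 35              # lip landmarks tracked by the infrared camera
FRAME_DIM = N_POINTS * 2   # (x, y) per landmark, flattened per frame
VOCAB = ["<blank>", "the", "cat", "sat"]  # toy vocabulary, not from the paper

# Stand-in for a trained network: a single random linear projection from
# one frame's landmark coordinates to per-word scores. A real system
# would use a trained recurrent/convolutional sequence model.
W = rng.standard_normal((FRAME_DIM, len(VOCAB)))

def frames_to_words(frames: np.ndarray) -> list:
    """Greedy per-frame decoding: argmax word per frame, then collapse
    consecutive repeats and drop blanks (CTC-style, assumed here)."""
    scores = frames.reshape(len(frames), FRAME_DIM) @ W  # (T, vocab)
    ids = scores.argmax(axis=1)
    out, prev = [], None
    for i in ids:
        if i != prev and VOCAB[i] != "<blank>":
            out.append(VOCAB[i])
        prev = i
    return out

# 12 frames of synthetic landmark data standing in for camera input.
frames = rng.standard_normal((12, N_POINTS, 2))
print(frames_to_words(frames))
```

In the study, the decoded text was shown on a screen to the listener alongside the audio, so only the output string of such a pipeline reaches the participant.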
RESULTS: In high-noise conditions (70-75 dBA) without VSRP, NH listeners achieved significantly higher speech perception than IWHL listeners (38.7% vs 25.0%).
CONCLUSIONS: The VSRP significantly increased speech perception in high-noise conditions for NH and IWHL participants and eliminated the difference in SP accuracy between NH and IWHL listeners.
Published In/Presented At
Raghavan, A. M., Lipschitz, N., Breen, J. T., Samy, R. N., & Kohlberg, G. D. (2020). Visual Speech Recognition: Improving Speech Perception in Noise through Artificial Intelligence. Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery, 163(4), 771–777. https://doi.org/10.1177/0194599820924331