Poster Session #2: UC South Ballroom, Apr 15th, 3:00 PM – 4:00 PM

Signal Detection Analysis of Homophonous Sounds with 2D and 3D Lip Reading Presentations

Presentation Type

Poster

Faculty Mentor’s Full Name

Al Yonovitz

Abstract / Artist's Statement

Lip reading is an important component of communication for deaf and hearing-impaired people. This study investigated lip reading responses using video presentation improved through 3D display. The process by which a lip reader translates the lip movements they identify into a message is complex, and the movements observed represent only fragments of the complete message. Recognition is further complicated by the fact that the sounds of English are not easily discriminable visually: the visible movements associated with producing many sounds are very similar and are therefore easily confused. The main purpose of this study was to investigate 1) the ability of lip readers to use visual information alone to identify phonemes in varying contexts, including nearby co-articulation effects and vowel neighborhoods, and 2) lip reading responses under improved, more realistic video presentation through 3D video. The experimental procedure used signal detection analysis with a two-alternative forced-choice response method. Reaction times were used to construct ROC curves. Subjects contrasted homophonous sounds under 2D and 3D video presentations. The results provide evidence that subtle differences in production allow discrimination between visemes, sounds that look the same on the lips.
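
To make the analysis concrete, the minimal sketch below (not the authors' code; the trial data, reaction-time cut points, and variable names are all hypothetical) shows one common way reaction times from a two-alternative forced-choice task can be binned into confidence levels, swept over criteria to trace an ROC curve, and reduced to a d' sensitivity estimate, assuming NumPy and SciPy are available.

```python
# Hedged sketch of a signal-detection / 2AFC analysis; all data are made up for illustration.
import numpy as np
from scipy.stats import norm

# Hypothetical trials: 1 = "signal" stimulus (e.g., one member of a homophonous pair), 0 = the other
stimulus = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0])
# Observer's choice on each trial (1 = responded "signal")
response = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1])
# Reaction times in seconds; faster responses are treated as more confident
rt = np.array([0.41, 0.55, 0.62, 0.90, 0.38, 0.47, 1.10, 0.52, 0.44, 0.71, 0.36, 0.95])

# Bin reaction times into confidence levels (0..3, higher = more confident); cut points are arbitrary here
confidence = np.digitize(-rt, bins=[-0.9, -0.6, -0.45])

# Fold choice and confidence into one signed rating, then sweep a criterion to trace the ROC
rating = np.where(response == 1, confidence + 1, -(confidence + 1))
criteria = np.sort(np.unique(rating))
hit_rates, fa_rates = [], []
for c in criteria:
    says_signal = rating >= c
    hit_rates.append(np.mean(says_signal[stimulus == 1]))   # hit rate at this criterion
    fa_rates.append(np.mean(says_signal[stimulus == 0]))    # false-alarm rate at this criterion

def dprime(hits, fas, n_signal, n_noise):
    # d' from hit and false-alarm rates, with a small correction to avoid 0 or 1 proportions
    h = np.clip(hits, 0.5 / n_signal, 1 - 0.5 / n_signal)
    f = np.clip(fas, 0.5 / n_noise, 1 - 0.5 / n_noise)
    return norm.ppf(h) - norm.ppf(f)

n_sig, n_noi = (stimulus == 1).sum(), (stimulus == 0).sum()
print("ROC points (FA, hit):", list(zip(np.round(fa_rates, 2), np.round(hit_rates, 2))))
print("d' =", round(dprime(response[stimulus == 1].mean(),
                           response[stimulus == 0].mean(), n_sig, n_noi), 2))
```

In a design like the one described, an ROC of this kind would be computed separately for the 2D and the 3D presentation conditions, so that sensitivity to the contrast between homophonous sounds can be compared across display types.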

Category

Life Sciences
