Show simple item record

ARTICULATORY INFORMATION FOR ROBUST SPEECH RECOGNITION

dc.contributor.advisor: Espy-Wilson, Carol Y. (en_US)
dc.contributor.author: Mitra, Vikramjit (en_US)
dc.date.accessioned: 2011-07-06T05:34:22Z
dc.date.available: 2011-07-06T05:34:22Z
dc.date.issued: 2010 (en_US)
dc.identifier.uri: http://hdl.handle.net/1903/11438
dc.description.abstract: Current Automatic Speech Recognition (ASR) systems fall well short of human speech recognition performance because they lack robustness against speech variability and noise contamination. The goal of this dissertation is to investigate these critical robustness issues, propose ways to address them, and finally present an ASR architecture built upon these robustness criteria. Acoustic variation adversely affects the performance of current phone-based ASR systems, in which speech is modeled as `beads-on-a-string', where the beads are the individual phone units. While phone units are distinctive in the cognitive domain, they vary in the physical domain, and this variation arises from a combination of factors including speech style, speaking rate, etc.; a phenomenon commonly known as `coarticulation'. Traditional ASR systems address such coarticulatory variation by using contextualized phone units such as triphones. Articulatory phonology accounts for coarticulatory variation by modeling speech as a constellation of constricting actions known as articulatory gestures. In such a framework, speech variations such as coarticulation and lenition are accounted for by gestural overlap in time and gestural reduction in space. To realize a gesture-based ASR system, articulatory gestures must be inferred from the acoustic signal. At the initial stage of this research, a proof-of-concept study using synthetically generated speech showed that articulatory gestures can indeed be recognized from the speech signal. It was observed that using vocal tract constriction trajectories (TVs) as an intermediate representation facilitated recognizing gestures from the speech signal.
Presently no natural speech database contains articulatory gesture annotation; hence an automated iterative time-warping architecture is proposed that can annotate any natural speech database with articulatory gestures and TVs. Two natural speech databases, X-ray microbeam and Aurora-2, were annotated; the former was used to train a TV estimator and the latter to train a Dynamic Bayesian Network (DBN) based ASR architecture. The DBN architecture used two sets of observations: (a) acoustic features in the form of mel-frequency cepstral coefficients (MFCCs) and (b) TVs (estimated from the acoustic speech signal). In this setup the articulatory gestures were modeled as hidden random variables, eliminating the need for explicit gesture recognition. Word recognition results using the DBN architecture indicate that articulatory representations not only help to account for coarticulatory variation but can also significantly improve the noise robustness of the ASR system. (en_US)
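The annotation architecture described above is built around iterative time warping of feature sequences. As a minimal illustration of the dynamic-time-warping core such an alignment relies on (not the dissertation's actual implementation, which iterates and operates on full articulatory/acoustic feature streams), the following sketch computes the minimal cumulative alignment cost between two one-dimensional sequences:

```python
def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping.

    Returns the minimal cumulative cost of aligning sequence x to
    sequence y, allowing one-to-many (stretched/compressed) matches.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    # cost[i][j] = best cost of aligning x[:i] with y[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(x[i - 1], y[j - 1])
            cost[i][j] = d + min(
                cost[i - 1][j],      # x-frame repeated (stretch)
                cost[i][j - 1],      # y-frame repeated (stretch)
                cost[i - 1][j - 1],  # one-to-one match
            )
    return cost[n][m]


# A time-stretched copy of a sequence aligns at zero cost:
print(dtw([1, 2, 3], [1, 1, 2, 2, 3, 3]))  # 0.0
```

In the proposed architecture this kind of alignment would map a synthesized utterance (whose gestural score is known) onto a natural one, transferring the gesture and TV annotation; the iteration refines that mapping.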
dc.title: ARTICULATORY INFORMATION FOR ROBUST SPEECH RECOGNITION (en_US)
dc.type: Dissertation (en_US)
dc.contributor.publisher: Digital Repository at the University of Maryland (en_US)
dc.contributor.publisher: University of Maryland (College Park, Md.) (en_US)
dc.contributor.department: Electrical Engineering (en_US)
dc.subject.pqcontrolled: Electrical Engineering (en_US)
dc.subject.pqcontrolled: Engineering (en_US)
dc.subject.pqcontrolled: Acoustics (en_US)
dc.subject.pquncontrolled: Articulatory Phonology (en_US)
dc.subject.pquncontrolled: Articulatory Speech Recognition (en_US)
dc.subject.pquncontrolled: Robust Automatic Speech Recognition (en_US)
dc.subject.pquncontrolled: Speech inversion (en_US)
dc.subject.pquncontrolled: Task Dynamic model (en_US)
dc.subject.pquncontrolled: Vocal-Tract variables (en_US)

