Single-Microphone Speech Enhancement Inspired by Auditory System

dc.contributor.advisorShamma, Shihaben_US
dc.contributor.authorMirbagheri, Majiden_US
dc.contributor.departmentElectrical Engineeringen_US
dc.contributor.publisherDigital Repository at the University of Marylanden_US
dc.contributor.publisherUniversity of Maryland (College Park, Md.)en_US
dc.description.abstractEnhancing the quality of speech in noisy environments has been an active area of research, owing to the abundance of applications that deal with the human voice and whose performance depends on that quality. While early approaches addressed the problem in a purely statistical framework, estimating speech from its sum with other independent processes (noise), over the last decade the attention of the scientific community has turned to the functionality of the human auditory system. Considerable effort has gone into bridging the gap between the performance of speech-processing algorithms and that of the average human listener by borrowing models proposed for sound processing in the auditory system. In this thesis, we introduce speech-enhancement algorithms inspired by two of these models: the cortical representation of sounds and the hypothesized role of temporal coherence in auditory scene analysis. After an introduction to the auditory system and the speech-enhancement framework, we first show how traditional speech-enhancement techniques such as Wiener filtering can benefit, at the feature-extraction level, from the discriminatory capabilities of the spectro-temporal representation of sounds in the cortex, i.e., the cortical model. We next focus on feature processing, as opposed to the extraction stage, in speech-enhancement systems by taking advantage of models hypothesized for human attention in sound segregation. We demonstrate a mask-based enhancement method in which the temporal coherence of features is used as a criterion to elicit information about their sources and, more specifically, to form the masks needed to suppress the noise. Lastly, we explore how the two blocks for feature extraction and manipulation can be merged into one in a manner consistent with our knowledge of the auditory system. We do this through regularized non-negative matrix factorization, which optimizes the feature extraction while simultaneously accounting for temporal dynamics to separate noise from speech.en_US
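The abstract's final contribution combines non-negative matrix factorization with a soft (Wiener-like) mask for speech/noise separation. As a rough illustration of that general approach (not the thesis's actual regularized algorithm), the following sketch learns non-negative spectral bases for speech and noise from clean magnitude spectrograms, then decomposes a mixture against the concatenated bases and masks it. All function names, basis sizes, and the multiplicative-update NMF are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    # Plain multiplicative-update NMF: V ≈ W @ H with all entries non-negative.
    # (Illustrative only; the thesis uses a *regularized* NMF variant.)
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-3
    H = rng.random((r, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def separate(V_mix, W_speech, W_noise, iters=200, seed=0):
    # Decompose the mixture spectrogram over fixed speech+noise bases
    # (update activations only), then apply a soft Wiener-like mask.
    W = np.hstack([W_speech, W_noise])
    rng = np.random.default_rng(seed)
    H = rng.random((W.shape[1], V_mix.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V_mix) / (W.T @ W @ H + 1e-9)
    ks = W_speech.shape[1]
    S = W_speech @ H[:ks]            # speech part of the reconstruction
    N = W_noise @ H[ks:]             # noise part of the reconstruction
    mask = S / (S + N + 1e-9)        # soft mask in [0, 1] per time-frequency bin
    return mask * V_mix              # masked mixture = speech estimate
```

In a real system `V_mix` would be a short-time Fourier transform magnitude spectrogram and the masked output would be inverted with the mixture phase; here the routines operate on any non-negative matrices.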
dc.subject.pqcontrolledElectrical engineeringen_US
dc.subject.pquncontrolledAuditory Scene Analysisen_US
dc.subject.pquncontrolledComputational Neuroscienceen_US
dc.subject.pquncontrolledNoise Suppressionen_US
dc.subject.pquncontrolledSound Segregationen_US
dc.subject.pquncontrolledSpeech Enhancementen_US
dc.titleSingle-Microphone Speech Enhancement Inspired by Auditory Systemen_US