Application of Auditory Representations on Speaker Identification
dc.contributor.advisor | Shamma, S. A. | en_US |
dc.contributor.author | Chi, Taishih | en_US |
dc.contributor.department | ISR | en_US |
dc.date.accessioned | 2007-05-23T10:04:41Z | |
dc.date.available | 2007-05-23T10:04:41Z | |
dc.date.issued | 1997 | en_US |
dc.description.abstract | The noise robustness of the auditory spectrum and the cortical representation is examined by applying them to text-independent speaker identification tasks. A Bayes classifier based on an M-ary hypothesis test is employed to evaluate the robustness of the auditory cepstrum and to demonstrate its performance advantage over the well-studied mel-cepstrum. In addition, the phase feature of the wavelet-transform-based multiscale cortical representation is shown to be much more stable than the magnitude feature for characterizing speakers with the correlator technique, which is traditionally used in scene-matching applications. This observation is consistent with physiological and psychoacoustic phenomena. The underlying purpose of this study is to inspect the inherent robustness of auditory representations derived from a model of human perception. The experimental results indicate that biologically motivated features significantly improve speaker identification accuracy in noisy environments. | en_US |
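The M-ary Bayes decision rule mentioned in the abstract can be sketched as follows. This is a minimal illustration only, not the thesis's implementation: the diagonal-Gaussian speaker models, feature dimensions, and synthetic "cepstral" frames below are all hypothetical placeholders.

```python
import numpy as np

# Minimal M-ary hypothesis Bayes classifier sketch: each speaker (hypothesis)
# is modeled by a diagonal-covariance Gaussian over cepstral feature vectors;
# a test utterance is assigned to the hypothesis with the highest total
# log-likelihood (equal priors). All data here is synthetic.

rng = np.random.default_rng(0)

def fit_gaussian(frames):
    """Estimate per-dimension mean and variance from training frames."""
    return frames.mean(axis=0), frames.var(axis=0) + 1e-6

def log_likelihood(frames, mean, var):
    """Summed diagonal-Gaussian log-likelihood over all frames."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mean) ** 2 / var)

# Train: three hypothetical speakers, 200 frames of 12-dim "cepstra" each.
models = []
for center in (0.0, 1.0, -1.0):
    train = rng.normal(center, 1.0, size=(200, 12))
    models.append(fit_gaussian(train))

# Test: frames drawn from speaker 1's distribution.
test = rng.normal(1.0, 1.0, size=(50, 12))

# Bayes decision: argmax over the M per-hypothesis log-likelihoods.
scores = [log_likelihood(test, mu, v) for mu, v in models]
predicted = int(np.argmax(scores))
print(predicted)  # -> 1
```

Swapping the synthetic frames for mel-cepstral or auditory-cepstral features, with noise added at test time, reproduces the kind of robustness comparison the abstract describes.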
dc.format.extent | 957962 bytes | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | http://hdl.handle.net/1903/5898 | |
dc.language.iso | en_US | en_US |
dc.relation.ispartofseries | ISR; MS 1997-9 | en_US |
dc.subject | detection | en_US |
dc.subject | speech processing | en_US |
dc.subject | feature extraction | en_US |
dc.subject | auditory processing | en_US |
dc.subject | speaker identification | en_US |
dc.subject | Intelligent Signal Processing | en_US |
dc.subject | Communications Systems | en_US |
dc.title | Application of Auditory Representations on Speaker Identification | en_US |
dc.type | Thesis | en_US |