Show simple item record

dc.contributor.advisor     Espy-Wilson, Carol                     en_US
dc.contributor.author      Ko, Yi-Chun                            en_US
dc.date.accessioned        2016-02-09T06:37:44Z
dc.date.available          2016-02-09T06:37:44Z
dc.date.issued             2015                                   en_US
dc.identifier              https://doi.org/10.13016/M2JF03
dc.identifier.uri          http://hdl.handle.net/1903/17396
dc.description.abstract    This thesis focuses on finding useful features for emotion recognition from speech signals. In comparison to the popular openSMILE "emobase" feature set, our proposed method reduced the feature space to about 28% of its original size yet boosted the recognition rate by 3.3%. Given that computing is now cheap and fast and large amounts of data are available, the prevailing approach to such problems is to apply sophisticated machine learning techniques that implicitly make sense of the data. In this work, however, we study particular features that are believed to correlate with changes in emotion but have not commonly been selected for emotion recognition tasks. Jitter, shimmer, breathiness, and speaking rate are analyzed and found to change systematically as a function of emotion. We not only explore these additional acoustic features, which help improve classification performance, but also try to understand how the existing features contribute to accuracy. Our results show that using our features together with MFCCs and pitch-related features leads to better performance.    en_US
dc.language.iso            en                                     en_US
dc.title                   A STUDY OF FEATURE SETS FOR EMOTION RECOGNITION FROM SPEECH SIGNALS    en_US
dc.type                    Thesis                                 en_US
dc.contributor.publisher   Digital Repository at the University of Maryland    en_US
dc.contributor.publisher   University of Maryland (College Park, Md.)    en_US
dc.contributor.department  Electrical Engineering                 en_US
dc.subject.pqcontrolled    Electrical engineering                 en_US
dc.subject.pquncontrolled  Emotion Recognition                    en_US
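The abstract above names jitter and shimmer among the voice-quality features analyzed. As a rough illustration only, the following sketch computes the common Praat-style "local" jitter and shimmer measures from per-cycle pitch periods and amplitudes; the function names and the synthetic input are hypothetical and not taken from the thesis.

```python
# Local jitter: mean absolute difference of consecutive pitch periods,
# normalized by the mean period. Local shimmer: the same measure applied
# to per-cycle peak amplitudes. Both are dimensionless (often reported
# as percentages). Illustrative sketch, not the thesis implementation.

def local_jitter(periods):
    """Cycle-to-cycle variation of pitch periods (seconds)."""
    if len(periods) < 2:
        raise ValueError("need at least two pitch periods")
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Cycle-to-cycle variation of peak amplitudes."""
    if len(amplitudes) < 2:
        raise ValueError("need at least two amplitudes")
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Synthetic example: pitch periods near 100 Hz voicing and their amplitudes.
periods = [0.0100, 0.0102, 0.0099, 0.0101]
amps = [0.80, 0.78, 0.81, 0.79]
print(local_jitter(periods), local_shimmer(amps))
```

A perfectly periodic, constant-amplitude voice would give zero for both measures; emotional or breathy speech tends to raise them, which is why such features are candidates for emotion recognition.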
