
Predicting facial movement using electromyography and machine learning

dc.contributor.advisor: Brustad, Abby
dc.contributor.advisor: Morales, Santiago
dc.contributor.advisor: Fox, Nathan
dc.contributor.author: Choi, Theresa
dc.date.accessioned: 2020-04-27T10:07:22Z
dc.date.available: 2020-04-27T10:07:22Z
dc.date.issued: 2020
dc.identifier: https://doi.org/10.13016/3jcg-m67s
dc.identifier.uri: http://hdl.handle.net/1903/25925
dc.description.abstract: Video coding participants’ behavior is inherently a subjective and time-consuming process. The purpose of this study is to support traditional video coding methods of facial expressions by using machine learning on available electromyographic (EMG) data. For this, we tested the accuracy of four machine learning algorithms (i.e., decision tree, K-nearest neighbors (KNN), multilayer perceptron (MLP), and linear support vector classifier (SVC)). Specifically, we tested their accuracy in distinguishing between (a) any facial activity versus no movement, and (b) different facial expressions (Fearful, Happy, Neutral). Success was measured by final accuracy on a pre-chosen test set. Results showed that the decision tree and KNN classifiers had the highest potential for detecting facial activity, with a test accuracy of 94%. However, after plotting their decision boundaries, both showed a risk of overfitting, suggesting that the safer choice may instead be the MLP or SVC algorithm, each at 84% accuracy. For classifying different facial expressions, the MLP algorithm had the highest accuracy, at 88%. Overall, we conclude that, with further development, machine learning models could simplify the video coding process. While some models reached very high accuracies (above 90%), they tended to risk overfitting and not generalize to larger datasets. Thus, the best use of these models would be in tandem with other coding methods, such as quickly verifying low-accuracy classifications via video coding or outputting cutoff parameters that can be used to facilitate other analyses.
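The abstract compares four scikit-learn-style classifiers on a pre-chosen test set. A minimal sketch of that comparison is below; the synthetic features are stand-ins for real EMG measurements, and all hyperparameters, class counts, and sample sizes are illustrative assumptions, not the study's actual settings.

```python
# Hedged sketch: comparing the four classifier families named in the
# abstract (decision tree, KNN, MLP, linear SVC) on synthetic data.
# The features below are placeholders for real EMG channels; the three
# classes mirror the Fearful / Happy / Neutral expression labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic 2-feature, 3-class dataset standing in for EMG-derived features.
X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=3,
                           n_clusters_per_class=1, random_state=0)

# Hold out a fixed test set, as the abstract's "pre-chosen test set".
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0),
    "linear SVC": LinearSVC(max_iter=5000, random_state=0),
}

# Fit each model and report held-out accuracy, the study's success metric.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.2f}")
```

On real data, inspecting decision boundaries (as the abstract describes) would then flag which of the high-accuracy models are overfitting.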
dc.language.iso: en_US
dc.subject: Psychology
dc.subject: BSOS
dc.subject: Choi
dc.subject: machine learning
dc.subject: electromyography
dc.subject: facial activity
dc.title: Predicting facial movement using electromyography and machine learning
dc.type: Presentation
dc.relation.isAvailableAt: Maryland Center for Undergraduate Research
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, Md)


