Predicting facial movement using electromyography and machine learning


Files

emg_ml_poster.pdf (1.26 MB)

Date

2020

Abstract

Video coding of participants’ behavior is an inherently subjective and time-consuming process. The purpose of this study is to support traditional video coding of facial expressions by applying machine learning to available electromyographic (EMG) data. We compared the accuracy of four machine learning algorithms (decision tree, K-nearest neighbors (KNN), multilayer perceptron (MLP), and linear support vector classifier (SVC)) on two tasks: (a) distinguishing any facial activity from no movement, and (b) distinguishing between facial expressions (Fearful, Happy, Neutral). Success was measured by final accuracy on a pre-chosen test set. The decision tree and KNN classifiers showed the highest potential for detecting facial activity, with a test accuracy of 94%. However, plotting their decision boundaries revealed a risk of overfitting, suggesting that the safer choice may instead be the MLP or SVC algorithm, each at 84% accuracy. For classifying different facial expressions, the MLP algorithm performed best, at 88% accuracy. Overall, we conclude that, with further development, machine learning models could simplify the video coding process. Although some models achieved very high accuracies (above 90%), they tended to overfit and may not generalize to larger datasets. The best use of these models would therefore be in tandem with other coding methods, for example by quickly verifying low-accuracy classifications via video coding or by outputting cutoff parameters that can facilitate other analyses.
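For illustration only, the four-classifier comparison described above can be sketched with scikit-learn. This is an assumed reconstruction, not the study's actual pipeline: the feature matrix X (random placeholder values standing in for per-trial EMG features, e.g. mean rectified amplitude per channel), the labels y, and all hyperparameters are hypothetical.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    # Placeholder data standing in for real EMG features: 300 trials x 4
    # channels, labeled 0 = Neutral, 1 = Happy, 2 = Fearful.
    X = rng.normal(size=(300, 4))
    y = rng.integers(0, 3, size=300)

    # Hold out a pre-chosen test set, mirroring the study's success metric.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y
    )

    classifiers = {
        "decision tree": DecisionTreeClassifier(random_state=0),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "MLP": make_pipeline(
            StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)
        ),
        "linear SVC": make_pipeline(
            StandardScaler(), LinearSVC(max_iter=5000, random_state=0)
        ),
    }

    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        # Final accuracy on the held-out test set.
        print(f"{name}: test accuracy = {clf.score(X_test, y_test):.2f}")

The overfitting check mentioned above (inspecting decision boundaries) can be approximated for any pair of features by plotting each fitted classifier's boundary, e.g. with sklearn.inspection.DecisionBoundaryDisplay.from_estimator.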
