STRAP CAPACITIVE SENSOR MEASUREMENTS OF ARM DEFORMATION CROSS SECTIONS FOR GESTURE RECOGNITION

Publication or External Link

Date

2021

Citation

Abstract

Wearable human–machine interfaces (wHMIs) for gesture recognition have the potential to enable and improve a variety of fields, including virtual reality, prosthetics, assistive exoskeletons, and therapeutic robotics. Challenges faced by current technologies include the lack of suitable manufacturing infrastructure for newer technologies, cost, and difficulty modeling forces within individual grasps/fingers. Many of these technologies also suffer degraded accuracy when sensors shift on the user’s body. As a result, their predictive power is often lowered by long-term wear, user motion, and donning/doffing of devices between uses. Typical workarounds include algorithms that attempt to correct for changes in sensor position on the user, adhesives that attach sensors directly to the user’s body, and large numbers of sensors that generate a ‘map’ of the limb’s activity while lowering the dependence on any single sensor’s position.

In this thesis, the Capsense, a new wHMI constructed from commercial off-the-shelf (COTS) parts, is explored. This technology simplifies traditional multi-point sensor architectures by using capacitive sensor ‘straps’ to measure the change in shape of individual cross sections of the arm due to muscle use. Because these straps depend only on radial location, they should be more robust to rotation about the arm’s circumference. The principle behind the operation and construction of the device is explained in detail, and a multi-gesture trial is performed on five subjects. Due to COVID-era restrictions, further trials are limited to a single subject.
These trials explore how robust the gesture-recognition models generated with the Capsense are to long-term wear, inter-day use, and the addition of the elbow as an extra degree of freedom. Finally, the ability to model and predict individual finger forces and forces in full-hand grasps is explored. The results obtained are comparable in accuracy to existing state-of-the-art technologies for the selected gestures, and they point toward greater robustness to user motion and long-term wear than other technologies, though difficulty tracking thumb motions was found to be a significant limitation of the Capsense.

Notes

Rights