Noninvasive neural decoding of overt and covert hand movement
It is generally assumed that the signal-to-noise ratio and information content of neural data acquired noninvasively via magnetoencephalography (MEG) or scalp electroencephalography (EEG) are insufficient to extract detailed information about natural, multi-joint movements of the upper limb. If valid, this assumption would severely limit the practical use of noninvasive signals in brain-computer interface (BCI) systems aimed at continuous, complex control of arm-like prostheses for movement-impaired persons.

This dissertation research casts doubt on this assumption by extracting continuous hand kinematics from MEG signals collected during a 2D center-out drawing task (Bradberry et al. 2009, NeuroImage, 47:1691-700) and from EEG signals collected during a 3D center-out reaching task (Bradberry et al. 2010, Journal of Neuroscience, 30:3432-7). In both studies, multiple regression was performed to find a matrix that mapped past and current neural data from multiple sensors to current hand kinematic data (velocity). A novel method combining the weights of this mapping matrix with the standardized low-resolution electromagnetic tomography (sLORETA) software then revealed that the brain sources encoding hand kinematics in the MEG and EEG studies agreed with those identified by more traditional studies that required averaging across trials and/or subjects.

Encouraged by the favorable results of these off-line decoding studies, a BCI system was developed for on-line decoding of covert movement intentions that provided users with real-time visual feedback of the decoder output. Users were asked to move a cursor to acquire one of four targets on a computer screen using only their thoughts. With only one training session, subjects were able to accomplish this task. These promising results significantly advance the state-of-the-art in noninvasive BCI systems.
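The decoding approach described above can be sketched in code. The following is a minimal, hypothetical illustration of a linear mapping from past and current multichannel sensor samples to current hand velocity, fit by ordinary least squares (multiple regression); all data are synthetic, and the channel count, lag window, and noise level are illustrative assumptions, not the parameters used in the studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sensors, n_lags = 2000, 8, 5  # assumed, illustrative dimensions

# Synthetic "neural" signals and a ground-truth linear encoding of 2D velocity.
X = rng.standard_normal((n_samples, n_sensors))
true_W = rng.standard_normal((n_sensors * n_lags, 2))

def lagged_design(X, n_lags):
    """Stack current and past samples of every sensor into one row per time point."""
    n, _ = X.shape
    rows = [X[t - n_lags + 1 : t + 1].ravel() for t in range(n_lags - 1, n)]
    return np.asarray(rows)

Z = lagged_design(X, n_lags)               # shape: (n_samples - n_lags + 1, sensors * lags)
velocity = Z @ true_W + 0.1 * rng.standard_normal((Z.shape[0], 2))

# Multiple regression: least-squares estimate of the mapping matrix.
W_hat, *_ = np.linalg.lstsq(Z, velocity, rcond=None)
pred = Z @ W_hat

# Decoding accuracy as Pearson correlation per kinematic dimension,
# a common way to report continuous decoding performance.
r = [np.corrcoef(pred[:, k], velocity[:, k])[0, 1] for k in range(2)]
print([round(v, 3) for v in r])
```

In practice the mapping would be fit on training trials and evaluated on held-out trials; the sketch omits that split for brevity.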