UMD Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/3

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means there may be a delay of up to four months before a given thesis/dissertation appears in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 2 of 2
  • Noninvasive neural decoding of overt and covert hand movement
    (2010) Bradberry, Trent Jason; Contreras-Vidal, José L.; Bioengineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    It is generally assumed that the signal-to-noise ratio and information content of neural data acquired noninvasively via magnetoencephalography (MEG) or scalp electroencephalography (EEG) are insufficient to extract detailed information about natural, multi-joint movements of the upper limb. If valid, this assumption could severely limit the practical use of noninvasive signals in brain-computer interface (BCI) systems aimed at continuous, complex control of arm-like prostheses for movement-impaired persons. Fortunately, this dissertation research casts doubt on this assumption by extracting continuous hand kinematics from MEG signals collected during a 2D center-out drawing task (Bradberry et al. 2009, NeuroImage, 47:1691-700) and from EEG signals collected during a 3D center-out reaching task (Bradberry et al. 2010, Journal of Neuroscience, 30:3432-7). In both studies, multiple regression was performed to find a matrix that mapped past and current neural data from multiple sensors to current hand kinematic data (velocity). A novel method was subsequently devised that combined the weights of the mapping matrix with the standardized low-resolution electromagnetic tomography (sLORETA) software to localize the brain sources that encoded hand kinematics; these sources were corroborated by more traditional studies that required averaging across trials and/or subjects. Encouraged by the favorable results of these offline decoding studies, a BCI system was developed for online decoding of covert movement intentions that provided users with real-time visual feedback of the decoder output. Users were asked to use only their thoughts to move a cursor to acquire one of four targets on a computer screen. With only one training session, subjects were able to accomplish this task. These promising results significantly advance the state of the art in noninvasive BCI systems.
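    The decoding step described in this abstract, multiple regression mapping past and current sensor amplitudes to current hand velocity, can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline; the sensor count, lag window, and noise level here are arbitrary assumptions chosen only to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the study's parameters).
n_samples, n_sensors, n_lags = 500, 34, 4

# Synthetic "sensor" time series standing in for MEG/EEG amplitudes.
eeg = rng.standard_normal((n_samples, n_sensors))

# Design matrix: for each time t, concatenate the current sample and
# the (n_lags - 1) preceding samples from every sensor, plus an intercept.
rows = [eeg[t - n_lags + 1 : t + 1].ravel() for t in range(n_lags - 1, n_samples)]
X = np.column_stack([np.ones(len(rows)), np.array(rows)])

# Synthetic "true" mapping and hand velocity, so the fit can be checked.
true_w = rng.standard_normal(X.shape[1])
velocity = X @ true_w + 0.1 * rng.standard_normal(len(rows))

# Multiple regression: least-squares weights mapping lagged features to velocity.
w, *_ = np.linalg.lstsq(X, velocity, rcond=None)

# Decoding accuracy reported as the correlation between predicted and
# actual velocity, a common metric for this kind of continuous decoder.
pred = X @ w
r = np.corrcoef(pred, velocity)[0, 1]
```

    In the studies themselves the regression weights were then combined with sLORETA source localization; that step depends on the forward head model and is not reproduced here.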
  • Cortical dynamics of auditory-visual speech: a forward model of multisensory integration
    (2004-08-30) van Wassenhove, Virginie; Poeppel, David; Grant, Ken W.; Neuroscience and Cognitive Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In noisy settings, seeing the interlocutor's face helps to disambiguate what is being said. For this to happen, the brain must integrate auditory and visual information. Three major problems are (1) bringing together separate sensory streams of information, (2) extracting auditory and visual speech information, and (3) identifying this information as a unified auditory-visual percept. In this dissertation, a new representational framework for auditory-visual (AV) speech integration is offered. The experimental work (psychophysics and EEG electrophysiology) suggests specific neural mechanisms for solving problems (1), (2), and (3) that are consistent with a (forward) 'analysis-by-synthesis' view of AV speech integration. In Chapter I, multisensory perception and integration are reviewed, and a unified conceptual framework serves as background for the study of AV speech integration. In Chapter II, psychophysical experiments on the perception of desynchronized AV speech inputs show the existence of a ~250 ms temporal window of integration for AV speech. In Chapter III, an EEG study shows that visual speech modulates the neural processing of auditory speech at an early stage. Two functionally independent modulations are (i) a ~250 ms amplitude reduction of auditory evoked potentials (AEPs) and (ii) a systematic temporal facilitation of the same AEPs as a function of the saliency of visual speech. In Chapter IV, an EEG study of desynchronized AV speech inputs shows that (i) fine-grained (gamma, ~25 ms) and (ii) coarse-grained (theta, ~250 ms) neural mechanisms simultaneously mediate the processing of AV speech. In Chapter V, a new illusory effect is proposed, in which non-speech visual signals modify the perceptual quality of auditory objects. EEG results show very different patterns of activation compared to those observed in AV speech integration. An MEG experiment is subsequently proposed to test hypotheses on the origins of these differences.
In Chapter VI, the 'analysis-by-synthesis' model of AV speech integration is contrasted with major speech theories. From a Cognitive Neuroscience perspective, the 'analysis-by-synthesis' model is argued to offer the most sensible representational system for AV speech integration. This thesis shows that AV speech integration results from both the statistical nature of stimulation and the inherent predictive capabilities of the nervous system.