A. James Clark School of Engineering

Permanent URI for this community: http://hdl.handle.net/1903/1654

The collections in this community comprise faculty research works, as well as graduate theses and dissertations.

Search Results

Now showing 1 - 10 of 30
  • Item
    Statistical Models of Neural Computations and Network Interactions in High-Dimensional Neural Data
    (2023) Mukherjee, Shoutik; Babadi, Behtash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Recent advances in neural recording technologies, like high-density electrodes and two-photon calcium imaging, now enable the simultaneous acquisition of activity from several hundred neurons over large patches of cortex. The availability of high volumes of simultaneously acquired neural activity presents exciting opportunities to study the network-level properties that support the neural code. This dissertation consists of two themes in analyzing network-level neural coding in large populations, particularly in the context of audition. Namely, we address modeling the instantaneous and directed interactions in large neuronal assemblies, and modeling neural computations in the mammalian auditory system. In the first part of this dissertation, an algorithm for adaptively modeling higher-order coordinated spiking as a discretized marked point process is proposed. Analyzing coordinated spiking involves a large number of possible simultaneous spiking events and covariates. We propose the adaptive Orthogonal Matching Pursuit (AdOMP) to tractably model dynamic higher-order coordination of ensemble spiking. Moreover, we generalize an elegant procedure for constructing confidence intervals for sparsity-regularized estimates to greedy algorithms and subsequently derive an inference framework for detecting facilitation or suppression of coordinated spiking. Application to simulated and experimentally recorded multi-electrode data reveals significant gains over several existing benchmarks. The second part pertains to functional network analysis of large neuronal ensembles using OMP to impose sparsity constraints on models of neuronal responses. The efficacy of functional network analysis based on greedy model estimation is first demonstrated in two sets of two-photon calcium imaging data of mouse primary auditory cortex. The first dataset was collected during a tone discrimination task, where we additionally show that properties of the functional network structure encode information relevant to the animal’s task performance. The second dataset was collected from a cohort of young and aging mice during passive presentations of pure tones in noise to study aging-related network changes in A1. The constituency of neurons engaged in functional networks changed with age; we characterized these changes and their correspondence to differences in functional network structure. We next demonstrated the efficacy of greedy estimation in functional network analysis in application to electrophysiological spiking recordings across multiple areas of songbird auditory cortex, and present initial findings on interareal network structure differences between responses to tutor songs and non-tutor songs that suggest learning-related effects on functional networks. The third part of this dissertation concerns neural system identification. Neurons in ferret primary auditory cortex are known to exhibit stereotypical spectrotemporal specificity in their responses. However, spectrotemporal receptive fields (STRFs) measured in non-primary areas can be intricate, reflecting mixed spectrotemporal selectivity, and hence be challenging to interpret. We propose a point process model of spiking responses of neurons in PEG, a secondary auditory area, where neurons’ spiking rates are modulated by a high-dimensional, biologically inspired stimulus representation.
The proposed method is shown to accurately model a neuron’s response to speech and artificial stimuli, and offers the interpretation of complex STRFs as the sparse combination of higher-dimensional features. Moreover, comparative analyses between PEG and A1 neurons suggest that the role of such a hierarchical model is to facilitate the encoding of natural stimuli. The fourth part of this dissertation is a study in computational auditory scene analysis that seeks to model the role of selective attention in binaural segregation within the framework of a temporal coherence model of auditory streaming. Masks can be obtained by clustering cortical features according to their instantaneous coincidences with pitch and interaural cues. We model selective attention by restricting the ranges of pitch or interaural timing differences used to obtain masks, and evaluate the robustness of the selective attention model in comparison to a baseline model that uses all perceptual cues. Selective attention was as robust to noise and reverberation as the baseline, suggesting that the proposed attentive temporal coherence model, in the context of prior experimental findings, may describe the computations by which downstream unattended-speaker representations are suppressed in scene analysis. Finally, the fifth part of this dissertation discusses future directions in studying network interactions in large neural datasets, especially in consideration of current trends towards the adoption of optogenetic stimulation to study neural coding. As a first step in these new directions, a simulation study introducing a reinforcement learning-guided approach to optogenetic stimulation target selection is presented.
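
Orthogonal Matching Pursuit, the greedy estimator at the heart of the first two parts above, selects regressors one at a time and re-fits on the growing support. The sketch below is a generic OMP for a linear-Gaussian model, not the dissertation's AdOMP (which adapts estimates over time for point-process observations); all names and the synthetic data are illustrative.

```python
import numpy as np

def omp(X, y, n_nonzero):
    """Greedy Orthogonal Matching Pursuit for y ~ X @ beta with a sparsity budget.

    Generic illustration only; the dissertation's AdOMP additionally adapts
    estimates over time for point-process (spiking) observations.
    """
    n_features = X.shape[1]
    residual = y.copy()
    support = []
    beta = np.zeros(n_features)
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual.
        correlations = X.T @ residual
        j = int(np.argmax(np.abs(correlations)))
        if j not in support:
            support.append(j)
        # Re-fit least squares on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    beta[support] = coef
    return beta

# Tiny usage example with synthetic sparse data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
true_beta = np.zeros(50)
true_beta[[3, 17, 42]] = [1.5, -2.0, 0.8]
y = X @ true_beta + 0.1 * rng.standard_normal(200)
print(np.nonzero(omp(X, y, n_nonzero=3))[0])  # expect indices near {3, 17, 42}
```
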
  • Item
    CORTICAL REPRESENTATIONS OF INTELLIGIBLE AND UNINTELLIGIBLE SPEECH: EFFECTS OF AGING AND LINGUISTIC CONTENT
    (2023) Karunathilake, I.M. Dushyanthi; Simon, Jonathan Z.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Speech communication requires real-time processing of rapidly varying acoustic sounds across various speech landmarks while recruiting complex cognitive processes to derive the intended meaning. Behavioral studies have highlighted that speech comprehension is altered by factors like aging, linguistic content, and intelligibility, yet the systematic neural mechanisms underlying these changes are not well understood. This thesis explores how the neural bases of speech processing are modulated by each of these factors in three different experiments, by comparing speech representations in cortical responses measured by Magnetoencephalography (MEG). We use neural encoding (Temporal Response Functions (TRFs)) and decoding (reconstruction accuracy) models, which describe the mapping between stimulus features and the cortical responses and are instrumental in understanding cortical temporal processing mechanisms in the brain. Firstly, we investigate age-related changes in timing and fidelity of the cortical representation of speech-in-noise. Understanding speech in a noisy environment becomes more challenging with age, even for healthy aging. Our findings demonstrate that some of the age-related difficulties in understanding speech in noise experienced by older adults are accompanied by age-related temporal processing differences in the auditory cortex. This is an important step towards incorporating neural measures into both diagnostic evaluation and treatments aimed at speech comprehension problems in older adults. Next, we investigate how the cortical representation of speech is influenced by linguistic content by comparing neural responses to four types of continuous speech-like passages: non-speech, non-words, scrambled words, and narrative. We find neural evidence for emergent features of speech processing, from acoustics to sentence-level linguistic processes, as incremental steps in the processing of speech input occur. We also show the gradual computation of hierarchical speech features over time, encompassing both bottom-up and top-down mechanisms. Top-down mechanisms at the linguistic level demonstrate an N400-like response, suggesting the involvement of predictive coding mechanisms. Finally, we find potential neural markers of speech intelligibility using a priming paradigm, where intelligibility is varied while keeping the acoustic structure constant. Our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, most strongly at ~400 ms in the prefrontal cortex (PFC), in line with the engagement of top-down mechanisms associated with priming. Taken together, this thesis furthers our understanding of the neural mechanisms underlying speech comprehension and identifies potential objective neural markers to evaluate the level of speech comprehension.
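
A temporal response function of the kind used throughout this work is, at its simplest, a regularized linear filter mapping a stimulus feature (e.g., the acoustic envelope) to the neural response. Below is a minimal sketch of TRF estimation by time-lagged ridge regression, assuming a single feature and a single response channel at a common sampling rate; the variable names and synthetic data are illustrative, and the dissertation's actual pipelines are more involved.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, ridge=1.0):
    """Estimate a temporal response function (TRF) by time-lagged ridge regression.

    stimulus, response : 1-D arrays of equal length (same sampling rate)
    n_lags             : number of time lags (in samples) spanned by the TRF
    ridge              : regularization strength
    """
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Ridge solution: (X'X + ridge * I)^{-1} X'y
    trf = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)
    return trf

# Synthetic check: the response is the stimulus convolved with a known kernel plus noise.
rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)
kernel = np.exp(-np.arange(40) / 10.0) * np.sin(np.arange(40) / 4.0)
resp = np.convolve(stim, kernel)[:5000] + 0.5 * rng.standard_normal(5000)
trf = estimate_trf(stim, resp, n_lags=40, ridge=10.0)  # should approximate `kernel`
```
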
  • Item
    Perceptual Binding and Temporal Coherence in the Auditory Cortex
    (2023) Dutta, Kelsey Jayne; Shamma, Shihab A; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Auditory streaming and perceptual binding are functions performed by the auditory brain rapidly and without conscious effort. They are fundamental to how we analyze and understand the sound environment, including the perception of speech and the ability to attend to one speaker while ignoring background noise. Recent work has suggested that temporal coherence of frequency components is a key cue that causes the brain to group channels into a unified auditory stream. Coherent frequency inputs will lead to coherent neuronal firing, and we hypothesize that such neurons will demonstrate reciprocal enhancement of firing rate or suppression of responses to incoherent channels. This dissertation examines neuronal activity from the auditory cortex of ferrets in order to better understand the role of temporal coherence in the formation of auditory streams. One experiment examines the role of temporal coherence in a selective attention task paradigm, and the other uses a stochastic figure-ground stimulus to examine neural correlates of a perceptual “pop-out” during passive listening. A third project develops a biophysically plausible model for a pitch-processing neuron in the early auditory system.
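
Temporal coherence here refers to correlated slow envelope fluctuations across frequency channels: channels that fluctuate together are candidates for grouping into a single stream. The sketch below is only a schematic of that idea (a pairwise correlation matrix followed by a crude greedy grouping), not the binding model studied in the dissertation; the threshold and synthetic envelopes are illustrative.

```python
import numpy as np

def coherence_matrix(envelopes):
    """Pairwise correlation of channel envelopes (channels x time)."""
    return np.corrcoef(envelopes)

def group_channels(envelopes, threshold=0.7):
    """Greedy grouping: channels whose envelopes correlate above `threshold`
    with a seed channel are assigned to the same putative stream."""
    C = coherence_matrix(envelopes)
    n = C.shape[0]
    unassigned, groups = set(range(n)), []
    while unassigned:
        seed = min(unassigned)
        members = [ch for ch in unassigned if ch == seed or C[seed, ch] > threshold]
        groups.append(members)
        unassigned -= set(members)
    return groups

# Two coherent channels sharing one modulator, plus an independent third channel.
rng = np.random.default_rng(2)
t = np.arange(2000)
shared = 1 + np.sin(2 * np.pi * t / 200)
env = np.vstack([shared + 0.1 * rng.standard_normal(2000),
                 shared + 0.1 * rng.standard_normal(2000),
                 1 + np.abs(rng.standard_normal(2000))])
print(group_channels(env))  # expect channels 0 and 1 grouped, channel 2 separate
```
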
  • Item
    Efficient Machine Learning Techniques for Neural Decoding Systems
    (2022) Wu, Xiaomin; Bhattacharyya, Shuvra S.; Chen, Rong; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this thesis, we explore efficient machine learning techniques for calcium-imaging-based neural decoding in two directions: first, techniques for pruning neural network models to reduce computational complexity and memory cost while retaining high accuracy; second, new techniques for converting graph-based input into low-dimensional vector form, which can be processed more efficiently by conventional neural network models. Neural decoding is an important step in connecting brain activity to behavior --- e.g., to predict movement based on acquired neural signals. Important application areas for neural decoding include brain-machine interfaces and neuromodulation. For application areas such as these, both real-time processing of neural signals and high-quality information extraction from those signals are important. Calcium imaging is a modality that is of increasing interest for studying brain activity. Miniature calcium imaging is a neuroimaging modality that can observe cells in behaving animals with high spatial and temporal resolution, and with the capability to provide chronic imaging. Compared to alternative modalities, calcium imaging has the potential to enable improved neural decoding accuracy. However, processing calcium images in real time is a challenging task as it involves multiple time-consuming stages: neuron detection, motion correction, and signal extraction. Traditional neural decoding methods, such as those based on Wiener and Kalman filters, are fast; however, they are outperformed in terms of accuracy by recently-developed deep neural network (DNN) models. While DNNs provide improved accuracy, they involve high computational complexity, which exacerbates the challenge of real-time processing. Addressing the challenges of high-accuracy, real-time, DNN-based neural decoding is the central objective of this research. As a first step in addressing these challenges, we have developed the NeuroGRS system. NeuroGRS is designed to explore design spaces for compact DNN models and optimize the computational complexity of the models subject to accuracy constraints. GRS, which stands for Greedy inter-layer order with Random Selection of intra-layer units, is an algorithm that we have developed for deriving compact DNN structures. We have demonstrated the effectiveness of GRS in transforming DNN models into more compact forms that significantly reduce processing and storage complexity while retaining high accuracy. While NeuroGRS provides useful new capabilities for deriving compact DNN models subject to accuracy constraints, the approach has a significant limitation in the context of neural decoding. This limitation is its lack of scalability to large DNNs. Large DNNs arise naturally in neural decoding applications when the brain model under investigation involves a large number of neurons. As the size of the input DNN increases, NeuroGRS becomes prohibitively expensive in terms of computational time. To address this limitation, we have performed a detailed experimental analysis of how pruned solutions evolve as GRS operates, and we have used insights from this analysis to develop a new DNN pruning algorithm called Jump GRS (JGRS). JGRS maintains similar levels of model quality --- in terms of predictive accuracy --- as GRS while operating much more efficiently and being able to handle much larger DNNs within reasonable amounts of time and with reasonable computational resources.
Jump GRS incorporates a mechanism that bypasses (“jumps over”) validation and retraining during carefully selected iterations of the pruning process. We demonstrate the advantages and improved scalability of JGRS compared to GRS through extensive experiments in the context of DNNs for neural decoding. We have also developed methods for raising the level of abstraction in the signal representation used for calcium imaging analysis. As a central part of this work, we invented the WGEVIA (Weighted Graph Embedding with Vertex Identity Awareness) algorithm, which enables DNN-based processing of neuron activity that is represented in the form of microcircuits. In contrast to traditional representations of neural signals, which involve spiking signals, a microcircuit representation is a graph-based representation. Each vertex in a microcircuit corresponds to a neuron, and each edge carries a weight that captures information about firing relationships between the neurons associated with the vertices that are incident to the edge. Our experiments demonstrate that WGEVIA is effective at extracting information from microcircuits. Moreover, raising the level of abstraction to microcircuit analysis has the potential to enable more powerful signal extraction under limited processing time and resources.
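
GRS and Jump GRS are the dissertation's specific pruning algorithms. As a much simpler point of reference for how unit-level pruning shrinks a model, the sketch below removes the lowest-norm output units of a dense layer and slices the following layer to match; this is generic magnitude pruning, not GRS, and the shapes and keep fraction are illustrative.

```python
import numpy as np

def prune_units(weight, bias, keep_fraction=0.5):
    """Remove output units of a dense layer with the smallest L2 weight norms.

    weight : (n_out, n_in) weight matrix
    bias   : (n_out,) bias vector
    Returns pruned (weight, bias) and the indices of the kept units, so the
    next layer's input weights can be sliced accordingly.
    """
    norms = np.linalg.norm(weight, axis=1)
    n_keep = max(1, int(round(keep_fraction * weight.shape[0])))
    kept = np.sort(np.argsort(norms)[-n_keep:])
    return weight[kept], bias[kept], kept

# Toy two-layer network: prune layer 1's units, then drop the matching columns of layer 2.
rng = np.random.default_rng(3)
W1, b1 = rng.standard_normal((64, 32)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((10, 64)), rng.standard_normal(10)
W1p, b1p, kept = prune_units(W1, b1, keep_fraction=0.25)
W2p = W2[:, kept]
print(W1.shape, "->", W1p.shape, "and", W2.shape, "->", W2p.shape)
```
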
  • Item
    Decoding the Brain in Complex Auditory Environments
    (2022) Rezaeizadeh, Mohsen; Shamma, Shihab; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Humans have an exceptional ability to engage with sequences of sounds and extract meaningful information from them. We can appreciate music or absorb speech during a conversation in a way that no other species on the planet can. It is unclear exactly how the brain effortlessly processes these rapidly changing complex soundscapes. This dissertation explored the neural mechanisms underlying these remarkable traits in an effort to expand our knowledge of human cognition, with numerous clinical and engineering applications. Brain-imaging techniques have provided a powerful tool to access the content and dynamics of mental representations. Non-invasive imaging such as Electroencephalography (EEG) and Magnetoencephalography (MEG) provides a fine-grained dissection of the sequence of brain activities. The analysis of these time-resolved signals can be enhanced with temporal decoding methods that offer vast and untapped potential for determining how mental representations unfold over time. In the present thesis, we use these decoding techniques, along with a series of novel experimental paradigms, on EEG and MEG signals to investigate the neural mechanisms of auditory processing in the human brain, ranging from the neural representation of acoustic features to higher levels of cognition, such as music perception and speech imagery. First, we reported our findings regarding the role of temporal coherence in auditory source segregation. We showed that a target sound source can be perceptually segregated from a complex acoustic background only if its acoustic features (e.g., pitch, location, and timbre) induce temporally modulated neural responses that are mutually correlated. We used EEG signals to measure the neural responses to the individual acoustic features in complex sound mixtures and decoded the effect of attention on these responses. We showed that attention and the coherent temporal modulation of the acoustic features of the target sound are the key factors that induce the binding of the target features and the emergence of the target as the foreground sound source. Next, we explored how the brain learns the statistical structures of sound sequences in different musical contexts. The ability to detect probabilistic patterns is central to many aspects of human cognition, ranging from auditory perception to the enjoyment of music. We used artificially generated melodies derived from uniform or non-uniform musical scales. We collected EEG signals and decoded the neural responses to the tones in a melody with different transition probabilities. We observed that the listener's brain learned the melodies' statistical structures only when they were derived from non-uniform scales. Finally, we investigated brain processing during speech and music imagery for Brain-Computer Interface applications. We developed an encoder-decoder neural network architecture to find a transformation between neural responses to listened and imagined sounds. Using this map, we could reliably reconstruct the imagery signals, which could be used as templates to decode the actual imagery neural signals. This was possible even when we generalized the model to unseen data from an unseen subject. We decoded these predicted signals and identified the imagined segment with remarkable accuracy.
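
The temporal decoding referred to above typically trains a separate cross-validated classifier at each time point of the epoched response, to ask when condition information becomes available in the signal. A minimal sketch using scikit-learn (an assumption on tooling; the dissertation's analyses may differ in classifier and validation scheme):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_over_time(epochs, labels, cv=5):
    """Time-resolved decoding: one cross-validated classifier per time sample.

    epochs : (n_trials, n_channels, n_times) array of EEG/MEG epochs
    labels : (n_trials,) condition labels
    Returns an array of cross-validated accuracies, one per time sample.
    """
    n_times = epochs.shape[2]
    scores = np.zeros(n_times)
    for t in range(n_times):
        clf = LogisticRegression(max_iter=1000)
        scores[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return scores

# Synthetic example: condition information appears only in the second half of the epoch.
rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 80, 16, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 25:] += 0.8           # inject a late class difference
acc = decode_over_time(X, y)       # accuracy should rise after sample 25
```
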
  • Item
    BAYESIAN INFERENCE OF LATENT SPECTRAL AND TEMPORAL NETWORK ORGANIZATIONS FROM HIGH DIMENSIONAL NEURAL DATA
    (2022) Rupasinghe, Anuththara; Babadi, Behtash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The field of neuroscience has striven for more than a century to understand how the brain functionally coordinates billions of neurons to perform its many tasks. Recent advancements in neural data acquisition techniques such as multi-electrode arrays, two-photon calcium imaging, and high-speed light-sheet microscopy have significantly contributed to this endeavor's progression by facilitating concurrent observation of spiking activity in large neuronal populations. However, existing methods for network-level inference from these data have several shortcomings, including overlooking the non-linear dynamics, ignoring non-stationary brain activity, and causing error propagation by performing inference in a multi-stage fashion. The goal of this dissertation is to close this gap by developing models and methods to directly infer the dynamic spectral and temporal network organizations in the brain from these ensemble neural data. In the first part of this dissertation, we introduce Bayesian methods to infer dynamic frequency-domain network organizations in neuronal ensembles from spiking observations, by integrating techniques such as point process modeling, state-space estimation, and multitaper spectral estimation. Firstly, we introduce a semi-stationary multitaper multivariate spectral analysis method tailored for neuronal spiking data and establish theoretical bounds on its performance. Building upon this estimator, we then introduce a framework to derive spectrotemporal Granger causal interactions in a population of neurons from spiking data. We demonstrate the validity of these methods through simulations and applications to real data recorded from cortical neurons of rats during sleep and from human subjects undergoing anesthesia. Finally, we extend these methods to develop a precise frequency-domain inference method to characterize human heart rate variability from electrocardiogram data. The second part introduces a methodology to directly estimate signal and noise correlation networks from two-photon calcium imaging observations. We explicitly model the observation noise, temporal blurring of spiking activities, and other underlying non-linearities in a Bayesian framework, and derive an efficient variational inference method. We demonstrate the validity of the resulting estimators through theoretical analysis and extensive simulations, all of which establish significant gains over existing methods. Applications of our method to real data recorded from the mouse primary auditory cortex reveal novel and distinct spatial patterns in the correlation networks. Finally, we use our methods to investigate how the correlation networks in the auditory cortex change under different stimulus conditions, and during perceptual learning. In the third part, we investigate the respiratory network and the swimming-respiration coordination in larval zebrafish by applying several spectrotemporal analysis techniques on whole-brain light-sheet microscopy imaging data. Firstly, using multitaper spectrotemporal analysis techniques, we categorize brain regions that are synchronized with the respiratory rhythm based on their distinct phases. Then, we demonstrate that zebrafish swimming is phase-locked to breathing. Next, through the analysis of neural activity and behavior under optogenetic stimulations and two-photon ablations, we identify the brain regions that are key for this swimming-respiration coordination.
Finally, using the Izhikevich model for spiking neurons, we develop and simulate a circuit model that replicates this swimming-respiration coupling phenomenon, providing new insights into the possible underlying neural circuitry.
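
The Izhikevich neuron named just above is a standard two-variable spiking model. A minimal single-neuron simulation under constant drive is sketched below, using Izhikevich's regular-spiking parameters; the dissertation's circuit model is, of course, considerably more elaborate, and the drive level here is only illustrative.

```python
import numpy as np

def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate a single Izhikevich neuron with forward-Euler integration.

    I  : constant input current (arbitrary units, as in the original model)
    T  : simulation length in ms; dt : step in ms
    Returns the membrane-potential trace and the spike times (ms).
    """
    n_steps = int(T / dt)
    v, u = -65.0, b * -65.0
    v_trace, spikes = np.zeros(n_steps), []
    for i in range(n_steps):
        # Membrane and recovery dynamics (Izhikevich, 2003).
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: reset membrane, bump recovery variable
            spikes.append(i * dt)
            v, u = c, u + d
        v_trace[i] = v
    return v_trace, spikes

v_trace, spikes = izhikevich(I=10.0)
print(f"{len(spikes)} spikes in 1 s of simulated regular spiking")
```
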
  • Item
    Time-locked Cortical Processing of Speech in Complex Environments
    (2021) Kulasingham, Joshua Pranjeevan; Simon, Jonathan Z; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Our ability to communicate using speech depends on complex, rapid processing mechanisms in the human brain. These cortical processes make it possible for us to easily understand one another even in noisy environments. Measurements of neural activity have found that cortical responses time-lock to the acoustic and linguistic features of speech. Investigating the neural mechanisms that underlie this ability could lead to a better understanding of human cognition, language comprehension, and hearing and speech impairments. We use Magnetoencephalography (MEG), which non-invasively measures the magnetic fields that arise from neural activity, to further explore these time-locked cortical processes. One method for detecting this activity is the Temporal Response Function (TRF), which models the impulse response of the neural system to continuous stimuli. Prior work has found that TRFs reflect several stages of speech processing in the cortex. Accordingly, we use TRFs to investigate cortical processing of both low-level acoustic and high-level linguistic features of continuous speech. First, we find that cortical responses time-lock at high gamma frequencies (~100 Hz) to the acoustic envelope modulations of the low-pitch segments of speech. Older and younger listeners show similar high gamma responses, even though slow envelope TRFs show age-related differences. Next, we utilize frequency-domain analysis, TRFs, and linear decoders to investigate cortical processing of high-level structures such as sentences and equations. We find that the cortical networks involved in arithmetic processing dissociate from those underlying language processing, although both involve several overlapping areas. These processes are more separable when subjects selectively attend to one speaker over another distracting speaker. Finally, we compare both conventional and novel TRF algorithms in terms of their ability to estimate TRF components, which may provide robust measures for analyzing group and task differences in auditory and speech processing. Overall, this work provides insights into several stages of time-locked cortical processing of speech and highlights the use of TRFs for investigating neural responses to continuous speech in complex environments.
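
The frequency-domain analysis mentioned above exploits fixed-rate presentation of linguistic units (e.g., sentences or equations at a constant rate), which appears as a spectral peak in the response at that rate. The sketch below tests for such a peak by comparing power at a target frequency with neighboring bins; the rates, durations, and names are illustrative and not taken from the dissertation.

```python
import numpy as np

def peak_snr(response, fs, target_hz, n_neighbors=5):
    """Power at a target frequency relative to the mean power of neighboring bins.

    response  : 1-D neural response (e.g., one MEG channel or source time course)
    fs        : sampling rate in Hz
    target_hz : hypothesized presentation rate (e.g., a sentence rate)
    """
    spectrum = np.abs(np.fft.rfft(response)) ** 2
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[spectrum[k - n_neighbors:k], spectrum[k + 1:k + 1 + n_neighbors]]
    return spectrum[k] / neighbors.mean()

# Synthetic response with a weak 1 Hz component (a notional sentence rate) in noise.
rng = np.random.default_rng(5)
fs, dur = 100, 60                      # 100 Hz sampling, 60 s of data
t = np.arange(fs * dur) / fs
resp = 0.3 * np.sin(2 * np.pi * 1.0 * t) + rng.standard_normal(fs * dur)
print(peak_snr(resp, fs, target_hz=1.0))   # values well above 1 indicate tracking
```
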
  • Item
    Towards Trust and Transparency in Deep Learning Systems through Behavior Introspection & Online Competency Prediction
    (2021) Allen, Julia Filiberti; Gabriel, Steven A.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Deep neural networks are naturally “black boxes”, offering little insight into how or why they make decisions. These limitations diminish the likelihood that such systems will be adopted for important tasks and as trusted teammates. We employ introspective techniques to abstract machine activation patterns into human-interpretable strategies and identify relationships between environmental conditions (why), strategies (how), and performance (result) on two applications: a deep reinforcement learning two-dimensional pursuit game and image-based deep supervised learning for obstacle recognition. Pursuit-evasion games have been studied for decades under perfect information and analytically-derived policies for static environments. We incorporate uncertainty in a target’s position via simulated measurements and demonstrate a novel continuous deep reinforcement learning approach against speed-advantaged targets. The resulting approach was tested under many scenarios, and its performance exceeded that of a baseline course-aligned strategy. We manually observed separation of learned pursuit behaviors into strategy groups and hypothesized environmental conditions that affected performance. These manual observations motivated the automated abstraction of relationships among conditions, performance, and strategies. Next, we found that deep network activation patterns could be abstracted into human-interpretable strategies for two separate deep learning approaches. We characterized machine commitment by introducing a novel measure and revealed significant correlations between machine commitment, strategies, environmental conditions, and task performance. As such, we motivated online exploitation of machine behavior estimation for competency-aware intelligent systems. Finally, we realized online prediction capabilities for conditions, strategies, and performance. Our competency-aware machine learning approach is easily portable to new applications due to its Bayesian nonparametric foundation, wherein all inputs are transformed into the same compact data representation. In particular, image data is transformed into a probability distribution over features extracted from the data. The resulting transformation forms a common representation for comparing two images, possibly from different types of sensors. By uncovering relationships between environmental conditions (why), machine strategies (how), and performance (result), and by giving rise to online estimation of machine competency, we increase transparency and trust in machine learning systems, contributing to the overarching explainable artificial intelligence initiative.
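
As a loose illustration of abstracting activation patterns into discrete strategies, per-episode activation vectors can be clustered and the resulting labels related to task performance. The sketch below uses plain k-means as a stand-in; it is not the Bayesian nonparametric representation used in the dissertation, and all names and the synthetic data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def activations_to_strategies(activations, n_strategies=3, seed=0):
    """Cluster per-episode activation vectors into discrete 'strategy' labels.

    activations : (n_episodes, n_units) array of time-averaged hidden activations
    Returns one strategy label per episode.
    """
    km = KMeans(n_clusters=n_strategies, n_init=10, random_state=seed)
    return km.fit_predict(activations)

# Synthetic episodes drawn from three activation regimes, plus per-episode rewards.
rng = np.random.default_rng(6)
centers = rng.standard_normal((3, 32)) * 3
acts = np.vstack([c + rng.standard_normal((40, 32)) for c in centers])
reward = np.repeat([1.0, 0.4, 0.7], 40) + 0.05 * rng.standard_normal(120)
labels = activations_to_strategies(acts)
for s in range(3):
    # Relate each discovered strategy to task performance.
    print(f"strategy {s}: mean reward {reward[labels == s].mean():.2f}")
```
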
  • Item
    Computing with Trajectories: Characterizing Dynamics and Connectivity in Spatiotemporal Neuroimaging Data
    (2020) Venkatesh, Manasij; Pessoa, Luiz; JaJa, Joseph F; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Human functional Magnetic Resonance Imaging (fMRI) data are acquired while participants engage in diverse perceptual, motor, cognitive, and emotional tasks. Although data are acquired temporally, they are most often treated in a quasi-static manner. Yet, a fuller understanding of the mechanisms that support mental functions necessitates the characterization of dynamic properties. Firstly, we describe an approach employing a class of recurrent neural networks called reservoir computing, and show its feasibility and potential for the analysis of temporal properties of brain data. We show that reservoirs can be used effectively both for condition classification and for characterizing lower-dimensional "trajectories" of temporal data. Classification accuracy was approximately 90% for short clips of "social interactions" and around 70% for clips extracted from movie segments. Data representations with 12 or fewer dimensions (from an original space with over 300) attained classification accuracy within 5% of the full data. We hypothesize that such low-dimensional trajectories may provide "signatures" that can be associated with tasks and/or mental states. The approach was applied across participants (that is, training in one set of participants and testing in a separate group), showing that representations generalized well to unseen participants. In the second part, we use fully-trained recurrent neural networks to capture and characterize spatiotemporal properties of brain events. We propose an architecture based on long short-term memory (LSTM) networks to uncover distributed spatiotemporal signatures during dynamic experimental conditions. We demonstrate the potential of the approach using naturalistic movie-watching fMRI data. We show that movie clips result in complex but distinct spatiotemporal patterns in brain data that can be classified using LSTMs (≈90% for 15-way classification), demonstrating that learned representations generalized to unseen participants. LSTMs were also superior to existing methods in predicting behavior and personality traits of individuals. We propose a dimensionality reduction approach that uncovers low-dimensional trajectories and captures essential informational properties of brain dynamics. We also employed saliency maps to characterize spatiotemporally varying brain-region importance; the spatiotemporal saliency maps revealed dynamic but consistent changes in fMRI activation data. Taken together, we believe the above approaches provide a powerful framework for visualizing, analyzing, and discovering dynamic spatially distributed brain representations during naturalistic conditions. Finally, we address the problem of comparing functional connectivity matrices obtained from temporal fMRI data. Understanding the correlation structure associated with multiple brain measurements provides information about potential "functional groupings" and network organization. The correlation structure can be conveniently captured in a matrix format that summarizes the relationships among a set of brain measurements, for example between pairs of regions. Such a functional connectivity matrix is an important component of many types of investigation focusing on network-level properties of the brain, including clustering brain states, characterizing dynamic functional states, performing participant identification (so-called "fingerprinting"), understanding how tasks reconfigure brain networks, and inter-subject correlation analysis.
In these investigations, some notion of proximity or similarity of functional connectivity matrices is employed, such as their Euclidean distance or Pearson correlation (by correlating the matrix entries). We propose the use of a geodesic distance metric that reflects the underlying non-Euclidean geometry of functional correlation matrices. The approach is evaluated in the context of participant identification (fingerprinting): given a participant's functional connectivity matrix based on resting-state or task data, how effectively can the participant be identified? Using geodesic distance, identification accuracy was over 95% on resting-state data and exceeded that of the Pearson correlation approach by 20%. For whole-cortex data, accuracy improved on a range of tasks by between 2% and as much as 20%. We also investigated identification using pairs of subnetworks (say, dorsal attention plus default mode), and particular combinations improved accuracy over whole-cortex participant identification by over 10%. The geodesic distance also outperformed Pearson correlation even when the former employed only a fourth of the data used by the latter. Finally, we suggest that low-dimensional distance visualizations based on the geodesic approach help uncover the geometry of task functional connectivity in relation to that during the resting state. We propose that the use of the geodesic distance is an effective way to compare the correlation structure of the brain across a broad range of studies.
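
The geodesic distance referred to above is the affine-invariant Riemannian distance between symmetric positive-definite matrices, d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F, i.e., the Frobenius norm of a matrix logarithm. A minimal sketch, assuming well-conditioned correlation matrices (a small ridge is added for safety); the toy fingerprinting example is illustrative only.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def geodesic_distance(A, B, eps=1e-6):
    """Affine-invariant geodesic distance between SPD (e.g., correlation) matrices."""
    n = A.shape[0]
    A = A + eps * np.eye(n)             # guard against rank deficiency
    B = B + eps * np.eye(n)
    A_inv_sqrt = inv(sqrtm(A))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return float(np.linalg.norm(logm(M), 'fro').real)

# Fingerprinting toy example: compare one "test" matrix against two "reference" matrices.
rng = np.random.default_rng(7)
ts = [rng.standard_normal((500, 30)) for _ in range(2)]       # two participants' time series
refs = [np.corrcoef(x, rowvar=False) for x in ts]
test = np.corrcoef(ts[0][:250], rowvar=False)                 # first half of participant 0
dists = [geodesic_distance(test, r) for r in refs]
print("identified participant:", int(np.argmin(dists)))       # expect 0
```
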
  • Item
    Bayesian Modeling and Estimation Techniques for the Analysis of Neuroimaging Data
    (2020) Das, Proloy; Babadi, Behtash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Brain function is hallmarked by its adaptivity and robustness, arising from underlying neural activity that admits well-structured representations in the temporal, spatial, or spectral domains. While neuroimaging techniques such as Electroencephalography (EEG) and Magnetoencephalography (MEG) can record rapid neural dynamics at high temporal resolutions, they face several signal processing challenges that hinder their full utilization in capturing these characteristics of neural activity. The objective of this dissertation is to devise statistical modeling and estimation methodologies that account for the dynamic and structured representations of neural activity and to demonstrate their utility in application to experimentally-recorded data. The first part of this dissertation concerns spectral analysis of neural data. In order to capture the non-stationarities involved in neural oscillations, we integrate multitaper spectral analysis and state-space modeling in a Bayesian estimation setting. We also present a multitaper spectral analysis method tailored for spike trains that captures the non-linearities involved in neuronal spiking. We apply our proposed algorithms to both EEG and spike recordings, which reveal significant gains in spectral resolution and noise reduction. In the second part, we investigate cortical encoding of speech as manifested in MEG responses. These responses are often modeled via a linear filter, referred to as the temporal response function (TRF). While the TRFs estimated from sensor-level MEG data have been widely studied, their cortical origins are not fully understood. We define the new notion of Neuro-Current Response Functions (NCRFs) for simultaneously determining the TRFs and their cortical distribution. We develop an efficient algorithm for NCRF estimation and apply it to MEG data, which provides new insights into the cortical dynamics underlying speech processing. Finally, in the third part, we consider the inference of Granger causal (GC) influences in high-dimensional time series models with sparse coupling. We study a canonical sparse bivariate autoregressive model and define a new statistic for inferring GC influences, which we refer to as the LASSO-based Granger Causal (LGC) statistic. We establish non-asymptotic guarantees for robust identification of GC influences via the LGC statistic. Applications to simulated and real data demonstrate the utility of the LGC statistic in robust GC identification.
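
As a schematic of the Granger-causal comparison in the third part: fit a LASSO-regularized autoregressive model of one series with and without the other series' past, and compare prediction errors. This is the standard restricted-versus-full idea, not the dissertation's LGC statistic or its non-asymptotic analysis; the lag order, penalty, and synthetic data are illustrative, and the errors here are in-sample.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lagged(x, p):
    """Design matrix of p past lags of x, aligned so that row t predicts x[t + p]."""
    return np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])

def granger_score(x, y, p=5, alpha=0.01):
    """Relative reduction in prediction error for y when x's past is included."""
    target = y[p:]
    X_restricted = lagged(y, p)                       # y's own past only
    X_full = np.hstack([lagged(y, p), lagged(x, p)])  # y's past plus x's past
    model_r = Lasso(alpha=alpha).fit(X_restricted, target)
    model_f = Lasso(alpha=alpha).fit(X_full, target)
    err_r = np.mean((target - model_r.predict(X_restricted)) ** 2)
    err_f = np.mean((target - model_f.predict(X_full)) ** 2)
    return (err_r - err_f) / err_r

# Synthetic pair: x drives y with a one-sample delay, but not the reverse.
rng = np.random.default_rng(8)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(f"x -> y: {granger_score(x, y):.2f}, y -> x: {granger_score(y, x):.2f}")
```
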