Physics

Permanent URI for this community: http://hdl.handle.net/1903/2269

Search Results

Now showing 1 - 3 of 3
  • Item
    Analyzing and Enhancing Molecular Dynamics Through the Synergy of Physics and Artificial Intelligence
    (2024) Wang, Dedi; Tiwary, Pratyush; Biophysics (BIPH); Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Rapid advances in computational power have made all-atom molecular dynamics (MD) a powerful tool for studying systems in biophysics, chemical physics, and beyond. By solving Newton's equations of motion in silico, MD simulations allow us to track the time evolution of complex molecular systems at all-atom, femtosecond resolution, enabling the evaluation of both their thermodynamic and kinetic properties. Powerful as MD simulations are, their effectiveness is often hampered by the sheer amount of data they produce. For instance, a standard microsecond-long simulation of a protein can easily generate hundreds of gigabytes of data, which can be difficult to analyze. Moreover, the time required to run these simulations can be prohibitively long: microsecond-long simulations often take weeks to complete, whereas the processes of interest may occur on timescales of milliseconds or even hundreds of seconds. These factors collectively pose significant challenges to leveraging MD simulations for comprehensive analysis and exploration of chemical and biological systems. In this thesis, I address these challenges by leveraging physics-inspired insights to learn unique, useful, and meaningful low-dimensional representations of complex molecular systems. These representations enable effective analysis and interpretation of the vast amounts of data generated by experiments and simulations. They have proven valuable in providing mechanistic insight into fundamental problems in theoretical chemistry and biophysics, such as the interplay between long-range and short-range forces in ion pair dissociation and the transformation of proteins from unstable random coils to structured forms. Furthermore, these physics-informed representations play a crucial role in enhancing MD simulations.
They facilitate the construction of simplified kinetic models, enabling the generation of dynamical trajectories spanning significantly longer time scales than those accessible by conventional MD simulations. Additionally, they can serve as blueprints to guide the sampling process in combination with existing enhanced sampling methods. Through this thesis, I showcase how the synergy between physics and AI can advance our understanding of molecular systems and facilitate more efficient and insightful analysis in the fields of computational chemistry and biophysics.
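The simplified kinetic models mentioned in this abstract can be illustrated with a minimal Markov-state-model sketch: discretize a trajectory into states, estimate a transition matrix, and sample synthetic trajectories far longer than the fitting data. This is a generic toy with invented data and function names, not the thesis's actual representations or models:

```python
import numpy as np

def estimate_transition_matrix(states, n_states, lag=1):
    """Count lagged transitions between discrete states and row-normalize."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-lag], states[lag:]):
        counts[i, j] += 1
    counts += 1e-12  # guard against empty rows for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

def sample_trajectory(T, start, n_steps, rng):
    """Sample a synthetic state trajectory from the transition matrix T."""
    traj = [start]
    for _ in range(n_steps - 1):
        traj.append(rng.choice(len(T), p=T[traj[-1]]))
    return np.array(traj)

rng = np.random.default_rng(0)
# Toy discretized trajectory over 3 states (a stand-in for clustered MD frames)
short_traj = rng.choice(3, size=2000, p=[0.5, 0.3, 0.2])
T = estimate_transition_matrix(short_traj, n_states=3)
# Generate a trajectory 50x longer than the data used to fit the model
long_traj = sample_trajectory(T, start=0, n_steps=100_000, rng=rng)
```

Once the transition matrix is estimated from short simulations, sampling from it is essentially free, which is what makes kinetic models attractive for reaching otherwise inaccessible timescales.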
  • Item
    Unveiling secrets of brain function with generative modeling: Motion perception in primates & Cortical network organization in mice
    (2023) Vafaii, Hadi; Pessoa, Luiz; Butts, Daniel A; Physics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation comprises two main projects that address questions in neuroscience through applications of generative modeling. Project #1 (Chapter 4) concerns how neurons in the brain encode, or represent, features of the external world. A key challenge here is building artificial systems that represent the world similarly to biological neurons. In Chapter 4, I address this by combining Helmholtz's “Perception as Unconscious Inference” (paralleled by modern generative models such as variational autoencoders, VAEs) with the hierarchical structure of the visual cortex. This combination leads to a hierarchical VAE model, which I then test for its ability to mimic neurons from the primate visual cortex in response to motion stimuli. Results show that the hierarchical VAE perceives motion similarly to the primate brain. I also evaluate the model's capability to identify causal factors of retinal motion inputs, such as object motion, and find that hierarchical latent structure enhances the linear decodability of data generative factors, and does so in a disentangled and sparse manner. A comparison with alternative models indicates the critical role of both hierarchy and probabilistic inference. Collectively, these results suggest that hierarchical inference underlies the brain's understanding of the world, and that hierarchical VAEs can effectively model this understanding. Project #2 (Chapter 5) concerns how spontaneous fluctuations in the brain are spatiotemporally structured and reflect brain states such as rest. The correlation structure of spontaneous brain activity has been used to identify large-scale functional brain networks in both humans and rodents. The majority of studies in this domain use functional MRI (fMRI) and assume a disjoint network structure, meaning that each brain region belongs to one and only one community.
In Chapter 5, I apply a generative algorithm to a simultaneous fMRI and wide-field calcium imaging dataset and demonstrate that the mouse cortex can be decomposed into overlapping communities. Examining the extent of overlap shows that around half of the mouse cortical regions belong to multiple communities. Comparative analyses reveal that the calcium-derived network structure reproduces many aspects of the fMRI-derived network structure, but important differences remain, suggesting that the inferred network topologies ultimately differ across imaging modalities. In conclusion, wide-field calcium imaging unveils overlapping functional organization in the mouse cortex, reflecting several but not all properties observed in fMRI signals.
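The idea of overlapping communities (one region carrying weight in several networks) can be sketched with soft membership vectors from a non-negative matrix factorization of a correlation matrix. This is only a hedged stand-in: the dissertation uses a generative community-detection algorithm, not NMF, and all data, sizes, and names below are invented for illustration:

```python
import numpy as np

def nmf(V, k, n_iter=300, seed=0):
    """Plain multiplicative-update NMF: V ≈ W @ H with non-negative factors."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
# Toy activity for 12 "regions": two blocks of correlated regions, with
# regions 5 and 6 participating in both blocks (ground-truth overlap)
loadings = np.zeros((12, 2))
loadings[:7, 0] = 1.0
loadings[5:, 1] = 1.0
activity = loadings @ rng.standard_normal((2, 500))
activity += 0.1 * rng.standard_normal((12, 500))
C = np.clip(np.corrcoef(activity), 0.0, None)  # keep non-negative similarities
W, _ = nmf(C, k=2)
membership = W / W.sum(axis=1, keepdims=True)  # soft community weights per region
# Regions with substantial weight in more than one community "overlap"
overlapping = np.where(membership.min(axis=1) > 0.25)[0]
```

The key contrast with disjoint methods is that `membership` is a distribution over communities per region rather than a single hard label, so overlap is read directly off the factor weights.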
  • Item
    UNCOVERING PATTERNS IN COMPLEX DATA WITH RESERVOIR COMPUTING AND NETWORK ANALYTICS: A DYNAMICAL SYSTEMS APPROACH
    (2020) Krishnagopal, Sanjukta; Girvan, Michelle; Physics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this thesis, we explore methods for uncovering underlying patterns in complex data, and making predictions, through machine learning and network science. With the availability of more data, machine learning for data analysis has advanced rapidly; however, there is a general lack of approaches that allow us to 'open the black box'. In the machine learning part of this thesis, we primarily use an architecture called Reservoir Computing for time-series prediction and image classification, while exploring how information is encoded in the reservoir dynamics. First, we investigate the ways in which a Reservoir Computer (RC) learns concepts such as 'similar' and 'different', and relationships such as 'blurring' and 'rotation', between image pairs, and generalizes these concepts to classes unseen during training. We observe that the high-dimensional reservoir dynamics display distinct patterns for different relationships. This clustering allows RCs to generalize significantly better with limited training than state-of-the-art pair-based convolutional/deep Siamese Neural Networks. Second, we demonstrate the utility of an RC in separating superimposed chaotic signals. We assume no knowledge of the dynamical equations that produce the signals, and require only that the training data consist of finite time samples of the component signals. We find that our method significantly outperforms the optimal linear solution to the separation problem, the Wiener filter. To understand how representations of signals are encoded in an RC during learning, we study its dynamical properties when trained to predict chaotic Lorenz signals, using a novel mathematical fixed-point-finding technique called directional fibers.
We find that, after training, the high-dimensional RC dynamics include fixed points that map to the known Lorenz fixed points, but also spurious fixed points that are relevant to how the RC's predictions break down. While machine learning is a useful data-processing tool, its success often relies on a useful representation of the system's information. In contrast, systems with a large number of interacting components may be better analyzed by modeling them as networks. While numerous advances in network science have helped us analyze such systems, tools for identifying properties of networks that model multivariate, time-evolving data (such as disease data) are limited. We close this gap by introducing a novel data-driven, network-based Trajectory Profile Clustering (TPC) algorithm for 1) identification of disease subtypes and 2) early prediction of subtype/disease progression patterns. TPC identifies subtypes by clustering patients with similar disease trajectory profiles derived from bipartite patient-variable networks. Applying TPC to a Parkinson's dataset, we identify three distinct subtypes. Additionally, we show that TPC predicts disease subtype 4 years in advance with 74% accuracy.
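The reservoir-computing setup running through this abstract can be sketched as a minimal echo state network: a fixed random recurrent reservoir driven by the input, with only a linear ridge-regression readout trained. This toy predicts a sine wave one step ahead; all sizes and constants are illustrative choices, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200  # reservoir size
# Sparse random reservoir, rescaled to spectral radius < 1 (echo-state property)
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)  # fixed random input weights

u = np.sin(0.1 * np.arange(3000))  # toy signal to predict one step ahead
x = np.zeros(N)
states = []
for u_t in u:
    x = np.tanh(W @ x + W_in * u_t)  # reservoir state update
    states.append(x.copy())
X = np.array(states[200:-1])  # drop the initial transient, align with targets
y = u[201:]

# Ridge-regression readout: the only trained part of a reservoir computer
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
pred = X @ W_out
mse = np.mean((pred - y) ** 2)
```

Only `W_out` is learned; the reservoir `W` and input weights `W_in` stay fixed, which is what makes RC training cheap and its high-dimensional dynamics (the `states` here) the natural object to study, as the thesis does with directional fibers.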