Computer Science Research Works

Permanent URI for this collection: http://hdl.handle.net/1903/1593

Search Results

Now showing 1 - 10 of 155
  • Item
    Supplementary material for Applying Wearable Sensors and Machine Learning to the Diagnostic Challenge of Distinguishing Parkinson's Disease from Other Forms of Parkinsonism
    (2025) Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Reich, Stephen G.; Savitt, Joseph M.; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
Parkinson's Disease (PD) and other forms of parkinsonism share motor symptoms, including tremor, bradykinesia, and rigidity. This overlap in the clinical presentation creates a diagnostic challenge, underscoring the need for objective differentiation. However, applying machine learning (ML) to clinical datasets faces challenges such as imbalanced class distributions, small sample sizes for non-PD parkinsonism, and heterogeneity within the non-PD group. This study analyzed wearable sensor data from 260 PD participants and 18 individuals with etiologically diverse forms of non-PD parkinsonism during clinical mobility tasks, using a single sensor placed on the lower back. We evaluated the performance of ML models in distinguishing these two groups and identified the most informative mobility tasks for classification. Additionally, we examined clinical characteristics of misclassified participants and presented case studies of common challenges in clinical practice, including diagnostic uncertainty at the initial visit and changes in diagnosis over time. We also suggested potential steps to address the dataset challenges that limited the models' performance. We demonstrate that ML-based analysis is a promising approach for distinguishing idiopathic PD from non-PD parkinsonism, though its accuracy remains below that of expert clinicians. Using the Timed Up and Go test as a single mobility task outperformed the use of all tasks combined, achieving a balanced accuracy of 78.2%. We also identified differences in some clinical scores between participants correctly and incorrectly classified by our models. These findings demonstrate the feasibility of using ML and wearable sensors for differentiating PD from other parkinsonian disorders, addressing key challenges in diagnosis, and streamlining diagnostic workflows.
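Balanced accuracy, the metric reported in this abstract, averages per-class recall so that the small non-PD group (18 participants vs. 260 PD) counts equally. A minimal sketch in plain Python — the function and the toy labels are illustrative, not from the study:

```python
def balanced_accuracy(y_true, y_pred, classes=("PD", "non-PD")):
    """Mean of per-class recall; robust to the class imbalance noted above."""
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Why it matters with imbalanced classes: a classifier that always predicts
# "PD" gets 80% plain accuracy on this toy set, but only 50% balanced accuracy.
y_true = ["PD"] * 8 + ["non-PD"] * 2
y_pred = ["PD"] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5
```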
  • Item
    Supplementary material for machine learning and statistical analyses of sensor data reveal variability between repeated trials in Parkinson’s disease mobility assessments
    (2024) Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Shakya, Sunita; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
    Mobility tasks like the Timed Up and Go test (TUG), cognitive TUG (cogTUG), and walking with turns provide insight into motor control, balance, and cognitive functions affected by Parkinson’s disease (PD). We assess the test-retest reliability of these tasks in 262 PD participants and 50 controls by evaluating machine learning models based on wearable sensor-derived measures and statistical metrics. This evaluation examines total duration, subtask duration, and other quantitative measures across two trials. We show that the diagnostic accuracy for distinguishing PD from controls decreases by a mean of 1.8% between the first and the second trial, suggesting that task repetition may not be necessary for accurate diagnosis. Although the total duration remains relatively consistent between trials (intraclass correlation coefficient (ICC) = 0.62 to 0.95), greater variability is seen in subtask duration and sensor-derived measures, reflected in machine learning performance and statistical differences. Our findings also show that this variability differs not only between controls and PD participants but also among groups with varying levels of PD severity, indicating the need to consider population characteristics. Relying solely on total task duration and conventional statistical metrics to gauge the reliability of mobility tasks may fail to reveal nuanced variations in movement.
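The ICC values quoted above come from a two-way ANOVA decomposition of the subject-by-trial measurement matrix. A minimal sketch of ICC(3,1) (two-way mixed, consistency, single measurement) in plain Python — the specific ICC variant and the toy data are assumptions for illustration, not the study's computation:

```python
def icc_3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `data` is a list of [trial1, trial2, ...] rows, one row per subject."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    trial_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in trial_means)  # between trials
    ss_err = ss_total - ss_rows - ss_cols                     # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# A perfectly consistent retest (trial 2 = trial 1 + 1 s for every subject)
# gives ICC(3,1) = 1.0, since the consistency form absorbs the trial offset.
print(icc_3_1([[10, 11], [12, 13], [14, 15], [16, 17]]))  # 1.0
```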
  • Thumbnail Image
    Item
    NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media
    (AAAS, 2023-06-28) Feng, Brandon Y.; Guo, Haiyun; Xie, Mingyang; Boominathan, Vivek; Sharma, Manoj K.; Veeraraghavan, Ashok; Metzler, Christopher A.
Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators—but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, and static/dynamic scenes captured through static/dynamic aberrations.
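The forward model behind such wavefront-shaping methods can be sketched with Fourier optics: the camera measures the intensity of the object field after it passes through a phase aberration plus the SLM's corrective phase. A minimal numpy sketch — the Fourier-plane geometry and all names are illustrative assumptions, not the NeuWS implementation:

```python
import numpy as np

def measure(obj_field, aberration_phase, slm_phase):
    """Intensity image of `obj_field` seen through a phase aberration in the
    Fourier plane, with an SLM applying a corrective phase in the same plane."""
    pupil = np.fft.fft2(obj_field, norm="ortho")
    pupil *= np.exp(1j * (aberration_phase + slm_phase))
    return np.abs(np.fft.ifft2(pupil, norm="ortho")) ** 2

rng = np.random.default_rng(0)
obj = rng.random((32, 32))                  # toy nonnegative object field
aber = rng.uniform(-np.pi, np.pi, (32, 32)) # unknown scattering phase screen

# A perfect correction (slm = -aberration) cancels the phase screen exactly,
# recovering the intensity of the unaberrated, diffraction-limited field.
corrected = measure(obj, aber, -aber)
print(np.allclose(corrected, obj ** 2))     # True
```

Wavefront-shaping methods differ in how they estimate the correction; what the abstract argues is that NeuWS does this estimation without guidestars or point scanning.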
  • Item
    Supplementary material for Machine learning analysis of wearable sensor data from mobility testing distinguishes Parkinson's disease from other forms of parkinsonism
(2024-03-13) Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
    Parkinson's Disease (PD) and other forms of parkinsonism share characteristic motor symptoms, including tremor, bradykinesia, and rigidity. This overlap in the clinical presentation creates a diagnostic challenge, underscoring the need for objective differentiation tools. In this study, we analyzed wearable sensor data collected during mobility testing from 260 PD participants and 18 participants with etiologically diverse forms of parkinsonism. Our findings illustrate that machine learning-based analysis of data from a single wearable sensor can effectively distinguish idiopathic PD from non-PD parkinsonism with a balanced accuracy of 83.5%, comparable to expert diagnosis. Moreover, we found that diagnostic performance can be improved through severity-based partitioning of participants, achieving a balanced accuracy of 95.9%, 91.2% and 100% for mild, moderate and severe cases, respectively. Beyond its diagnostic implications, our results suggest the possibility of streamlining the testing protocol by using the Timed Up and Go test as a single mobility task. Furthermore, we present a detailed analysis of several case studies of challenging scenarios commonly encountered in clinical practice, including diagnostic uncertainty at the initial visit, and changes in clinical diagnosis at a subsequent visit. Together, these findings demonstrate the potential of applying machine learning on sensor-based measures of mobility to distinguish between PD and other forms of parkinsonism.
  • Item
    Supplementary material for machine learning analysis of data from a simplified mobility testing procedure with a single sensor and single task accurately differentiates Parkinson's disease from controls
(2023) Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Shakya, Sunita; von Coelln, Rainer; Cummings, Michael P.; Fenderson, Rebecca; van Hoven, Maxwell; Hausdorff, Jeffrey M.
Quantitative mobility analysis using wearable sensors, while promising as a diagnostic tool for Parkinson's disease (PD), is not commonly applied in clinical settings. Major obstacles include uncertainty regarding the best protocol for instrumented mobility testing and subsequent data processing, as well as the added workload and complexity of this multi-step process. To simplify sensor-based mobility testing in diagnosing PD, we analyzed data from 262 PD participants and 50 controls performing several motor tasks wearing a sensor on the lower back containing a triaxial accelerometer and a triaxial gyroscope. Using ensembles of heterogeneous machine learning models incorporating a range of classifiers trained on a large set of sensor features, we show that our models effectively differentiate between participants with PD and controls, both for mixed-stage PD (92.6% accuracy) and a group selected for mild PD only (89.4% accuracy). Omitting algorithmic segmentation of complex mobility tasks decreased the diagnostic accuracy of our models, as did the inclusion of kinesiological features. Feature importance analysis revealed Timed Up & Go (TUG) tasks to contribute the highest-yield predictive features, with only a minor decrease in accuracy for models based on cognitive TUG as a single mobility task. Our machine learning approach facilitates major simplification of instrumented mobility testing without compromising predictive performance.
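A heterogeneous ensemble of the kind described reduces to a simple core idea: dissimilar base classifiers each vote, and the majority label wins. A minimal sketch with toy rule-based classifiers standing in for the trained models — the feature names and thresholds are illustrative, not from the study:

```python
def majority_vote(classifiers, x):
    """Predict with each base model, then take the most common label."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy stand-ins for heterogeneous base models over a sensor-feature dict.
clf_gait = lambda x: "PD" if x["stride_time_var"] > 0.05 else "control"
clf_turn = lambda x: "PD" if x["turn_duration"] > 2.5 else "control"
clf_tug  = lambda x: "PD" if x["tug_total"] > 12.0 else "control"

sample = {"stride_time_var": 0.08, "turn_duration": 2.0, "tug_total": 14.0}
print(majority_vote([clf_gait, clf_turn, clf_tug], sample))  # PD (2 of 3 votes)
```

Real ensembles of this kind typically combine distinct model families (e.g. tree-based and kernel-based) so that their errors are less correlated than any single model's.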
  • Thumbnail Image
    Item
    Exploring the Computational Explanatory Gap
    (MDPI, 2017-01-16) Reggia, James A.; Huang, Di-Wei; Katz, Garrett
    While substantial progress has been made in the field known as artificial consciousness, at the present time there is no generally accepted phenomenally conscious machine, nor even a clear route to how one might be produced should we decide to try. Here, we take the position that, from our computer science perspective, a major reason for this is a computational explanatory gap: our inability to understand/explain the implementation of high-level cognitive algorithms in terms of neurocomputational processing. We explain how addressing the computational explanatory gap can identify computational correlates of consciousness. We suggest that bridging this gap is not only critical to further progress in the area of machine consciousness, but would also inform the search for neurobiological correlates of consciousness and would, with high probability, contribute to demystifying the “hard problem” of understanding the mind–brain relationship. We compile a listing of previously proposed computational correlates of consciousness and, based on the results of recent computational modeling, suggest that the gating mechanisms associated with top-down cognitive control of working memory should be added to this list. We conclude that developing neurocognitive architectures that contribute to bridging the computational explanatory gap provides a credible and achievable roadmap to understanding the ultimate prospects for a conscious machine, and to a better understanding of the mind–brain problem in general.
  • Item
    Deep Multimodal Learning for the Diagnosis of Autism Spectrum Disorder
    (MDPI, 2020-06-10) Tang, Michelle; Kumar, Pulkit; Chen, Hao; Shrivastava, Abhinav
    Recent medical imaging technologies, specifically functional magnetic resonance imaging (fMRI), have advanced the diagnosis of neurological and neurodevelopmental disorders by allowing scientists and physicians to observe the activity within and between different regions of the brain. Deep learning methods have frequently been implemented to analyze images produced by such technologies and perform disease classification tasks; however, current state-of-the-art approaches do not take advantage of all the information offered by fMRI scans. In this paper, we propose a deep multimodal model that learns a joint representation from two types of connectomic data offered by fMRI scans. Incorporating two functional imaging modalities in an automated end-to-end autism diagnosis system will offer a more comprehensive picture of the neural activity, and thus allow for more accurate diagnoses. Our multimodal training strategy achieves a classification accuracy of 74% and a recall of 95%, as well as an F1 score of 0.805, and its overall performance is superior to using only one type of functional data.
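The joint-representation idea can be sketched as fusion: encode each connectomic modality separately, concatenate the embeddings into one vector, and classify the fused vector. A minimal numpy sketch with random linear encoders — all dimensions and names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 400))  # encoder for modality 1 (one connectome type)
W2 = rng.normal(size=(16, 400))  # encoder for modality 2 (the other type)
w_out = rng.normal(size=32)      # classifier over the fused embedding

def predict(mod1, mod2):
    """Encode each modality, fuse by concatenation, then a linear classifier."""
    z = np.concatenate([np.tanh(W1 @ mod1), np.tanh(W2 @ mod2)])  # joint repr.
    return 1 / (1 + np.exp(-(w_out @ z)))                         # sigmoid prob.

p = predict(rng.random(400), rng.random(400))
print(0.0 < p < 1.0)  # True: a single probability from the fused features
```

In a trained model the encoders and classifier would be learned jointly end-to-end, which is what lets the fused representation exploit correlations between the two modalities.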
  • Item
    Visualization of WiFi Signals Using Programmable Transfer Functions
    (MDPI, 2022-04-26) Rowden, Alexander; Krokos, Eric; Whitley, Kirsten; Varshney, Amitabh
In this paper, we show how volume rendering with a Programmable Transfer Function can be used for the effective and comprehensible visualization of WiFi signals. A traditional transfer function uses a low-dimensional lookup table to map the volumetric scalar field to color and opacity. We present the concept of a Programmable Transfer Function and show how generalizing traditional lookup-based transfer functions to Programmable Transfer Functions enables us to leverage view-dependent and real-time attributes of a volumetric field to depict the data variations of WiFi surfaces with low- and high-frequency components. Our Programmable Transfer Functions facilitate interactive knowledge discovery and produce meaningful visualizations.
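The step from a lookup-based transfer function to a programmable one can be sketched directly: instead of indexing a fixed RGBA table by scalar value, the transfer function becomes an arbitrary function that may also consume view-dependent, per-sample attributes. A minimal sketch — the specific view-dependent attribute is an illustrative assumption, not the paper's design:

```python
# Traditional transfer function: a low-dimensional lookup table, scalar -> RGBA.
LUT = [(0.0, 0.0, 1.0, 0.1), (0.0, 1.0, 0.0, 0.4), (1.0, 0.0, 0.0, 0.9)]

def lut_tf(scalar):
    """Classic transfer function: bucket a scalar in [0, 1) into the table."""
    return LUT[min(int(scalar * len(LUT)), len(LUT) - 1)]

def programmable_tf(scalar, grad_dot_view):
    """Programmable transfer function: any function of the scalar plus
    view-dependent attributes (here, alignment of the local gradient with
    the view ray), evaluated per sample at render time."""
    r, g, b, a = lut_tf(scalar)
    a *= abs(grad_dot_view)  # fade samples whose surface is seen edge-on
    return (r, g, b, a)

print(lut_tf(0.5))                 # (0.0, 1.0, 0.0, 0.4)
print(programmable_tf(0.5, 0.25))  # same color, alpha scaled down to 0.1
```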
  • Item
    A Grammar-Based Approach for Applying Visualization Taxonomies to Interaction Logs
    (Wiley, 2022-07-29) Gathani, Sneha; Monadjemi, Shayan; Ottley, Alvitta; Battle, Leilani
Researchers collect large amounts of user interaction data with the goal of mapping users' workflows and behaviors to their high-level motivations, intuitions, and goals. Although the visual analytics community has proposed numerous taxonomies to facilitate this mapping process, no formal methods exist for systematically applying these existing theories to user interaction logs. This paper seeks to bridge the gap between visualization task taxonomies and interaction log data by making the taxonomies more actionable for interaction log analysis. To achieve this, we leverage structural parallels between how people express themselves through interactions and through language by reformulating existing theories as regular grammars. We represent interactions as terminals within a regular grammar, similar to the role of individual words in a language, and patterns of interactions (non-terminals) as regular expressions over these terminals to capture common language patterns. To demonstrate our approach, we generate regular grammars for seven existing visualization taxonomies and develop code to apply them to three public interaction log datasets. In analyzing these regular grammars, we find that the taxonomies at the low level (i.e., terminals) show mixed results in expressing multiple interaction log datasets, and taxonomies at the high level (i.e., regular expressions) have limited expressiveness, primarily due to two challenges: inconsistencies in interaction log dataset granularity and structure, and under-expressiveness of certain terminals. Based on our findings, we suggest new research directions for the visualization community to augment existing taxonomies, develop new ones, and build better interaction log recording processes to facilitate the data-driven development of user behavior taxonomies.
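The reformulation the authors describe, interactions as terminals and taxonomy tasks as regular expressions over them, can be sketched with Python's `re` module. The terminal alphabet and the two patterns below are illustrative assumptions, not drawn from the seven taxonomies:

```python
import re

# Map raw log events (terminals) to single-character symbols, analogous to
# treating individual interactions as the "words" of a language.
TERMINALS = {"filter": "f", "zoom": "z", "pan": "p", "select": "s", "hover": "h"}

# Higher-level taxonomy tasks as regular expressions over the terminal alphabet.
PATTERNS = {
    "explore": re.compile(r"(h|p|z)+"),    # repeated hovering/panning/zooming
    "drill_down": re.compile(r"f+(z|s)"),  # filtering, then zooming or selecting
}

def classify(log):
    """Translate a log to its terminal string and report which tasks match."""
    symbols = "".join(TERMINALS[event] for event in log)
    return [task for task, rx in PATTERNS.items() if rx.fullmatch(symbols)]

print(classify(["hover", "pan", "zoom"]))        # ['explore']
print(classify(["filter", "filter", "select"]))  # ['drill_down']
```

The granularity challenge the abstract mentions shows up immediately in such a scheme: if a dataset logs "brush" events but the grammar's alphabet has no corresponding terminal, the sequence cannot be expressed at all.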
  • Item
    A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training
(Association for Computing Machinery (ACM), 2023-06-21) Singh, Siddarth; Ruwase, Olatunji; Awan, Ammar Ahmad; Rajbhandari, Samyam; He, Yuxiong; Bhatele, Abhinav
Mixture-of-Experts (MoE) is a neural network architecture that adds sparsely activated expert blocks to a base model, increasing the number of parameters without impacting computational costs. However, current distributed deep learning frameworks are limited in their ability to train high-quality MoE models with large base models. In this work, we present DeepSpeed-TED, a novel, three-dimensional, hybrid parallel algorithm that combines data, tensor, and expert parallelism to enable the training of MoE models with 4–8× larger base models than the current state-of-the-art. We also describe memory optimizations in the optimizer step, and communication optimizations that eliminate unnecessary data movement. We implement our approach in DeepSpeed and achieve speedups of 26% over a baseline (i.e., without our communication optimizations) when training a 40 billion parameter MoE model (6.7 billion base model with 16 experts) on 128 V100 GPUs.
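The sparse-activation property that lets MoE grow parameters without growing compute can be sketched with top-1 gating: the model stores all experts' weights, but each token executes only one expert's matmul. A minimal numpy sketch — the dimensions and the softmax gate are standard illustrative choices, not DeepSpeed-TED specifics:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 8, 16
W_gate = rng.normal(size=(n_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # expert blocks

def moe_layer(x):
    """Top-1 gating: parameter count scales with n_experts,
    but per-token compute does not (only one expert runs)."""
    logits = W_gate @ x
    k = int(np.argmax(logits))          # route the token to a single expert
    gate = np.exp(logits[k]) / np.exp(logits).sum()
    return gate * (experts[k] @ x)      # only one expert's matmul is executed

y = moe_layer(rng.random(d))
print(y.shape)  # (8,)
```

The systems challenge the paper addresses follows from this structure: expert weights are sharded across devices (expert parallelism), so routed tokens must be exchanged between devices, and that all-to-all communication is what the hybrid parallel layout and communication optimizations target.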