UMD Data Collection
Permanent URI for this collection: http://hdl.handle.net/1903/27670
University of Maryland faculty and researchers can upload their research products to DRUM for rapid dissemination, global visibility and impact, and long-term preservation. Depositing data in DRUM can assist in compliance with data management and sharing requirements from the NSF, NIH, and other funding agencies and journals. You can also deposit code, documents, images, supplemental material, and other research products. DRUM tracks views and downloads of your research, and all DRUM records are indexed by Google and Google Scholar. Additionally, DRUM assigns a permanent DOI to each of your items, making it easy for other researchers to cite your work.
Submissions to the Data Collection
To add files to the UMD Data Collection, submit a new item through your associated department or program's DRUM collection and check the box indicating your upload contains a dataset.
Find more information and guidelines for depositing into the Data Collection on the University of Maryland Libraries' DRUM for Data page.
Assistance
Please direct questions about the UMD Data Collection, or requests for assistance in preparing and depositing data, to lib-research-data@umd.edu.
Search Results
Item: Supplementary material for Applying Wearable Sensors and Machine Learning to the Diagnostic Challenge of Distinguishing Parkinson's Disease from Other Forms of Parkinsonism (2025)
Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Reich, Stephen G.; Savitt, Joseph M.; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
Parkinson's Disease (PD) and other forms of parkinsonism share motor symptoms, including tremor, bradykinesia, and rigidity. This overlap in clinical presentation creates a diagnostic challenge, underscoring the need for objective differentiation. However, applying machine learning (ML) to clinical datasets faces challenges such as imbalanced class distributions, small sample sizes for non-PD parkinsonism, and heterogeneity within the non-PD group. This study analyzed wearable sensor data from 260 PD participants and 18 individuals with etiologically diverse forms of non-PD parkinsonism during clinical mobility tasks, using a single sensor placed on the lower back. We evaluated the performance of ML models in distinguishing these two groups and identified the most informative mobility tasks for classification. Additionally, we examined clinical characteristics of misclassified participants and presented case studies of common challenges in clinical practice, including diagnostic uncertainty at the initial visit and changes in diagnosis over time. We also suggested potential steps to address the dataset challenges that limited the models' performance. We demonstrate that ML-based analysis is a promising approach for distinguishing idiopathic PD from non-PD parkinsonism, though its accuracy remains below that of expert clinicians. Using the Timed Up and Go test as a single mobility task outperformed using all tasks combined, achieving a balanced accuracy of 78.2%. We also identified differences in some clinical scores between participants correctly and incorrectly classified by our models. These findings demonstrate the feasibility of using ML and wearable sensors to differentiate PD from other parkinsonian disorders, addressing key challenges in diagnosis and streamlining diagnostic workflows.

Item: Supplementary material for machine learning and statistical analyses of sensor data reveal variability between repeated trials in Parkinson's disease mobility assessments (2024)
Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Shakya, Sunita; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
Mobility tasks like the Timed Up and Go test (TUG), cognitive TUG (cogTUG), and walking with turns provide insight into motor control, balance, and cognitive functions affected by Parkinson’s disease (PD). We assess the test-retest reliability of these tasks in 262 PD participants and 50 controls by evaluating machine learning models based on wearable sensor-derived measures and statistical metrics. This evaluation examines total duration, subtask duration, and other quantitative measures across two trials. We show that the diagnostic accuracy for distinguishing PD from controls decreases by a mean of 1.8% between the first and the second trial, suggesting that task repetition may not be necessary for accurate diagnosis. Although the total duration remains relatively consistent between trials (intraclass correlation coefficient (ICC) = 0.62 to 0.95), greater variability is seen in subtask duration and sensor-derived measures, reflected in machine learning performance and statistical differences. Our findings also show that this variability differs not only between controls and PD participants but also among groups with varying levels of PD severity, indicating the need to consider population characteristics. Relying solely on total task duration and conventional statistical metrics to gauge the reliability of mobility tasks may fail to reveal nuanced variations in movement.

Item: Supplementary material for Machine learning analysis of wearable sensor data from mobility testing distinguishes Parkinson's disease from other forms of parkinsonism (2024-03-13)
Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
Parkinson's Disease (PD) and other forms of parkinsonism share characteristic motor symptoms, including tremor, bradykinesia, and rigidity. This overlap in clinical presentation creates a diagnostic challenge, underscoring the need for objective differentiation tools. In this study, we analyzed wearable sensor data collected during mobility testing from 260 PD participants and 18 participants with etiologically diverse forms of parkinsonism. Our findings illustrate that machine learning-based analysis of data from a single wearable sensor can effectively distinguish idiopathic PD from non-PD parkinsonism with a balanced accuracy of 83.5%, comparable to expert diagnosis. Moreover, we found that diagnostic performance can be improved through severity-based partitioning of participants, achieving a balanced accuracy of 95.9%, 91.2%, and 100% for mild, moderate, and severe cases, respectively. Beyond its diagnostic implications, our results suggest the possibility of streamlining the testing protocol by using the Timed Up and Go test as a single mobility task. Furthermore, we present a detailed analysis of several case studies of challenging scenarios commonly encountered in clinical practice, including diagnostic uncertainty at the initial visit and changes in clinical diagnosis at a subsequent visit. Together, these findings demonstrate the potential of applying machine learning to sensor-based measures of mobility to distinguish between PD and other forms of parkinsonism.

Item: Supplementary materials for statistical and machine learning analyses demonstrate test-retest reliability assessment is misled by focusing on total duration of mobility tasks in Parkinson's disease (2023)
Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Shakya, Sunita; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
Mobility tasks like the Timed Up and Go test (TUG), cognitive TUG (cogTUG), and walking with turns provide insight into dynamic motor control, balance, and cognitive functions affected by Parkinson’s disease (PD). We evaluate the test-retest reliability of these tasks by assessing the performance of machine learning models based on quantitative sensor-derived measures, and we use statistical measures to examine total duration, subtask duration, and other quantitative measures across both trials. We show that the diagnostic accuracy of differentiating between PD and control participants decreases from the first to the second trial of our mobility tasks, suggesting that mobility testing can be simplified by not repeating tasks without losing relevant information. Although the total duration remains relatively consistent between trials, there is more variability in subtask duration and sensor-derived measures, evident in the differences in machine learning model performance and statistical metrics. Relying solely on total task duration and conventional statistical metrics to gauge the reliability of mobility tasks overlooks the nuanced variations in movement captured by other quantitative measures.

Item: Supplementary material for machine learning analysis of data from a simplified mobility testing procedure with a single sensor and single task accurately differentiates Parkinson's disease from controls (2023)
Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Shakya, Sunita; von Coelln, Rainer; Fenderson, Rebecca; van Hoven, Maxwell; Hausdorff, Jeffrey M.; Cummings, Michael P.
Quantitative mobility analysis using wearable sensors, while promising as a diagnostic tool for Parkinson's disease (PD), is not commonly applied in clinical settings. Major obstacles include uncertainty regarding the best protocol for instrumented mobility testing and subsequent data processing, as well as the added workload and complexity of this multi-step process. To simplify sensor-based mobility testing in diagnosing PD, we analyzed data from 262 PD participants and 50 controls performing several motor tasks while wearing a sensor on the lower back containing a triaxial accelerometer and a triaxial gyroscope. Using ensembles of heterogeneous machine learning models incorporating a range of classifiers trained on a large set of sensor features, we show that our models effectively differentiate between participants with PD and controls, both for mixed-stage PD (92.6% accuracy) and a group selected for mild PD only (89.4% accuracy). Omitting algorithmic segmentation of complex mobility tasks decreased the diagnostic accuracy of our models, as did the inclusion of kinesiological features. Feature importance analysis revealed Timed Up & Go (TUG) tasks to contribute the highest-yield predictive features, with only a minor decrease in accuracy for models based on cognitive TUG as a single mobility task. Our machine learning approach facilitates major simplification of instrumented mobility testing without compromising predictive performance.
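Several of the abstracts above report balanced accuracy, a metric suited to imbalanced cohorts such as 260 PD participants versus 18 non-PD participants, where plain accuracy can be inflated by the majority class. As an illustrative sketch only (not the authors' code), balanced accuracy is the mean of per-class recall:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; robust to class imbalance."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        # Indices of samples whose true label is class c
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# A majority-class guesser on a 9:1 imbalanced set scores 90%
# plain accuracy but only 50% balanced accuracy.
y_true = ["PD"] * 9 + ["nonPD"] * 1
y_pred = ["PD"] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

In practice this is equivalent to `sklearn.metrics.balanced_accuracy_score`; the hand-rolled version above is shown only to make the definition behind the reported figures (e.g., 78.2% and 83.5%) concrete.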