Computer Science Research Works

Recent Submissions

Now showing 1 - 20 of 153
  • Item
    NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media
    (AAAS, 2023-06-28) Feng, Brandon Y.; Guo, Haiyun; Xie, Mingyang; Boominathan, Vivek; Sharma, Manoj K.; Veeraraghavan, Ashok; Metzler, Christopher A.
    Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators—but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, and static/dynamic scenes captured through static/dynamic aberrations.
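As a rough intuition for how measurement modulation plus maximum likelihood estimation can recover an unknown aberration, here is a heavily simplified scalar toy model; the forward model, modulation values, and grid search are illustrative assumptions, not NeuWS's actual optics or neural representation:

```python
import math

def measure(aberration, modulations):
    # Toy forward model: one scalar measurement per known phase
    # modulation; a stand-in for NeuWS's modulated camera images.
    return [math.cos(aberration + m) for m in modulations]

def mle_estimate(measurements, modulations, grid=4000):
    # Grid-search maximum-likelihood estimate of the unknown
    # aberration (least squares = MLE under Gaussian noise).
    best_a, best_loss = 0.0, float("inf")
    for k in range(grid):
        a = -math.pi + 2 * math.pi * k / grid
        loss = sum((math.cos(a + m) - y) ** 2
                   for y, m in zip(measurements, modulations))
        if loss < best_loss:
            best_a, best_loss = a, loss
    return best_a

mods = [0.15 * i for i in range(40)]   # known modulation patterns
y = measure(0.7, mods)                 # observed data
a_hat = mle_estimate(y, mods)          # recovered aberration, near 0.7
```

Varying the known modulations is what makes the unknown aberration identifiable from the data alone, which is the intuition behind guidestar-free operation.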
  • Item
    Supplementary material for Machine learning analysis of wearable sensor data from mobility testing distinguishes Parkinson's disease from other forms of parkinsonism
    (2024-03-13) Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Hausdorff, Jeffrey M.; von Coelln, Rainer; Cummings, Michael P.
    Parkinson's Disease (PD) and other forms of parkinsonism share characteristic motor symptoms, including tremor, bradykinesia, and rigidity. This overlap in the clinical presentation creates a diagnostic challenge, underscoring the need for objective differentiation tools. In this study, we analyzed wearable sensor data collected during mobility testing from 260 PD participants and 18 participants with etiologically diverse forms of parkinsonism. Our findings illustrate that machine learning-based analysis of data from a single wearable sensor can effectively distinguish idiopathic PD from non-PD parkinsonism with a balanced accuracy of 83.5%, comparable to expert diagnosis. Moreover, we found that diagnostic performance can be improved through severity-based partitioning of participants, achieving a balanced accuracy of 95.9%, 91.2% and 100% for mild, moderate and severe cases, respectively. Beyond its diagnostic implications, our results suggest the possibility of streamlining the testing protocol by using the Timed Up and Go test as a single mobility task. Furthermore, we present a detailed analysis of several case studies of challenging scenarios commonly encountered in clinical practice, including diagnostic uncertainty at the initial visit, and changes in clinical diagnosis at a subsequent visit. Together, these findings demonstrate the potential of applying machine learning on sensor-based measures of mobility to distinguish between PD and other forms of parkinsonism.
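Balanced accuracy, the metric reported above, is simply the mean of per-class recalls, which matters here because the classes are highly imbalanced (260 PD vs. 18 non-PD participants). A minimal sketch with illustrative labels:

```python
def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: plain accuracy would reward always
    # guessing the majority class, while balanced accuracy weights
    # each class equally regardless of its size.
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        hit = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hit / len(idx))
    return sum(recalls) / len(recalls)

# 4-of-4 majority-class recall, 1-of-2 minority-class recall -> 0.75
ba = balanced_accuracy(["PD"] * 4 + ["other"] * 2,
                       ["PD"] * 4 + ["other", "PD"])
```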
  • Item
    Supplementary material for machine learning analysis of data from a simplified mobility testing procedure with a single sensor and single task accurately differentiates Parkinson's disease from controls
    (2023) Khalil, Rana M.; Shulman, Lisa M.; Gruber-Baldini, Ann L.; Shakya, Sunita; von Coelln, Rainer; Cummings, Michael P.; Fenderson, Rebecca; van Hoven, Maxwell; Hausdorff, Jeffrey M.
    Quantitative mobility analysis using wearable sensors, while promising as a diagnostic tool for Parkinson's disease (PD), is not commonly applied in clinical settings. Major obstacles include uncertainty regarding the best protocol for instrumented mobility testing and subsequent data processing, as well as the added workload and complexity of this multi-step process. To simplify sensor-based mobility testing in diagnosing PD, we analyzed data from 262 PD participants and 50 controls performing several motor tasks wearing a sensor on the lower back containing a triaxial accelerometer and a triaxial gyroscope. Using ensembles of heterogeneous machine learning models incorporating a range of classifiers trained on a large set of sensor features, we show that our models effectively differentiate between participants with PD and controls, both for mixed-stage PD (92.6% accuracy) and a group selected for mild PD only (89.4% accuracy). Omitting algorithmic segmentation of complex mobility tasks decreased the diagnostic accuracy of our models, as did the inclusion of kinesiological features. Feature importance analysis revealed that Timed Up & Go (TUG) tasks contributed the highest-yield predictive features, with only a minor decrease in accuracy for models based on cognitive TUG as a single mobility task. Our machine learning approach facilitates major simplification of instrumented mobility testing without compromising predictive performance.
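A minimal sketch of the heterogeneous-ensemble idea behind the models above: any mix of classifiers votes, and the modal label wins. The classifiers and feature names here are invented for illustration, not the study's actual features or models:

```python
def majority_vote(classifiers, features):
    # Heterogeneous ensemble: each classifier is any callable from a
    # feature dict to a label; the ensemble returns the modal vote.
    votes = [clf(features) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Three toy "classifiers" thresholding different sensor-derived
# features (names and thresholds are illustrative placeholders).
ensemble = [
    lambda f: "PD" if f["gait_asymmetry"] > 0.3 else "control",
    lambda f: "PD" if f["turn_duration"] > 2.5 else "control",
    lambda f: "PD" if f["tremor_power"] > 0.1 else "control",
]
label = majority_vote(ensemble, {"gait_asymmetry": 0.4,
                                 "turn_duration": 2.0,
                                 "tremor_power": 0.2})  # 2-of-3 vote
```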
  • Item
    Exploring the Computational Explanatory Gap
    (MDPI, 2017-01-16) Reggia, James A.; Huang, Di-Wei; Katz, Garrett
    While substantial progress has been made in the field known as artificial consciousness, at the present time there is no generally accepted phenomenally conscious machine, nor even a clear route to how one might be produced should we decide to try. Here, we take the position that, from our computer science perspective, a major reason for this is a computational explanatory gap: our inability to understand/explain the implementation of high-level cognitive algorithms in terms of neurocomputational processing. We explain how addressing the computational explanatory gap can identify computational correlates of consciousness. We suggest that bridging this gap is not only critical to further progress in the area of machine consciousness, but would also inform the search for neurobiological correlates of consciousness and would, with high probability, contribute to demystifying the “hard problem” of understanding the mind–brain relationship. We compile a listing of previously proposed computational correlates of consciousness and, based on the results of recent computational modeling, suggest that the gating mechanisms associated with top-down cognitive control of working memory should be added to this list. We conclude that developing neurocognitive architectures that contribute to bridging the computational explanatory gap provides a credible and achievable roadmap to understanding the ultimate prospects for a conscious machine, and to a better understanding of the mind–brain problem in general.
  • Item
    Deep Multimodal Learning for the Diagnosis of Autism Spectrum Disorder
    (MDPI, 2020-06-10) Tang, Michelle; Kumar, Pulkit; Chen, Hao; Shrivastava, Abhinav
    Recent medical imaging technologies, specifically functional magnetic resonance imaging (fMRI), have advanced the diagnosis of neurological and neurodevelopmental disorders by allowing scientists and physicians to observe the activity within and between different regions of the brain. Deep learning methods have frequently been implemented to analyze images produced by such technologies and perform disease classification tasks; however, current state-of-the-art approaches do not take advantage of all the information offered by fMRI scans. In this paper, we propose a deep multimodal model that learns a joint representation from two types of connectomic data offered by fMRI scans. Incorporating two functional imaging modalities in an automated end-to-end autism diagnosis system will offer a more comprehensive picture of the neural activity, and thus allow for more accurate diagnoses. Our multimodal training strategy achieves a classification accuracy of 74% and a recall of 95%, as well as an F1 score of 0.805, and its overall performance is superior to using only one type of functional data.
  • Item
    Visualization of WiFi Signals Using Programmable Transfer Functions
    (MDPI, 2022-04-26) Rowden, Alexander; Krokos, Eric; Whitley, Kirsten; Varshney, Amitabh
    In this paper, we show how volume rendering with a Programmable Transfer Function can be used for the effective and comprehensible visualization of WiFi signals. A traditional transfer function uses a low-dimensional lookup table to map the volumetric scalar field to color and opacity. In this paper, we present the concept of a Programmable Transfer Function. We then show how generalizing traditional lookup-based transfer functions to Programmable Transfer Functions enables us to leverage view-dependent and real-time attributes of a volumetric field to depict the data variations of WiFi surfaces with low and high-frequency components. Our Programmable Transfer Functions facilitate interactive knowledge discovery and produce meaningful visualizations.
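To make the contrast concrete, here is a minimal sketch of a lookup-based transfer function versus a programmable one that also consumes view-dependent attributes; the color and opacity formulas are illustrative assumptions, not the paper's actual shaders:

```python
def lookup_tf(table, scalar):
    # Traditional transfer function: a low-dimensional lookup from a
    # normalized scalar field value to (r, g, b, opacity).
    i = min(int(scalar * (len(table) - 1)), len(table) - 1)
    return table[i]

def programmable_tf(scalar, grad_mag, view_dot):
    # Programmable transfer function: arbitrary per-sample code that
    # can also use view-dependent and real-time attributes, e.g.
    # boosting opacity where a strong gradient faces the viewer.
    opacity = min(1.0, grad_mag * abs(view_dot))
    return (scalar, 0.5 * scalar, 1.0 - scalar, opacity)

rgba = programmable_tf(0.5, grad_mag=2.0, view_dot=0.25)
```

The lookup table cannot express the view-dependent opacity term at all, which is the generalization the paper exploits for WiFi surfaces with mixed frequency content.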
  • Item
    A Grammar-Based Approach for Applying Visualization Taxonomies to Interaction Logs
    (Wiley, 2022-07-29) Gathani, Sneha; Monadjemi, Shayan; Ottley, Alvitta; Battle, Leilani
    Researchers collect large amounts of user interaction data with the goal of mapping users' workflows and behaviors to their high-level motivations, intuitions, and goals. Although the visual analytics community has proposed numerous taxonomies to facilitate this mapping process, no formal methods exist for systematically applying these existing theories to user interaction logs. This paper seeks to bridge the gap between visualization task taxonomies and interaction log data by making the taxonomies more actionable for interaction log analysis. To achieve this, we leverage structural parallels between how people express themselves through interactions and language by reformulating existing theories as regular grammars. We represent interactions as terminals within a regular grammar, similar to the role of individual words in a language, and patterns of interactions or non-terminals as regular expressions over these terminals to capture common language patterns. To demonstrate our approach, we generate regular grammars for seven existing visualization taxonomies and develop code to apply them to three public interaction log datasets. In analyzing these regular grammars, we find that the taxonomies at the low level (i.e., terminals) show mixed results in expressing multiple interaction log datasets, and taxonomies at the high level (i.e., regular expressions) have limited expressiveness, primarily due to two challenges: inconsistencies in interaction log dataset granularity and structure, and under-expressiveness of certain terminals. Based on our findings, we suggest new research directions for the visualization community to augment existing taxonomies, develop new ones, and build better interaction log recording processes to facilitate the data-driven development of user behavior taxonomies.
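The core idea of treating interactions as terminals and higher-level tasks as regular expressions can be sketched with Python's `re` module; the event names and the pattern below are invented for illustration and are not drawn from the seven taxonomies:

```python
import re

# Map low-level interaction events (terminals) to single characters,
# much as individual words serve as tokens in a language.
TERMINALS = {"filter": "f", "hover": "h", "zoom": "z", "click": "c"}

def encode(log):
    # Turn an interaction log into a string of terminal symbols.
    return "".join(TERMINALS[event] for event in log)

# A higher-level task (non-terminal) as a regular expression over
# terminals: "narrow down, inspect one or more times, then select".
EXPLORE = re.compile(r"f[hz]+c")

log = ["filter", "hover", "zoom", "click"]
matched = bool(EXPLORE.fullmatch(encode(log)))
```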
  • Item
    A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training
    (Association for Computing Machinery (ACM), 2023-06-21) Singh, Siddarth; Ruwase, Olatunji; Awan, Ammar Ahmad; Rajbhandari, Samyam; He, Yuxiong; Bhatele, Abhinav
    Mixture-of-Experts (MoE) is a neural network architecture that adds sparsely activated expert blocks to a base model, increasing the number of parameters without impacting computational costs. However, current distributed deep learning frameworks are limited in their ability to train high-quality MoE models with large base models. In this work, we present DeepSpeed-TED, a novel, three-dimensional, hybrid parallel algorithm that combines data, tensor, and expert parallelism to enable the training of MoE models with 4–8× larger base models than the current state-of-the-art. We also describe memory optimizations in the optimizer step, and communication optimizations that eliminate unnecessary data movement. We implement our approach in DeepSpeed and achieve speedups of 26% over a baseline (i.e., without our communication optimizations) when training a 40 billion parameter MoE model (6.7 billion base model with 16 experts) on 128 V100 GPUs.
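One way to picture a three-dimensional hybrid decomposition is as a mapping from a global GPU rank to (data, expert, tensor) coordinates on a process grid. The layout below (tensor dimension fastest-varying) is an illustrative assumption, not DeepSpeed-TED's actual rank ordering:

```python
def grid_coords(rank, tp, ep, dp):
    # Decompose a global GPU rank into (data, expert, tensor)
    # coordinates on a dp x ep x tp process grid; GPUs sharing a
    # coordinate along one axis form that axis's communication group.
    assert 0 <= rank < tp * ep * dp
    t = rank % tp
    e = (rank // tp) % ep
    d = rank // (tp * ep)
    return (d, e, t)

# 128 GPUs as an 8 x 4 x 4 (data x expert x tensor) grid, matching
# the scale of the experiment above (grid shape chosen for illustration).
coords = [grid_coords(r, tp=4, ep=4, dp=8) for r in range(128)]
```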
  • Item
    Incidental Incremental In-Band Fingerprint Verification: a Novel Authentication Ceremony for End-to-End Encrypted Messaging
    (Association for Computing Machinery (ACM), 2022-10-24) Malkin, Nathan
    End-to-end encryption in popular messaging applications relies on centralized key servers. To keep these honest, users are supposed to meet in person and compare “fingerprints” of their public keys. Very few people do this, despite attempts to make this process more usable, making trust in the systems tenuous. To encourage broader adoption of verification behaviors, this paper proposes a new type of authentication ceremony, incidental incremental in-band fingerprint verification (I3FV), in which users periodically share with their friends photos or videos of themselves responding to simple visual or behavioral prompts (“challenges”). This strategy allows verification to be performed incidentally to normal user activities, incrementally over time, and in-band within the messaging application. By replacing a dedicated security task with a fun, already-widespread activity, I3FV has the potential to vastly increase the number of people verifying keys and therefore strengthen trust in encrypted messaging.
  • Item
    “Is this my president speaking?” Tamper-proofing Speech in Live Recordings
    (Association for Computing Machinery (ACM), 2023-06-18) Shahid, Irtaza; Roy, Nirupam
    Malicious editing of audiovisual content has emerged as a popular tool for targeted defamation, spreading disinformation, and triggering political unrest. Public speeches and statements of political leaders, public figures, or celebrities are particularly targeted due to their effectiveness in influencing the masses. Ubiquitous audiovisual recording of live speeches with smart devices and unrestricted content sharing and redistribution on social media make it difficult to address this threat using existing authentication techniques. Because public recordings of live events lack source control over the media, standard solutions falter. This paper presents TalkLock, a speech integrity verification system that can enable live speakers to protect their speeches from malicious alterations even when the speech is recorded by any member of the audience. The core idea is to generate meta-information from the speech signal in real time and disseminate it through a secure QR code-based screen-camera communication. The QR code, when recorded along with the speech, embeds the meta-information in the content, and it can be used later for independent verification in stand-alone applications or online platforms. A user study with live speech and real-world experiments with different types of voices, languages, environments, and distances show that TalkLock can detect fake content with 94.4% accuracy.
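The idea of streaming meta-information that binds a recording to the live signal can be sketched as a hash chain over speech chunks; this is a simplified stand-in for TalkLock's actual signal features and QR-code encoding:

```python
import hashlib

def chain_digests(chunks):
    # Running digest over successive speech chunks; in a TalkLock-style
    # design, each displayed QR frame could carry the digest of all
    # speech so far, so the chain appears inside the recording itself.
    digest, out = b"", []
    for chunk in chunks:
        digest = hashlib.sha256(digest + chunk).digest()
        out.append(digest)
    return out

def verify(chunks, digests):
    # A recording verifies only if every chunk reproduces the chain;
    # altering any chunk breaks it and every later link.
    return chain_digests(chunks) == digests

speech = [b"four score", b"and seven", b"years ago"]
meta = chain_digests(speech)  # the published meta-information
```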
  • Item
    Detock: High Performance Multi-region Transactions at Scale
    (Association for Computing Machinery (ACM), 2023-06) Nguyen, Cuong D.T.; Miller, Johann K.; Abadi, Daniel J.
    Many globally distributed data stores need to replicate data across large geographic distances. Since synchronously replicating data across such distances is slow, those systems with high consistency requirements often geo-partition data and direct all linearizable requests to the primary region of the accessed data. This significantly improves performance for workloads where most transactions access data close to where they originate from. However, supporting serializable multi-geo-partition transactions is a challenge, and they often degrade the performance of the whole system. This becomes even more challenging when they conflict with single-partition requests, where optimistic protocols lead to high numbers of aborts, and pessimistic protocols lead to high numbers of distributed deadlocks. In this paper, we describe the design of concurrency control and deadlock resolution protocols, built within a practical, complete implementation of a geographically replicated database system called Detock, that enables processing strictly serializable multi-region transactions with near-zero performance degradation under extremely high conflict rates and an order of magnitude higher throughput relative to state-of-the-art geo-replication approaches, while improving latency by up to a factor of 5.
  • Item
    Automating NISQ Application Design with Meta Quantum Circuits with Constraints (MQCC)
    (Association for Computing Machinery (ACM), 2023-04) Deng, Haowei; Peng, Yuxiang; Hicks, Michael; Wu, Xiaodi
    Near-term intermediate scale quantum (NISQ) computers are likely to have very restricted hardware resources, where precisely controllable qubits are expensive, error-prone, and scarce. Programmers of such computers must therefore balance trade-offs among a large number of (potentially heterogeneous) factors specific to the targeted application and quantum hardware. To assist them, we propose Meta Quantum Circuits with Constraints (MQCC), a meta-programming framework for quantum programs. Programmers express their application as a succinct collection of normal quantum circuits stitched together by a set of (manually or automatically) added meta-level choice variables, whose values are constrained according to a programmable set of quantitative optimization criteria. MQCC’s compiler generates the appropriate constraints and solves them via an SMT solver, producing an optimized, runnable program. We showcase several applications of MQCC to demonstrate its generality, including automatic generation of efficient error-syndrome extraction schemes for fault-tolerant quantum error correction with heterogeneous qubits, and an approach to writing approximate quantum Fourier transformation and quantum phase estimation that smoothly trades off accuracy and resource use. We also illustrate that MQCC can easily encode prior one-off NISQ application designs, such as multi-programming (MP) and crosstalk mitigation (CM), as well as a combination of their optimization goals (i.e., a combined MP-CM).
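The meta-level choice variables and quantitative constraints can be pictured as a small optimization query. The brute-force search below is merely a stand-in for MQCC's SMT solver, and the circuit variants, error units, and qubit costs are invented for illustration:

```python
from itertools import product

# Two choice variables, each selecting one circuit variant with an
# (error, qubit_cost) pair; errors are in arbitrary integer units.
CHOICES = {
    "qft": [("exact", 0, 12), ("approx", 2, 7)],
    "qpe": [("full", 0, 10), ("iterative", 1, 6)],
}

def optimize(max_qubits):
    # Minimize total error subject to a qubit budget: the kind of
    # quantitative criterion MQCC would hand to an SMT solver.
    best = None
    for combo in product(*CHOICES.values()):
        err = sum(variant[1] for variant in combo)
        cost = sum(variant[2] for variant in combo)
        if cost <= max_qubits and (best is None or err < best[0]):
            best = (err, [variant[0] for variant in combo])
    return best
```

Tightening the budget forces the approximate variants, mirroring the accuracy-versus-resource trade-off described above.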
  • Item
    Absynthe: Abstract Interpretation-Guided Synthesis
    (Association for Computing Machinery (ACM), 2023-06) Guria, Sankha Narayan; Foster, Jeffrey S.; Van Horn, David
    Synthesis tools have seen significant success in recent times. However, past approaches often require a complete and accurate embedding of the source language in the logic of the underlying solver, an approach difficult for industrial-grade languages. Other approaches couple the semantics of the source language with purpose-built synthesizers, necessarily tying the synthesis engine to a particular language model. In this paper, we propose Absynthe, an alternative approach based on user-defined abstract semantics that aims to be both lightweight and language agnostic, yet effective in guiding the search for programs. A synthesis goal in Absynthe is specified as an abstract specification in a lightweight user-defined abstract domain and concrete test cases. The synthesis engine is parameterized by the abstract semantics and independent of the source language. Absynthe validates candidate programs against test cases using the actual concrete language implementation to ensure correctness. We formalize the synthesis rules for Absynthe and describe how the key ideas are scaled up in our implementation in Ruby. We evaluated Absynthe on the SyGuS strings benchmark and found it competitive with other enumerative search solvers. Moreover, Absynthe’s ability to combine abstract domains allows the user to move along a cost spectrum, i.e., expressive domains prune more programs but require more time. Finally, to verify that Absynthe can act as a general-purpose synthesis tool, we use Absynthe to synthesize Pandas data frame manipulation programs in Python using simple abstractions like types and column labels of a data frame. Absynthe reaches parity with AutoPandas, a deep-learning-based tool, on the same benchmark suite. In summary, our results demonstrate that Absynthe is a promising step toward a general-purpose approach to synthesis that may broaden the applicability of synthesis to more full-featured languages.
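The two-phase structure (prune candidates by their abstract semantics, then validate survivors on concrete tests) can be sketched in a few lines. The candidate pool and the "output type" abstract domain below are illustrative and far simpler than Absynthe's user-defined domains:

```python
# Candidate programs over one string input, each paired with its
# abstract semantics in a toy "output type" domain.
CANDIDATES = [
    ("upper",  lambda x: x.upper(), "str"),
    ("length", lambda x: len(x),    "int"),
    ("double", lambda x: x + x,     "str"),
]

def synthesize(abstract_spec, tests):
    for name, prog, abstract_out in CANDIDATES:
        if abstract_out != abstract_spec:
            continue  # pruned by abstract semantics, never executed
        # Survivors are validated with the concrete implementation.
        if all(prog(inp) == out for inp, out in tests):
            return name
    return None
```

Note that the engine itself never interprets the source language; it only compares abstract values and runs the concrete programs, which is the language-agnostic design point.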
  • Item
    Just Do Something: Comparing Self-proposed and Machine-recommended Stress Interventions among Online Workers with Home Sweet Office
    (Association for Computing Machinery (ACM), 2023-04-23) Tong, Xin; Mauriello, Matthew Louis; Mora-Mendoza, Marco Antonio; Prabhu, Nina; Kim, Jane Paik; Paredes, Pablo E.
    Modern stress management techniques have been shown to be effective, particularly when applied systematically and with the supervision of an instructor. However, online workers usually lack sufficient support from therapists and learning resources to self-manage their stress. To better assist these users, we implemented a browser-based application, Home Sweet Office (HSO), to administer a set of stress micro-interventions which mimic existing therapeutic techniques, including somatic, positive psychology, metacognitive, and cognitive behavioral categories. In a four-week field study, we compared random and machine-recommended interventions to interventions that were self-proposed by participants in order to investigate effective content and recommendation methods. Our primary findings suggest that both machine-recommended and self-proposed interventions had significantly higher momentary efficacy than random selection, whereas machine-recommended interventions offer more activity diversity compared to self-proposed interventions. We conclude with reflections on these results, discuss features and mechanisms which might improve efficacy, and suggest areas for future work.
  • Item
    A Review and Collation of Graphical Perception Knowledge for Visualization Recommendation
    (Association for Computing Machinery (ACM), 2023-04-23) Zeng, Zhua; Battle, Leilani
    Selecting appropriate visual encodings is critical to designing effective visualization recommendation systems, yet few findings from graphical perception are typically applied within these systems. We observe two significant limitations in translating graphical perception knowledge into actionable visualization recommendation rules/constraints: inconsistent reporting of findings and a lack of shared data across studies. How can we translate the graphical perception literature into a knowledge base for visualization recommendation? We present a review of 59 papers that study user perception and performance across ten visual analysis tasks. Through this study, we contribute a JSON dataset that collates existing theoretical and experimental knowledge and summarizes key study outcomes in graphical perception. We illustrate how this dataset can inform automated encoding decisions with three representative visualization recommendation systems. Based on our findings, we highlight open challenges and opportunities for the community in collating graphical perception knowledge for a range of visualization recommendation scenarios.
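How a collated knowledge base might inform automated encoding decisions can be sketched as a query over JSON records; the tasks, encodings, and effectiveness scores below are invented placeholders, not entries from the paper's dataset:

```python
import json

# Hypothetical miniature of a collated graphical-perception knowledge
# base: one record per (task, encoding) study outcome.
KB = json.loads("""
[
  {"task": "compare values", "encoding": "bar",  "effectiveness": 0.9},
  {"task": "compare values", "encoding": "pie",  "effectiveness": 0.5},
  {"task": "find trend",     "encoding": "line", "effectiveness": 0.95}
]
""")

def recommend(task):
    # A recommendation rule: pick the encoding with the highest
    # reported effectiveness for the user's analysis task.
    rows = [r for r in KB if r["task"] == task]
    return max(rows, key=lambda r: r["effectiveness"])["encoding"]
```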
  • Item
    Structure Assisted Spectrum Sensing for Low-power Acoustic Event Detection
    (Association for Computing Machinery (ACM), 2023-05-09) Garg, Nakul; Takawale, Harshvardhan; Bai, Yang; Shahid, Irtaza; Roy, Nirupam
    Acoustic sensing has conventionally depended on high-frequency sampling of analog signals and frequency-domain analysis in the digital domain, which is power-hungry. While these techniques work well for regular devices, low-power acoustic sensors demand an alternative approach. In this work, we propose Lyra, a novel low-power acoustic sensing architecture that employs carefully designed passive structures to filter incoming sound waves and extract their frequency components. We eliminate power-hungry components such as the ADC and digital FFT operations and instead propose to use low-power analog circuitry to process the signals. Lyra aims to provide a low-power platform for a range of maintenance-free acoustic event monitoring and ambient computing applications.
  • Item
    Exploring Immersive Interpersonal Communication via AR
    (Association for Computing Machinery (ACM), 2023-04-16) Lee, Kyungjun; Li, Hong; Wellyanto, Muhammad Rizky; Tham, Yu Jiang; Monroy-Hernández, Andrés; Liu, Fannie; Smith, Brian A.; Vaish, Rajan
    A central challenge of social computing research is to enable people to communicate expressively with each other remotely. Augmented reality has great promise for expressive communication since it enables communication beyond texts and photos and towards immersive experiences rendered in recipients' physical environments. Little research, however, has explored AR's potential for everyday interpersonal communication. In this work, we prototype an AR messaging system, ARwand, to understand people's behaviors and perceptions around communicating with friends via AR messaging. We present our findings under four themes observed from a user study with 24 participants, including the types of immersive messages people choose to send to each other, which factors contribute to a sense of immersiveness, and what concerns arise over this new form of messaging. We discuss important implications of our findings on the design of future immersive communication systems.
  • Item
    Toucha11y: Making Inaccessible Public Touchscreens Accessible
    (Association for Computing Machinery (ACM), 2023-04-19) Li, Jiasheng; Yan, Zeyu; Shah, Arush; Lazar, Jonathan; Peng, Huaishu
    Despite their growing popularity, many public kiosks with touchscreens are inaccessible to blind people. Toucha11y is a working prototype that allows blind users to use existing inaccessible touchscreen kiosks independently and with little effort. Toucha11y consists of a mechanical bot that can be attached by a blind user to an arbitrary touchscreen kiosk, and a companion app on their smartphone. The bot, once attached to a touchscreen, will recognize its content, retrieve the corresponding information from a database, and render it on the user’s smartphone. As a result, a blind person can use the smartphone’s built-in accessibility features to access content and make selections; the mechanical bot will detect and activate the corresponding touchscreen interface. We present the system design of Toucha11y along with a series of technical evaluations. Through a user study, we found that Toucha11y could help blind users operate inaccessible touchscreen devices.
  • Item
    Understanding Context to Capture when Reconstructing Meaningful Spaces for Remote Instruction and Connecting in XR
    (Association for Computing Machinery (ACM), 2023-04-19) Maddali, Hanuma Teja; Lazar, Amanda
    Recent technological advances are enabling HCI researchers to explore interaction possibilities for remote XR collaboration using high-fidelity reconstructions of physical activity spaces. However, the creation of these reconstructions often lacks user involvement and focuses overtly on capturing sensory context that does not necessarily augment an informal social experience. This work seeks to understand the social context that can be important to capture when reconstructing spaces for XR applications in informal instructional scenarios. Our study involved the evaluation of an XR remote guidance prototype by 8 intergenerational groups of closely related gardeners using reconstructions of personally meaningful spaces in their gardens. Our findings contextualize physical objects and areas with various motivations related to gardening and detail perceptions of XR that might affect the use of reconstructions for remote interaction. We discuss implications for user involvement to create reconstructions that better translate real-world experience, encourage reflection, incorporate privacy considerations, and preserve shared experiences with XR as a medium for informal intergenerational activities.
  • Item
    Code Code Evolution: Understanding How People Change Data Science Notebooks Over Time
    (Association for Computing Machinery (ACM), 2023-04) Raghunandan, Deepthi; Roy, Aayushi; Shi, Shenzhi; Elmqvist, Niklas; Battle, Leilani
    Sensemaking is the iterative process of identifying, extracting, and explaining insights from data, where each iteration is referred to as the “sensemaking loop.” However, little is known about how sensemaking behavior shifts between exploration and explanation during this process. This gap limits our ability to understand the full scope of sensemaking, which in turn inhibits the design of tools that support the process. We contribute the first mixed-method study characterizing how sensemaking evolves within computational notebooks. We study 2,574 Jupyter notebooks mined from GitHub by identifying data science notebooks that have undergone significant iterations, presenting a regression model that automatically characterizes sensemaking activity, and using this regression model to calculate and analyze shifts in activity across GitHub versions. Our results show that notebook authors engage in various sensemaking tasks over time, such as annotation, branching analysis, and documentation. We use our insights to recommend extensions to current notebook environments.