Multimodal Biomedical Data Visualization: Enhancing Network, Clinical, and Image Data Depiction
dc.contributor.advisor | Varshney, Amitabh | en_US |
dc.contributor.author | Cheng, Hsueh-Chien | en_US |
dc.contributor.department | Computer Science | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2018-01-23T06:42:38Z | |
dc.date.available | 2018-01-23T06:42:38Z | |
dc.date.issued | 2017 | en_US |
dc.description.abstract | In this dissertation, we present visual analytics tools for several biomedical applications. Our research spans three types of biomedical data: reaction networks, longitudinal multidimensional clinical data, and biomedical images. For each data type, we present intuitive visual representations and efficient data exploration methods to facilitate visual knowledge discovery. Rule-based simulation has been used to study complex protein interactions. In a rule-based model, the relationships of interacting proteins can be represented as a network. Nevertheless, understanding and validating the intended behaviors in large network models is difficult and error-prone. We have developed a tool that first shows a network overview with concise visual representations and then shows relevant rule-specific details on demand. This strategy significantly improves visualization comprehensibility and disentangles complex protein-protein relationships by showing them selectively alongside the global context of the network. Next, we present a tool for analyzing longitudinal multidimensional clinical datasets, which we developed to study Parkinson's disease progression. Detecting patterns involving multiple time-varying variables is especially challenging for clinical data. Conventional computational techniques, such as cluster analysis and dimension reduction, do not always generate interpretable, actionable results. Using our tool, users can select and compare patient subgroups by filtering patients on multiple symptoms simultaneously and interactively. Whereas conventional visualizations rely on local features, many targets in biomedical images are characterized by high-level features. We present our research on characterizing such high-level features through multiscale texture segmentation and deep-learning strategies. First, we present an efficient hierarchical texture segmentation approach for colorizing electron microscopy (EM) images that scales well to gigapixel images, enhancing their visual comprehensibility across a wide range of scales. Second, we use convolutional neural networks (CNNs) to automatically derive high-level features that distinguish cell states in live-cell imagery and voxel types in 3D EM volumes. In addition, we present a CNN-based 3D segmentation method for biomedical volume datasets with limited training samples, using factorized convolutions and feature-level augmentations to improve model generalization and avoid overfitting. | en_US |
dc.identifier | https://doi.org/10.13016/M2HM52M5Q | |
dc.identifier.uri | http://hdl.handle.net/1903/20368 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Computer science | en_US |
dc.subject.pquncontrolled | Convolutional Neural Network | en_US |
dc.subject.pquncontrolled | Data visualization | en_US |
dc.subject.pquncontrolled | Deep learning | en_US |
dc.subject.pquncontrolled | Image segmentation | en_US |
dc.subject.pquncontrolled | Volume rendering | en_US |
dc.title | Multimodal Biomedical Data Visualization: Enhancing Network, Clinical, and Image Data Depiction | en_US |
dc.type | Dissertation | en_US |
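The abstract describes a clinical analytics tool in which users define patient subgroups by filtering on multiple symptoms simultaneously. The sketch below illustrates that kind of multi-criteria cohort selection over a longitudinal table; the column names, scores, and thresholds are hypothetical placeholders and are not taken from the dissertation or its data.

```python
import pandas as pd

# Hypothetical longitudinal table: one row per patient visit, with
# made-up column names standing in for clinical variables.
visits = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3],
    "visit_month": [0, 12, 0, 12, 0, 12],
    "tremor_score": [1.0, 2.5, 0.5, 0.7, 2.0, 3.5],
    "rigidity_score": [1.2, 1.8, 0.3, 0.4, 2.1, 2.9],
})

# Combine two symptom criteria at a chosen time point to define a subgroup,
# mirroring the "filter on multiple symptoms simultaneously" idea.
at_month_12 = visits[visits["visit_month"] == 12]
subgroup = at_month_12[(at_month_12["tremor_score"] >= 2.0) &
                       (at_month_12["rigidity_score"] >= 1.5)]
print(subgroup["patient_id"].tolist())  # patients matching both criteria
```

In the dissertation's tool this selection is interactive and visual; the sketch only shows the underlying subgroup-filtering logic.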
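The abstract also mentions using factorized convolutions to reduce overfitting in a 3D segmentation CNN trained on limited data. Below is a minimal PyTorch-style sketch of one common factorization, splitting a full 3x3x3 convolution into an in-plane 1x3x3 convolution followed by a 3x1x1 convolution along the remaining axis; the module name, layer arrangement, and sizes are illustrative assumptions, not the dissertation's exact architecture.

```python
import torch
import torch.nn as nn

class FactorizedConv3d(nn.Module):
    """Illustrative factorized 3D convolution: a 1 x k x k in-plane
    convolution followed by a k x 1 x 1 convolution along the depth axis.
    When input and output channel counts match, this uses roughly
    k^2 + k weights per channel pair instead of k^3, which is one way
    to limit overfitting when training volumes are scarce."""

    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        pad = k // 2
        # In-plane convolution over the two spatial axes.
        self.spatial = nn.Conv3d(in_channels, out_channels,
                                 kernel_size=(1, k, k), padding=(0, pad, pad))
        # 1D convolution along the remaining (depth) axis.
        self.depth = nn.Conv3d(out_channels, out_channels,
                               kernel_size=(k, 1, 1), padding=(pad, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.depth(self.act(self.spatial(x))))


# Toy usage on a small random volume shaped
# (batch, channels, depth, height, width).
block = FactorizedConv3d(1, 16)
volume = torch.randn(2, 1, 32, 64, 64)
features = block(volume)
print(features.shape)  # torch.Size([2, 16, 32, 64, 64])
```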