Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.


Search Results

Now showing 1 - 8 of 8
  • Item
    Towards Visual Analytics in Virtual Environments
    (2018) Krokos, Eric; Varshney, Amitabh; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Virtual reality (VR) is poised to become the new medium through which we engage, view, and consume content. In contrast to traditional 2D desktop displays, which restrict our interaction space onto an arbitrary 2D-plane with unnatural interaction mechanisms, VR expands the visualization and interaction space into our 3D domain, enabling natural observations and interactions with information. With the rise of Big Data, processing and visualizing such enormous datasets is of utmost importance and remains a difficult challenge. Machine learning, specifically deep learning, is rising to meet this challenge. In this work, we present several studies: (a) demonstrating the effectiveness of immersive environments over traditional desktops for memory recall, (b) quantifying cybersickness in virtual environments, (c) enabling human analysts and deep learning to support, refine, and enhance each other through visualization, and (d) visualizing root-DNS information, enabling analysts to find new and interesting anomalies and patterns. In our first work, we conduct a user study where participants memorize and recall a series of spatially-distributed faces on both a desktop and a head-mounted display (HMD). We found that the use of virtual memory palaces in the HMD condition improves recall accuracy when compared to the traditional desktop condition. This improvement was statistically significant. Next, we present our work on quantifying cybersickness through EEG analysis. We found statistically significant correlations between increases in delta, theta, and alpha brain waves and self-reported sickness levels, enabling future virtual reality developers to design countermeasures. Third, we present our work on enabling domain experts to discover hidden labels and communities within unlabeled (or coarsely labeled) high-dimensional datasets using deep learning with visualization. Lastly, we present a 3D visualization of root-DNS traffic, revealing characteristics of a DDoS attack and changes in the distribution of queries received over time. Together, this work takes the first steps in bringing together machine learning, visual analytics, and virtual reality.
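The cybersickness study above correlates EEG band power with self-reported sickness ratings. As a rough illustration of that kind of analysis (not the dissertation's actual pipeline; the sampling rate, segmentation, and variable names below are assumptions), one could estimate per-band power with a Welch PSD and correlate it with the ratings:

```python
# Sketch only: correlate delta/theta/alpha band power with sickness ratings.
# FS and the segment handling are assumptions, not the dissertation's setup.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

FS = 256  # assumed EEG sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}

def band_power(eeg_segment, lo, hi):
    """Average power of one EEG segment within [lo, hi) Hz."""
    freqs, psd = welch(eeg_segment, fs=FS, nperseg=FS * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def correlate_bands_with_sickness(segments, sickness_ratings):
    """Pearson correlation of each band's power with per-segment sickness ratings."""
    results = {}
    for name, (lo, hi) in BANDS.items():
        powers = [band_power(seg, lo, hi) for seg in segments]
        r, p = pearsonr(powers, sickness_ratings)
        results[name] = (r, p)
    return results
```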
  • Item
    SOFTWARE INFRASTRUCTURE FOR VISUAL AND INTEGRATIVE ANALYSIS OF MICROBIOME DATA
    (2018) Wagner, Justin; Corrada Bravo, Hector; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Microbiome sequencing allows researchers to reconstruct bacterial community census profiles at greater resolution than previous methodologies allowed. As a result, increasingly large numbers of these taxonomic community profiles are now generated, analyzed, and published by researchers in the field. In this work, I present new methods and software infrastructure for visualization and sharing of microbiome data. The overall goal is to enable a researcher to complete cycles of exploratory and confirmatory analysis over metagenomic data. I describe Metaviz, an interactive statistical and visual analysis tool specifically designed for effective taxonomic hierarchy navigation and data analysis feature selection. I next detail the incorporation of Metaviz into the Human Microbiome Project Data Portal. I then show a novel method to visualize longitudinal data across multiple features built as an extension over Metaviz. Finally, previous work has shown that specific subjects in an experimental cohort can be identified using their microbiome data. I developed software using a secure multi-party computation library to complete comparative analyses of metagenomic data across cohorts without directly revealing feature count values for individuals.
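The last contribution relies on secure multi-party computation so that cohort-level comparisons never expose an individual's feature counts. The dissertation builds on an existing SMC library; the toy sketch below only illustrates the underlying idea using additive secret sharing (the field modulus, party count, and function names are assumptions):

```python
# Toy illustration of additive secret sharing: each subject's feature count is
# split into random shares, parties only ever see sums of shares, and only the
# final reconstruction reveals the cohort-level aggregate.
import random

PRIME = 2**61 - 1  # field modulus for the shares (assumed)

def share(value, n_parties):
    """Split one count into n_parties random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate_counts(per_subject_counts, n_parties=3):
    """Aggregate a cohort's counts without any party seeing an individual value."""
    party_totals = [0] * n_parties
    for count in per_subject_counts:
        for i, s in enumerate(share(count, n_parties)):
            party_totals[i] = (party_totals[i] + s) % PRIME
    # Reconstruction step: reveals only the cohort-level total.
    return sum(party_totals) % PRIME
```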
  • Item
    A Systematic and Minimalist Approach to Lower Barriers in Visual Data Exploration
    (2016) Yalcin, Mehmet Adil; Bederson, Benjamin B; Elmqvist, Niklas E; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    With the increasing availability and impact of data in our lives, we need to make quicker, more accurate, and more intricate data-driven decisions. We can see and interact with data, and identify relevant features, trends, and outliers through visual data representations. In addition, the outcomes of data analysis reflect our cognitive processes, which are strongly influenced by the design of tools. To support visual and interactive data exploration, this thesis presents a systematic and minimalist approach. First, I present the Cognitive Exploration Framework, which identifies six distinct cognitive stages and provides a high-level structure for the design, guidelines, and evaluation of analysis tools. Next, in order to reduce decision-making complexities in creating effective interactive data visualizations, I present a minimal, yet expressive, model for tabular data using aggregated data summaries and linked selections. I demonstrate its application to common categorical, numerical, temporal, spatial, and set data types. Based on this model, I developed Keshif as an out-of-the-box, web-based tool to bootstrap the data exploration process. Then, I applied it to 160+ datasets across many domains, aiming to serve journalists, researchers, policy makers, businesses, and those tracking personal data. Using tools with novel designs and capabilities requires learning and help-seeking for both novices and experts. To provide self-service help for visual data interfaces, I present a data-driven contextual in-situ help system, HelpIn, in contrast to separate, static videos and manuals. Lastly, I present an evaluation of design and graphical perception for dense visualization of sorted numeric data. I contrast non-hierarchical treemaps with two multi-column chart designs, wrapped bars and piled bars. The results indicate that multi-column charts are perceptually more accurate than treemaps, and that the unconventional piled bars may require more training to read effectively. This thesis contributes to our understanding of how to create effective data interfaces by systematically focusing on human-facing challenges through minimalist solutions. Future work to extend the power of data analysis to a broader public should continue to evaluate and improve design approaches to address many remaining cognitive, social, educational, and technical challenges.
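The tabular-data model described above rests on aggregated summaries plus linked selections: selecting a category in one summary filters the records and re-aggregates every other summary. A minimal sketch of that behavior (an illustration of the general model only, not Keshif's implementation; the record fields are hypothetical):

```python
# Aggregated summaries with linked selection over a small table of records.
from collections import Counter

def summarize(records, attribute):
    """Aggregate record counts per category of one attribute."""
    return Counter(r[attribute] for r in records)

def linked_selection(records, selected_attr, selected_value, other_attrs):
    """Filter on one summary's selection and re-aggregate the linked summaries."""
    filtered = [r for r in records if r[selected_attr] == selected_value]
    return {attr: summarize(filtered, attr) for attr in other_attrs}

# Hypothetical records for illustration:
records = [
    {"state": "MD", "method": "online"},
    {"state": "MD", "method": "check"},
    {"state": "VA", "method": "online"},
]
print(summarize(records, "state"))                            # Counter({'MD': 2, 'VA': 1})
print(linked_selection(records, "method", "online", ["state"]))  # {'state': Counter({'MD': 1, 'VA': 1})}
```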
  • Item
    Towards Data-Driven Large Scale Scientific Visualization and Exploration
    (2013) Ip, Cheuk Yiu; Varshney, Amitabh; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Technological advances have enabled us to acquire extremely large datasets, but it remains a challenge to store, process, and extract information from them. This dissertation builds upon recent advances in machine learning, visualization, and user interactions to facilitate exploration of large-scale scientific datasets. First, we use data-driven approaches to computationally identify regions of interest in the datasets. Second, we use visual presentation for effective user comprehension. Third, we provide interactions for human users to integrate domain knowledge and semantic information into this exploration process. Our research shows how to extract, visualize, and explore informative regions on very large 2D landscape images, 3D volumetric datasets, high-dimensional volumetric mouse brain datasets with thousands of spatially-mapped gene expression profiles, and geospatial trajectories that evolve over time. The contributions of this dissertation include: (1) We introduce a sliding-window saliency model that discovers regions of user interest in very large images; (2) We develop visual segmentation of intensity-gradient histograms to identify meaningful components from volumetric datasets; (3) We extract boundary surfaces from a wealth of volumetric gene expression mouse brain profiles to personalize the reference brain atlas; (4) We show how to efficiently cluster geospatial trajectories by mapping each sequence of locations to a high-dimensional point with the kernel distance framework. We aim to discover patterns, relationships, and anomalies that would lead to new scientific, engineering, and medical advances. This work represents one of the first steps toward better visual understanding of large-scale scientific data by combining machine learning and human intelligence.
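Contribution (4) clusters trajectories via the kernel distance framework, which treats each trajectory as a point set and compares sets through pairwise kernel evaluations. A minimal sketch with a Gaussian kernel (the bandwidth, normalization, and the dissertation's exact feature mapping are assumptions):

```python
# Kernel distance between two trajectories treated as point sets:
# D(P, Q)^2 = kappa(P, P) + kappa(Q, Q) - 2 * kappa(P, Q).
import numpy as np

def cross_kernel(P, Q, sigma=1.0):
    """kappa(P, Q): mean Gaussian kernel value over all point pairs."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2)).sum() / (len(P) * len(Q))

def kernel_distance(P, Q, sigma=1.0):
    """Distance between the two trajectories' kernel embeddings."""
    d2 = cross_kernel(P, P, sigma) + cross_kernel(Q, Q, sigma) - 2 * cross_kernel(P, Q, sigma)
    return np.sqrt(max(d2, 0.0))
```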
  • Item
    Highly Parallel Geometric Characterization and Visualization of Volumetric Data Sets
    (2012) Juba, Derek Christopher; Varshney, Amitabh; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Volumetric 3D data sets are being generated in many different application areas. Some examples are CAT scans and MRI data, 3D models of protein molecules represented by implicit surfaces, multi-dimensional numeric simulations of plasma turbulence, and stacks of confocal microscopy images of cells. The size of these data sets has been increasing, requiring the speed of analysis and visualization techniques to also increase to keep up. Recent processor designs have stopped increasing clock speed and instead begun increasing parallelism, resulting in multi-core CPUs and many-core GPUs. To take advantage of these new parallel architectures, algorithms must be explicitly written to exploit parallelism. In this thesis we describe several algorithms and techniques for volumetric data set analysis and visualization that are amenable to these modern parallel architectures. We first discuss modeling volumetric data with Gaussian Radial Basis Functions (RBFs). RBF representation of a data set has several advantages, including lossy compression, analytic differentiability, and analytic application of Gaussian blur. We also describe a parallel volume rendering algorithm that can create images of the data directly from the RBF representation. Next we discuss a parallel, stochastic algorithm for measuring the surface area of volumetric representations of molecules. The algorithm is suitable for implementation on a GPU and is also progressive, allowing it to return a rough answer almost immediately and refine the answer over time to the desired level of accuracy. After this we discuss the concept of Confluent Visualization, which allows the visualization of the interaction between a pair of volumetric data sets. The interaction is visualized through volume rendering, which is well suited to implementation on parallel architectures. Finally we discuss a parallel, stochastic algorithm for classifying stem cells as having been grown on a surface that induces differentiation or on a surface that does not induce differentiation. The algorithm takes as input 3D volumetric models of the cells generated from confocal microscopy. This algorithm builds on our surface area measurement algorithm and, like it, is suitable for implementation on a GPU and is progressive.
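The RBF representation mentioned above supports analytic operations such as Gaussian blur: convolving a Gaussian RBF with a normalized Gaussian of standard deviation s widens it to sqrt(sigma^2 + s^2) and rescales its weight, so the blur acts on coefficients only. A minimal sketch of evaluation and analytic blur (the centers, weights, and widths are placeholders, not the dissertation's data or code):

```python
# Evaluate f(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2)) and apply a
# Gaussian blur analytically by widening each RBF and rescaling its weight.
import numpy as np

def rbf_eval(x, centers, weights, sigmas):
    """Value of the RBF sum at a single point x."""
    centers, weights, sigmas = map(np.asarray, (centers, weights, sigmas))
    d2 = ((centers - np.asarray(x)) ** 2).sum(axis=1)
    return float((weights * np.exp(-d2 / (2 * sigmas**2))).sum())

def analytic_blur(weights, sigmas, s, dim=3):
    """Blur by a normalized Gaussian of std s, applied only to the coefficients."""
    weights, sigmas = np.asarray(weights, float), np.asarray(sigmas, float)
    new_sigmas = np.sqrt(sigmas**2 + s**2)
    new_weights = weights * (sigmas**2 / new_sigmas**2) ** (dim / 2)
    return new_weights, new_sigmas
```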
  • Item
    Nucleic Acid Extraction and Detection Across Two-Dimensional Tissue Samples
    (2010) Armani, Michael Daniel; Shapiro, Benjamin; Smela, Elisabeth; Bioengineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Visualizing genetic changes throughout tissues can explain basic biological functions and molecular pathways in disease. However, over 90% of mammalian messenger RNAs (mRNAs) are in low abundance (<15 copies per cell), making them hard to see with existing techniques, such as in-situ hybridization (ISH). In the example of diagnosing cancer, a disease caused by genetic mutations, only a few cancer-associated mRNAs can be visualized in the clinic due to the poor sensitivity of ISH. To improve the detection of low-abundance mRNA, many researchers combine the cells across a tissue sample by taking a scrape. Mixing cells provides only one data point and masks the inherent heterogeneity of tissues. To address these challenges, we invented a sensitive method for mapping nucleic acids across tissues called 2D-PCR. 2D-PCR transfers a tissue section into an array of wells, confining and separating the tissue into subregions. Chemical steps are then used to free nucleic acids from the tissue subregions. If the freed genetic material is mRNA, a purification step is also performed. One or more nucleic acids are then amplified using PCR and detected across the tissue to produce a map. As an initial proof of concept, a DNA map was made from a frozen tissue section using 2D-PCR at a resolution of 1.6 mm per well. The technique was improved to perform the more challenging task of mapping three mRNA molecules from a frozen tissue section. Because the majority of clinical tissues are stored using formalin fixation and not freezing, 2D-PCR was improved once more to detect up to 24 mRNAs from formalin-fixed tissue microarrays. This last approach was used to validate genetic profiles in human normal and tumor prostate samples faster than with existing techniques. In conclusion, 2D-PCR is a robust method for detecting genetic changes across tissues or from many tissue samples. 2D-PCR can be used today for studying differences in nucleic acids between tumor and normal specimens or differences in subregions of the brain.
  • Item
    PREDICTION OF HEAT TRANSFER AND PRESSURE DROP OF CONDENSING REFRIGERANT FLOW IN A HIGH ASPECT RATIO MICRO-CHANNEL
    (2009) Al-Hajri, Ebrahim Saeed Abdulla; Ohadi, Michael; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis presents a detailed parametric characterization of two-phase condensing flow of two selected refrigerants, R134a and R245fa, in a single water-cooled micro-channel of 0.4 mm × 2.8 mm cross-section (0.7 mm hydraulic diameter and 7:1 aspect ratio) and 190 mm length. To avoid the flow mal-distribution associated with typical micro-channel tube banks, a single micro-channel was fabricated using an innovative approach and used for the present experiments. The study investigated the effects of variations in saturation temperature from 30 °C to 70 °C, mass flux from 50 to 500 kg/m²·s, and inlet superheat from 0 °C to 15 °C on the average heat transfer coefficient and overall pressure drop coefficient of the micro-channel condenser. In all cases the inlet condition was kept at 100% vapor quality (saturated vapor) and the outlet condition at 0% quality (saturated liquid). Accurate fabrication of the channel geometry, together with careful design and choice of instrumentation for the test setup, kept energy balance and average heat transfer coefficient uncertainties within +/-11% and +/-12%, respectively. Saturation temperature and mass flux were observed to have a significant effect on both the heat transfer coefficient and the overall pressure drop coefficient, whereas the inlet superheat had little effect. Combined with a flow visualization study, this work provides further understanding of potential micro-scale effects on the condensation phenomenon for the tube geometry and dimensions investigated. No previous study has addressed this unique single micro-channel geometry combined with two-phase flow visualization of the flow regimes in this geometry; the visualization was a major undertaking and represents one of the main contributions of the present work. The results should prove useful in better understanding any micro-scale effects on the condensation flow of the two selected refrigerants, one a commonly used high-pressure refrigerant (R134a) and the other a new low-pressure refrigerant (R245fa). It is also expected that the results of this study will lead to future work in this area, given the rapid penetration of micro-channel technology in various compact and ultra-compact heat exchangers, including the refrigeration, petrochemical, electronics, transportation, and process industries.
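For reference, the quoted 0.7 mm hydraulic diameter and 7:1 aspect ratio follow directly from the stated 0.4 mm × 2.8 mm rectangular cross-section via D_h = 4A / P; a quick check:

```python
# Hydraulic diameter and aspect ratio of the 0.4 mm x 2.8 mm channel cross-section.
width, height = 2.8, 0.4           # mm
area = width * height              # 1.12 mm^2
perimeter = 2 * (width + height)   # 6.4 mm
d_h = 4 * area / perimeter         # 0.7 mm, matching the stated hydraulic diameter
aspect_ratio = width / height      # 7.0, i.e. the stated 7:1 aspect ratio
print(d_h, aspect_ratio)
```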
  • Item
    New Algorithmic Techniques for Large Scale Volumetric Data Visualization on Parallel Architectures
    (2008-07-16) Wang, Qin; JaJa, Joseph; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Volume visualization is widely used as an effective approach for the visual exploration, computational analysis, and manipulation of volumetric datasets. Due to the dramatic advances in imaging instruments and computing technologies, such datasets are now appearing at a very fast rate with increasingly larger sizes in many engineering, science, and medical applications. Isosurface and direct volume rendering (DVR) are two of the most widely used techniques to render such datasets. This dissertation introduces novel techniques for rendering isosurfaces and volumes, and extends these techniques to multiprocessor architectures. We first focus on cluster-based techniques for isosurface extraction and rendering using polygonal approximation. We present a new simple indexing scheme and data layout approach, which enable scalable and efficient isosurface generation. This algorithm is the first known parallel algorithm to achieve provable load balancing on multiprocessor systems. We also develop an algorithm to generate isosurfaces using ray-casting on multi-core processors. Our method is based on a hybrid strategy that begins with an object-order traversal of the data followed by ray-casting on ordered sets of an adaptive number of subcubes, one set for each small group of pixels on the image. We develop a multithreaded implementation, which uses new dynamic load balancing techniques that start with an image partitioning for the initial stage and then perform dynamic allocation of groups of ray-casting tasks among the different threads. The strategy ensures almost equal loads among the cores while maintaining spatial data locality. This scheme is extended to perform direct volume rendering and is shown to achieve similar improvements in terms of overall performance, load balancing, and scalability. We conduct a large number of tests for all our algorithms on the University of Maryland Visualization Cluster and on the 8-core Clovertown platform, using a wide variety of datasets such as the Richtmyer-Meshkov Instability dataset (7.5 GB per time step) and the Visible Human dataset (~1 GB). We obtain results that consistently validate the efficiency and the scalability of our algorithms. In particular, the overall performance of our hybrid ray-casting scheme achieves an interactive rendering rate on high-resolution (1024×1024) screens for all the datasets tested.
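The dynamic load balancing described above assigns groups of ray-casting tasks to threads on demand after an initial image partitioning. The sketch below only illustrates that idea with a shared work queue of image tiles; it is not the dissertation's implementation (which targets multi-core hardware in compiled code), and cast_tile, the tile size, and the thread count are placeholders:

```python
# Dynamic load balancing sketch: idle worker threads repeatedly pull the next
# tile of ray-casting work from a shared queue, keeping per-thread loads nearly equal.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def make_tiles(width, height, tile=32):
    """Partition the image into tile-sized pixel blocks."""
    return [(x, y, tile) for y in range(0, height, tile) for x in range(0, width, tile)]

def render_dynamic(width, height, cast_tile, n_threads=8):
    """cast_tile(x, y, size) ray-casts one tile; threads pull tiles until the queue is empty."""
    work = Queue()
    for t in make_tiles(width, height):
        work.put(t)

    def worker():
        while True:
            try:
                tile = work.get_nowait()
            except Empty:
                return
            cast_tile(*tile)

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for _ in range(n_threads):
            pool.submit(worker)
```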