Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a 4-month delay in the appearance of a given thesis/dissertation in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Search Results
3 results
Item: VISUALIZATION, DATA QUALITY, AND SCALE IN COMPOSITE BATHYMETRIC DATA GENERALIZATION (2024)
Dyer, Noel Matarazza; De Floriani, Leila; Geography; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Contemporary bathymetric data collection techniques can capture sub-meter-resolution data, ensuring full seafloor coverage for safe navigation and supporting various other scientific uses of the data. Moreover, bathymetry data are becoming increasingly available. Datasets compiled from these sources are used to update Electronic Navigational Charts (ENCs), the primary medium for visualizing the seafloor for navigation purposes, whose usage is mandatory on Safety Of Life At Sea (SOLAS) regulated vessels. However, these high-resolution data must be generalized for products at scale, an active research area in automated cartography. Algorithms that provide consistent results while reducing production time and costs are increasingly valuable to organizations operating in time-sensitive environments. This is particularly the case in digital nautical cartography, where updates to bathymetry and to the locations of dangers to navigation must be disseminated as quickly as possible. Therefore, this dissertation covers the development of cartographic constraint-based generalization algorithms operating on both Digital Surface Model (DSM) and Digital Cartographic Model (DCM) representations of multi-source composite bathymetric data to produce navigationally ready datasets for use at scale.

Similarly, many coastal data analysis applications use unstructured meshes to represent terrain because of their adaptability, which allows better conformity to the shoreline and bathymetry. Finer resolution along narrow geometric features, steep gradients, and submerged channels, with coarser resolution elsewhere, reduces the size of the mesh while maintaining comparable accuracy in subsequent processing. Generally, the mesh is constructed a priori for the given domain, and elevations are interpolated to its nodes from a predefined digital elevation model. These methods can also include refinement procedures to produce geometrically correct meshes for the application. Mesh simplification is a technique used in computer graphics to reduce the complexity of a mesh or surface model while preserving features such as shape, topology, and geometry. It can mitigate processing-performance issues by reducing the number of elements composing the mesh, thus increasing efficiency. The primary challenge is balancing the level of generalization, the preservation of characteristics relevant to the intended use of the mesh, and computational efficiency. Despite the potential usefulness of mesh simplification for reducing mesh size and complexity while retaining morphological detail, there has been little investigation of these techniques for Bathymetric Surface Models (BSMs), where additional information such as vertical uncertainty can help guide the process.

Toward this effort, this dissertation also introduces a set of experiments designed to explore the effects of BSM mesh simplification on a coastal ocean model forced by tides in New York Harbor. Candidate vertices for elimination are identified using a given local maximum distance between the original vertices of the mesh and the simplified surface. Vertex removal and re-triangulation operations simplify the mesh and are paired with an optional maximum-triangle-area constraint, which prevents the creation of new triangles over a specified area. A tidal simulation is then performed across the domain of both the original (un-simplified) and simplified meshes, comparing current velocities, velocity magnitudes, and water levels over time at twelve representative locations in the Harbor. Even with the strictest simplification parameters, the simplified mesh reduced the overall mesh size by approximately 26.81%, which yielded a 26.38% speed improvement over the un-simplified mesh. The reduction in mesh size depended on the simplification parameters, and the speed improvement scaled with the number of elements composing the simplified mesh.
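As a rough illustration of the vertex-removal-and-re-triangulation idea this abstract describes, the Python sketch below greedily drops a vertex from a 2.5D bathymetric triangulation only when the re-triangulated surface stays within a vertical tolerance of every original sounding, with an optional maximum-triangle-area check. The function name, parameter values, and the SciPy-based Delaunay re-triangulation are assumptions for illustration, not the dissertation's implementation.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def simplify_bathymetry(points, max_vertical_error=0.5, max_triangle_area=None):
    """Greedy vertex removal: drop a vertex only if the re-triangulated
    surface stays within max_vertical_error (m) of every original sounding.
    points is an (n, 3) array of x, y, z soundings (hypothetical layout)."""
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        trial = keep.copy()
        trial[i] = False
        if trial.sum() < 4:                        # too few vertices to triangulate
            break
        kept = points[trial]
        tri = Delaunay(kept[:, :2])                # re-triangulate in the x-y plane
        surface = LinearNDInterpolator(tri, kept[:, 2])
        errors = np.abs(surface(points[:, :2]) - points[:, 2])
        if np.nanmax(errors) > max_vertical_error: # NaN = outside hull, ignored here
            continue                               # removal violates the tolerance
        if max_triangle_area is not None:
            v = kept[tri.simplices]
            d1 = v[:, 1, :2] - v[:, 0, :2]
            d2 = v[:, 2, :2] - v[:, 0, :2]
            areas = 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
            if areas.max() > max_triangle_area:    # oversized triangle created
                continue
        keep = trial                               # accept the removal
    return points[keep]

# Example: simplify 200 synthetic soundings with a 0.5 m vertical tolerance.
rng = np.random.default_rng(0)
pts = rng.random((200, 3)) * [1000.0, 1000.0, 20.0]
print(len(simplify_bathymetry(pts, max_vertical_error=0.5)))

This didactic version re-triangulates the whole domain on every candidate removal, which is quadratic in the vertex count; a production pipeline would instead re-triangulate only the cavity left by the removed vertex.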
Item: Towards Fast and Efficient Representation Learning (2018)
Li, Hao; Samet, Hanan; Goldstein, Thomas; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The success of deep learning and convolutional neural networks in many fields is accompanied by a significant increase in computation cost. With increasing model complexity and the pervasive use of deep neural networks, there is a surge of interest in fast and efficient model training and inference on both cloud and embedded devices. Meanwhile, understanding the reasons for trainability and generalization is fundamental to further development. This dissertation explores approaches for fast and efficient representation learning together with a better understanding of trainability and generalization. In particular, we ask the following questions and provide our solutions: 1) How can the computation cost be reduced for fast inference? 2) How can low-precision models be trained on resource-constrained devices? 3) What does the loss surface of neural nets look like, and how does it affect generalization?

To reduce the computation cost for fast inference, we propose pruning filters from CNNs that are identified as having a small effect on prediction accuracy. By removing filters with small norms together with their connected feature maps, the computation cost is reduced accordingly without special software or hardware. We show that this simple filter pruning approach reduces the inference cost while retraining the networks regains accuracy close to the original.

To further reduce the inference cost, quantizing model parameters with low-precision representations has shown significant speedups, especially for edge devices with limited computing resources, memory capacity, and power budgets. To enable on-device learning on low-power systems, the key challenge is removing the dependency on a full-precision model during training. We study various quantized training methods with the goal of understanding the differences in behavior and the reasons for success or failure. We address the question of why algorithms that maintain floating-point representations work so well while fully quantized training methods stall before training is complete. We show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training with low-precision arithmetic.

Finally, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. We introduce a simple filter normalization method that helps us visualize loss-function curvature and make meaningful side-by-side comparisons between loss functions. With this visualization, the sharpness of minimizers correlates well with generalization error. Using a variety of visualizations, we then explore how training hyper-parameters affect the shape of minimizers and how network architecture affects the loss landscape.
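To make the norm-based filter pruning idea concrete, here is a minimal PyTorch sketch that keeps the filters of one convolutional layer with the largest L1 norms and rebuilds a thinner layer. The function name, pruning ratio, and layer sizes are illustrative assumptions rather than the dissertation's code; in a full network, the connected feature maps of the following layer would also be pruned to match, as the abstract notes.

import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Keep the output filters with the largest L1 norms; drop the rest."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    keep = torch.argsort(norms, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep,
                       kernel_size=conv.kernel_size, stride=conv.stride,
                       padding=conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])             # copy surviving filters
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

# Example: prune half the filters of a 3x3 convolution; the thinned network
# would then be retrained to recover accuracy close to the original.
layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
thin = prune_conv_filters(layer, keep_ratio=0.5)           # 128 -> 64 filters
print(thin.weight.shape)                                   # torch.Size([64, 64, 3, 3])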
Item: Factors Affecting the Generalization of "wh-" Question Answering by Children with Autism (2007-04-25)
Barthold, Christine Hoffner; Egel, Andrew L; Special Education; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The purpose of this study was to examine whether Relational Frame Theory (Hayes, Barnes-Holmes, & Roche, 2001b) could be applied to increase generalization of "wh-" question answering (e.g., what, why, how) by children with autism. Students (N=6) from two self-contained classrooms for children with autism were taught to answer "wh-" questions in the presence of magazine pictures, pictures from storybooks, and actions in the natural context depicting a scenario related to the question asked. Generalization to novel questions was then assessed. If students could not answer generalized "wh-" questions to criterion, a matching-to-sample procedure with exclusion was used to increase associations between stimuli. A multiple probe design across subjects was used for this study. Baseline measures of "wh-" question answering, matching to sample, and receptive identification of answers to questions were conducted prior to training. In addition, students were observed in the classroom environment prior to training, and a descriptive analysis of their verbal behavior, recording antecedents, student responses, and consequences, was conducted to determine the students' verbal-behavior ability in the absence of a particular training program.

Two students, one in each school, were able to generalize to novel "wh-" questions after training. Both of these students could spontaneously tact items and showed a higher number of tacts relative to mands in the descriptive analysis. Students who did not generalize did not acquire relations through the matching-to-sample with exclusion procedure; they also emitted either an equal number of tacts and mands during the descriptive analysis or more mands than tacts. Implications for practice include considering whether to postpone "wh-" question-answering instruction until students can emit a high number of spontaneous tacts and possibly early intraverbal behavior such as greetings, eliminating visual stimuli when teaching "wh-" questions, and expanding matching-to-sample goals in behavioral curricula. Suggestions for future research include continued study of the development of verbal behavior in children with autism, refinement of matching techniques to teach relations, and expansion of the descriptive analysis of verbal behavior.