UMD Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/3
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a 4-month delay before a given thesis/dissertation appears in DRUM.
More information is available at Theses and Dissertations at the University of Maryland Libraries.
Search Results (3 items)
Item: Ordering Non-Linear Subspaces for Airfoil Design and Optimization via Rotation Augmented Gradients (2023)
Van Slooten, Alec; Fuge, Mark; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Airfoil optimization is critical to the design of turbine blades and aerial vehicle wings, among other aerodynamic applications. This design process is often constrained by the computational time required to perform CFD simulations on different design options, or by the availability of adjoint solvers. A common way to mitigate some of this computational expense in non-gradient optimization is to perform dimensionality reduction on the data and optimize the design within the resulting smaller subspace. Although learning these low-dimensional airfoil manifolds often facilitates aerodynamic optimization, the subspaces are often still computationally expensive to explore. Moreover, the complex data organization of many current nonlinear models makes it difficult to reduce dimensionality without retraining the model. Inducing orderings of latent components restructures the data, reduces the information lost to dimensionality reduction, and shows promise in providing near-optimal representations at various dimensions while requiring the model to be trained only once. Exploring how airfoil manifolds respond to data and model selection, and inducing latent-component orderings, therefore have the potential to expedite airfoil design and optimization. This thesis first investigates airfoil manifolds by testing the performance of linear and nonlinear dimensionality reduction models, examining whether optimized geometries occupy lower-dimensional manifolds than non-optimized geometries, and testing whether the learned representation can be improved by using target optimization conditions as data set features.
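The reduce-then-optimize workflow described above can be sketched with PCA, the simplest of the linear models the thesis compares against. This is a minimal illustration, not the thesis's code: the airfoil data here is random stand-in data, and the shapes (500 airfoils, 200 flattened surface coordinates, 8 latent modes) are arbitrary choices.

```python
import numpy as np

# Hypothetical dataset: 500 airfoils, each a flattened vector of
# 100 (x, y) surface coordinates (200 features). Random stand-in data.
rng = np.random.default_rng(0)
airfoils = rng.normal(size=(500, 200))

# PCA via SVD of the mean-centered data matrix.
mean = airfoils.mean(axis=0)
centered = airfoils - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep a low-dimensional subspace (e.g. 8 modes) and encode/decode.
k = 8
components = Vt[:k]                      # (8, 200) orthonormal basis
codes = centered @ components.T          # latent coordinates, (500, 8)
reconstructed = codes @ components + mean
```

A non-gradient optimizer can then search over the 8 latent coordinates instead of the 200 raw ones, decoding each candidate back to an airfoil before evaluating it. Because PCA's components are already ordered by explained variance, truncating to fewer modes needs no retraining; recovering that property for nonlinear models is what the latent-ordering work targets.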
We find that autoencoders, although often suffering from stability issues, outperform linear methods such as PCA for low-dimensional representations of airfoil databases. We also find that the use of optimized geometry and the addition of performance parameters have little effect on the intrinsic dimensionality of the data. This thesis then explores a recently proposed approach for inducing latent space orderings called Rotation Augmented Gradient (RAG) [1]. We extend this algorithm to nonlinear models to evaluate its efficacy at creating easily navigable latent spaces with reduced training, increased stability, and improved design space preconditioning. Our extension of the RAG algorithm to nonlinear models has the potential to expedite dimensional analyses in cases with near-zero gradients and long training times by eliminating the need to retrain the model for different dimensional subspaces.

Item: NON-LINEAR AND SPARSE REPRESENTATIONS FOR MULTI-MODAL RECOGNITION (2013)
Nguyen, Hien Van; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

In the first part of this dissertation, we address the problem of representing 2D and 3D shapes. In particular, we introduce a novel implicit shape representation based on Support Vector Machine (SVM) theory. Each shape is represented by an analytic decision function obtained by training an SVM with a Radial Basis Function (RBF) kernel, so that interior shape points are given higher values. This gives the support vector shape (SVS) several advantages. First, the representation uses a sparse subset of feature points determined by the support vectors, which significantly improves the discriminative power against noise, fragmentation, and other artifacts that often accompany the data.
Second, the use of the RBF kernel provides scale-, rotation-, and translation-invariant features, and allows a shape to be represented accurately regardless of its complexity. Finally, the decision function can be used to select reliable feature points. These features are described using gradients computed from highly consistent decision functions instead of conventional edges. Our experiments on 2D and 3D shapes demonstrate promising results. The availability of inexpensive 3D sensors like the Kinect necessitates the design of new representations for this type of data. We present a 3D feature descriptor that represents local topologies within a set of folded concentric rings by distances from local points to a projection plane. This feature, called the Concentric Ring Signature (CORS), possesses computational advantages similar to point signatures yet provides more accurate matches. CORS produces compact and discriminative descriptors, which makes it more robust to noise and occlusions. It is also well known to computer vision researchers that there is no universal representation that is optimal for all types of data or tasks. Sparsity has proved to be a good criterion for working with natural images. This motivates us to develop efficient sparse and non-linear learning techniques for automatically extracting useful information from visual data. Specifically, we present dictionary learning methods for sparse and redundant representations in a high-dimensional feature space. Using the kernel method, we describe how well-known dictionary learning approaches such as the method of optimal directions (MOD) and K-SVD can be made non-linear. We analyse their kernel constructions and demonstrate their effectiveness through several experiments on classification problems.
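The kernel trick underlying such non-linear dictionary learning can be sketched as follows. A common parameterization (used by kernel dictionary learning methods generally, and assumed here for illustration) writes the dictionary as D = Φ(Y)A, i.e., atoms are linear combinations of the mapped training samples, so all inner products in feature space reduce to kernel evaluations. The data, the coefficient matrix `A`, and the RBF bandwidth below are hypothetical stand-ins.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
Y = rng.normal(size=(50, 10))   # training samples (rows)
A = rng.normal(size=(50, 5))    # atom coefficients: D = Phi(Y) @ A, 5 atoms

x = rng.normal(size=(1, 10))    # new signal to sparse-code

# Correlation of Phi(x) with every atom, using kernels only:
#   <Phi(x), Phi(Y) A> = k(x, Y) A   -- Phi is never formed explicitly.
corr = rbf_kernel(x, Y) @ A     # shape (1, 5)

# A 1-sparse code: pick the single most correlated atom.
best_atom = int(np.argmax(np.abs(corr)))
```

Kernel MOD and kernel K-SVD build on exactly this identity: both the sparse-coding stage and the dictionary (i.e., `A`) update can be expressed through the Gram matrix k(Y, Y), which is what makes the non-linear extension tractable.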
It is shown that non-linear dictionary learning approaches can provide significantly better discrimination than their linear counterparts and kernel PCA, especially when the data is corrupted by different types of degradation. Visual descriptors are often high-dimensional, which results in high computational complexity for sparse learning algorithms. Motivated by this observation, we introduce a novel framework, called sparse embedding (SE), for simultaneous dimensionality reduction and dictionary learning. We formulate an optimization problem for learning a transformation from the original signal domain to a lower-dimensional one in a way that preserves the sparse structure of the data. We propose an efficient optimization algorithm and present its non-linear extension based on kernel methods. A key feature of our method is that it is computationally efficient, since the learning is done in the lower-dimensional space, and it discards the irrelevant part of the signal that would derail the dictionary learning process. Various experiments show that our method captures the meaningful structure of data and can perform significantly better than many competitive algorithms on signal recovery and object classification tasks. In many practical applications, we are confronted with situations where the data used to train our models differ from the data presented during testing. In the final part of this dissertation, we present a novel framework for domain adaptation using a sparse and hierarchical network (DASH-N), which makes use of old data to improve the performance of a system operating on a new domain. Our network jointly learns a hierarchy of features together with transformations that rectify the mismatch between different domains. The building block of DASH-N is the latent sparse representation.
It employs a dimensionality reduction step that prevents the data dimension from increasing too quickly as one traverses deeper into the hierarchy. Experimental results show that our method consistently outperforms the current state of the art by a significant margin. Moreover, we found that a multi-layer DASH-N has an edge over the single-layer DASH-N.

Item: Discriminative Interlingual Representations (2013)
Jagarlamudi, Jagadeesh; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The language barrier in many multilingual natural language processing (NLP) tasks can be overcome by mapping objects from different languages ("views") into a common low-dimensional subspace. For example, the name transliteration task involves mapping bilingual names, and word translation mining involves mapping bilingual words, into a common low-dimensional subspace. Multi-view models learn such a low-dimensional subspace from a training corpus of paired objects, e.g., names written in different languages, represented as feature vectors. The central idea of my dissertation is to learn low-dimensional subspaces (or interlingual representations) that are effective for various multilingual and monolingual NLP tasks. First, I demonstrate the effectiveness of interlingual representations in mining bilingual word translations, and then proceed to develop models for diverse situations that often arise in NLP tasks. In particular, I design models for the following problem settings: 1) when there are more than two views but we only have training data from a single pivot view into each of the remaining views; 2) when an object from one view is associated with a ranked list of objects from another view; and 3) when the underlying objects have rich structure, such as a tree. These problem settings arise often in real-world applications.
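The shared-subspace idea described above can be sketched with classical Canonical Correlation Analysis (CCA), a standard two-view baseline for this setting (the dissertation's own models extend beyond it). The sketch below learns projections for two paired views so that paired objects land near each other in a common subspace; all data, dimensions, and the regularization constant are illustrative assumptions.

```python
import numpy as np

def inv_sqrt(C, eps=1e-6):
    # Inverse matrix square root of a symmetric PSD matrix,
    # via eigendecomposition, with a small ridge for stability.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T

def cca(X, Z, k):
    # Map paired views X (n x dx) and Z (n x dz) into a shared
    # k-dimensional subspace maximizing correlation of paired rows.
    X = X - X.mean(0); Z = Z - Z.mean(0)
    n = X.shape[0]
    Cxx, Czz = X.T @ X / n, Z.T @ Z / n
    Cxz = X.T @ Z / n
    U, S, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxz @ inv_sqrt(Czz))
    Wx = inv_sqrt(Cxx) @ U[:, :k]
    Wz = inv_sqrt(Czz) @ Vt.T[:, :k]
    return X @ Wx, Z @ Wz

# Synthetic paired "views": e.g., feature vectors of the same names
# written in two languages, sharing a 4-dimensional latent structure.
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 4))
X = latent @ rng.normal(size=(4, 30)) + 0.1 * rng.normal(size=(200, 30))
Z = latent @ rng.normal(size=(4, 20)) + 0.1 * rng.normal(size=(200, 20))
Px, Pz = cca(X, Z, k=4)   # paired rows of Px and Pz are highly correlated
```

In the interlingual setting, nearest-neighbor search in this shared space is what lets one mine translations or match transliterated names across the language barrier; the dissertation's contribution is making such subspaces discriminative and extending them to pivot-view, ranked-list, and structured-object settings.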
I choose a canonical task for each of these settings and compare my models with existing state-of-the-art baseline systems. I provide empirical evidence for the first two models on multilingual name transliteration and on reranking for part-of-speech tagging, respectively. For the third problem setting, I experiment with the task of re-scoring target-language word translations based on the source word's context. The model proposed for this problem builds on the ideas of the previous models and hence leads to a natural conclusion.