A. James Clark School of Engineering

Permanent URI for this community: http://hdl.handle.net/1903/1654

The collections in this community comprise faculty research works, as well as graduate theses and dissertations.

Search Results

Now showing 1 - 10 of 82
  • Combinatorial Methods in Coding Theory
    (2011) Mazumdar, Arya; Barg, Alexander; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis is devoted to a range of questions in applied mathematics and signal processing motivated by applications in error correction, compressed sensing, and writing on non-volatile memories. The underlying thread of our results is the use of diverse combinatorial methods originating in coding theory and computer science. The thesis addresses three groups of problems. The first of them is aimed at the construction and analysis of codes for error correction. Here we examine properties of codes that are constructed using random and structured graphs and hypergraphs, with the main purpose of devising new decoding algorithms as well as estimating the distribution of Hamming weights in the resulting codes. Some of the results obtained give the best known estimates of the number of correctable errors for codes whose decoding relies on local operations on the graph. In the second part we address the question of constructing sampling operators for the compressed sensing problem. This topic has been the subject of a large body of work in the literature. We propose general constructions of sampling matrices based on ideas from coding theory that act as near-isometric maps on almost all sparse signals. These matrices can be used for dimensionality reduction and compressed sensing. In the third part we study the problem of reliable storage of information in non-volatile memories such as flash drives. This problem gives rise to a writing scheme that relies on relative magnitudes of neighboring cells, known as rank modulation. We establish the exact asymptotic behavior of the size of codes for rank modulation and suggest a number of new general constructions of such codes based on properties of finite fields as well as combinatorial considerations.
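    The rank modulation readout described in the third part can be pictured with a toy example. A minimal sketch, assuming idealized noiseless cell levels; the thesis's actual code constructions are algebraic and far more involved:

    ```python
    import numpy as np

    def read_permutation(cell_levels):
        """Read the stored permutation from relative cell magnitudes:
        position i holds the index of the i-th largest charge."""
        return tuple(np.argsort(cell_levels)[::-1])

    def perm_to_index(perm):
        """Map a permutation to an integer via its Lehmer code (factorial
        number system), so each permutation is one codeword index."""
        n = len(perm)
        index = 0
        for i, p in enumerate(perm):
            smaller_later = sum(1 for q in perm[i + 1:] if q < p)
            index = index * (n - i) + smaller_later
        return index

    # A 4-cell block: information lives in the ordering of the charges,
    # so a uniform drift of all levels leaves the data intact.
    levels = np.array([0.9, 0.2, 0.7, 0.4])
    perm = read_permutation(levels)
    print(perm, perm_to_index(perm))
    ```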
  • IP Geolocation in Metropolitan Areas
    (2011) Singh, Satinder Pal; Shayman, Mark; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this thesis, we propose a robust methodology to geolocate a target IP address in a metropolitan area. We model the problem as a pattern recognition problem and present algorithms that can extract patterns and match them to infer the geographic location of the target's IP address. The first algorithm is a relatively non-invasive method called Pattern Based Geolocation (PBG), which models the distribution of Round Trip Times (RTTs) to a target and matches it to those of nearby landmarks to deduce the target's location. PBG builds Probability Mass Functions (PMFs) to model the distribution of RTTs. For comparing PMFs, we propose a novel `Shifted Symmetrized Divergence' distance metric, a modified form of Kullback-Leibler divergence that is symmetric as well as invariant to shifts. The PBG algorithm works in near-stealth mode and leaves an almost undetectable signature in network traffic. The second algorithm, Perturbation Augmented PBG (PAPBG), gives a higher resolution in the location estimate using additional perturbation traffic. The goal of this algorithm is to induce a stronger signature of background traffic in the vicinity of the target, and then detect it in the collected RTT sequences. At the cost of being intrusive, this algorithm improves the resolution of PBG by approximately 20-40%. We evaluate the performance of PBG and PAPBG on real data collected from 20 machines distributed over the roughly 700-square-mile Washington-Baltimore metropolitan area, and compare the proposed algorithms with existing measurement-based geolocation techniques. Our experiments show that PBG achieves marked improvements over current techniques and can geolocate a target IP address to within 2-4 miles of its actual location. By sending additional traffic into the network, PAPBG improves the resolution to within 1-3 miles.
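    One plausible form of the `Shifted Symmetrized Divergence' is sketched below: symmetrize the KL divergence and minimize over integer shifts of one histogram. The exact definition in the thesis may differ; the circular shift and the parameter values here are illustrative assumptions.

    ```python
    import numpy as np

    def kl(p, q, eps=1e-12):
        """Kullback-Leibler divergence between two PMFs over the same bins."""
        p = p + eps
        q = q + eps
        return float(np.sum(p * np.log(p / q)))

    def shifted_symmetrized_divergence(p, q, max_shift=10):
        """Symmetrized KL minimized over integer bin shifts of q, making the
        comparison symmetric and invariant to a constant RTT offset."""
        best = np.inf
        for s in range(-max_shift, max_shift + 1):
            q_s = np.roll(q, s)  # circular shift for simplicity
            best = min(best, kl(p, q_s) + kl(q_s, p))
        return best

    # Two RTT histograms that differ only by a shift score (near) zero.
    p = np.array([0.1, 0.6, 0.2, 0.1, 0.0, 0.0])
    q = np.roll(p, 2)
    print(shifted_symmetrized_divergence(p, q))
    ```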
  • Runtime Enforcement of Memory Safety for the C Programming Language
    (2011) Simpson, Matthew Stephen; Barua, Rajeev; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Memory access violations are a leading source of unreliability in C programs. Although the low-level features of the C programming language, like unchecked pointer arithmetic and explicit memory management, make it a desirable language for many programming tasks, their use often results in hard-to-detect memory errors. As evidence of this problem, a variety of methods exist for retrofitting C with software checks to detect memory errors at runtime. However, these techniques generally suffer from one or more practical drawbacks that have thus far limited their adoption. These weaknesses include the inability to detect all spatial and temporal violations, the use of incompatible metadata, the need for manual code modifications, and the tremendous runtime cost of providing complete safety. This dissertation introduces MemSafe, a compiler analysis and transformation for ensuring the memory safety of C programs at runtime while avoiding the above drawbacks. MemSafe makes several novel contributions that improve upon previous work and lower the runtime cost of achieving memory safety. These include (1) a method for modeling temporal errors as spatial errors, (2) a hybrid metadata representation that combines the most salient features of both object- and pointer-based approaches, and (3) a data-flow representation that simplifies optimizations for removing unneeded checks and unused metadata. Experimental results indicate that MemSafe is capable of detecting memory safety violations in real-world programs with lower runtime overhead than previous methods. Results show that MemSafe detects all known memory errors in multiple versions of two large and widely-used open source applications as well as six programs from a benchmark suite specifically designed for the evaluation of error detection tools. MemSafe enforces complete safety with an average overhead of 88% on 30 widely-used performance evaluation benchmarks. In comparison with previous work, MemSafe's average runtime overhead for one common benchmark suite (29%) is a fraction of that associated with the previous technique (133%) that, until now, had the lowest overhead among all existing complete and automatic methods that are capable of detecting both spatial and temporal violations.
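    MemSafe's first contribution, modeling temporal errors as spatial errors, can be pictured with a toy pointer-metadata model. This is a minimal sketch, not the actual compiler transformation, and the class and field names are hypothetical: freeing an object collapses its bounds, so a later dereference fails the same bounds check that catches out-of-bounds accesses.

    ```python
    class FatPointer:
        """Toy pointer carrying base/bound metadata, in the spirit of a
        hybrid object- and pointer-based metadata scheme."""
        def __init__(self, heap, base, size):
            self.heap, self.base, self.bound = heap, base, base + size

        def free(self):
            # Temporal error modeled as a spatial one: an empty bounds
            # range makes every later access an out-of-bounds access.
            self.base = self.bound = 0

        def load(self, offset):
            addr = self.base + offset
            if not (self.base <= addr < self.bound):
                raise MemoryError(f"memory safety violation at offset {offset}")
            return self.heap[addr]

    heap = list(range(100))
    p = FatPointer(heap, base=10, size=4)
    print(p.load(3))     # in bounds: ok
    try:
        p.load(4)        # spatial violation: one past the end
    except MemoryError as e:
        print(e)
    p.free()
    try:
        p.load(0)        # use-after-free caught by the same spatial check
    except MemoryError as e:
        print(e)
    ```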
  • Modeling and Experimental Techniques to Demonstrate Nanomanipulation With Optical Tweezers
    (2011) Balijepalli, Arvind K.; Gupta, Satyandra K; LeBrun, Thomas W; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The development of truly three-dimensional nanodevices is currently impeded by the absence of effective prototyping tools at the nanoscale. Optical trapping is well established for flexible three-dimensional manipulation of components at the microscale. However, it has so far not been demonstrated to confine nanoparticles long enough to be useful in nanoassembly applications. Therefore, as part of this work we demonstrate new techniques that successfully extend optical trapping to nanoscale manipulation. In order to extend optical trapping to the nanoscale, we must overcome certain challenges. For the same incident beam power, the optical trapping forces acting on a nanoparticle within an optical trap are very weak in comparison with the forces acting on microscale particles. Consequently, due to Brownian motion, the nanoparticle often exits the trap in a very short period of time. We improve the performance of optical traps at the nanoscale by using closed-loop control. Furthermore, we show through laboratory experiments that control systems allow us to localize nanoparticles in the trap for sufficient time to be useful in nanoassembly applications, under conditions in which a static trap set to the same power as the controller is unable to confine a same-sized particle. Before controlled optical trapping can be demonstrated in the laboratory, key tools must first be developed. We implement Langevin dynamics simulations to model the interaction of nanoparticles with an optical trap. Physically accurate simulations provide a robust platform to test new methods to characterize and improve the performance of optical tweezers at the nanoscale, but depend on accurate trapping force models. Therefore, we have also developed two new laboratory-based force measurement techniques that overcome the drawbacks of conventional force measurements, which do not accurately account for the weak interaction of nanoparticles with an optical trap. Finally, we use numerical simulations to develop new control algorithms that demonstrate significantly enhanced trapping of nanoparticles, and we implement these techniques in the laboratory. The algorithms and characterization tools developed as part of this work will allow the development of optical trapping instruments that can confine nanoparticles for longer periods of time than is currently possible for a given beam power. Furthermore, the low average power achieved by the controller makes this technique especially suitable for manipulating biological specimens, and it is also generally beneficial to nanoscale prototyping applications. Therefore, the capabilities developed as part of this work, and the technology that results from it, may enable the prototyping of three-dimensional nanodevices that are critically required in many applications.
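    The Langevin dynamics simulations mentioned above have a standard minimal form. Below is a sketch of an overdamped bead in a harmonic approximation of the trap; all parameter values are illustrative assumptions, not taken from the dissertation.

    ```python
    import numpy as np

    # Illustrative parameters for a nanoparticle in water (SI units).
    kB, T = 1.380649e-23, 300.0          # Boltzmann constant, temperature
    radius = 50e-9                        # particle radius
    eta = 8.9e-4                          # viscosity of water
    gamma = 6 * np.pi * eta * radius      # Stokes drag coefficient
    k = 1e-7                              # trap stiffness (N/m), harmonic model
    dt, steps = 1e-6, 100_000

    rng = np.random.default_rng(0)
    x = np.empty(steps)
    x[0] = 0.0
    noise = np.sqrt(2 * kB * T * dt / gamma)  # thermal kick per step
    for i in range(steps - 1):
        # Overdamped Langevin update: drift toward the trap center plus
        # a Brownian increment.
        x[i + 1] = x[i] - (k / gamma) * x[i] * dt + noise * rng.standard_normal()

    # The simulated spread should approach the equipartition value sqrt(kB*T/k).
    print(f"simulated std: {x.std():.3e} m, theory: {np.sqrt(kB*T/k):.3e} m")
    ```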
  • INFORMATION THEORETIC SECRET KEY GENERATION: STRUCTURED CODES AND TREE PACKING
    (2010) NITINAWARAT, SIRIN; Narayan, Prakash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This dissertation deals with a multiterminal source model for secret key generation by multiple network terminals with prior and privileged access to a set of correlated signals complemented by public discussion among themselves. Emphasis is placed on a characterization of secret key capacity, i.e., the largest rate of an achievable secret key, and on algorithms for key construction. Information theoretic security requirements of increasing stringency (weak, strong, and perfect secrecy), as well as different types of sources (finite-valued and continuous), are studied. Specifically, three different models are investigated. First, we consider strong secrecy generation for a discrete multiterminal source model. We discover a connection between secret key capacity and a new source coding concept of ``minimum information rate for signal dissemination,'' which is of independent interest in multiterminal data compression. Our main contribution is to show for this discrete model that structured linear codes suffice to generate a strong secret key of the best rate. Second, strong secrecy generation is considered for models with continuous observations, in particular jointly Gaussian signals. In the absence of suitable analogs of source coding notions for the previous discrete model, new techniques are required for a characterization of secret key capacity as well as for the design of algorithms for secret key generation. Our proof of the secret key capacity result, in particular the converse proof, as well as our capacity-achieving algorithms for secret key construction based on structured codes and quantization for a model with two terminals, constitute the two main contributions for this second model. Last, we turn our attention to perfect secrecy generation for fixed signal observation lengths as well as for their asymptotic limits. In contrast with the analysis of the previous two models, which relies on probabilistic techniques, perfect secret key generation bears the essence of ``zero-error information theory,'' and accordingly we rely on mathematical techniques of a combinatorial nature. The model under consideration is the ``Pairwise Independent Network'' (PIN) model, in which every pair of terminals shares a random binary string, with the strings shared by distinct pairs of terminals being mutually independent. This model, which is motivated by practical aspects of a wireless communication network in which terminals communicate on the same frequency, results in three main contributions. First, the concept of perfect omniscience in data compression leads to a single-letter formula for the perfect secret key capacity of the PIN model; moreover, this capacity is shown to be achieved by linear noninteractive public communication, and coincides with the strong secret key capacity. Second, taking advantage of a multigraph representation of the PIN model, we put forth an efficient algorithm for perfect secret key generation based on a combinatorial concept of maximal packing of Steiner trees of the multigraph. When all the terminals seek to share perfect secrecy, the algorithm is shown to achieve capacity; when only a subset of terminals wish to share perfect secrecy, it is shown to achieve at least half of the capacity. Additionally, we obtain nonasymptotic and asymptotic bounds on the size and rate of the best perfect secret key generated by the algorithm. These bounds are of independent interest from a purely graph theoretic viewpoint, as they constitute new estimates for the maximum size and rate of Steiner tree packing of a given multigraph. Third, a particular configuration of the PIN model arises when a lone ``helper'' terminal aids all the other ``user'' terminals in generating perfect secrecy. This model has special features that enable us to obtain necessary and sufficient conditions for Steiner tree packing to achieve perfect secret key capacity.
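    When all terminals seek a shared key, the Steiner tree packing above reduces to spanning tree packing of the multigraph, which a short greedy routine can illustrate. A minimal sketch; greedy peeling is not guaranteed to find a maximum packing, and the graph and capacities below are made up:

    ```python
    def spanning_tree(nodes, capacity):
        """Kruskal-style pass: use one unit of capacity per chosen edge to
        connect all nodes; return the tree's edges, or None if disconnected."""
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        tree = []
        for (u, v), cap in capacity.items():
            if cap > 0 and find(u) != find(v):
                parent[find(u)] = find(v)
                tree.append((u, v))
        return tree if len(tree) == len(nodes) - 1 else None

    def greedy_tree_packing(nodes, capacity):
        """Repeatedly peel off spanning trees while the multigraph stays
        connected; in the PIN model each tree yields one perfectly secret bit."""
        trees = []
        while (tree := spanning_tree(nodes, capacity)) is not None:
            for e in tree:
                capacity[e] -= 1
            trees.append(tree)
        return trees

    # PIN model on 4 terminals: edge multiplicity = number of shared secret bits.
    cap = {("a","b"): 2, ("b","c"): 2, ("c","d"): 2, ("a","d"): 2, ("a","c"): 1}
    print(len(greedy_tree_packing({"a","b","c","d"}, cap)), "edge-disjoint spanning trees")
    ```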
  • TOPOLOGY CONTROL ALGORITHMS FOR RULE-BASED ROUTING
    (2010) Somasundaram, Kiran Kumar; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In this dissertation, we introduce a new topology control problem for rule-based link-state routing in autonomous networks. In this context, topology control is a mechanism to reduce the broadcast storm problem associated with link-state broadcasts. We focus on a class of topology control mechanisms called local-pruning mechanisms. Topology control by local pruning is an interesting multi-agent graph optimization problem, where every agent/router/station has access to only its local neighborhood information. Every agent selects a subset of its incident link-state information for broadcast. This constitutes the pruned link-state information (pruned graph) for routing. The objective for every agent is to select a minimal subset of the local link-state information while guaranteeing that the pruned graph preserves desired paths for routing. In topology control for rule-based link-state routing, the pruned link-state information must preserve desired paths that satisfy the rules of routing. The non-triviality in these problems arises from the fact that the pruning agents have access to only their local link-state information. Consequently, the rules of routing must have some property that allows specifying the global properties of the routes from the local properties of the graph. In this dissertation, we illustrate that rules described as algebraic path problems in idempotent semirings have these necessary properties. The primary contribution of this dissertation is identifying a policy for pruning which depends only on the local neighborhood but guarantees that the required global routing paths are preserved in the pruned graph. We show that for this local policy to ensure loop-free pruning, it is sufficient to have what is called an inflatory arc composition property. To prove the sufficiency, we prove a version of Bellman's optimality principle that extends to path-sets and minimal elements of partially ordered sets. As a motivating example, we present a stable path topology control mechanism, which ensures that the stable paths for routing are preserved after pruning. We show, using other examples, that the generic pruning works for many other rules of routing that are suitably described using idempotent semirings.
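    The algebraic path viewpoint is easy to make concrete: the same relaxation loop computes routes under different rules once the "plus" and "times" operations are swapped for a semiring's. Below is a minimal sketch with two idempotent semirings (shortest path and widest/bottleneck path); it is illustrative only, not the dissertation's pruning algorithm.

    ```python
    # Each semiring is (combine, extend, "zero" for combine, "one" for extend).
    SHORTEST = (min, lambda a, b: a + b, float("inf"), 0)   # min-plus
    WIDEST   = (max, min, 0, float("inf"))                  # max-min (bottleneck)

    def semiring_paths(nodes, arcs, source, semiring):
        """Generalized Bellman-Ford: relax |V|-1 times, combining path
        weights with the semiring's operations instead of fixed (min, +)."""
        combine, extend, zero, one = semiring
        d = {v: zero for v in nodes}
        d[source] = one
        for _ in range(len(nodes) - 1):
            for (u, v), w in arcs.items():
                d[v] = combine(d[v], extend(d[u], w))
        return d

    nodes = {"s", "a", "b", "t"}
    arcs = {("s","a"): 3, ("a","t"): 4, ("s","b"): 5, ("b","t"): 1}
    print(semiring_paths(nodes, arcs, "s", SHORTEST))  # hop costs added, min kept
    print(semiring_paths(nodes, arcs, "s", WIDEST))    # bottleneck widths, max kept
    ```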
  • Cellular Pattern Quantification and Automatic Benchmarking Dataset Generation on Confocal Microscopy Images
    (2010) Cui, Chi; JaJa, Joseph; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The distribution, directionality and motility of the actin fibers control cell shape, affect cell function, and differ in cancer versus normal cells. Quantification of actin structural changes is important for further understanding differences between cell types and for elucidating the effects and dynamics of drug interactions. We propose an image analysis framework to quantify the F-actin organization patterns in response to different pharmaceutical treatments. The main problems addressed include which features to quantify and what quantification measurements to compute when dealing with unlabeled confocal microscopy images. The resultant numerical features are very effective for profiling the functional mechanism and facilitating the comparison of different drugs. The analysis software was originally implemented in Matlab; more recently, the most time-consuming part of the feature extraction stage was implemented on an NVIDIA GPU using CUDA, where we obtain 15- to 20-fold speedups for different image sizes. We also propose a computational framework for generating synthetic images for validation purposes. The feature extraction is validated by visual inspection, and the quantification is validated by comparison with well-known biological facts. Future studies will further validate the algorithms and elucidate the molecular pathways and kinetics underlying the F-actin changes. This is the first study quantifying different structural formations of the same protein in intact cells. Since many anti-cancer drugs target the cytoskeleton, we believe that the quantitative image analysis method reported here will have broad applications to understanding the mechanisms of candidate pharmaceuticals.
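    A synthetic-image framework for validation can be sketched at its simplest: render randomly oriented fiber segments with known ground-truth statistics, then degrade the result the way an acquisition would. This is a toy model of my own devising to illustrate the idea, not the paper's generator:

    ```python
    import numpy as np

    def synthetic_fiber_image(size=128, n_fibers=40, seed=0):
        """Draw random straight 'actin fiber' segments into an image and add
        acquisition noise; the sampled angles serve as ground truth for
        validating directionality measurements."""
        rng = np.random.default_rng(seed)
        img = np.zeros((size, size))
        angles = rng.uniform(0, np.pi, n_fibers)
        for theta in angles:
            cx, cy = rng.uniform(10, size - 10, 2)
            t = np.linspace(-20, 20, 200)
            xs = np.clip(cx + t * np.cos(theta), 0, size - 1).astype(int)
            ys = np.clip(cy + t * np.sin(theta), 0, size - 1).astype(int)
            img[ys, xs] += 1.0
        img += rng.normal(0, 0.1, img.shape)   # acquisition noise
        return img, angles

    img, truth = synthetic_fiber_image()
    print(img.shape, f"{len(truth)} fibers with known orientations")
    ```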
  • Long-term Information Preservation and Access
    (2010) Song, Sang Chul; JaJa, Joseph F; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    An unprecedented amount of information encompassing almost every facet of human activities across the world is generated daily in the form of zeros and ones, and that is often the only form in which such information is recorded. A good fraction of this information needs to be preserved for periods of time ranging from a few years to centuries. Consequently, the problem of preserving digital information over the long term has attracted the attention of many organizations, including libraries, government agencies, scientific communities, and individual researchers. In this dissertation, we address three issues that are critical to ensure long-term information preservation and access. The first concerns the core requirement of how to guarantee the integrity of preserved contents. Digital information is in general very fragile because of the many ways errors can be introduced, including hardware and media degradation, hardware and software malfunction, operational errors, security breaches, and malicious alterations. To address this problem, we develop a new approach based on efficient and rigorous cryptographic techniques, which guarantees the integrity of preserved contents with extremely high probability even in the presence of malicious attacks. Our prototype implementation of this approach has been deployed and actively used for the past several years in a number of organizations, including the San Diego Supercomputer Center, the Chronopolis Consortium, North Carolina State University, and more recently the Government Printing Office. Second, we consider another crucial component in any preservation system: searching and locating information. The ever-growing size of a long-term archive and the temporality of each preserved item introduce a new set of challenges to providing fast retrieval of content based on a temporal query. The widely-used cataloguing scheme has serious scalability problems, and the standard full-text search approach has serious limitations since it does not deal appropriately with the temporal dimension and, in particular, is incapable of performing relevancy scoring according to the temporal context. To address these problems, we introduce two types of indexing schemes: a location indexing scheme and a full-text search indexing scheme. Our location indexing scheme provides optimal operations for inserting and locating a specific version of a preserved item given an item ID and a time point, and our full-text search indexing scheme efficiently handles the scalability problem while supporting relevancy scoring within the temporal context. Finally, we address the problem of organizing inter-related data so that future accesses and data exploration can be performed quickly. We consider web contents in particular, combining a link-analysis scheme with a graph partitioning scheme to place more closely related contents in the same standard web archive container. We conduct experiments that simulate random browsing of preserved contents, and show that our data organization scheme greatly reduces the number of containers that need to be accessed during a random browsing session. Our schemes have been tested against real-world data of significant scale and validated through extensive empirical evaluations.
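    The integrity component rests on standard cryptographic building blocks that are easy to illustrate. A minimal sketch using SHA-256 digests chained across audit rounds, so that tampering with any item or any earlier round is detectable; this is a generic construction, not the deployed system's actual protocol:

    ```python
    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def audit_round(items: dict, prev_round_hash: str) -> str:
        """Hash every preserved item, then bind the round to the previous one
        by hashing the concatenation; altering any item or any earlier round
        changes every later round hash."""
        per_item = "".join(digest(items[name]) for name in sorted(items))
        return digest((prev_round_hash + per_item).encode())

    archive = {"report.pdf": b"...contents...", "data.csv": b"1,2,3"}
    h1 = audit_round(archive, prev_round_hash="")
    tampered = dict(archive, **{"data.csv": b"1,2,4"})
    h2 = audit_round(tampered, prev_round_hash="")
    print(h1 != h2)   # True: the tamper is detectable
    ```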
  • Activity Representation from Video Using Statistical Models on Shape Manifolds
    (2010) Abdelkader, Mohamed F.; Chellappa, Rama; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Activity recognition from video data is a key computer vision problem with applications in surveillance, elderly care, etc. This problem is associated with modeling a representative shape which contains significant information about the underlying activity. In this dissertation, we present several approaches for view-invariant activity recognition via modeling shapes on various shape spaces and Riemannian manifolds. The first two parts of this dissertation deal with activity modeling and recognition using tracks of landmark feature points. The motion trajectories of points extracted from objects involved in the activity are used to build deformation shape models for each activity, and these models are used for classification and detection of unusual activities. In the first part of the dissertation, these models are represented by the recovered 3D deformation basis shapes corresponding to the activity, using a non-rigid structure from motion formulation. We use a theory for estimating the amount of deformation for these models from the visual data. We study the special case of ground plane activities in detail because of its importance in video surveillance applications. In the second part of the dissertation, we propose to model the activity by learning an affine invariant deformation subspace representation that captures the space of possible body poses associated with the activity. These subspaces can be viewed as points on a Grassmann manifold. We propose several statistical classification models on the Grassmann manifold that capture the statistical variations of the shape data while following the intrinsic Riemannian geometry of these manifolds. The last part of this dissertation addresses the problem of recognizing human gestures from silhouette images. We represent a human gesture as a temporal sequence of human poses, each characterized by a contour of the associated human silhouette. The shape of a contour is viewed as a point on the shape space of closed curves and, hence, each gesture is characterized and modeled as a trajectory on this shape space. We utilize the Riemannian geometry of this space to propose a template-based approach and a graphical-model-based approach for modeling these trajectories. The two models are designed in such a way as to account for the different invariance requirements in gesture recognition, while also capturing the statistical variations associated with the contour data.
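    Comparing the deformation subspaces above as points on the Grassmann manifold typically comes down to principal angles. A minimal numpy sketch of the geodesic distance between two subspaces; the dissertation's classifiers build fuller statistical models on top of such geometry:

    ```python
    import numpy as np

    def grassmann_distance(A, B):
        """Geodesic distance between span(A) and span(B): orthonormalize,
        take the SVD of the cross-Gram matrix, and read the principal
        angles off the singular values."""
        Qa, _ = np.linalg.qr(A)
        Qb, _ = np.linalg.qr(B)
        sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        theta = np.arccos(np.clip(sigma, -1.0, 1.0))  # principal angles
        return float(np.linalg.norm(theta))

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 3))          # two 3-dim subspaces of R^10
    B = A + 0.1 * rng.standard_normal((10, 3))
    print(grassmann_distance(A, A), grassmann_distance(A, B))
    ```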
  • Recognizing Objects And Reasoning About Their Interactions
    (2010) Kembhavi, Aniruddha; Davis, Larry S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The task of scene understanding involves recognizing the different objects present in the scene, segmenting the scene into meaningful regions, and obtaining a holistic understanding of the activities taking place in the scene. Each of these problems has received considerable interest within the computer vision community. We present contributions to two aspects of visual scene understanding. First, we explore multiple methods of feature selection for the problem of object detection. We demonstrate the use of Principal Component Analysis to detect avifauna in field observation videos. We improve on existing approaches by making robust decisions based on regional features and by a feature selection strategy that chooses different features in different parts of the image. We then demonstrate the use of Partial Least Squares to detect vehicles in aerial and satellite imagery. We propose two new feature sets: Color Probability Maps capture the color statistics of vehicles and their surroundings, and Pairs of Pixels capture the structural characteristics of objects. A powerful feature selection analysis based on Partial Least Squares is employed to deal with the resulting high dimensional feature space (almost 70,000 dimensions). We also propose an Incremental Multiple Kernel Learning (IMKL) scheme to detect vehicles in a traffic surveillance scenario. Obtaining task- and scene-specific datasets of visual categories is far more tedious than obtaining a generic dataset of the same classes; our IMKL approach initializes on a generic training database and then tunes itself to the classification task at hand. Second, we develop a video understanding system for scene elements, such as bus stops, crosswalks, and intersections, that are characterized more by qualitative activities and geometry than by intrinsic appearance. The domain models for scene elements are not learned from a corpus of video but are instead naturally elicited by humans and represented as probabilistic logic rules within a Markov Logic Network framework. Human-elicited models, however, represent object interactions as they occur in the 3D world rather than describing the projection of their appearance in some specific 2D image plane. We bridge this gap by recovering qualitative scene geometry to analyze object interactions in the 3D world and then reasoning about scene geometry, occlusions, and common sense domain knowledge using a set of meta-rules.
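    PCA-based detection of the kind mentioned first can be sketched as reconstruction-error scoring: learn an appearance subspace from example object patches and score candidate windows by how well the subspace explains them. Illustrative only; the actual detector's regional-feature strategy is richer, and the data below is synthetic:

    ```python
    import numpy as np

    def fit_pca(patches, n_components=5):
        """Learn a low-dimensional appearance subspace from example object
        patches (rows are flattened image windows)."""
        mean = patches.mean(axis=0)
        _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def detection_score(window, mean, components):
        """Reconstruction error of the window in the learned subspace;
        lower means the window looks more like the training objects."""
        centered = window - mean
        recon = components.T @ (components @ centered)
        return float(np.linalg.norm(centered - recon))

    rng = np.random.default_rng(0)
    basis = rng.standard_normal((5, 64))
    train = rng.standard_normal((100, 5)) @ basis    # object windows near a 5-dim subspace
    mean, comps = fit_pca(train)
    print(detection_score(rng.standard_normal(5) @ basis, mean, comps))  # object-like: near 0
    print(detection_score(rng.standard_normal(64), mean, comps))         # background: large
    ```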