A. James Clark School of Engineering
Permanent URI for this community: http://hdl.handle.net/1903/1654
The collections in this community comprise faculty research works, as well as graduate theses and dissertations.
8 results
Item INFORMATION THEORETIC SECRET KEY GENERATION: STRUCTURED CODES AND TREE PACKING (2010)
NITINAWARAT, SIRIN; Narayan, Prakash; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
This dissertation deals with a multiterminal source model for secret key generation by multiple network terminals with prior and privileged access to a set of correlated signals, complemented by public discussion among themselves. Emphasis is placed on a characterization of secret key capacity, i.e., the largest rate of an achievable secret key, and on algorithms for key construction. Various information theoretic security requirements of increasing stringency (weak, strong, and perfect secrecy), as well as different types of sources (finite-valued and continuous), are studied. Specifically, three different models are investigated. First, we consider strong secrecy generation for a discrete multiterminal source model. We discover a connection between secret key capacity and a new source coding concept of ``minimum information rate for signal dissemination,'' which is of independent interest in multiterminal data compression. Our main contribution is to show for this discrete model that structured linear codes suffice to generate a strong secret key of the best rate. Second, strong secrecy generation is considered for models with continuous observations, in particular jointly Gaussian signals. In the absence of suitable analogs of the source coding notions used for the discrete model, new techniques are required to characterize secret key capacity and to design algorithms for secret key generation. Our proof of the secret key capacity result, in particular the converse proof, and our capacity-achieving algorithms for secret key construction based on structured codes and quantization for a model with two terminals, constitute the two main contributions for this second model.
Last, we turn our attention to perfect secrecy generation for fixed signal observation lengths as well as for their asymptotic limits. In contrast with the analysis of the previous two models that relies on probabilistic techniques, perfect secret key generation bears the essence of ``zero-error information theory,'' and accordingly, we rely on mathematical techniques of a combinatorial nature. The model under consideration is the ``Pairwise Independent Network'' (PIN) model in which every pair of terminals share a random binary string, with the strings shared by distinct pairs of terminals being mutually independent. This model, which is motivated by practical aspects of a wireless communication network in which terminals communicate on the same frequency, yields three main contributions. First, the concept of perfect omniscience in data compression leads to a single-letter formula for the perfect secret key capacity of the PIN model; moreover, this capacity is shown to be achieved by linear noninteractive public communication, and coincides with strong secret key capacity. Second, taking advantage of a multigraph representation of the PIN model, we put forth an efficient algorithm for perfect secret key generation based on a combinatorial concept of maximal packing of Steiner trees of the multigraph. When all the terminals seek to share perfect secrecy, the algorithm is shown to achieve capacity. When only a subset of terminals wish to share perfect secrecy, the algorithm is shown to achieve at least half of the capacity. Additionally, we obtain nonasymptotic and asymptotic bounds on the size and rate of the best perfect secret key generated by the algorithm. These bounds are of independent interest from a purely graph theoretic viewpoint as they constitute new estimates for the maximum size and rate of Steiner tree packing of a given multigraph.
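The tree-packing algorithm summarized above can be illustrated in miniature. When all terminals seek to share secrecy, Steiner trees for the full terminal set reduce to spanning trees, and each packed tree can be turned into one bit of perfect secret key (the terminals combine the bits shared on the tree edges so that all of them agree on a single bit unknown to an eavesdropper). The sketch below is a simple greedy packing of edge-disjoint spanning trees in a multigraph, offered only as an illustration; it is not the dissertation's capacity-achieving algorithm, and greedy packing is not maximal in general.

```python
# Greedy packing of edge-disjoint spanning trees in a multigraph.
# Terminals are 0..n-1; each edge (u, v) stands for one random bit
# shared by the pair (u, v) in the PIN model.

def find(parent, x):
    """Union-find root lookup with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def greedy_tree_packing(n, edges):
    """Repeatedly extract a spanning tree (Kruskal-style) from the
    remaining edges; stop when no full spanning tree is left."""
    remaining = list(edges)
    trees = []
    while True:
        parent = list(range(n))
        tree, leftover = [], []
        for u, v in remaining:
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v))
            else:
                leftover.append((u, v))
        if len(tree) < n - 1:        # no spanning tree could be formed
            break
        trees.append(tree)
        remaining = leftover
    return trees

# A triangle with every edge doubled packs three edge-disjoint
# spanning trees, i.e. three bits of perfect secret key.
edges = [(0, 1), (1, 2), (2, 0)] * 2
print(len(greedy_tree_packing(3, edges)))  # 3
```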
Third, a particular configuration of the PIN model arises when a lone ``helper'' terminal aids all the other ``user'' terminals in generating perfect secrecy. This model has special features that enable us to obtain necessary and sufficient conditions for Steiner tree packing to achieve perfect secret key capacity.

Item TOPOLOGY CONTROL ALGORITHMS FOR RULE-BASED ROUTING (2010)
Somasundaram, Kiran Kumar; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this dissertation, we introduce a new topology control problem for rule-based link-state routing in autonomous networks. In this context, topology control is a mechanism to reduce the broadcast storm problem associated with link-state broadcasts. We focus on a class of topology control mechanisms called local-pruning mechanisms. Topology control by local pruning is an interesting multi-agent graph optimization problem, where every agent/router/station has access to only its local neighborhood information. Every agent selects a subset of its incident link-state information for broadcast. This constitutes the pruned link-state information (pruned graph) for routing. The objective for every agent is to select a minimal subset of the local link-state information while guaranteeing that the pruned graph preserves desired paths for routing. In topology control for rule-based link-state routing, the pruned link-state information must preserve desired paths that satisfy the rules of routing. The nontriviality in these problems arises from the fact that the pruning agents have access to only their local link-state information. Consequently, the rules of routing must have some property that allows the global properties of the routes to be specified from the local properties of the graph. In this dissertation, we illustrate that rules described as algebraic path problems in idempotent semirings have these necessary properties.
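To make the notion of an algebraic path problem in an idempotent semiring concrete, the sketch below computes path weights with one generic fixed-point iteration and swaps in two semirings: shortest path (min, +) and widest path (max, min). The graph and weights are hypothetical, and this textbook solver illustrates the framework only; it is not the pruning mechanism developed in the dissertation.

```python
# Generic algebraic-path solver over an idempotent semiring
# (plus, times, zero, one), iterated to a fixed point.
import math

def semiring_paths(n, arcs, plus, times, zero, one):
    """d[i][j] = semiring 'sum' over paths i -> j of the semiring
    'product' of arc weights."""
    d = [[zero] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = one
    for u, v, w in arcs:
        d[u][v] = plus(d[u][v], w)
    changed = True
    while changed:                   # Bellman-Ford-style relaxation
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    cand = plus(d[i][j], times(d[i][k], d[k][j]))
                    if cand != d[i][j]:
                        d[i][j], changed = cand, True
    return d

arcs = [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 7.0)]
# Shortest-path semiring: plus = min, times = +, zero = inf, one = 0.
sp = semiring_paths(3, arcs, min, lambda a, b: a + b, math.inf, 0.0)
print(sp[0][2])   # 5.0: the two-hop route via node 1 is shorter
# Widest-path semiring: plus = max, times = min (max-bandwidth routing).
wp = semiring_paths(3, arcs, max, min, 0.0, math.inf)
print(wp[0][2])   # 7.0: the direct arc has the larger bottleneck
```

Swapping the semiring changes the routing rule while the solver stays fixed, which suggests how a single pruning framework can cover a whole family of routing rules.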
The primary contribution of this dissertation is identifying a policy for pruning, which depends only on the local neighborhood, but guarantees that required global routing paths are preserved in the pruned graph. We show that for this local policy to ensure loop-free pruning, it is sufficient to have what is called an inflatory arc composition property. To prove the sufficiency, we prove a version of Bellman's optimality principle that extends to path-sets and minimal elements of partially ordered sets. As a motivating example, we present a stable path topology control mechanism, which ensures that the stable paths for routing are preserved after pruning. We show, using other examples, that the generic pruning works for many other rules of routing that are suitably described using idempotent semirings.

Item Direct Numerical Simulation of Non-Premixed Flame Extinction Phenomena (2010)
Narayanan, Praveen; Trouve, Arnaud C; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Non-premixed flame extinction phenomena are relevant in a variety of combusting environments, including but not limited to diesel engines, pool fires, and fire suppression scenarios. These disparate phenomena are controlled by various parameters that contain information on flame stretch, heat losses, composition of the fuel and oxidizer supply streams, etc. Direct Numerical Simulation (DNS) is used in the present study to provide fundamental insight on diffusion flame extinction under non-adiabatic combustion conditions. The list of DNS configurations includes: (C1) counterflow laminar flames with soot formation and thermal radiation transport; (C2) coflow turbulent flames with soot formation and thermal radiation transport; (C3) counterflow laminar and turbulent flames interacting with a mist-like water spray.
Configurations C1 and C2 use single-step chemistry while configuration C3 uses detailed chemistry (all cases correspond to ethylene-air combustion). Configuration C1 is also treated using large Activation Energy Asymptotics (AEA). The AEA analysis is based on a classical formulation that is extended to include thermal radiation transport with both emission and absorption effects; the analysis also includes soot dynamics. The AEA analysis provides a flame extinction criterion in the form of a critical Damköhler number criterion. The DNS results are used to test the validity of this flame extinction criterion. In configuration C1, flame extinction occurs as a result of flame stretch or radiative cooling; a variation of configuration C1 is considered in which the oxidizer stream contains a variable amount of soot mass. In configuration C2, flame weakening occurs as a result of radiative cooling; this effect is magnified by artificially increasing the mean Planck soot absorption coefficient. In configuration C3, flame extinction occurs as a result of flame stretch and evaporative cooling. In all studied cases, the critical Damköhler number criterion successfully predicts transition to extinction; this result supports the unifying concept of a flame Damköhler number Da and the idea that different extinction phenomena may be described by a single critical value of Da.

Item Sparse and Redundant Representations for Inverse Problems and Recognition (2010)
Patel, Vishal M.; Chellappa, Rama; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition.
Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors, working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed.
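The sparse-recovery machinery underlying these CS-based modalities can be illustrated with a textbook iteration. The sketch below recovers a sparse vector from a reduced set of random linear measurements by iterative soft-thresholding (ISTA); the measurement matrix, sizes, and parameters are all hypothetical, and this generic iteration stands in for, rather than reproduces, the gradient-domain reconstruction with the generalized Poisson solver described above.

```python
# ISTA: iterative soft-thresholding for sparse recovery from y = A x.
import random

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return [max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0) for z in v]

def ista(A, y, lam, step, iters):
    n = len(A[0])
    At = [list(col) for col in zip(*A)]   # transpose of A
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]   # y - A x
        g = matvec(At, r)                                  # A^T (y - A x)
        x = soft([xi + step * gi for xi, gi in zip(x, g)], lam * step)
    return x

random.seed(0)
m, n = 20, 40                      # fewer measurements than unknowns
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[3], x_true[17] = 1.0, -0.5  # a 2-sparse signal
y = matvec(A, x_true)
x_hat = ista(A, y, lam=0.01, step=0.1, iters=1000)
support = sorted(sorted(range(n), key=lambda i: -abs(x_hat[i]))[:2])
print(support)                     # the true support is recovered
```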
The new scheme also presents many new applications and advantages, which include strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class from given training examples by minimizing the representation error subject to a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination robust face recognition and automatic target recognition are presented.

Item Primal-dual interior-point algorithms for linear programs with many inequality constraints (2010)
Winternitz, Luke; Tits, Andre L; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Linear programs (LPs) are one of the most basic and important classes of constrained optimization problems, involving the optimization of linear objective functions over sets defined by linear equality and inequality constraints. LPs have applications to a broad range of problems in engineering and operations research, and often arise as subproblems for algorithms that solve more complex optimization problems. ``Unbalanced'' inequality-constrained LPs with many more inequality constraints than variables are an important subclass of LPs. Under a basic non-degeneracy assumption, only a small number of the constraints can be active at the solution--it is only this active set that is critical to the problem description. On the other hand, the additional constraints make the problem harder to solve.
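The active-set observation above is easy to see numerically. The toy sketch below builds a two-variable LP with 50 random inequality constraints, finds the optimum by brute-force vertex enumeration (fine at this size, hopeless at scale, which is precisely what motivates constraint reduction), and counts the constraints that are tight at the solution. All numbers are hypothetical.

```python
# A tiny 'unbalanced' LP: 2 variables, 50 inequality constraints.
import itertools, math, random

def solve_lp_2d(c, A, b):
    """min c.x subject to A[i].x <= b[i], by enumerating the feasible
    intersection points of constraint pairs (brute force)."""
    best, best_x = math.inf, None
    for i, j in itertools.combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if abs(det) < 1e-12:
            continue                              # parallel constraints
        x = ((b[i] * a4 - a2 * b[j]) / det,
             (a1 * b[j] - b[i] * a3) / det)
        if all(ai[0] * x[0] + ai[1] * x[1] <= bi + 1e-9
               for ai, bi in zip(A, b)):
            val = c[0] * x[0] + c[1] * x[1]
            if val < best:
                best, best_x = val, x
    return best_x

random.seed(1)
m = 50                                            # many more constraints
A = [(math.cos(t), math.sin(t))                   # than variables
     for t in (random.uniform(0, 2 * math.pi) for _ in range(m))]
b = [1.0] * m                                     # tangent lines to a disc
x = solve_lp_2d((1.0, 0.0), A, b)                 # minimize x1
active = sum(1 for ai, bi in zip(A, b)
             if abs(ai[0] * x[0] + ai[1] * x[1] - bi) < 1e-8)
print(active)                                     # only 2 of 50 are tight
```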
While modern ``interior-point'' algorithms have become recognized as some of the best methods for solving large-scale LPs, they may not be recommended for unbalanced problems, because their per-iteration work does not scale well with the number of constraints. In this dissertation, we investigate ``constraint-reduced'' interior-point algorithms designed to efficiently solve unbalanced LPs. At each iteration, these methods construct search directions based only on a small working set of constraints, while ignoring the rest. In this way, they significantly reduce their per-iteration work and, hopefully, their overall running time. In particular, we focus on constraint-reduction methods for the highly efficient primal-dual interior-point (PDIP) algorithms. We propose and analyze a convergent constraint-reduced variant of Mehrotra's predictor-corrector PDIP algorithm, the algorithm implemented in virtually every interior-point software package for linear (and convex-conic) programming. We prove global and local quadratic convergence of this algorithm under a very general class of constraint selection rules and under minimal assumptions. We also propose and analyze two regularized constraint-reduced PDIP algorithms (with similar convergence properties) designed to deal directly with a type of degeneracy that constraint-reduced interior-point algorithms are often subject to. Prior schemes for dealing with this degeneracy could end up negating the benefit of constraint-reduction.
Finally, we investigate the performance of our algorithms by applying them to several test and application problems, and show that our algorithms often outperform alternative approaches.

Item COMPUTATIONAL STUDIES ON DROPLET DYNAMICS AT INTERSECTING FLOWS IN MICROFLUIDIC JUNCTIONS (2010)
Mamidi, Sai Kishore Reddy; Dimitrakopoulos, Panagiotis; Chemical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The current thesis involves a computational study of drop dynamics in microfluidic junctions, at the moderate capillary number of Ca = 0.1. We utilize a three-dimensional Spectral Boundary Element algorithm to determine the drop motion in the presence of intersecting lateral flows in microfluidic T-junctions and cross-junctions, and analyze the effect on drop deformation and motion with varying shear rates in the channels leading to the junctions, and for viscosity ratios of 0.2 and 20.0 between the drop and the surrounding fluid. We find that the presence of intersecting flows drastically affects the transient behavior at the junctions, and the drop reaches steady state farther away, both upstream and downstream of these junctions. The time taken to reach steady state in the T-junctions was found to be significantly greater than that in the cross-junction, under identical conditions. Drop velocities were found to be a linear function of the effective shear rate in the channel, and length scale fluctuations as high as 30 percent were observed in the junction region for the cases studied in the thesis. We observed that the excess pressure drop with respect to the flow of a single-phase fluid was strongly related to the length of the droplet at a given spatial coordinate. The peak surface area of the drop in the junction was found to be a slightly non-linear function of the flow rates in the lateral channels, and almost all of the surface area increase was occurring at the head of the drop, in the direction of the flow.
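The capillary number quoted above is the ratio of viscous to interfacial-tension stresses, Ca = μU/γ. A one-line sketch, with hypothetical property values chosen only to land in the moderate Ca = 0.1 regime studied in the thesis:

```python
def capillary_number(mu, U, gamma):
    """Ca = mu * U / gamma: mu is the carrier-fluid viscosity [Pa s],
    U a characteristic velocity [m/s], gamma the interfacial tension [N/m]."""
    return mu * U / gamma

# Hypothetical water-like carrier fluid in a microchannel:
print(round(capillary_number(mu=1e-3, U=5e-3, gamma=5e-5), 3))  # 0.1
```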
Velocity was found to be a weak, inverse function of the viscosity ratio, and the increase in drop surface area was found to be greater in drops with lower viscosity. It was found that the junction bend radius/smoothness had a more significant effect on the dynamics of the drop in a T-junction, compared to that in a cross-junction.

Item End-of-Life and Constant Rate Reliability Modeling for Semiconductor Packages Using Knowledge-Based Test Approaches (2009)
Yang, Liyu; Bernstein, Joseph B; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
End-of-life and constant rate reliability modeling for semiconductor packages are the focus of this dissertation. Knowledge-based testing approaches are applied, and the test-to-failure approach is shown to be reliable. First of all, the end-of-life AF models for solder joint reliability are studied. The research results show that using one universal AF model for all packages is a flawed approach. An assessment matrix is generated to guide the application of AF models. The AF models chosen should be either assessed based on available data or validated through accelerated stress tests. A common model can be applied if the packages have similar structures and materials. The studies show that different AF models will be required for SnPb solder joints and SAC lead-free solder joints. Second, solder bumps under power cycling conditions are found to follow constant rate reliability models due to variations of the operating conditions. Case studies demonstrate that a constant rate reliability model is appropriate to describe non solder joint related semiconductor package failures as well. Third, the dissertation shows that rate models based on the Chi-square approach do not correlate well with the expected failure mechanisms in field applications. The estimation of the upper bound using a Chi-square value from zero failures is flawed.
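To make the zero-failure criticism concrete: the classical one-sided upper confidence bound on a constant failure rate after r failures in T device-hours is chi-square based, and is equivalent to a Poisson tail condition that can be solved by bisection with only the standard library. The numbers below are hypothetical; as the text argues, the zero-failure bound is driven entirely by the chosen confidence level, not by any observed failure mechanism.

```python
# Upper confidence bound on a constant failure rate, equivalent to the
# chi-square formula chi2(conf, 2r+2) / (2T), via the Poisson identity.
import math

def poisson_cdf(mu, r):
    return sum(math.exp(-mu) * mu ** k / math.factorial(k)
               for k in range(r + 1))

def rate_upper_bound(r, T, conf=0.90):
    """Smallest rate lam with P(Poisson(lam * T) <= r) = 1 - conf."""
    lo, hi = 0.0, (r + 50.0) / T          # bracket, then bisect
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if poisson_cdf(mid * T, r) > 1.0 - conf:
            lo = mid
        else:
            hi = mid
    return hi

# Zero failures in 1e6 device-hours: the 90% bound is ln(10)/T, about
# 2.3 failures per million hours -- with no failure data behind it.
print(round(rate_upper_bound(0, 1e6) * 1e6, 2))   # 2.3
```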
The dissertation emphasizes that failure data are required for failure rate estimation. A simple approach is proposed that provides much tighter bounds than other available approaches. Last, the reliability of solder bumps in flip chip packages under power cycling conditions is studied. The bump materials and underfill materials significantly influence the reliability of the solder bumps. A compatible set of bump and underfill materials will dramatically improve the end-of-life reliability of solder bumps under power cycling loads, and the bump material is one of the most significant factors. Compared to the field failure data obtained, the end-of-life model does not predict the failures in the field, which are closer to an approximately constant failure rate. In addition, the studies find that an improper underfill material could change the failure location from solder bump cracking to ILD cracking or BGA solder joint failures.

Item Statistical and Geometric Modeling of Spatio-Temporal Patterns for Video Understanding (2009)
Turaga, Pavan; Chellappa, Ramalingam; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Spatio-temporal patterns abound in the real world, and understanding them computationally holds the promise of enabling a large class of applications such as video surveillance, biometrics, computer graphics and animation. In this dissertation, we study models and algorithms to describe complex spatio-temporal patterns in videos for a wide range of applications. The spatio-temporal pattern recognition problem involves recognizing an input video as an instance of a known class. For this problem, we show that a first-order Gauss-Markov process is an appropriate model to describe the space of primitives. We then show that the space of primitives is not a Euclidean space but a Riemannian manifold.
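The first-order Gauss-Markov primitive mentioned above is, in its simplest scalar form, an AR(1) recursion. The sketch below simulates it and checks the stationary variance; the parameters are illustrative only, since the dissertation's primitives are multivariate features extracted from video.

```python
# Scalar first-order Gauss-Markov (AR(1)) process:
#   x[t] = a * x[t-1] + w[t],  w[t] ~ N(0, sigma^2).
import random

def gauss_markov(a, sigma, steps, seed=0):
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        x = a * x + rng.gauss(0.0, sigma)
        xs.append(x)
    return xs

xs = gauss_markov(a=0.9, sigma=1.0, steps=20000)
# For |a| < 1 the process is stationary with variance sigma^2 / (1 - a^2).
var = sum(v * v for v in xs) / len(xs)
print(abs(var - 1.0 / (1.0 - 0.9 ** 2)) < 1.0)   # True: close to ~5.26
```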
We use the geometric properties of the manifold of primitives to define distances and statistics. This then paves the way to model temporal variations of the primitives. We then show applications of these techniques in the problem of activity recognition and pattern discovery from long videos. The pattern discovery problem, on the other hand, requires uncovering patterns from large datasets in an unsupervised manner for applications such as automatic indexing and tagging. Most state-of-the-art techniques index videos according to the global content in the scene, such as color, texture, and brightness. In this dissertation, we discuss the problem of activity based indexing of videos. We examine the various issues involved in such an effort and describe a general framework to address the problem. We then design a cascade of dynamical systems model for clustering videos based on their dynamics. We augment the traditional dynamical systems model in two ways. Firstly, we describe activities as a cascade of dynamical systems. This significantly enhances the expressive power of the model while retaining many of the computational advantages of using dynamical models. Secondly, we also derive methods to incorporate view and rate-invariance into these models so that similar actions are clustered together irrespective of the viewpoint or the rate of execution of the activity. We also derive algorithms to learn the model parameters from a video stream and demonstrate how a given video sequence may be segmented into different clusters where each cluster represents an activity. Finally, we show the broader impact of the algorithms and tools developed in this dissertation for several image-based recognition problems that involve statistical inference over non-Euclidean spaces. We demonstrate how an understanding of the geometry of the underlying space leads to methods that are more accurate than traditional approaches.
We present examples in shape analysis, object recognition, video-based face recognition, and age-estimation from facial features to demonstrate these ideas.