UMD Theses and Dissertations

Permanent URI for this collection: http://hdl.handle.net/1903/3

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.


Search Results

Now showing 1 - 10 of 479
  • Item
    A SCANNING SQUID MICROSCOPE FOR IMAGING HIGH-FREQUENCY MAGNETIC FIELDS
    (2009) Vlahacos, Constantine Peter; Wellstood, Frederick C.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis examines the design and operation of a large-bandwidth scanning SQUID microscope for spatially imaging high-frequency magnetic fields. Towards this end, I present results on a cryo-cooled 4.2 K scanning SQUID microscope with a bandwidth of dc to 2 GHz and a sensitivity of about 52.4 nT per sample. By using a thin-film hysteretic Nb dc-SQUID and a pulsed sampling technique, rather than a non-hysteretic SQUID and a flux-locked loop, the bandwidth limitation of existing scanning SQUID microscopes is overcome. The microscope allows non-contact images of time-varying magnetic fields to be taken of room-temperature samples with time steps down to 50 ps and spatial resolution ultimately limited by the size of the SQUID to about 10 micrometers. The new readout scheme involves repeatedly pulsing the bias current to the dc SQUID while the voltage across the SQUID is monitored. Using a fixed pulse amplitude and applying a fixed dc magnetic flux allows the SQUID to measure the applied magnetic flux with a sampling time, set by the pulse length, of about 400 ps. To demonstrate the capabilities of the microscope, I imaged magnetic fields from 0 Hz (static fields) up to 4 GHz. Samples included a magnetic loop, microstrip transmission lines, and microstrip lines with a break in order to identify and isolate electrical opens in circuits. Finally, I discuss the operation and modeling of the SQUID and how to further increase the bandwidth of the microscope to upwards of 10 GHz.
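    The pulsed-sampling readout described above can be illustrated with a short toy simulation: pulse the bias, record whether the hysteretic SQUID switches, and step the pulse delay to sample a fast waveform in equivalent time. The Python sketch below assumes a sigmoidal switching probability, a 1 GHz test signal, and 50 ps delay steps; none of these parameters come from the thesis, and the code is not the instrument's readout software.

    # Toy simulation of a pulsed-sampling dc-SQUID readout (equivalent-time sampling).
    # All parameters (pulse count, sigmoid width, signal shape) are illustrative
    # assumptions, not values from the thesis.
    import numpy as np

    rng = np.random.default_rng(0)

    def switch_probability(flux, width=0.05):
        """Probability that the hysteretic SQUID switches to the voltage state
        during one bias pulse, modeled as a smooth function of applied flux
        (in units of the flux quantum Phi_0)."""
        return 1.0 / (1.0 + np.exp(-flux / width))

    def applied_flux(t, f_signal=1e9, amplitude=0.1):
        """High-frequency test signal coupled into the SQUID loop."""
        return amplitude * np.sin(2 * np.pi * f_signal * t)

    # Equivalent-time sampling: step the pulse delay in 50 ps increments and
    # average many pulses per delay to estimate the switching fraction.
    delays = np.arange(0, 2e-9, 50e-12)
    pulses_per_delay = 2000
    estimates = []
    for t in delays:
        p = switch_probability(applied_flux(t))
        switched = rng.random(pulses_per_delay) < p      # one Bernoulli trial per pulse
        p_hat = np.clip(switched.mean(), 1e-3, 1 - 1e-3)
        # Invert the (assumed) sigmoid to recover the flux estimate at this delay.
        estimates.append(0.05 * np.log(p_hat / (1 - p_hat)))

    print(np.round(estimates[:8], 3))   # reconstructed waveform samples (Phi_0)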
  • Item
    Flow Control in Wireless Ad-hoc Networks
    (2009) Papageorgiou, Georgios; Baras, John S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    We are interested in maximizing the Transmission Control Protocol (TCP) throughput between two nodes in a single-cell wireless ad-hoc network. For this, we follow a cross-layer approach by first developing an analytical model that captures the effect of the wireless channel and the MAC layer on TCP. The analytical model gives the time evolution of the TCP window size, which is described by a stochastic differential equation driven by a point process. The point process represents the arrival of acknowledgments sent by the TCP receiver to the sender as part of the self-regulating mechanism of the flow control protocol. Through this point process we achieve a cross-layer integration between the physical layer, the MAC layer and TCP. The intervals between successive points describe how the packet drops at the wireless channel and the delays due to retransmission at the MAC layer affect the window size at the TCP layer. We fully describe the statistical behavior of the point process by first computing the p.d.f. of the inter-arrival intervals and then the compensator and the intensity of the process, parametrized by the quantities that describe the MAC layer and the wireless channel. To achieve analytical tractability we concentrate on pure (unslotted) Aloha for the MAC layer and the Gilbert-Elliott model for the channel. Although the Aloha protocol is simpler than the more popular IEEE 802.11 protocol, it still exhibits the same exponential backoff mechanism, which is a key factor for the performance of TCP in a wireless network. Moreover, another reason to study the Aloha protocol is that the protocol and its variants are gaining popularity and are used in many of today's wireless networks. Using the analytical model for the TCP window size evolution, we seek to increase the TCP throughput between two nodes in a single-cell network. We aim to achieve this by implicitly informing the TCP sender of the network conditions. We impose this additional constraint so that we can achieve compatibility between the standard TCP and the optimized version, which allows both protocol stacks to operate in the same network. We pose the optimization problem as an optimal stopping problem. For each packet transmitted by the TCP sender to the network, an optimal time instance has to be computed in the absence of an acknowledgment for this packet. This time instance indicates when a timeout has to be declared for the packet. In the absence of an acknowledgment, if the sender waits too long before declaring a timeout, the network is underutilized; if the sender declares a timeout too soon, it unnecessarily reduces the transmission rate. Because of the analytical intractability of the optimal stopping time problem, we follow a Markov chain approximation method to solve the problem numerically.
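    The window model described above can be illustrated with a short toy simulation: the congestion window is updated only at ACK arrival instants, whose spacing reflects Gilbert-Elliott channel losses and MAC-layer backoff, and it collapses on timeouts. The sketch below uses assumed illustrative rates and a simple retry limit; it is not the dissertation's analytical model or its parameters.

    # Toy discrete-event sketch of a TCP window driven by an ACK point process
    # over a two-state Gilbert-Elliott channel with exponential MAC backoff.
    # All rates, delays, and limits are illustrative assumptions.
    import random

    random.seed(1)

    GOOD, BAD = 0, 1
    P_STAY = {GOOD: 0.95, BAD: 0.80}      # per-packet probability of staying in state
    P_LOSS = {GOOD: 0.01, BAD: 0.50}      # per-transmission loss probability
    MAC_DELAY = 0.01                       # nominal per-attempt delay, seconds (assumed)

    def send_packet(state):
        """Return (new_state, ack_delay or None) for one packet, with exponential
        backoff on MAC retransmissions, up to a retry limit."""
        delay, backoff = 0.0, 1
        for _ in range(5):                 # retry limit (assumed)
            state = state if random.random() < P_STAY[state] else 1 - state
            delay += MAC_DELAY * backoff
            if random.random() > P_LOSS[state]:
                return state, delay        # ACK returns after 'delay'
            backoff *= 2                   # exponential backoff at the MAC layer
        return state, None                 # retries exhausted -> TCP timeout

    cwnd, t, state = 1.0, 0.0, GOOD
    for _ in range(200):
        state, ack_delay = send_packet(state)
        if ack_delay is None:
            cwnd = 1.0                     # timeout: window collapses
            t += 1.0                       # coarse timeout penalty (assumed)
        else:
            cwnd += 1.0 / cwnd             # congestion-avoidance increase per ACK
            t += ack_delay
    print(f"final window ~ {cwnd:.1f} segments after {t:.2f} s")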
  • Item
    Statistical/Geometric Techniques for Object Representation and Recognition
    (2009) Biswas, Soma; Chellappa, Rama; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Object modeling and recognition are key areas of research in computer vision and graphics with a wide range of applications. Though research in these areas is not new, traditionally most of it has focused on analyzing problems under controlled environments. The challenges posed by real-life applications demand more general and robust solutions. The wide variety of objects with large intra-class variability makes the task very challenging. The difficulty in modeling and matching objects also varies depending on the input modality. In addition, the easy availability of sensors and storage has resulted in a tremendous increase in the amount of data that needs to be processed, which requires efficient algorithms suitable for large databases. In this dissertation, we address some of the challenges involved in modeling and matching of objects in realistic scenarios. Object matching in images requires accounting for large variability in appearance due to changes in illumination and viewpoint. Any real-world object is characterized by its underlying shape and albedo, which, unlike the image intensity, are insensitive to changes in illumination conditions. We propose a stochastic filtering framework for estimating object albedo from a single intensity image by formulating the albedo estimation as an image estimation problem. We also show how this albedo estimate can be used for illumination-insensitive object matching and for more accurate shape recovery from a single image using the standard shape-from-shading formulation. We start with the simpler problem where the pose of the object is known and only the illumination varies. We then extend the proposed approach to handle unknown pose in addition to illumination variations. We also use the estimated albedo maps for another important application, namely recognizing faces across age progression. Many approaches that address the problem of modeling and recognizing objects from images assume that the underlying objects have diffuse texture. But most real-world objects exhibit a combination of diffuse and specular properties. We propose an approach for separating the diffuse and specular reflectance from a given color image so that the algorithms proposed for objects of diffuse texture become applicable to a much wider range of real-world objects. Representing and matching the 2D and 3D geometry of objects is also an integral part of object matching, with applications in gesture recognition, activity classification, trademark and logo recognition, etc. The challenge in matching 2D/3D shapes lies in accounting for the different rigid and non-rigid deformations, large intra-class variability, noise and outliers. In addition, since shapes are usually represented as a collection of landmark points, the shape matching algorithm also has to deal with the challenges of missing or unknown correspondence across these data points. We propose an efficient shape indexing approach where the different feature vectors representing the shape are mapped to a hash table. For a query shape, we show how similar shapes in the database can be efficiently retrieved without the need for establishing correspondence, making the algorithm extremely fast and scalable. We also propose an approach for matching and registration of 3D point cloud data across unknown or missing correspondence using an implicit surface representation. Finally, we discuss possible future directions of this research.
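    The hash-table shape-indexing idea summarized above can be illustrated with a small sketch: quantized, correspondence-free local feature keys are inserted into a hash table at indexing time, and a query shape retrieves candidates by voting. The feature choice (coarse histograms of normalized pairwise distances) and the toy shapes below are assumptions for illustration only, not the dissertation's actual descriptors.

    # Minimal sketch of correspondence-free shape retrieval by hashing local
    # feature keys and voting. Feature design and bin sizes are illustrative.
    from collections import defaultdict
    import numpy as np

    def local_features(points, bins=8):
        """One hash-friendly key per landmark: a coarse, scale-normalized
        histogram of that landmark's distances to all other landmarks."""
        pts = np.asarray(points, float)
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        d /= d.max()
        feats = []
        for row in d:
            hist, _ = np.histogram(row, bins=bins, range=(0, 1))
            feats.append(tuple((hist > 0).astype(int)))
        return feats

    def build_index(shapes):
        table = defaultdict(set)
        for shape_id, pts in shapes.items():
            for key in local_features(pts):
                table[key].add(shape_id)
        return table

    def query(table, pts):
        votes = defaultdict(int)
        for key in local_features(pts):
            for shape_id in table[key]:
                votes[shape_id] += 1
        return sorted(votes, key=votes.get, reverse=True)   # best matches first

    # Usage with toy shapes; a real system would use much richer features.
    shapes = {"square": [(0, 0), (1, 0), (1, 1), (0, 1)],
              "line":   [(0, 0), (1, 0), (2, 0), (3, 0)]}
    index = build_index(shapes)
    print(query(index, [(0, 0), (2, 0), (2, 2), (0, 2)]))   # a scaled square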
  • Item
    Parallelization of Non-Rigid Image Registration
    (2008) Philip, Mathew; Shekhar, Raj; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Non-rigid image registration finds use in a wide range of medical applications ranging from diagnostics to minimally invasive image-guided interventions. Automatic non-rigid image registration algorithms are computationally intensive in that they can take hours to register two images. Although hierarchical volume subdivision-based algorithms are inherently faster than other non-rigid registration algorithms, they can still take a long time to register two images. We present a parallel implementation of one such previously reported and well-tested algorithm on a cluster of thirty-two processors, which reduces the registration time from hours to a few minutes. Mutual information (MI) is one of the most commonly used image similarity measures in medical image registration and is also used in the aforementioned algorithm. In addition to the parallel implementation, we propose a new concept based on bit-slicing to accelerate the computation of MI on the cluster and, more generally, on any parallel computing platform such as graphics processing units (GPUs). GPUs are becoming increasingly common for general-purpose computing in the area of medical imaging as they can execute algorithms faster by leveraging the parallel processing power they offer. However, the standard implementation of MI does not map well to the GPU architecture, leading earlier investigators to compute only an inexact version of MI on the GPU to achieve speedup. The bit-slicing technique we propose enables us to demonstrate an exact implementation of MI on the GPU without adversely affecting the speedup.
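    For readers unfamiliar with the similarity measure, the sketch below shows the standard joint-histogram estimate of mutual information between two images. The bin count and toy images are arbitrary, and the code is not the thesis's bit-sliced or cluster/GPU implementation.

    # Minimal sketch of the mutual-information similarity measure, computed
    # from a joint intensity histogram with NumPy.
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """MI between two images of equal size, in nats."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)           # marginal of image A
        p_b = p_ab.sum(axis=0, keepdims=True)           # marginal of image B
        nz = p_ab > 0                                    # avoid log(0)
        return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

    # Usage: MI of an image with itself exceeds MI with an unrelated image.
    rng = np.random.default_rng(0)
    fixed = rng.random((128, 128))
    moving = rng.random((128, 128))
    print(mutual_information(fixed, fixed), mutual_information(fixed, moving))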
  • Item
    Localizing the Effects of Link Flooding Attacks in the Internet
    (2009) Lee, Soo Bum; Gligor, Virgil D.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Malware-contaminated hosts organized as a "bot network" can target and flood network links (e.g., routers). Yet, none of the countermeasures to link flooding proposed to date have provided dependable link access (i.e., link access guarantees) for legitimate traffic during such attacks. Network-layer capabilities offer strong protection against link flooding by authorizing individual flows with unforgeable credentials (i.e., capabilities). However, network-layer capabilities are insufficient for dependable link access, for several reasons: (1) the capability-setup channel is vulnerable to flooding attacks that prevent legitimate clients from acquiring capabilities, i.e., Denial of Capability (DoC) attacks; (2) compromised attack sources that have acquired capabilities in a legitimate way can flood the privileged channel reserved for capability-carrying packets; and (3) the global effects of flooding attacks are still unavoidable with "per-flow" capabilities. In this dissertation, we present a router-level design that confines the effects of link flooding attacks to specified locales or neighborhoods (e.g., one or more administrative domains of the Internet) based on network-layer capabilities. Our design provides differential guarantees for access to network links that favor packets from domains uncontaminated by attack sources (e.g., bots) and yet do not deny access to packets from contaminated domains. For connection-request packets (i.e., capability requests), differential access guarantees are defined as probabilistic lower bounds for link access: requests from uncontaminated domains have higher probabilistic lower bounds for link access than those from contaminated domains. For all other packets, differential access guarantees are defined in terms of the bandwidth allocated to packet flows; i.e., flows of malware-uncontaminated domains receive higher bandwidth guarantees than flows of contaminated ones, and legitimate flows of contaminated domains are guaranteed substantially higher bandwidth than attack flows. Potential side-effects of attack flows (e.g., multiple congested links) are mitigated by a differential routing scheme, whereby flows of malware-uncontaminated domains are routed through less congested paths while those of contaminated domains are routed through the "pinned" default paths. We present analytical models for the proposed notions of dependable link access, and evaluate our router design both by comprehensive simulations under different attack scenarios and by comparisons with other flooding-defense schemes.
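    The notion of a differential bandwidth guarantee can be illustrated with a small sketch: link capacity is shared in weighted proportion across domains, with a higher weight for domains believed to be uncontaminated, and each domain's share is then split among its flows. The weights, capacities, and domain names below are illustrative assumptions, not the dissertation's allocation rules.

    # Minimal sketch of weighted proportional sharing that favors domains
    # believed uncontaminated. All numbers and names are illustrative.
    def allocate(link_capacity, domains):
        """domains: dict name -> {"contaminated": bool, "flows": int}.
        Returns per-flow bandwidth by domain (weighted proportional sharing)."""
        weights = {name: (1.0 if d["contaminated"] else 4.0)   # assumed 4:1 preference
                   for name, d in domains.items()}
        total = sum(weights.values())
        return {name: weights[name] / total * link_capacity / max(d["flows"], 1)
                for name, d in domains.items()}

    # Usage: legitimate flows in the clean domain keep a higher guaranteed share
    # even while the contaminated domain floods the link with many flows.
    print(allocate(1000.0, {                      # link capacity in Mb/s (assumed)
        "clean.example.net":  {"contaminated": False, "flows": 10},
        "botnet.example.net": {"contaminated": True,  "flows": 500},
    }))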
  • Item
    Spectrotemporal Modulation Sensitivity in Hearing-Impaired Listeners
    (2009) Mehraei, Golbarg; Shamma, Shihab; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Speech is characterized by temporal and spectral modulations. Hearing-impaired (HI) listeners may have reduced spectrotemporal modulation (STM) sensitivity, which could affect their speech understanding. This study examined the effects of hearing loss and absolute frequency on STM sensitivity and their relationship to speech intelligibility, frequency selectivity and temporal fine-structure (TFS) sensitivity. Sensitivity to STM applied to four-octave or one-octave noise carriers was measured for normal-hearing and HI listeners as a function of spectral modulation, temporal modulation and absolute frequency. Across-frequency variation in STM sensitivity suggests that broadband measurements do not sufficiently characterize performance. Results were simulated with a cortical STM-sensitivity model. No correlation was found between the reduced frequency selectivity required in the model to explain the HI STM data and more direct notched-noise estimates. Correlations between low-frequency and broadband STM performance, speech intelligibility and frequency-modulation sensitivity suggest that speech and STM processing may depend on the ability to use TFS.
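    The class of stimulus referred to here as spectrotemporal modulation can be illustrated with a short sketch that builds a moving-ripple noise: log-spaced tone carriers whose envelope is sinusoidal in both time (rate) and log-frequency (density). The parameter values and random carrier phases below are illustrative assumptions, not the study's exact stimuli.

    # Minimal sketch of a moving-ripple (STM) noise stimulus. All parameters
    # are illustrative defaults, not those used in the experiments.
    import numpy as np

    def ripple_stimulus(fs=16000, dur=0.5, f_lo=500.0, octaves=1.0,
                        n_tones=40, rate=8.0, density=2.0, depth=1.0, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur)) / fs
        x = np.linspace(0.0, octaves, n_tones)           # octaves above f_lo
        freqs = f_lo * 2.0 ** x
        phases = rng.uniform(0, 2 * np.pi, n_tones)      # randomize carrier phases
        sig = np.zeros_like(t)
        for xi, fi, ph in zip(x, freqs, phases):
            # Envelope sinusoidal in time (rate, Hz) and log-frequency (cyc/oct).
            env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * xi))
            sig += env * np.sin(2 * np.pi * fi * t + ph)
        return sig / np.max(np.abs(sig))                 # normalize to +/-1

    stim = ripple_stimulus()
    print(stim.shape, float(stim.max()))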
  • Item
    VISUAL TRACKING AND ILLUMINATION RECOVERY VIA SPARSE REPRESENTATION
    (2009) Mei, Xue; Jacobs, David W.; Qu, Gang; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Compressive sensing, or sparse representation, has played a fundamental role in many fields of science. It shows that signals and images can be reconstructed from far fewer measurements than what is usually considered necessary. Sparsity leads to efficient estimation, efficient compression, dimensionality reduction, and efficient modeling. Recently, there has been a growing interest in compressive sensing in computer vision, and it has been successfully applied to face recognition, background subtraction, object tracking and other problems. Sparsity can be achieved by solving the compressive sensing problem using L1 minimization. In this dissertation, we present the results of a study of applying sparse representation to illumination recovery, object tracking, and simultaneous tracking and recognition. Illumination recovery, also known as inverse lighting, is the problem of recovering an illumination distribution in a scene from the appearance of objects located in the scene. It is used in augmented reality, where virtual objects must match the existing image and cast convincing shadows on the real scene when rendered with the recovered illumination. Shadows in a scene are caused by the occlusion of incoming light, and thus contain information about the lighting of the scene. Although shadows have been used in determining the 3D shape of the object that casts shadows onto the scene, few studies have focused on the illumination information provided by the shadows. In this dissertation, we recover the illumination of a scene from a single image with cast shadows, given the geometry of the scene. The images with cast shadows can be quite complex and therefore cannot be well approximated by low-dimensional linear subspaces. However, in this study we show that the set of images produced by a Lambertian scene with cast shadows can be efficiently represented by a sparse set of images generated by directional light sources. We first model an image with cast shadows as composed of a diffuse part (without cast shadows) and a residual part that captures cast shadows. Then, we express the problem in an L1-regularized least squares formulation with nonnegativity constraints (as light has to be nonnegative at any point in space). This sparse representation enjoys an effective and fast solution, thanks to recent advances in compressive sensing. In experiments on both synthetic and real data, our approach performs favorably in comparison to several previously proposed methods. Visual tracking, which consistently infers the motion of a desired target in a video sequence, has been an active and fruitful research topic in computer vision for decades. It has many practical applications such as surveillance, human-computer interaction, medical imaging and so on. Many of the challenges in designing a robust tracking algorithm come from the enormous unpredictable variations in the target, such as deformations, fast motion, occlusions, background clutter, and lighting changes. To tackle these challenges, we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, noise and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target in a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an L1-regularized least squares problem. Then the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework in which a particle filter is used for propagating sample distributions over time. Three additional components further improve the robustness of our approach: 1) a velocity-incorporated motion model that helps concentrate the samples on the true target location in the next frame, 2) nonnegativity constraints that help filter out clutter that is similar to tracked targets in reversed intensity patterns, and 3) a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure. We test the proposed approach on many challenging sequences involving heavy occlusions, drastic illumination changes, large scale changes, non-rigid object movement, out-of-plane rotation, and large pose variations. The proposed approach shows excellent performance in comparison with four previously proposed trackers. We also extend the work to simultaneous tracking and recognition for vehicle classification in IR video sequences. We attempt to resolve the uncertainties in tracking and recognition at the same time by introducing a static template set that stores target images in various conditions such as different poses, lighting, and so on. The recognition results at each frame are propagated to produce the final result for the whole video. The tracking result is evaluated at each frame, and low confidence in tracking performance initiates a new cycle of tracking and classification. We demonstrate the robustness of the proposed method on vehicle tracking and classification using outdoor IR video sequences.
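    The per-candidate scoring step described above (sparse coding over target plus trivial templates, then ranking by target-only reconstruction error) can be sketched in a few lines with an off-the-shelf L1 solver. The patch size, regularization weight, and random stand-in templates below are assumptions; a real tracker would draw candidates from the particle filter and update the templates online.

    # Minimal sketch of L1-regularized, nonnegative sparse coding of a candidate
    # patch over target templates plus positive/negative trivial templates.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    d, n_templates = 12 * 15, 10                    # vectorized patch size, template count

    targets = rng.random((d, n_templates))          # stand-ins for learned target templates
    trivial = np.hstack([np.eye(d), -np.eye(d)])    # trivial templates absorb occlusion/noise
    dictionary = np.hstack([targets, trivial])

    def score(candidate):
        """Smaller is better: reconstruction error using target templates only."""
        lasso = Lasso(alpha=0.001, positive=True, fit_intercept=False, max_iter=10000)
        lasso.fit(dictionary, candidate)
        c_target = lasso.coef_[:n_templates]
        return np.linalg.norm(candidate - targets @ c_target)

    # Usage: a candidate built from the templates (plus a sparse "occlusion")
    # should score better than an unrelated random patch.
    good = targets @ (0.3 * rng.random(n_templates))
    good[:10] += 0.5                                # simulated partial occlusion
    bad = rng.random(d)
    print(score(good) < score(bad))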
  • Item
    LEAD-FREE ELECTRONICS USE AND REPAIR DYNAMIC SIMULATION
    (2009) Chaloupka, Andrew Charles; Sandborn, Peter A.; Mechanical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The conversion from tin-lead to lead-free electronics has increased concern among engineers about the reliability of electronic assemblies. In order to communicate the impact of the conversion in terms of cost and availability, a simulation of electronic systems at the line replaceable unit (LRU) level, to and through a repair facility, was created. The model includes the effects of repair prioritization, multiple possible failure mechanisms, no-fault-founds, and unrepairable units. Example analyses were performed on electronic assemblies that use SAC and SnPb solder, using a repair process modeled after an NSWC Crane Aviation Repair Process. The case studies revealed that, for LRUs exposed to usage profiles characteristic of aerospace and high-performance applications (high thermal cycling temperatures with short dwell times), SAC exhibited significantly increased repair costs compared to tin-lead. Prioritizing LRUs and increasing the rate of deployment had no significant impact on the cost or availability metrics for the cases considered.
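    The repair-facility flow described above can be illustrated with a toy priority-queue simulation in which failed LRUs are processed in priority order and each is repaired, found no-fault, or scrapped as unrepairable. All probabilities, costs, and the 1000-unit workload below are illustrative assumptions, not values or results from the thesis.

    # Toy sketch of LRU flow through a repair facility with prioritization,
    # no-fault-found returns, and unrepairable units. Numbers are illustrative.
    import heapq, random

    random.seed(2)

    P_NO_FAULT_FOUND = 0.15      # assumed fraction of returns with no fault found
    P_UNREPAIRABLE   = 0.05      # assumed fraction scrapped
    REPAIR_COST      = {"SAC": 1200.0, "SnPb": 1000.0}   # assumed relative costs

    def simulate(n_failures=1000, solder="SnPb"):
        queue, costs, scrapped = [], 0.0, 0
        for i in range(n_failures):
            priority = random.choice((0, 1))          # 0 = high priority
            heapq.heappush(queue, (priority, i))
        while queue:
            priority, unit = heapq.heappop(queue)     # highest priority pulled first
            r = random.random()
            if r < P_NO_FAULT_FOUND:
                costs += 200.0                        # inspection only (assumed)
            elif r < P_NO_FAULT_FOUND + P_UNREPAIRABLE:
                scrapped += 1
            else:
                costs += REPAIR_COST[solder]
        return costs, scrapped

    for solder in ("SnPb", "SAC"):
        cost, scrapped = simulate(solder=solder)
        print(f"{solder}: total repair cost {cost:,.0f}, scrapped {scrapped}")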
  • Item
    SEMICONDUCTOR WAVEGUIDES FOR NONLINEAR OPTICAL SIGNAL PROCESSING
    (2009) Apiratikul, Paveen; Murphy, Thomas E.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis investigates nonlinear effects in semiconductor waveguides for optical signal processing. Two types of semiconductor waveguides are studied: nanoporous silicon waveguides and GaAs/AlGaAs waveguides. First, the nonlinear optical properties of nanoporous silicon waveguides, including two-photon absorption, self-phase modulation and free-carrier effects, are characterized and compared with similar measurements conducted on conventional silicon-on-insulator waveguides. Then, we experimentally demonstrate 10-Gb/s wavelength conversion using cross-amplitude modulation, cross-phase modulation and four-wave mixing in GaAs/AlGaAs waveguides. Finally, we propose an ultrafast optical sampling system based on non-degenerate two-photon absorption in a GaAs photodiode. Using this technique, we successfully demonstrate a background-suppressed measurement of quasi-4-Tb/s eye diagrams.
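    The two-photon absorption characterization mentioned above typically rests on the nonlinear propagation relation dI/dz = -alpha*I - beta*I^2, for which the inverse transmission grows linearly with the coupled input intensity. The sketch below integrates that relation numerically and recovers beta from the fitted slope; the alpha, beta, and length values are illustrative assumptions, not measured ones.

    # Minimal numerical sketch of TPA characterization via nonlinear transmission.
    # All waveguide parameters are illustrative assumptions.
    import numpy as np

    alpha = 100.0          # linear loss, 1/m (assumed)
    beta = 2.0e-11         # TPA coefficient, m/W (assumed)
    length = 0.005         # waveguide length, m (assumed)

    def transmit(I0, steps=2000):
        """Integrate dI/dz = -alpha*I - beta*I^2 over the waveguide length."""
        I, dz = I0, length / steps
        for _ in range(steps):
            I += dz * (-alpha * I - beta * I * I)
        return I / I0                       # power transmission

    intensities = np.linspace(1e12, 2e13, 6)          # coupled peak intensities, W/m^2
    inv_T = np.array([1.0 / transmit(I) for I in intensities])

    # Fit 1/T = exp(alpha*L) * (1 + beta*Leff*I0): slope over intercept gives beta*Leff.
    slope, intercept = np.polyfit(intensities, inv_T, 1)
    leff = (1 - np.exp(-alpha * length)) / alpha
    print("recovered beta ~", slope / intercept / leff)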
  • Item
    Cooperative Detection and Network Coding in Wireless Networks
    (2009) Baidas, Mohammed Wael; Liu, K. J. Ray; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    In cooperative communication systems, multiple terminals in wireless networks share their antennas and resources for information exchange and processing. Recently, cooperative communications have been shown to achieve significant performance improvements in terms of transmission reliability, coverage area extension, and network throughput relative to existing classical communication systems. This dissertation focuses on two important applications of cooperative communications, namely: (i) cooperative distributed detection in wireless sensor networks, and (ii) many-to-many communications via cooperative space-time network coding. The first application of cooperative communications presented in this dissertation is concerned with the analysis and modeling of the deployment of cooperative relay nodes in wireless sensor networks. In dense wireless sensor networks, sensor nodes continuously observe and collect measurements of a physical phenomenon. Such observations can be highly correlated, depending on the spatial separation between the sensor nodes as well as how the physical properties of the phenomenon evolve over time. This unique characteristic of wireless sensor networks can be effectively exploited with cooperative communications and relay deployment such that both the distributed detection performance and the energy efficiency are significantly improved. In particular, this dissertation studies Amplify-and-Forward (AF) relay deployment as a function of the correlation of the observations and analyzes the achievable spatial diversity gains compared with classical wireless sensor networks. Moreover, it is demonstrated that the gains of cooperation can be further leveraged to alleviate bandwidth utilization inefficiencies in current sensor networks. Specifically, the deployment of cognitive AF cooperative relays to exploit empty or under-utilized time slots and the resulting energy savings are studied, quantified and compared. Multi-terminal communication and information exchange form the second application of cooperative communications in this dissertation. Specifically, the novel concept of Space-Time Network Coding (STNC), which formulates many-to-many cooperative communications over Decode-and-Forward (DF) nodes, is studied and analyzed. Moreover, an exact theoretical analysis as well as upper bounds on the network symbol error rate performance are derived. In addition, the tradeoff between the number of communicating nodes and the timing synchronization errors is analyzed and provided as a network design guideline. With STNC, it is illustrated that cooperative diversity gains are fully exploited per node and significant performance improvements are achieved. It is concluded that the STNC scheme serves as a potential many-to-many cooperative communications scheme and that its scope extends well beyond generic source-relay-destination communications.
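    The cooperative-diversity benefit that underlies schemes of this kind can be illustrated with a short Monte Carlo sketch comparing direct BPSK transmission over Rayleigh fading with a single decode-and-forward relay whose correctly decoded copy is added at the destination. The SNR points and the simple selective-combining rule below are illustrative assumptions, not the dissertation's STNC formulation or analysis.

    # Toy Monte Carlo comparison of direct transmission vs. a single DF relay
    # over Rayleigh fading with BPSK. Parameters and combining are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)

    def rayleigh(n):
        return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

    def ser(snr_db, n=200_000, relay=True):
        snr = 10 ** (snr_db / 10)
        bits = rng.integers(0, 2, n)
        x = 2 * bits - 1                                   # BPSK symbols
        h_sd, h_sr, h_rd = rayleigh(n), rayleigh(n), rayleigh(n)
        noise = lambda: (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)
        y_sd = h_sd * x + noise()
        metric = np.real(np.conj(h_sd) * y_sd)             # matched-filter metric, S->D
        if relay:
            relay_ok = np.real(np.conj(h_sr) * (h_sr * x + noise())) * x > 0
            y_rd = h_rd * x + noise()
            # Relay forwards only when it decoded correctly; destination adds the branch.
            metric = metric + relay_ok * np.real(np.conj(h_rd) * y_rd)
        return np.mean((metric > 0) != (bits == 1))

    for snr_db in (5, 10, 15):
        print(snr_db, "dB  direct:", ser(snr_db, relay=False), " DF relay:", ser(snr_db))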