Electrical & Computer Engineering Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2765
8 results
Search Results
Item Spectral Methods for Neural Network Designs (2022) Su, Jiahao; Huang, Furong; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Neural networks are general-purpose function approximators. Given a problem, engineers or scientists select a hypothesis space of functions with specific properties by designing the network architecture. However, mainstream designs are often ad hoc and can suffer from numerous undesired properties. Most prominently, the network architectures are gigantic: most parameters are redundant yet still consume computational resources. Furthermore, the learned networks are sensitive to adversarial perturbations and tend to underestimate the predictive uncertainty. We aim to understand and address these problems using spectral methods: while these undesired properties are hard to interpret from the network parameters in the original domain, we can establish their relationships when we represent the parameters in a spectral domain. These relationships allow us to design networks with certified properties via the spectral representation of parameters.

Item Radio Analytics for Human Computer Interaction (2021) Regani, Sai Deepika; Liu, K.J. Ray; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
WiFi, as we know it, is no longer a mere means of communication. Recent advances in research and industry have unleashed the sensing potential of wireless signals. With the constantly expanding availability of the radio frequency spectrum for WiFi, we now envision a future where wireless communication and sensing systems co-exist and continue to facilitate human lives. Radio signals are currently being used to "sense" or monitor various human activities and vital signs.
As Human-Computer Interaction (HCI) continues to form a considerable part of daily activities, it is interesting to investigate the potential of wireless sensing for designing practical HCI applications. This dissertation aims to study and design three HCI applications, namely, (i) in-car driver authentication, (ii) device-free through-the-wall gesture recognition, and (iii) handwriting tracking, by leveraging radio signals.
In the first part of this dissertation, we introduce the idea of in-car driver authentication using wireless sensing and develop a system that can recognize drivers automatically. The proposed system recognizes humans by identifying the unique radio biometric information embedded in the wireless channel state information (CSI) through multipath propagation. However, since environmental information is also captured in the CSI, radio biometric recognition performance may be degraded by a changing physical environment. To this end, we address the problem of "changing in-car environments," where existing wireless sensing-based human identification systems fail. We build a long-term driver radio biometric database consisting of the radio biometrics of multiple people collected over two months. Machine learning (ML) models built using this database make the proposed system adaptive to new in-car environments. The performance of the in-car driver authentication system is shown to improve by exploiting multi-antenna and frequency diversities. Long-term experiments demonstrate the feasibility and accuracy of the proposed system: the accuracy achieved in the two-driver scenario is up to 99.13% in the best case, compared to 87.7% achieved with previous work. In the second part, we propose GWrite, a device-free gesture recognition system that can work in a through-the-wall scenario.
The sequence of physical perturbations induced by the hand movement influences the multipath propagation and is reflected in the CSI time series corresponding to the gesture. Leveraging the statistical properties of EM wave propagation, we derive a relationship between the similarity of CSIs within the time series and the relative distance moved by the hand. Feature extraction modules are built on this relationship to extract features characteristic of the gesture shapes. We built a prototype of GWrite on commercial WiFi devices and achieved a classification accuracy of 90.1% on a set of 15 gesture shapes comprising uppercase English letters. We demonstrate that a broader set of gestures can be defined and classified using GWrite, as opposed to existing systems that operate over a limited gesture set.
In the final part of this dissertation, we present mmWrite, the first high-precision passive handwriting tracking system using a single commodity millimeter wave (mmWave) radio. Leveraging the short wavelength and large bandwidth of 60 GHz signals and the radar-like capabilities enabled by the large phased array, mmWrite transforms any flat region into an interactive writing surface that supports handwriting tracking at millimeter accuracy. mmWrite employs an end-to-end signal processing pipeline to enhance the range and spatial resolution limited by the hardware, boost the coverage, and suppress interference from backgrounds and irrelevant objects. Experiments using a commodity 60 GHz device show that mmWrite can track a finger/pen with a median error of 2.8 mm close to the device and can thus reproduce handwritten characters as small as 1 cm × 1 cm, with a coverage of up to 8 m² supported.
With minimal infrastructure needed, mmWrite promises ubiquitous handwriting tracking for new applications in HCI.

Item EXPERIMENTAL CHARACTERIZATION OF ATMOSPHERIC TURBULENCE SUPPORTED BY ADVANCED PHASE SCREEN SIMULATIONS (2020) PAULSON, DANIEL A; Davis, Christopher C; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Characterization of optical propagation through the turbulent lower atmosphere has been a topic of scientific investigation for decades, and has important engineering applications in the fields of free-space optical communications, remote sensing, and directed energy. Traditional theories, starting with early radio science, have followed from the assumption of three-dimensional statistical symmetry of so-called fully developed, isotropic turbulence. More recent experimental results have demonstrated that anisotropy and irregular frequency-domain characteristics are regularly observed near boundaries of the atmosphere, and similar findings have been reported in the computational fluid dynamics literature. We have used a multi-aperture transmissometer in field testing to characterize atmospheric transparency, refractive index structure functions, and turbulence anisotropy near atmospheric boundaries. Additionally, we have fielded arrays of resistive temperature detector probes alongside optical propagation paths to provide direct measurements of temperature and refractive index statistics supporting the optical turbulence observations. We support these experimental observations with a modified algorithm for modeling optical propagation through atmospheric turbulence. Our new phase screen approach utilizes a randomized spectral sampling algorithm to emulate the turbulence energy spectrum, improve the modeling of low-frequency fluctuations, and improve convergence with theory.
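The randomized spectral sampling idea can be illustrated with a short sketch: instead of placing Fourier modes on a uniform FFT grid, draw the mode frequencies at random (log-uniformly in radius, so low frequencies are well represented) and weight their amplitudes by the turbulence spectrum. This is a generic "sparse spectrum" Kolmogorov screen under assumed normalization, not the dissertation's validated algorithm; the 0.023 Kolmogorov constant and the importance weighting are schematic.

```python
import numpy as np

def sparse_spectrum_screen(n, dx, r0, n_modes=300, rng=None):
    """Phase screen built as a sum of random plane waves. Radial
    frequencies are drawn log-uniformly (better low-frequency coverage
    than a uniform FFT grid); amplitudes follow a Kolmogorov phase
    spectrum ~ 0.023 * r0**(-5/3) * k**(-11/3). Normalization is
    illustrative only."""
    rng = np.random.default_rng(rng)
    coords = np.arange(n) * dx
    X, Y = np.meshgrid(coords, coords, indexing="ij")
    k_lo, k_hi = 2 * np.pi / (n * dx), np.pi / dx   # screen size .. Nyquist
    k = np.exp(rng.uniform(np.log(k_lo), np.log(k_hi), n_modes))
    az = rng.uniform(0, 2 * np.pi, n_modes)          # mode direction
    ph = rng.uniform(0, 2 * np.pi, n_modes)          # random phase offset
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    # Importance weight: log-uniform radial sampling has density ~ 1/k,
    # so each mode stands in for a k-space annulus of area ~ 2*pi*k^2*dlogk.
    amp = np.sqrt(2 * np.pi * k**2 * psd * np.log(k_hi / k_lo) / n_modes)
    screen = np.zeros((n, n))
    for a, ki, t, p in zip(amp, k, az, ph):
        screen += a * np.cos(ki * (X * np.cos(t) + Y * np.sin(t)) + p)
    return screen
```

Because the modes are continuous in frequency rather than grid-quantized, repeated draws sample the low-frequency end of the spectrum far better than a single FFT realization of the same size.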
We have used the new algorithm to investigate open theoretical topics, such as the behavior of beam statistics in the strong fluctuation regime as functions of anisotropy parameters, and energy spectrum power-law behavior. These results can be leveraged to develop new approaches for the characterization of atmospheric optical turbulence.

Item PROFILE- AND INSTRUMENTATION-DRIVEN METHODS FOR EMBEDDED SIGNAL PROCESSING (2015) Chukhman, Ilya; Bhattacharyya, Shuvra; Petrov, Peter; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Modern embedded systems for digital signal processing (DSP) run increasingly sophisticated applications that require extensive performance resources, while simultaneously requiring better power utilization to prolong battery life. Achieving such conflicting objectives requires innovative software/hardware design space exploration spanning a wide array of techniques and technologies that offer trade-offs among performance, cost, power utilization, and overall system design complexity. To save on non-recurring engineering (NRE) costs and to meet shorter time-to-market requirements, designers are increasingly using an iterative design cycle and adopting model-based computer-aided design (CAD) tools to facilitate analysis, debugging, profiling, and design optimization. In this dissertation, we present several profile- and instrumentation-based techniques that facilitate the design and maintenance of embedded signal processing systems:
1. We propose and develop a novel translation lookaside buffer (TLB) preloading technique. This technique, called context-aware TLB preloading (CTP), uses a synergistic relationship between (1) the compiler, for application-specific analysis of a task's context, and (2) the operating system (OS), for run-time introspection of the context and efficient identification of TLB entries for current and future usage. CTP works by (1) identifying application hotspots using compiler-enabled (or manual) profiling, and (2) exploiting well-understood memory access patterns, typical in signal processing applications, to preload the TLB at context switch time. The benefits of CTP in eliminating inter-task TLB interference and preemptively allocating TLB entries during context switches are demonstrated through extensive experimental results with signal processing kernels.
2. We develop an instrumentation-driven approach to facilitate the conversion of legacy systems, not designed as dataflow-based applications, to dataflow semantics by automatically identifying the behavior of the core actors as instances of well-known dataflow models. This enables the application of powerful dataflow-based analysis and optimization methods to systems for which these methods were previously unavailable. We introduce a generic method for instrumenting dataflow graphs that can be used to profile and analyze actors, and we use this instrumentation facility to instrument legacy designs being converted and then automatically detect the dataflow models of the core functions. We also present an iterative actor partitioning process that can be used to partition complex actors into simpler entities that are more amenable to analysis. We demonstrate the utility of the proposed instrumentation-driven dataflow approach with several DSP-based case studies.
3. We extend the instrumentation technique discussed in (2) to introduce a novel tool for model-based design validation called the dataflow validation framework (DVF). DVF addresses the problem of ensuring consistency between (1) dataflow properties that are declared or otherwise assumed as part of dataflow-based application models, and (2) the dataflow behavior that is exhibited by implementations derived from the models.
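The automated model detection described in (2) can be sketched at a toy level: given an instrumentation trace of per-firing token rates for an actor's ports, test whether the rates are constant (synchronous dataflow, SDF) or repeat periodically (cyclo-static dataflow, CSDF). This illustrative detector is an assumption of how such a check might look, not the dissertation's actual tool:

```python
def classify_actor(rate_trace, max_period=8):
    """Classify an actor from an instrumentation trace of token rates.
    rate_trace: one tuple of per-port production/consumption counts per
    firing. Returns 'SDF' if rates are constant across firings, 'CSDF'
    if they repeat with a fixed period (observed at least twice),
    and 'dynamic' otherwise."""
    if len(set(rate_trace)) == 1:
        return "SDF"
    for period in range(2, max_period + 1):
        if 2 * period > len(rate_trace):
            break  # require at least two full repetitions as evidence
        if all(rate_trace[i] == rate_trace[i % period]
               for i in range(len(rate_trace))):
            return "CSDF"
    return "dynamic"
```

Once an actor is labeled SDF or CSDF, the corresponding static scheduling and buffer-sizing analyses become applicable to that part of the legacy design.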
The ability of DVF to identify disparities between an application's formal dataflow representation and its implementation is demonstrated through several signal processing application development case studies.

Item Determination of Network Connectivity: Algorithms and Applications in Organ Systems (2011) Niruntasukrat, Aimaschana; Newcomb, Robert W; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this dissertation, we develop methods to determine the existence and types of connections between network nodes when the available data are only the states (or signals) of those nodes. The proposed method determines the types of connections by examining the nodes' interactions. Determination inaccuracy caused by different reactive time scales between nodes is addressed by using wavelet analysis. After the network signals are decomposed into signals at different scales, the most prominent scale of each signal is obtained through energy comparison and then utilized to calculate connectivity indices, which determine the existence and types of connections between nodes. The algorithm developed from this method is tested by applying it to a simulated ErbB signaling cascade model.

Item TRANSVERSE CHARACTERIZATION AND CONTROL OF BEAMS WITH SPACE CHARGE (2013) Poor Rezaie, Kamal; O'Shea, Patrick G.; Kishek, Rami A.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
The characterization of the transverse phase space of beams is a fundamental requirement for particle accelerators. As accelerators shift toward higher-intensity beam regimes, the transverse dynamics of beams becomes more influenced by interparticle forces known as space charge forces. Therefore, it is increasingly important to take space charge into account in studying beam dynamics.
In this thesis, two novel approaches are presented for measuring the transverse emittance, an important quality indicator of the transverse phase space, of beams with space charge. We also discuss experimental work on orbit characterization and control for the space-charge-dominated beams of the University of Maryland Electron Ring (UMER). The first method developed for measuring the emittance utilizes a lens-drift-screen setup similar to that of a conventional quadrupole scan emittance measurement. Measurements of radius and divergence that can be obtained from beam-produced radiation (e.g., optical transition radiation) are used to calculate the beam cross-correlation term and therefore the rms emittance. A linear space charge model is used in the envelope equations; hence the errors in the measurement relate to the non-uniformity of the beam distribution. The emittance obtained with our method shows only small deviations from values obtained by WARP simulations for beams with high space charge, in contrast to other techniques. In addition, a second method is presented for determining the emittance that works for beams with intense space charge and, theoretically, does not require an a priori assumption about the beam distribution. In this method, the same lens-drift-screen setup as in the first method is used, except that the beam size and divergence are scanned to find the minimum of the product of the measured quantities. This minimum is shown to equal the rms emittance under specific conditions that can usually be satisfied by adjusting experiment parameters such as the drift length. Numerical analysis of the method for a realistic accelerator confirms its applicability to intense beams with nonuniform distributions. Finally, the experimental work for characterization and control of beam centroid motion in UMER is discussed.
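Both measurement approaches target the rms emittance, which for an ensemble of particles is defined from second moments as eps_rms = sqrt(<x^2><x'^2> - <x x'>^2). A minimal numerical illustration of this standard definition (not the dissertation's measurement code):

```python
import numpy as np

def rms_emittance(x, xp):
    """Unnormalized rms emittance from particle positions x [m] and
    divergences x' [rad], with moments taken about the centroid:
    eps = sqrt(<x^2><x'^2> - <x x'>^2)."""
    x = np.asarray(x, float) - np.mean(x)
    xp = np.asarray(xp, float) - np.mean(xp)
    return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)
```

A drift of length L maps x to x + L*x' while leaving x' unchanged, and the quantity above is exactly invariant under that map, which is what makes it a meaningful figure of merit across a lens-drift-screen scan.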
This orbit work is important because at high space charge intensities, the nonlinearities of the lenses impose stricter constraints on the swing of the beam centroid in the pipe. On the characterization side, we show new methods for more accurate measurements of the average orbit of particles, including inside the quadrupoles, where there are no monitors. Based on this more precise orbit information, the beam orbit is corrected and the results are presented.

Item Resiliency Assessment and Enhancement of Intrinsic Fingerprinting (2012) Chuang, Wei-Hong; Wu, Min; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Intrinsic fingerprinting is a class of digital forensic technology that can detect traces left in digital multimedia data in order to reveal the data processing history and determine data integrity. Many existing intrinsic fingerprinting schemes have implicitly assumed favorable operating conditions whose validity may become uncertain in reality. In order to establish intrinsic fingerprinting as a credible approach to digital multimedia authentication, it is important to understand and enhance its resiliency under unfavorable scenarios. This dissertation addresses various resiliency aspects that can arise in a broad range of intrinsic fingerprints. The first aspect concerns intrinsic fingerprints that are designed to identify a particular component in the processing chain. Such fingerprints are potentially subject to changes due to input content variations and/or post-processing, and it is desirable to ensure their identifiability in such situations. Taking an image-based intrinsic fingerprinting technique for source camera model identification as a representative example, our investigations reveal that the fingerprints have a substantial dependency on image content. This dependency limits the achievable identification accuracy, which is penalized by a mismatch between training and testing image content.
To mitigate this mismatch, we propose schemes that incorporate image content into training image selection and significantly improve the identification performance. We also consider the effect of post-processing on intrinsic fingerprinting, and study source camera identification based on imaging noise extracted from low-bit-rate compressed videos. While such compression reduces the fingerprint quality, we exploit different compression levels within the same video to achieve more efficient and accurate identification. The second aspect of resiliency addresses anti-forensics, namely, adversarial actions that intentionally manipulate intrinsic fingerprints. We investigate the cost-effectiveness of anti-forensic operations that counteract color interpolation identification. Our analysis pinpoints the inherent vulnerabilities of color interpolation identification and motivates countermeasures and refined anti-forensic strategies. We also study the anti-forensics of an emerging space-time localization technique for digital recordings based on electrical network frequency analysis. Detection schemes against anti-forensic operations are devised under a mathematical framework. For both problems, game-theoretic approaches are employed to characterize the interplay between forensic analysts and adversaries and to derive optimal strategies. The third aspect regards the resilient and robust representation of intrinsic fingerprints for multiple forensic identification tasks.
We propose to use the empirical frequency response as a generic type of intrinsic fingerprint that can facilitate the identification of various linear shift-invariant (LSI) and non-LSI operations.

Item Private Communication Detection via Side-Channel Attacks (2012) Jong, Chang-Han; Gligor, Virgil D; Qu, Gang; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Private communication detection (PCD) enables an ordinary network user to discover communication patterns (e.g., call time, length, frequency, and initiator) between two or more private parties. Analysis of communication patterns between private parties has historically been a powerful tool used by intelligence, military, law-enforcement, and business organizations because it can reveal the strength of ties between these parties. Ordinary users are assumed to have neither eavesdropping capabilities (e.g., the network may employ strong anonymity measures) nor the legal authority (e.g., no ability to issue a warrant to network providers) to collect private-communication records. We show that PCD is possible for ordinary users merely by sending packets to various network end-nodes and analyzing the responses. Three approaches for PCD are proposed based on a new type of side channel caused by resource contention, and corresponding defenses are proposed. The Resource-Saturation PCD exploits resource contention (e.g., a fixed-size buffer) by sending carefully designed packets and monitoring the different responses. Its effectiveness has been demonstrated on three commercial closed-source VoIP phones. The Stochastic PCD shows that timing side channels in the form of probing responses, caused by distinct resource-contention behavior when different applications run on end nodes, enable effective PCD despite network and proxy-generated noise (e.g., jitter, delays).
It was applied to WiFi and Instant Messaging for resource contention in the radio channel and the keyboard, respectively; similar analysis enables practical Sybil node detection. Finally, the Service-Priority PCD utilizes the fact that 3G/2G mobile communication systems give higher priority to voice service than to data service. This allows detection of the busy status of smartphones, and then discovery of their call records by correlating busy statuses. This approach was successfully applied to iPhone and Android phones on AT&T's network. An additional, unanticipated finding was that an Internet user could disable a 2G phone's voice service by probing it at sufficiently short intervals (e.g., 1 second). PCD defenses can be traditional side-channel countermeasures or PCD-specific ones, e.g., monitoring and blocking suspicious periodic network traffic.
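The busy-status correlation step of the Service-Priority PCD can be sketched as follows: given two binary busy/idle sequences obtained by probing two phones over the same time slots, a simple phi (Pearson) correlation flags pairs that are busy at the same times. This is a hedged illustration of the correlation idea only; the binary encoding, the probe slotting, and the threshold are assumptions, not the dissertation's exact procedure.

```python
from math import sqrt

def phi_coefficient(a, b):
    """Phi (Pearson) correlation of two equal-length binary busy-status
    sequences (1 = busy, 0 = idle). Values near +1 mean the two phones
    tend to be busy simultaneously, hinting that they are in calls with
    each other; values near 0 suggest unrelated activity."""
    n11 = sum(1 for x, y in zip(a, b) if (x, y) == (1, 1))
    n10 = sum(1 for x, y in zip(a, b) if (x, y) == (1, 0))
    n01 = sum(1 for x, y in zip(a, b) if (x, y) == (0, 1))
    n00 = len(a) - n11 - n10 - n01
    denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

def likely_communicating(a, b, threshold=0.8):
    """Flag a pair as likely communicating when busy statuses co-vary
    strongly; the threshold value here is an illustrative assumption."""
    return phi_coefficient(a, b) >= threshold
```

In practice the probing noise discussed above (jitter, delayed responses) would blur the binary sequences, so a real detector would correlate over long windows and calibrate the threshold against unrelated phone pairs.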