Electrical & Computer Engineering

Permanent URI for this community: http://hdl.handle.net/1903/2234

Search Results

Now showing 1 - 10 of 507
  • Item
    The crosstalk between EGF, IGF, and Insulin cell signaling pathways - computational and experimental analysis
    (Springer Nature, 2009-09-04) Zielinski, Rafal; Przytycki, Pawel F; Zheng, Jie; Zhang, David; Przytycka, Teresa M; Capala, Jacek
    Cellular response to external stimuli requires propagation of corresponding signals through molecular signaling pathways. However, signaling pathways are not isolated information highways, but rather interact in a number of ways, forming sophisticated signaling networks. Since defects in signaling pathways are associated with many serious diseases, understanding the crosstalk between them is fundamental for designing molecularly targeted therapy. Unfortunately, we still lack technology that would allow high-throughput, detailed measurement of the activity of individual signaling molecules and their interactions. This necessitates developing methods to prioritize selection of the molecules such that measuring their activity would be most informative for understanding the crosstalk. Furthermore, the absence of the reaction coefficients necessary for detailed modeling of signal propagation raises the question of whether simple parameter-free models could provide useful information about such pathways. We study the combined signaling network of three major pro-survival signaling pathways: Epidermal Growth Factor Receptor (EGFR), Insulin-like Growth Factor-1 Receptor (IGF-1R), and Insulin Receptor (IR). Our study involves static analysis and dynamic modeling of this network, as well as an experimental verification of the model by measuring the response of selected signaling molecules to differential stimulation of EGF, IGF, and insulin receptors. We introduced two novel measures of the importance of a node in the context of such crosstalk. Based on these measures, several molecules, namely Erk1/2, Akt1, Jnk, and p70S6K, were selected for monitoring in the network simulation and for experimental studies. Our simulation method relies on the Boolean network model combined with stochastic propagation of the signal. Most (although not all) trends suggested by the simulations have been confirmed by experiments. The simple model implemented in this paper provides a valuable first step in modeling signaling networks. However, to obtain a fully predictive model, more detailed knowledge of the parameters of individual interactions might be necessary.
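    A minimal sketch of the kind of parameter-free simulation described above: a Boolean network in which activation propagates along each edge with a fixed probability, averaged over many stochastic runs. The toy topology, node names, and propagation probability below are illustrative assumptions, not the network or parameters used by the authors.

```python
import random

# A toy signaling network: an edge means "source can activate target".
# The topology and propagation probability are illustrative only.
EDGES = {
    "EGFR": ["Erk1/2", "Akt1"],
    "IGF-1R": ["Akt1", "p70S6K"],
    "IR": ["Akt1", "Jnk"],
    "Akt1": ["p70S6K"],
}
P_PROPAGATE = 0.8  # probability a signal crosses an edge in one step

def simulate(stimulated, steps=5, trials=1000):
    """Estimate activation frequency of each node given stimulated receptors."""
    counts = {}
    for _ in range(trials):
        active = set(stimulated)
        for _ in range(steps):
            new_active = set(active)
            for src in active:
                for dst in EDGES.get(src, []):
                    if random.random() < P_PROPAGATE:
                        new_active.add(dst)
            active = new_active
        for node in active:
            counts[node] = counts.get(node, 0) + 1
    return {node: c / trials for node, c in sorted(counts.items())}

print(simulate(["EGFR"]))            # EGF stimulation only
print(simulate(["EGFR", "IGF-1R"]))  # combined stimulation
```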
  • Item
    Multi-User Security: A Signal Processing and Networking Perspective
    (2002) Trappe, Wade; Liu, K.J. Ray; Electrical & Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The advancements in communication and multimedia technologies have paved the way for a new suite of multi-user applications that will allow users to interact. Although the new communication infrastructure makes it easier to reach the end user, it also makes it easier for adversaries to mount attacks against security measures intended to protect data. Thus, there must be mechanisms in place that guarantee the confidentiality and rights of both the customer and the service provider during the delivery of content across future communication networks. This thesis examines security issues related to communications involving more than two participants or adversaries. We approach the problem of multi-user security by developing security measures at different stages of the content distribution process, ranging from the establishment of initial keying information before transmission, to key management while delivering through networks, and finally to content protection and collusion prevention/tracing after delivery. We address the issue of establishing a group key prior to content delivery by introducing the butterfly scheme and a conference keying scheme that addresses user heterogeneity. These schemes employ the two-party Diffie-Hellman scheme in conjunction with an underlying algorithmic tree called the conference tree. In order to address client heterogeneity, we design the conference tree using source coding techniques to account for the different user cost and budget profiles. We also introduce the PESKY performance measure, which quantifies the likelihood that a conference key can be established in a heterogeneous environment. We then consider the problem of managing keys during content delivery by proposing a multicast key management system that uses a composite message format with member join and departure operations. Compared with the traditional format of the rekeying messages used in tree-based multicast key management, our composite message format reduces the amount of header information, while maintaining the same payload size. Finally, we address the issue of protecting the digital rights of multimedia content after it has left the protected or encrypted domain. Since traditional multimedia fingerprints are susceptible to collusion attacks made by a coalition of adversaries, we develop fingerprints for multimedia that are based upon code modulation and able to identify groups of colluders.
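    The conference-tree constructions above build a group key out of repeated two-party Diffie-Hellman exchanges. The sketch below illustrates only that general idea: a toy binary tree that folds member secrets pairwise with two-party Diffie-Hellman. The group parameters, tree shape, and folding rule are illustrative assumptions and do not reproduce the butterfly scheme or the heterogeneous conference-tree design of the thesis.

```python
# Toy tree-based group key agreement built from two-party Diffie-Hellman.
# The group parameters below are tiny and NOT cryptographically safe; a real
# deployment would use standardized large primes and authenticated exchanges.
P = 0xFFFFFFFB  # small prime, for illustration only
G = 5

def dh_public(secret):
    return pow(G, secret, P)

def dh_shared(my_secret, their_public):
    return pow(their_public, my_secret, P)

def conference_key(secrets):
    """Fold leaf secrets pairwise up a binary 'conference tree'.

    Each internal node's secret is the two-party DH key of its children,
    mirroring (very loosely) how a conference tree composes two-party
    exchanges into a single group key.
    """
    level = list(secrets)
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level) - 1, 2):
            left, right = level[i], level[i + 1]
            next_level.append(dh_shared(left, dh_public(right)))
        if len(level) % 2 == 1:          # odd member promoted unchanged
            next_level.append(level[-1])
        level = next_level
    return level[0]

print(hex(conference_key([1234, 5678, 9012, 3456])))
```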
  • Item
    A Comparative Study of Branch Predictors
    (2007) Kotha, Aparna; Barua, Rajeev
  • Item
    Preference Based Fair Allocation of Limited Resources
    (2009) Vakili Pourtaklo, Nasim; Marcus, Steven; Ball, Michael; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The fair division of scarce resources among agents is a challenging issue across a range of applications, especially when there is competition among agents. One application of resource division is in Air Traffic Management (ATM). This dissertation is motivated by the fairness issues that arise in the resource allocation procedures that have been introduced under Collaborative Decision Making (CDM). Fair rationing and allocation of available en-route time slots are two major challenges that we address in this research. The first challenge, fair rationing, is about how to compute a fair share of available resources among agents when the available resources fall below the total demand. Since the demand (flights) is time dependent, we introduce a new rationing method that incorporates the time dependency of demand. The new procedure gives every flight that is disrupted by an Airspace Flow Program (AFP) a share of available resources. This is in contrast to Ration-By-Schedule (RBS), the allocation method currently in use, under which later scheduled flights do not receive any slots. We discuss and prove the fairness properties of our novel rationing procedure. The second challenge, allocation of en-route resources, is about how to allocate resources among competing agents (flight operators) when each agent has different preferences over resources (time slots). We design four randomized procedures for allocating scarce resources when the airlines' preferences are included. These procedures use an exogenous fair share, which can be computed using the method described above, as a fairness standard for the allocation of slots among airlines. The first two procedures, Preference Based Proportional Random Allocation (PBPRA) and Modified-PBPRA (M-PBPRA), implicitly assume equal weight for each time slot. Compared to RBS, PBPRA and M-PBPRA reduce the total internal cost of airlines and also assign each airline a number of slots close (in expectation) to its fair share. The fairness, efficiency, and incentive properties of PBPRA and M-PBPRA are evaluated. The value (or cost of delay) an airline associates with a particular flight may vary substantially from flight to flight. Airlines that wish to receive priority for certain flights are usually willing to pay more for specific time slots. To address the need to express varying priorities, we propose two procedures, Dual Price Proportional Random Allocation (DP-PRA) and Modified-DP-PRA (MDP-PRA), that assign dual prices to resources (i.e., time slots) in order to capture the airlines' preferences over delays, rerouting, and cancellations. We explore the fairness, efficiency, and incentive properties of DP-PRA and MDP-PRA.
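    As a rough illustration of proportional random allocation, the sketch below assigns each slot to an airline with probability proportional to its remaining fair share. The airline names and shares are hypothetical, and the actual PBPRA/M-PBPRA and dual-price procedures additionally incorporate airline preferences over specific slots.

```python
import random

def proportional_random_allocation(fair_shares, slots):
    """Assign each slot to an airline with probability proportional to its
    remaining (unfilled) fair share.

    Generic sketch only; the dissertation's procedures also account for
    airline preferences over individual time slots.
    """
    remaining = dict(fair_shares)          # airline -> share still owed
    assignment = {}
    for slot in slots:
        total = sum(remaining.values())
        if total <= 0:
            break
        r = random.uniform(0, total)
        cumulative = 0.0
        chosen = None
        for airline, share in remaining.items():
            cumulative += share
            chosen = airline
            if r <= cumulative:
                break
        assignment[slot] = chosen
        remaining[chosen] = max(remaining[chosen] - 1, 0)
    return assignment

shares = {"AAL": 3.2, "DAL": 2.1, "UAL": 1.7}   # hypothetical fair shares
print(proportional_random_allocation(shares, [f"slot-{t}" for t in range(7)]))
```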
  • Item
    A SCANNING SQUID MICROSCOPE FOR IMAGING HIGH-FREQUENCY MAGNETIC FIELDS
    (2009) Vlahacos, Constantine Peter; Wellstood, Frederick C.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis examines the design and operation of a large-bandwidth scanning SQUID microscope for spatially imaging high-frequency magnetic fields. Towards this end, I present results on a cryo-cooled 4.2 K scanning SQUID microscope with a bandwidth of dc to 2 GHz and a sensitivity of about 52.4 nT per sample. By using a thin-film hysteretic Nb dc SQUID and a pulsed sampling technique, rather than a non-hysteretic SQUID and a flux-locked loop, the bandwidth limitation of existing scanning SQUID microscopes is overcome. The microscope allows non-contact images of time-varying magnetic fields to be taken of room-temperature samples, with time steps down to 50 ps and spatial resolution ultimately limited by the size of the SQUID to about 10 micrometers. The new readout scheme involves repeatedly pulsing the bias current to the dc SQUID while the voltage across the SQUID is monitored. Using a fixed pulse amplitude and applying a fixed dc magnetic flux allows the SQUID to measure the applied magnetic flux with a sampling time, set by the pulse length, of about 400 ps. To demonstrate the capabilities of the microscope, I imaged magnetic fields from 0 Hz (static fields) up to 4 GHz. Samples included a magnetic loop, microstrip transmission lines, and microstrip lines with a break, in order to identify and isolate electrical opens in circuits. Finally, I discuss the operation and modeling of the SQUID and how to further increase the bandwidth of the microscope to upwards of 10 GHz.
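    The pulsed readout described above amounts to equivalent-time sampling of a repetitive waveform: each bias pulse takes one sample, and stepping the pulse delay in 50 ps increments builds up the waveform. The sketch below illustrates only that sampling idea with a hypothetical 1 GHz field; it does not model the SQUID physics or the actual instrument.

```python
import math

# Equivalent-time sampling sketch: reconstruct a repetitive waveform by
# stepping a short sampling pulse in 50 ps increments relative to the
# trigger. The waveform and numbers are illustrative, not measured data.
F_SIGNAL = 1e9        # hypothetical 1 GHz repetitive test signal
TIME_STEP = 50e-12    # 50 ps delay increment between successive pulses
N_STEPS = 40          # two full periods of the 1 GHz signal

def field_at(t):
    """Hypothetical time-varying magnetic field (tesla) at the sensor."""
    return 50e-9 * math.sin(2 * math.pi * F_SIGNAL * t)

samples = [(n * TIME_STEP, field_at(n * TIME_STEP)) for n in range(N_STEPS)]
for t, b in samples[:5]:
    print(f"delay = {t*1e12:5.0f} ps   B = {b*1e9:6.2f} nT")
```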
  • Item
    Flow Control in Wireless Ad-hoc Networks
    (2009) Papageorgiou, Georgios; Baras, John S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    We are interested in maximizing the Transmission Control Protocol (TCP) throughput between two nodes in a single-cell wireless ad-hoc network. To this end, we follow a cross-layer approach by first developing an analytical model that captures the effect of the wireless channel and the MAC layer on TCP. The analytical model gives the time evolution of the TCP window size, which is described by a stochastic differential equation driven by a point process. The point process represents the arrival of acknowledgments sent by the TCP receiver to the sender as part of the self-regulating mechanism of the flow control protocol. Through this point process we achieve a cross-layer integration between the physical layer, the MAC layer, and TCP. The intervals between successive points describe how packet drops at the wireless channel and delays due to retransmission at the MAC layer affect the window size at the TCP layer. We fully describe the statistical behavior of the point process by first computing the p.d.f. of the inter-arrival intervals and then the compensator and the intensity of the process, parametrized by the quantities that describe the MAC layer and the wireless channel. To achieve analytical tractability we concentrate on pure (unslotted) Aloha for the MAC layer and the Gilbert-Elliott model for the channel. Although the Aloha protocol is simpler than the more popular IEEE 802.11 protocol, it still exhibits the same exponential backoff mechanism, which is a key factor in the performance of TCP in a wireless network. Moreover, another reason to study the Aloha protocol is that the protocol and its variants have gained popularity and are used in many of today's wireless networks. Using the analytical model for the TCP window size evolution, we try to increase the TCP throughput between two nodes in a single-cell network. We want to achieve this by implicitly informing the TCP sender of the network conditions. We impose this additional constraint so that the standard TCP and the optimized version remain compatible, allowing both protocol stacks to operate in the same network. We pose the optimization problem as an optimal stopping problem. For each packet transmitted by the TCP sender to the network, an optimal time instant has to be computed in the absence of an acknowledgment for this packet. This time instant indicates when a timeout has to be declared for the packet. In the absence of an acknowledgment, if the sender waits too long before declaring a timeout, the network is underutilized; if it declares a timeout too soon, it unnecessarily reduces the transmission rate. Because of the analytical intractability of the optimal stopping time problem, we follow a Markov chain approximation method to solve the problem numerically.
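    The Gilbert-Elliott channel used in the analysis is a two-state Markov model with a different packet-loss probability in each state. The sketch below simulates such a channel; the transition and loss probabilities are illustrative assumptions, not values from the dissertation.

```python
import random

# A minimal Gilbert-Elliott channel: a two-state Markov chain ("good"/"bad")
# with a different packet-loss probability in each state. The transition
# and loss probabilities below are illustrative, not fitted values.
P_GOOD_TO_BAD = 0.05
P_BAD_TO_GOOD = 0.30
LOSS = {"good": 0.01, "bad": 0.40}

def simulate_channel(n_packets, state="good"):
    delivered = 0
    for _ in range(n_packets):
        # Packet outcome in the current state.
        if random.random() >= LOSS[state]:
            delivered += 1
        # State transition for the next packet.
        if state == "good" and random.random() < P_GOOD_TO_BAD:
            state = "bad"
        elif state == "bad" and random.random() < P_BAD_TO_GOOD:
            state = "good"
    return delivered

n = 100_000
print(f"delivery ratio: {simulate_channel(n) / n:.3f}")
```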
  • Item
    Statistical/Geometric Techniques for Object Representation and Recognition
    (2009) Biswas, Soma; Chellappa, Rama; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Object modeling and recognition are key areas of research in computer vision and graphics, with a wide range of applications. Though research in these areas is not new, traditionally most of it has focused on analyzing problems under controlled environments. The challenges posed by real-life applications demand more general and robust solutions. The wide variety of objects with large intra-class variability makes the task very challenging. The difficulty of modeling and matching objects also varies depending on the input modality. In addition, the easy availability of sensors and storage has resulted in a tremendous increase in the amount of data that needs to be processed, which requires efficient algorithms suitable for large databases. In this dissertation, we address some of the challenges involved in modeling and matching objects in realistic scenarios. Object matching in images requires accounting for large variability in appearance due to changes in illumination and viewpoint. Any real-world object is characterized by its underlying shape and albedo, which, unlike the image intensity, are insensitive to changes in illumination conditions. We propose a stochastic filtering framework for estimating object albedo from a single intensity image by formulating albedo estimation as an image estimation problem. We also show how this albedo estimate can be used for illumination-insensitive object matching and for more accurate shape recovery from a single image using the standard shape-from-shading formulation. We start with the simpler problem where the pose of the object is known and only the illumination varies. We then extend the proposed approach to handle unknown pose in addition to illumination variations. We also use the estimated albedo maps for another important application, namely recognizing faces across age progression. Many approaches that address the problem of modeling and recognizing objects from images assume that the underlying objects have diffuse texture. But most real-world objects exhibit a combination of diffuse and specular properties. We propose an approach for separating the diffuse and specular reflectance from a given color image so that the algorithms proposed for objects with diffuse texture become applicable to a much wider range of real-world objects. Representing and matching the 2D and 3D geometry of objects is also an integral part of object matching, with applications in gesture recognition, activity classification, trademark and logo recognition, etc. The challenge in matching 2D/3D shapes lies in accounting for different rigid and non-rigid deformations, large intra-class variability, noise, and outliers. In addition, since shapes are usually represented as a collection of landmark points, the shape matching algorithm also has to deal with the challenges of missing or unknown correspondence across these data points. We propose an efficient shape indexing approach in which the different feature vectors representing a shape are mapped to a hash table. For a query shape, we show how similar shapes in the database can be efficiently retrieved without the need for establishing correspondence, making the algorithm extremely fast and scalable. We also propose an approach for matching and registration of 3D point cloud data with unknown or missing correspondence using an implicit surface representation. Finally, we discuss possible future directions of this research.
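    A minimal sketch of hash-table shape indexing in the spirit described above: quantized feature vectors serve as hash keys, and retrieval proceeds by voting over bucket hits rather than by establishing correspondences. The features and quantization step are illustrative placeholders, not the descriptors used in the dissertation.

```python
from collections import defaultdict

# Generic sketch of hash-based shape indexing: quantize each feature vector
# describing a shape into a hash key, and store the shape id in that bucket.
QUANT = 0.25  # quantization step for each feature dimension (illustrative)

def key_of(feature):
    return tuple(int(x / QUANT) for x in feature)

class ShapeIndex:
    def __init__(self):
        self.table = defaultdict(set)

    def add(self, shape_id, features):
        for f in features:
            self.table[key_of(f)].add(shape_id)

    def query(self, features):
        """Vote for database shapes sharing quantized features with the query,
        without establishing point-to-point correspondences."""
        votes = defaultdict(int)
        for f in features:
            for shape_id in self.table.get(key_of(f), ()):
                votes[shape_id] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

index = ShapeIndex()
index.add("square", [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)])
index.add("triangle", [(0.9, 0.1), (0.5, 0.8)])
print(index.query([(1.02, 0.01), (0.51, 0.78)]))
```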
  • Item
    Content Recognition and Context Modeling for Document Analysis and Retrieval
    (2009) Zhu, Guangyu; Chellappa, Rama; Doermann, David S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval. Meanwhile, the processing of documents with unconstrained layouts and complex formatting often requires effective leveraging of broad contextual knowledge. In this dissertation, we first present a novel approach for document image content categorization, using a lexicon of shape features. Each lexical word corresponds to a scale- and rotation-invariant local shape feature that is generic enough to be detected repeatably and is segmentation free. A concise, structurally indexed shape lexicon is learned by clustering and partitioning feature types through graph cuts. Our idea finds successful application in several challenging tasks, including content recognition of diverse web images and language identification on documents composed of mixed machine-printed text and handwriting. Second, we address two fundamental problems in signature-based document image retrieval. Facing continually increasing volumes of documents, detecting and recognizing unique, evidentiary visual entities (e.g., signatures and logos) provides a practical and reliable supplement to OCR of printed text. We propose a novel multi-scale framework to jointly detect and segment signatures from document images, based on structural saliency under a signature production model. We formulate the problem of signature retrieval in the unconstrained setting of geometry-invariant deformable shape matching and demonstrate state-of-the-art performance in signature matching and verification. Third, we present a model-based approach for extracting relevant named entities from unstructured documents. In a wide range of applications that require structured information from diverse, unstructured document images, processing OCR text alone does not give satisfactory results due to the absence of linguistic context. Our approach enables learning of inference rules collectively, based on contextual information from both page layout and text features. Finally, we demonstrate the importance of mining general web user behavior data for improving document ranking and the overall web search experience. The context of web user activities reveals their preferences and intents, and we emphasize the analysis of individual user sessions for creating aggregate models. We introduce a novel algorithm for estimating web page and web site importance, and discuss its theoretical foundation based on an intentional surfer model. We demonstrate that our approach significantly improves large-scale document retrieval performance.
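    For context on importance estimation, the sketch below is a standard random-surfer power iteration over a toy link graph. It is shown only to illustrate the general family of algorithms; the intentional surfer model proposed in the dissertation differs by incorporating observed user behavior, and the graph and damping factor here are illustrative assumptions.

```python
# Standard random-surfer power iteration (PageRank-style), included only to
# illustrate the general family of importance estimators. The toy link graph
# and damping factor below are illustrative.
LINKS = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
DAMPING = 0.85

def page_importance(links, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
        for src, outs in links.items():
            if not outs:
                continue
            share = DAMPING * rank[src] / len(outs)
            for dst in outs:
                new_rank[dst] += share
        rank = new_rank
    return rank

print(page_importance(LINKS))
```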
  • Item
    Parallelization of Non-Rigid Image Registration
    (2008) Philip, Mathew; Shekhar, Raj; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Non-rigid image registration finds use in a wide range of medical applications, from diagnostics to minimally invasive image-guided interventions. Automatic non-rigid image registration algorithms are computationally intensive in that they can take hours to register two images. Although hierarchical volume subdivision-based algorithms are inherently faster than other non-rigid registration algorithms, they can still take a long time to register two images. We show a parallel implementation of one such previously reported and well-tested algorithm on a cluster of thirty-two processors, which reduces the registration time from hours to a few minutes. Mutual information (MI) is one of the most commonly used image similarity measures in medical image registration and is also used in the aforementioned algorithm. In addition to the parallel implementation, we propose a new concept based on bit-slicing to accelerate the computation of MI on the cluster and, more generally, on any parallel computing platform such as graphics processing units (GPUs). GPUs are becoming increasingly common for general-purpose computing in the area of medical imaging, as they can execute algorithms faster by leveraging the parallel processing power they offer. However, the standard implementation of MI does not map well to the GPU architecture, leading earlier investigators to compute only an inexact version of MI on the GPU to achieve speedup. The bit-slicing technique we have proposed enables us to demonstrate an exact implementation of MI on the GPU without adversely affecting the speedup.
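    Mutual information between two images is conventionally estimated from their joint intensity histogram. The sketch below shows that standard computation on two small intensity lists; it does not reproduce the bit-slicing acceleration or the GPU implementation proposed in the thesis, and the bin count and sample data are illustrative.

```python
import math
from collections import Counter

# Standard joint-histogram estimate of mutual information between two images,
# the similarity measure referred to above. The bit-slicing GPU acceleration
# proposed in the thesis is not reproduced here.
def mutual_information(img_a, img_b, bins=8, max_val=256):
    assert len(img_a) == len(img_b)
    n = len(img_a)
    scale = bins / max_val
    # Joint histogram over quantized intensity pairs.
    joint = Counter((int(a * scale), int(b * scale)) for a, b in zip(img_a, img_b))
    marg_a = Counter(k[0] for k in joint.elements())
    marg_b = Counter(k[1] for k in joint.elements())
    mi = 0.0
    for (ia, ib), c in joint.items():
        p_ab = c / n
        p_a = marg_a[ia] / n
        p_b = marg_b[ib] / n
        mi += p_ab * math.log2(p_ab / (p_a * p_b))
    return mi

a = [10, 20, 200, 210, 90, 95, 10, 220]
b = [12, 25, 190, 205, 100, 98, 15, 230]   # roughly aligned intensities
print(f"MI = {mutual_information(a, b):.3f} bits")
```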
  • Item
    Localizing the Effects of Link Flooding Attacks in the Internet
    (2009) Lee, Soo Bum; Gligor, Virgil D.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Malware-contaminated hosts organized as a "bot network" can target and flood network links (e.g., routers). Yet, none of the countermeasures to link flooding proposed to date have provided dependable link access (i.e., link access guarantees) for legitimate traffic during such attacks. Network-layer capabilities offer strong protection against link flooding by authorizing individual flows with unforgeable credentials (i.e., capabilities). However, network-layer capabilities are insufficient for dependable link access, for several reasons: (1) the capability-setup channel is vulnerable to flooding attacks that prevent legitimate clients from acquiring capabilities, i.e., Denial of Capability (DoC) attacks; (2) compromised attack sources that have acquired capabilities in a legitimate way can flood the privileged channel reserved for capability-carrying packets; and (3) the global effects of flooding attacks are still unavoidable with "per-flow" capabilities. In this dissertation, we present a router-level design that confines the effects of link flooding attacks to specified locales or neighborhoods (e.g., one or more administrative domains of the Internet) based on network-layer capabilities. Our design provides differential guarantees for access to network links that favor packets from domains uncontaminated by attack sources (e.g., bots), yet does not deny access to packets from contaminated domains. For connection-request packets (i.e., capability requests), differential access guarantees are defined as probabilistic lower bounds for link access: requests from uncontaminated domains have higher probabilistic lower bounds for link access than those from contaminated domains. For all other packets, differential access guarantees are defined in terms of the bandwidth allocated to packet flows; i.e., flows of malware-uncontaminated domains receive higher bandwidth guarantees than flows of contaminated ones, and legitimate flows of contaminated domains are guaranteed substantially higher bandwidth than attack flows. Potential side effects of attack flows (e.g., multiple congested links) are mitigated by a differential routing scheme, whereby flows of malware-uncontaminated domains are routed through less congested paths while those of contaminated domains are routed through the "pinned" default paths. We present analytical models for the proposed notions of dependable link access, and evaluate our router design both by comprehensive simulations under different attack scenarios and by comparisons with other flooding-defense schemes.
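    As a rough illustration of differential bandwidth guarantees, the sketch below splits a link's capacity among flows in proportion to per-class weights, with uncontaminated-domain flows weighted above legitimate flows from contaminated domains, which in turn are weighted above attack flows. The weights, classes, and capacity are illustrative assumptions, not the router design of the dissertation.

```python
# Generic weighted-bandwidth sketch of the differential guarantees described
# above: flows from uncontaminated domains receive larger weights than flows
# from contaminated domains, which in turn outweigh identified attack flows.
# The weights and link capacity are illustrative only.
WEIGHTS = {"uncontaminated": 4.0, "contaminated_legit": 2.0, "attack": 0.5}

def allocate_bandwidth(link_capacity_mbps, flows):
    """flows: list of (flow_id, class) tuples; returns per-flow rates (Mbps)."""
    total_weight = sum(WEIGHTS[cls] for _, cls in flows)
    return {fid: link_capacity_mbps * WEIGHTS[cls] / total_weight
            for fid, cls in flows}

flows = [("f1", "uncontaminated"), ("f2", "uncontaminated"),
         ("f3", "contaminated_legit"), ("f4", "attack"), ("f5", "attack")]
print(allocate_bandwidth(100.0, flows))
```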