Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 6 of 6
  • QUANTIFYING AND PREDICTING USER REPUTATION IN A NETWORK SECURITY CONTEXT
    (2019) Gratian, Margaret Stephanie; Cukier, Michel; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Reputation has long been an important factor for establishing trust and evaluating the character of others. Though subjective by definition, it recently emerged in the field of cybersecurity as a metric to quantify and predict the nature of domain names, IP addresses, files, and more. Implicit in the use of reputation to enhance cybersecurity is the assumption that past behaviors and the opinions of others provide insight into the expected future behavior of an entity, which can be used to proactively identify potential threats to cybersecurity. Despite the plethora of work in industry and academia on reputation in cyberspace, proposed methods are often presented as black boxes and lack scientific rigor, reproducibility, and validation. Moreover, despite widespread recognition that cybersecurity solutions must consider the human user, there is limited work focusing on user reputation in a security context. This dissertation presents a mathematical interpretation of user cyber reputation and a methodology for evaluating reputation in a network security context. A user's cyber reputation is defined as the most likely probability the user demonstrates a specific characteristic on the network, based on evidence. The methodology for evaluating user reputation is presented in three phases: characteristic definition and evidence collection; reputation quantification and prediction; and reputation model validation and refinement. The methodology is illustrated through a case study on a large university network, where network traffic data is used as evidence to determine the likelihood that a user becomes infected or remains uninfected on the network. A separate case study explores social media as an alternate source of data for evaluating user reputation. User-reported account compromise data is collected from Twitter and used to predict whether a user will self-report compromise.
This case study uncovers user cybersecurity experiences and victimization trends and demonstrates the feasibility of using social media to enhance understanding of users from a security perspective. Overall, this dissertation presents an exploration into the complicated space of cyber identity. As new threats to security, user privacy, and information integrity continue to manifest, the need for reputation systems and techniques to evaluate and validate online identities will continue to grow.
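The abstract's definition of reputation, the most likely probability that a user demonstrates a characteristic given evidence, can be sketched with a standard Beta-Bernoulli MAP estimate. This is an illustrative reading, not the dissertation's actual estimator; the function name and prior are assumptions.

```python
# Hypothetical sketch: reputation as the most likely (MAP) probability that a
# user exhibits a characteristic (e.g., "becomes infected"), given binary
# evidence. Assumes a Beta(alpha, beta) prior over a Bernoulli parameter; the
# dissertation's exact model may differ.

def reputation_map(successes: int, failures: int,
                   alpha: float = 1.0, beta: float = 1.0) -> float:
    """MAP estimate of the Bernoulli parameter under a Beta(alpha, beta) prior."""
    a, b = alpha + successes, beta + failures
    if a + b <= 2:            # mode undefined when a, b <= 1; fall back to mean
        return a / (a + b)
    return (a - 1) / (a + b - 2)

# A user observed infected on 2 of 10 days of network evidence:
score = reputation_map(successes=2, failures=8)   # 0.2
```

As more evidence accumulates, the estimate is dominated by the observed frequencies rather than the prior, matching the intuition that reputation sharpens with history.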
  • Security, trust and cooperation in wireless sensor networks
    (2011) Zheng, Shanshan; Baras, John S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Wireless sensor networks are a promising technology for many real-world applications such as critical infrastructure monitoring, scientific data gathering, and smart buildings. However, given their typically unattended and potentially unsecured operating environments, sensor networks face a growing number of security threats. In addition, sensor networks have very constrained resources, such as limited energy, memory, computational power, and communication bandwidth. These unique challenges call for new security mechanisms and algorithms. In this dissertation, we propose novel algorithms and models to address some important and challenging security problems in wireless sensor networks. The first part of the dissertation focuses on data trust in sensor networks. Since sensor networks are mainly deployed to monitor events and report data, the quality of received data must be ensured in order to make meaningful inferences from sensor data. We first study a false data injection attack in the distributed state estimation problem and propose a distributed Bayesian detection algorithm, which can maintain correct estimation results when fewer than one half of the sensors are compromised. To deal with the situation where more than one half of the sensors may be compromised, we introduce a special class of sensor nodes called "trusted cores". We then design a secure distributed trust aggregation algorithm that can utilize the trusted cores to improve network robustness. We show that as long as there exist paths connecting each regular node to one of these trusted cores, the network cannot be subverted by attackers. The second part of the dissertation focuses on sensor network monitoring and anomaly detection. A sensor network may suffer from system failures due to loss of links and nodes, or malicious intrusions. Therefore, it is critical to continuously monitor the overall state of the network and locate performance anomalies.
The network monitoring and probe selection problem is formulated as a budgeted coverage problem and a Markov decision process. Efficient probing strategies are designed to achieve a flexible tradeoff between inference accuracy and probing overhead. Based on the probing results on traffic measurements, anomaly detection can be conducted. To capture the highly dynamic network traffic, we develop a detection scheme based on multi-scale analysis of the traffic using wavelet transforms and hidden Markov models. The performance of the probing strategy and of the detection scheme is extensively evaluated in malicious scenarios using the NS-2 network simulator. Lastly, to better understand the role of trust in sensor networks, a game theoretic model is formulated to mathematically analyze the relation between trust and cooperation. Given the trust relations, the interactions among nodes are modeled as a network game on a trust-weighted graph. We then propose an efficient heuristic method that explores network heterogeneity to improve Nash equilibrium efficiency.
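The trusted-core idea, that every regular node reachable from a trusted core inherits trustworthy values, can be illustrated with a simple iterative propagation on a trust graph. This is not the dissertation's algorithm, only a toy sketch of why connectivity to a trusted core matters; the graph, function, and anchoring scheme are invented for illustration.

```python
# Illustrative sketch (not the dissertation's algorithm): trust propagation on
# a graph where "trusted core" nodes hold a fixed trust value of 1.0 and each
# regular node repeatedly averages its neighbors' values. If every regular
# node has a path to some trusted core, values converge toward the anchor.

def propagate_trust(adj, cores, iterations=100):
    """adj: {node: [neighbors]}; cores: set of trusted-core node names."""
    trust = {n: (1.0 if n in cores else 0.0) for n in adj}
    for _ in range(iterations):
        updated = dict(trust)
        for n, neighbors in adj.items():
            if n in cores or not neighbors:
                continue  # cores are fixed anchors; isolated nodes keep value
            updated[n] = sum(trust[m] for m in neighbors) / len(neighbors)
        trust = updated
    return trust

# A chain core - a - b: both regular nodes converge to the core's value.
adj = {"core": ["a"], "a": ["core", "b"], "b": ["a"]}
scores = propagate_trust(adj, cores={"core"})
```

A node with no path to any core would stay at its initial value, mirroring the abstract's claim that connectivity to a trusted core is what protects the network from subversion.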
  • Advanced Honeypot Architecture for Network Threats Quantification
    (2009) Berthier, Robin G; Cukier, Michel; Reliability Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Today's world is increasingly relying on computer networks. The increase in the use of network resources is followed by a rising volume of security problems. New threats and vulnerabilities are discovered every day and affect users and companies at critical levels, from privacy issues to financial losses. Monitoring network activity is a mandatory step for researchers and security analysts to understand these threats and to build better protections. Honeypots were introduced to monitor unused IP spaces to learn about attackers. The advantage of honeypots over other monitoring solutions is that they collect only suspicious activity. However, current honeypots are expensive to deploy and complex to administer, especially in the context of large organization networks. This study addresses the challenge of improving the scalability and flexibility of honeypots by introducing a novel hybrid honeypot architecture. This architecture is based on a Decision Engine and a Redirection Engine that automatically filter attacks, saving resources by reducing the size of the attack data collection and allowing researchers to actively specify the type of attack they want to collect. For a better integration into the organization network, this architecture was combined with network flows collected at the border of the production network. By offering an exhaustive view of all communications between internal and external hosts of the organization, network flows can 1) assist the configuration of honeypots, and 2) extend the scope of honeypot data analysis by providing a comprehensive profile of network activity to track attackers in the organization network. These capabilities were made possible through the development of a passive scanner and server discovery algorithm working on top of network flows. This algorithm and the hybrid honeypot architecture were deployed and evaluated at the University of Maryland, which represents a network of 40,000 computers.
This study marks a major step toward turning honeypots into a powerful security solution. The contributions of this study will enable security analysts and network operators to make a precise assessment of the malicious activity targeting their network.
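The idea of passive server discovery from border flow records can be sketched with a simple heuristic: a (host, port) pair that receives inbound flows from many distinct remote peers is probably a server. The study's actual algorithm is more sophisticated; the threshold, function name, and sample flows below are assumptions for illustration.

```python
# Hedged sketch of passive server discovery over network flow records.
# Heuristic only: a (dst_ip, dst_port) contacted by many distinct sources is
# flagged as a likely server; real flow analysis would also use directionality,
# timing, and byte counts.
from collections import defaultdict

def discover_servers(flows, min_peers=3):
    """flows: iterable of (src_ip, dst_ip, dst_port); returns likely servers."""
    peers = defaultdict(set)
    for src, dst, dport in flows:
        peers[(dst, dport)].add(src)   # count distinct clients per service
    return {svc for svc, clients in peers.items() if len(clients) >= min_peers}

# Four distinct clients hit 192.168.1.5:80; one client hits 192.168.1.9:22.
flows = [("10.0.0.%d" % i, "192.168.1.5", 80) for i in range(1, 5)]
flows.append(("10.0.0.1", "192.168.1.9", 22))
servers = discover_servers(flows)    # only the web server passes the threshold
```

Such a passive inventory could then, as the abstract suggests, inform which services the honeypots should emulate.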
  • Improving Network Performance, Security and Robustness in Hybrid Wireless Networks Using a Satellite Overlay
    (2008-11-24) Roy-Chowdhury, Ayan; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In this thesis we propose that the addition of a satellite overlay to large or dense wireless networks will result in improvement in application performance and network reliability, and also enable efficient security solutions that are well-suited for wireless nodes with limited resources. We term the combined network a hybrid wireless network. Through analysis, network modeling and simulation, we quantify the improvement in end-to-end performance in such networks, compared to flat wireless networks. We also propose a new analytical method for modeling and estimating the performance of hybrid wireless networks. We create a loss network model for hybrid networks using the hierarchical reduced loss network model, adapted for packet-switched networks. Applying a fixed point approximation method on the set of relations modeling the hierarchical loss network, we derive a solution that converges to a fixed point for the parameter set. We analyze the sensitivity of the performance metric to variations in the network parameters by applying Automatic Differentiation to the performance model. We thus develop a method for parameter optimization and sensitivity analysis of protocols for designing hybrid networks. We investigate how the satellite overlay can help to implement better solutions for secure group communications in hybrid wireless networks. We propose a source authentication protocol for multicast communications that makes intelligent use of the satellite overlay, by modifying and extending TESLA certificates. We also propose a probabilistic non-repudiation technique that uses the satellite as a proxy node. We describe how the authentication protocol can be integrated with a topology-aware hierarchical multicast routing protocol to design a secure multicast routing protocol that is robust to active attacks.
Lastly, we examine how the end-to-end delay is adversely affected when IP Security protocol (IPSEC) and Secure Socket Layer protocol (SSL) are applied to unicast communications in hybrid networks. For network-layer security with low delay, we propose the use of the Layered IPSEC protocol, with a modified Internet Key Exchange protocol. For secure web browsing with low delay, we propose the Dual-mode SSL protocol. We present simulation results to quantify the performance improvement with our proposed protocols, compared to the traditional solutions.
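The fixed-point approximation applied to the reduced loss network can be illustrated with the classic Erlang fixed point on a toy two-link route. The thesis's hierarchical packet-switched model is far richer; the capacities, offered load, and iteration scheme below are invented purely to show how repeated substitution converges to a consistent set of blocking probabilities.

```python
# Illustrative Erlang fixed-point sketch for a two-link route (made-up
# parameters, not the thesis's model). Each link's blocking probability is
# Erlang-B of the offered load thinned by the *other* link's blocking; we
# iterate the pair of relations until they agree with themselves.

def erlang_b(load: float, capacity: int) -> float:
    """Erlang-B blocking probability via the stable recurrence B(0) = 1."""
    b = 1.0
    for k in range(1, capacity + 1):
        b = load * b / (k + load * b)
    return b

def route_blocking(load=8.0, capacities=(10, 12), tol=1e-10, max_iter=1000):
    b = [0.0, 0.0]
    for _ in range(max_iter):
        new = [erlang_b(load * (1 - b[1 - i]), capacities[i]) for i in (0, 1)]
        if max(abs(new[i] - b[i]) for i in (0, 1)) < tol:
            break
        b = new
    return new

b1, b2 = route_blocking()   # link with smaller capacity blocks more
```

The same repeated-substitution pattern generalizes to larger topologies, which is what makes the fixed-point method attractive for sensitivity analysis via Automatic Differentiation.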
  • Dynamic Reconfiguration with Virtual Services
    (2005-05-18) Savarese, Daniel F.; Purtilo, James M; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
We present a new architecture (virtual services) and accompanying implementation for dynamically adapting and reconfiguring the behavior of network services. Virtual services are a compositional middleware system that transparently interposes itself between a service and a client, overlaying new functionality with configurations of modules organized into processing chains. Virtual services allow programmers and system administrators to dynamically extend, modify, and reconfigure the behavior of existing services for which source code, object code, and administrative control are not available. Virtual service module processing chains are instantiated on a per-connection or per-invocation basis, thereby enabling the reconfiguration of individual connections to a service without affecting other connections to the same service. To validate our architecture, we have implemented a virtual services software development toolkit and middleware server. Our experiments demonstrate that virtual services can modularize concerns that cut across network services. We show that we can reconfigure and enhance the security properties of services implemented as either TCP client-server systems, such as an HTTP server, or as remotely invocable objects, such as a Web service. We demonstrate that virtual services can reconfigure the following security properties and abilities: authentication, access control, secrecy/encryption, connection monitoring, security breach detection, adaptive response to security breaches, and concurrent and dynamically mutable implementation of multiple security policies for different clients.
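The per-connection processing-chain idea can be sketched as modules composed into a pipeline that transparently transforms traffic between a client and an unmodified service. All class and function names here are hypothetical; the real toolkit interposes at the network level, whereas this toy only shows the composition pattern.

```python
# Minimal sketch of a per-connection module chain (names invented): each
# module sees the data, may transform or reject it, and passes it on, so
# cross-cutting concerns like access control and monitoring are modularized.

class Module:
    def process(self, data: bytes) -> bytes:
        raise NotImplementedError

class AccessControl(Module):
    """Rejects requests that do not match an allowed prefix."""
    def __init__(self, allowed_prefix: bytes):
        self.allowed_prefix = allowed_prefix
    def process(self, data: bytes) -> bytes:
        if not data.startswith(self.allowed_prefix):
            raise PermissionError("request rejected by access-control module")
        return data

class Monitor(Module):
    """Records traffic sizes without altering the data."""
    def __init__(self):
        self.log = []
    def process(self, data: bytes) -> bytes:
        self.log.append(len(data))
        return data

def run_chain(chain, data: bytes) -> bytes:
    # Each connection would get its own chain instance, so reconfiguring one
    # connection's chain leaves other connections untouched.
    for module in chain:
        data = module.process(data)
    return data

monitor = Monitor()
out = run_chain([AccessControl(b"GET "), monitor], b"GET /index.html")
```

Because chains are built per connection, one client could be given an encrypting chain while another keeps a plain monitoring chain against the same backend service.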
  • On-line Adaptive IDS Scheme for Detecting Unknown Network Attacks using HMM Models
    (2005-05-04) Bojanic, Irena; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
An important problem in designing IDS schemes is achieving an optimal trade-off between detection rate and false alarm rate. Specifically, in order to detect unknown network attacks, existing IDS schemes use anomaly detection, which introduces a high false alarm rate. In this thesis we propose an IDS scheme based on the overall behavior of the network. We capture the behavior with probabilistic models (HMMs) and use only limited logic information about attacks. Once we set the detection rate to be high, we filter out false positives through stages. The key idea is to use probabilistic models so that even an unknown attack can be detected, as well as a variation of a previously known attack. The scheme is adaptive and operates in real time. A simulation study showed that we can achieve perfect detection of both known and unknown attacks while maintaining a very low false alarm rate.
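The core idea of HMM-based detection, scoring observation sequences against a model of normal behavior so that even unseen attacks stand out, can be sketched with a tiny forward algorithm. The model parameters, threshold, and observation encoding below are invented for illustration and are not the thesis's models.

```python
# Toy sketch of HMM-based anomaly scoring (parameters invented): compute the
# log-likelihood of an observation sequence under a model of *normal* network
# behavior with the forward algorithm, and flag sequences the model explains
# poorly. No attack signature is needed, so unknown attacks can be caught.
import math

# Two hidden states, two observation symbols (0 = benign-looking, 1 = unusual).
START = [0.8, 0.2]
TRANS = [[0.9, 0.1], [0.3, 0.7]]
EMIT  = [[0.95, 0.05], [0.7, 0.3]]   # normal behavior rarely emits symbol 1

def log_likelihood(obs):
    """Forward algorithm: log P(obs | model of normal behavior)."""
    alpha = [START[s] * EMIT[s][obs[0]] for s in (0, 1)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * TRANS[p][s] for p in (0, 1)) * EMIT[s][o]
                 for s in (0, 1)]
    return math.log(sum(alpha))

def is_anomalous(obs, threshold=-1.0):
    """First-stage filter: flag when per-symbol log-likelihood is low."""
    return log_likelihood(obs) / len(obs) < threshold

normal = [0, 0, 0, 0, 0, 0]   # well explained by the model
attack = [1, 1, 1, 1, 1, 1]   # poorly explained, regardless of attack type
```

In the staged design the abstract describes, a permissive first threshold keeps detection high, and later stages prune the resulting false positives.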