Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 7 of 7
  • Item
    SYMMETRIC-KEY CRYPTOGRAPHY AND QUERY COMPLEXITY IN THE QUANTUM WORLD
    (2024) Bai, Chen; Katz, Jonathan; Alagic, Gorjan; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Quantum computers are likely to have a significant impact on cryptography. Many commonly used cryptosystems will be completely broken once large quantum computers are available: since quantum computers can solve the factoring problem in polynomial time, for example, the security of RSA does not hold against quantum adversaries. For symmetric-key cryptosystems, the primary quantum attack is key recovery via Grover search, which provides a quadratic speedup. One way to address this is to double the key length. However, recent results have shown that doubling the key length may not be sufficient in all cases. Therefore, it is crucial to understand the security of various symmetric-key constructions against quantum attackers. In this thesis, we give the first proof of post-quantum security for certain symmetric primitives. We begin with a fundamental block cipher, the Even-Mansour cipher, and the tweakable Even-Mansour construction. Our research shows that both are secure in a realistic quantum attack model. For example, we prove that 2^{n/3} quantum queries are necessary to break the Even-Mansour cipher. We also consider the practical implications of our work, using our framework to derive post-quantum security proofs for three concrete symmetric-key schemes: Elephant (an Authenticated Encryption (AE) finalist of NIST’s lightweight cryptography standardization effort), Chaskey (an ISO-standardized Message Authentication Code), and Minalpher (an AE second-round candidate of the CAESAR competition). In addition, we consider the two-sided permutation inversion problem in the quantum query model: given an image y and quantum oracle access to a permutation P and its inverse, the goal is to find the pre-image x such that P(x) = y. We prove an optimal lower bound of \Omega(\sqrt{2^n}) for this problem against an adaptive quantum adversary, and we apply this lower bound to show that a natural encryption scheme constructed from random permutations is secure against quantum attacks.
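    The core construction is compact enough to state directly. Below is a minimal sketch of the Even-Mansour cipher E_{k1,k2}(x) = k2 XOR P(k1 XOR x); the 8-bit block size and the seeded toy permutation are illustrative assumptions, since the thesis's bounds concern general n-bit blocks under quantum query access, which no classical snippet can exhibit:

    ```python
    import random
    import secrets

    N = 8                # toy block size; the security bounds concern general n
    SIZE = 1 << N

    # Public random permutation P and its inverse, known to everyone
    # (including the attacker); only k1 and k2 are secret.
    rng = random.Random(0)
    P = list(range(SIZE))
    rng.shuffle(P)
    P_inv = [0] * SIZE
    for x, y in enumerate(P):
        P_inv[y] = x

    def em_encrypt(k1: int, k2: int, x: int) -> int:
        """Even-Mansour: E(x) = k2 XOR P(k1 XOR x)."""
        return k2 ^ P[k1 ^ x]

    def em_decrypt(k1: int, k2: int, y: int) -> int:
        """Inverse: D(y) = k1 XOR P_inv(k2 XOR y)."""
        return k1 ^ P_inv[k2 ^ y]

    k1, k2 = secrets.randbelow(SIZE), secrets.randbelow(SIZE)
    msg = 0xA7
    assert em_decrypt(k1, k2, em_encrypt(k1, k2, msg)) == msg
    ```

    The 2^{n/3} figure quoted above counts quantum queries to P and to the keyed cipher, not operations of a classical implementation like this one.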
  • Item
    Directed Graphs: Fixed-Parameter Tractability & Beyond
    (2014) Chitnis, Rajesh; Hajiaghayi, Mohammad; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Most interesting optimization problems on graphs are NP-hard, implying that (unless P=NP) there is no polynomial-time algorithm that solves all instances of an NP-hard problem exactly. However, classical complexity measures the running time as a function of only the overall input size. The paradigm of parameterized complexity was introduced by Downey and Fellows to allow for a more refined multivariate analysis of the running time. In parameterized complexity, each problem comes with a secondary measure k, called the parameter. The goal of parameterized complexity is to design efficient algorithms for NP-hard problems when the parameter k is small, even if the input size is large. Formally, we say that a parameterized problem is fixed-parameter tractable (FPT) if instances of size n and parameter k can be solved in f(k) · n^{O(1)} time, where f is a computable function that does not depend on n. A parameterized problem belongs to the class XP if instances of size n and parameter k can be solved in f(k) · n^{O(g(k))} time, where f and g are both computable functions. In this thesis we focus on the parameterized complexity of transversal and connectivity problems on directed graphs. This research direction has hitherto been relatively unexplored: the directed versions of such problems usually require significantly different and more involved ideas than the undirected versions. Furthermore, for directed graphs there are no known algorithmic meta-techniques: for example, there is no algorithmic analogue of the Graph Minor Theory of Robertson and Seymour for directed graphs. As a result, the fixed-parameter tractability status of the directed versions of several fundamental problems such as Multiway Cut, Multicut, Subset Feedback Vertex Set, and Odd Cycle Transversal was open. In the first part of the thesis, we develop the framework of shadowless solutions for a general class of transversal problems in directed graphs. For this class of problems, we reduce the problem of finding a solution in FPT time to that of finding a shadowless solution. Since shadowless solutions have a good (problem-specific) structure, this provides an important first step in the design of FPT algorithms for problems on directed graphs. By understanding the structure of shadowless solutions, we are able to design the first FPT algorithms for the Directed Multiway Cut problem and the Directed Subset Feedback Vertex Set problem. In the second part of the thesis, we present tight bounds on the parameterized complexity of well-studied directed connectivity problems such as Strongly Connected Steiner Subgraph and Directed Steiner Forest when parameterized by the number of terminals/terminal pairs. We design new optimal XP algorithms for these problems, and also prove matching lower bounds for existing XP algorithms. Most of our hardness results hold even if the underlying undirected graph is planar. Finally, we conclude with some open problems regarding the parameterized complexity of transversal and connectivity problems on directed graphs.
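    As a concrete illustration of the f(k) · n^{O(1)} pattern, here is the textbook 2^k branching algorithm for undirected Vertex Cover; it is a standard teaching example, not one of the directed problems treated in the thesis:

    ```python
    def vertex_cover_fpt(edges, k):
        """Decide whether the graph given by its edge list has a vertex
        cover of size <= k.  Classic 2^k branching: some endpoint of any
        remaining edge must join the cover, so try both with budget k - 1.
        Total time O(2^k * m), i.e. f(k) * n^{O(1)}: the FPT pattern."""
        if not edges:
            return True
        if k == 0:
            return False
        u, v = edges[0]
        for pick in (u, v):
            rest = [(a, b) for (a, b) in edges if pick not in (a, b)]
            if vertex_cover_fpt(rest, k - 1):
                return True
        return False

    # A 4-cycle needs two vertices to cover all edges; one is not enough.
    square = [(0, 1), (1, 2), (2, 3), (3, 0)]
    assert vertex_cover_fpt(square, 2) and not vertex_cover_fpt(square, 1)
    ```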
  • Item
    Search Complexities for HTN Planning
    (2013) Alford, Ronald Wayne; Nau, Dana S; Kuter, Ugur; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Hierarchical Task Network (HTN) planning is the problem of decomposing an initial task into a sequence of executable steps. Often viewed as just a way to encode human knowledge so as to solve classical planning problems faster, HTN planning is more expressive than classical planning, even to the point of being undecidable in the general case. However, HTN planning is not just a way to solve planning problems faster; it is itself a search problem that can benefit from its own distinct search algorithms and heuristics. This dissertation examines the complexities of various HTN planning problem classes in order to motivate the development of heuristic search algorithms for HTN planning that are guaranteed to terminate on a large, syntactically identifiable class of problems, as well as domain-independent heuristics for those algorithms to use. This will allow HTN planning to be used in a number of areas where the solvability of a problem is in question, including during the initial development of a domain and for policy generation in non-deterministic planning environments. In particular, this dissertation analyzes two commonly used algorithms for HTN planning and describes the subsets of HTN problems on which these algorithms terminate. This allows us to discuss the run-times of these algorithms and compare the expressivity of the classes of problems they decide. We provide two new HTN algorithms that terminate on a strictly broader and more expressive set of HTN problems. We also analyze the complexity of delete-free HTN planning, an analogue of delete-free classical planning, which is the basis of many classical planning heuristics. We show that delete-free HTN planning is NP-complete, putting strict-semantics delete-relaxation-based HTN heuristics out of reach for practical purposes. Finally, we provide a translation of a large subset of HTN planning to classical planning, which allows us to use a classical planner as a surrogate for a heuristic HTN planner. Our experiments show that even small and incomplete amounts of HTN knowledge, when translated into PDDL using our algorithm, can greatly improve a classical planner's performance.
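    To make the decomposition framing concrete, here is a minimal total-order HTN sketch over a hypothetical travel domain; the task names and methods are illustrative, not from the dissertation:

    ```python
    # Compound tasks decompose via methods into ordered subtask lists;
    # primitive tasks are directly executable.  Recursive methods are what
    # make the general problem undecidable.
    methods = {
        "travel": [["walk"], ["call_taxi", "ride", "pay"]],
    }
    primitive = {"walk", "call_taxi", "ride", "pay"}

    def decompose(tasks):
        """Return one executable plan for the task list, or None.

        Depth-first search over method choices; real HTN planners add
        state, preconditions, and the termination and heuristic machinery
        discussed above."""
        if not tasks:
            return []
        head, rest = tasks[0], tasks[1:]
        if head in primitive:
            tail = decompose(rest)
            return None if tail is None else [head] + tail
        for subtasks in methods.get(head, []):
            plan = decompose(subtasks + rest)
            if plan is not None:
                return plan
        return None

    print(decompose(["travel"]))  # ['walk']
    ```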
  • Item
    A Complexity-Based Approach to Intra-Organizational Team Selection
    (2013) Hsu, Shu-Chien; Cui, Qingbin; Skibniewski, Miroslaw; Civil Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Early studies recognized the significance of a team's work capacity and suggested selecting team members based on individual skills and performance, in alignment with task characteristics. The equitable team selection method, for example, assigns people to different tasks with even skill distributions for the best overall performance. Recent advances in organization science also identify the importance of contextual skills. However, work teams are complex adaptive systems with interdependence between workers and their social environment, and they exhibit surprising, nonlinear behavior. Optimizing individual stages without taking organizational complexity into account is unlikely to yield high-performing new team combinations. The objectives of this study are: a) to use complex systems theory to better understand the processes of team selection, including forming teams in light of workers' interdependence and replacing unsuitable members over time; b) to compare, through simulation, different team selection methods in different economic conditions, including random selection, the equity method, and selection using knowledge of interdependence; and c) to compare different policies for replacing team members. This study uses a computational model to understand the complexity of project team selection and to examine how diversity of capability and interdependence between workers affect team performance in different economic conditions. The NK model, a widely used framework for complex systems, is used here to capture workers' interdependence and is fed into an agent-based model. This study uses a small design firm as a case implementation to examine the performance of a variety of team selection approaches and replacement policies. Project data, task assignments, and individual and team performance information were collected for the period 2009-2011. The simulation results show that while the equity selection method can increase the diversity of team capabilities, the net performance is often worse than that obtained by optimizing worker interdependencies. This study suggests that managers should protect their higher-performing workers from interdependence disruption when considering team selection. Taking the advantages and disadvantages of all three replacement policies into account, transferring low contributors or the least-supported members is recommended before hiring new workers, since hiring incurs especially large additional costs.
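    For readers unfamiliar with it, the NK model can be sketched in a few lines; the parameters and random contribution tables below are toy choices, not the study's calibrated worker model:

    ```python
    import itertools
    import random

    N, K = 6, 2          # N components; each depends on K others
    rng = random.Random(42)

    # Each locus i depends on itself plus K neighbors; every combination
    # of those K+1 bits gets a random fitness contribution in [0, 1).
    neighbors = [sorted(rng.sample([j for j in range(N) if j != i], K))
                 for i in range(N)]
    table = [{bits: rng.random()
              for bits in itertools.product((0, 1), repeat=K + 1)}
             for _ in range(N)]

    def fitness(state):
        """Mean contribution over loci; larger K means a more rugged,
        interdependent landscape."""
        total = 0.0
        for i in range(N):
            bits = (state[i],) + tuple(state[j] for j in neighbors[i])
            total += table[i][bits]
        return total / N

    print(round(fitness(tuple(rng.randrange(2) for _ in range(N))), 3))
    ```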
  • Item
    Characterization of gradient estimators for stochastic activity networks
    (2011) Manterola, Renato Mauricio; Fu, Michael C; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This thesis aims to characterize the statistical properties of Monte Carlo simulation-based gradient estimation techniques for performance measures in stochastic activity networks (SANs) using the estimators' variance as the comparison criterion. When analyzing SANs, both performance measures and their sensitivities (gradient, Hessian) are important. This thesis focuses on analyzing three direct gradient estimation techniques: infinitesimal perturbation analysis, the score function or likelihood ratio method, and weak derivatives. To investigate how statistical properties of the different gradient estimation techniques depend on characteristics of the SAN, we carry out both theoretical analyses and numerical experiments. The objective of these studies is to provide guidelines for selecting which technique to use for particular classes of SANs based on features such as complexity, size, shape and interconnectivity. The results reveal that a specific weak derivatives-based method with common random numbers outperforms the other direct techniques in nearly every network configuration tested.
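    The flavor of the comparison shows up already on a toy SAN; the two-path network, parameters, and sample size below are illustrative assumptions, not the thesis's test networks. IPA differentiates the sample path directly, while the score-function/likelihood-ratio method weights the output by the derivative of the log-density:

    ```python
    import random

    rng = random.Random(1)
    theta = {"a": 1.0, "b": 2.0, "c": 2.5}
    # Toy SAN: paths a->b and c; completion time T = max(Xa + Xb, Xc),
    # with activity durations exponential with mean theta_i.

    def sample():
        return {k: rng.expovariate(1.0 / theta[k]) for k in theta}

    def ipa_grad_a(x):
        """IPA: dT/dtheta_a = Xa / theta_a when path a-b is critical, else 0,
        since Xa = theta_a * E with E ~ Exp(1) held fixed."""
        return x["a"] / theta["a"] if x["a"] + x["b"] >= x["c"] else 0.0

    def lr_grad_a(x):
        """Score function / likelihood ratio: T times the score
        (Xa - theta_a) / theta_a^2 of the exponential density."""
        t = max(x["a"] + x["b"], x["c"])
        return t * (x["a"] - theta["a"]) / theta["a"] ** 2

    n = 200_000
    ipa = lr = 0.0
    for _ in range(n):
        x = sample()
        ipa += ipa_grad_a(x)
        lr += lr_grad_a(x)
    print(f"IPA {ipa / n:.3f}   LR {lr / n:.3f}")  # both estimate dE[T]/dtheta_a
    ```

    On a run like this the two estimates agree in expectation, and the LR estimate typically shows visibly higher variance; comparisons of exactly that kind, across network size, shape, and interconnectivity, are what the thesis formalizes.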
  • Item
    COMPLEXITY IN DISASTERS: A CASE STUDY OF THE HAITIAN EARTHQUAKE RESPONSE
    (2011) Connor, David J.; Toth, Elizabeth; Communication; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    This case study explores the development of an international crisis response from the perspective of the United States Coast Guard (USCG). Crisis managers, responders, and communicators from the USCG and from partner agencies were interviewed, as were representatives of the Haitian publics of the response. The resulting narrative was used to test the Situational Theory of Problem Solving (STPS), which had not previously been tested, and complexity theory, which had not previously been applied to international disaster response. Findings validated both theories and demonstrated the importance of cultural translators in effective international disaster response. This study served as a preliminary test of STPS and as a first international application of complexity theory. Practical implications include guidance for crisis managers on how to respond to crises in a complex world and how to harness cultural awareness when responding internationally.
  • Item
    Nonlinear Complexity of Boolean Permutations
    (2009) Draper, Thomas Gordon; Washington, Lawrence C; Mathematics; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    We introduce the concept of nonlinear complexity, where the complexity of a function is determined by the number of nonlinear building blocks required for its construction. We group functions by linear equivalence, and induce a complexity hierarchy on the affine-equivalent double cosets. We prove multiple invariants of double cosets over the affine general linear group, and develop a specialized double-coset equivalence test. This is used to classify the 16! permutations over 4 bits into 302 equivalence classes, which have a maximal nonlinear depth of 6. In addition, we present a new complexity class defined in terms of nonlinearity.
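    The equivalence relation itself is straightforward to operationalize. Below is a minimal sketch (the helper names are hypothetical) that produces an affine-equivalent partner S' = B . S . A of a 4-bit permutation, the relation under which the 16! permutations collapse into 302 classes:

    ```python
    import random

    N = 4                # 4-bit permutations over GF(2)^4
    rng = random.Random(7)

    def gf2_rank(rows):
        """Rank over GF(2) of a matrix given as row bitmasks."""
        rank, rows = 0, rows[:]
        while rows:
            r = rows.pop()
            if r == 0:
                continue
            rank += 1
            low = r & -r  # pivot on the lowest set bit
            rows = [x ^ r if x & low else x for x in rows]
        return rank

    def apply_linear(rows, x):
        """y_i = parity(rows[i] & x): multiply the bit-vector x by M."""
        return sum((bin(rows[i] & x).count("1") & 1) << i for i in range(N))

    def random_affine():
        """Lookup table of x -> Mx ^ c for random invertible M, constant c."""
        while True:
            rows = [rng.randrange(1 << N) for _ in range(N)]
            if gf2_rank(rows) == N:
                break
        c = rng.randrange(1 << N)
        return [apply_linear(rows, x) ^ c for x in range(1 << N)]

    S = list(range(1 << N))
    rng.shuffle(S)
    A, B = random_affine(), random_affine()
    S_equiv = [B[S[A[x]]] for x in range(1 << N)]   # S' = B o S o A
    assert sorted(S_equiv) == list(range(1 << N))   # still a permutation
    ```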