Institute for Systems Research Technical Reports

Permanent URI for this collection: http://hdl.handle.net/1903/4376

This archive contains a collection of reports generated by the faculty and students of the Institute for Systems Research (ISR), a permanent, interdisciplinary research unit in the A. James Clark School of Engineering at the University of Maryland. ISR-based projects are conducted through partnerships with industry and government, bringing together faculty and students from multiple academic departments and colleges across the university.

Browse

Search Results

Now showing 1 - 10 of 17
  • Nonlinear Programming Methods for Distributed Optimization
    (2015-01) Matei, Ion; Baras, John
    In this paper we investigate how standard nonlinear programming algorithms can be used to solve constrained optimization problems in a distributed manner. The optimization setup consists of a set of agents interacting through a communication graph, whose common goal is the minimization of a function expressed as a sum of (possibly non-convex) differentiable functions. Each function in the sum corresponds to an agent, and each agent has an associated equality constraint. By re-casting the distributed optimization problem as an equivalent, augmented centralized problem, we show that distributed algorithms result naturally from applying standard nonlinear programming techniques. Due to the distributed formulation, the standard assumptions and convergence results no longer hold. We emphasize what changes are necessary for convergence to still be achieved for three algorithms: two based on Lagrangian methods, and one based on the method of multipliers. The changes in the convergence results are necessary mainly because the (local) minimizers of the lifted optimization problem are not regular, as a result of the distributed formulation. Unlike the standard algorithm based on the method of multipliers, for the distributed version we cannot show that the theoretical super-linear convergence rate is achieved.
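The method-of-multipliers idea can be illustrated on a toy lifted problem. The sketch below is not the paper's algorithm: the quadratic agent costs, the path graph, and all parameters are invented. Each agent keeps a local copy of the decision variable, edge agreement constraints tie neighboring copies together, and an augmented Lagrangian alternates primal minimization with multiplier updates:

```python
import numpy as np

# Hypothetical setup: 3 agents on a path graph minimize
# f(x) = sum_i f_i(x), with f_i(x) = 0.5 * (x - a_i)^2, by lifting to
# local copies x_i with edge agreement constraints x_i - x_j = 0.
a = np.array([1.0, 2.0, 6.0])   # local data; the minimizer of the sum is mean(a) = 3
edges = [(0, 1), (1, 2)]        # communication (path) graph
x = np.zeros(3)                 # local copies of the decision variable
lam = np.zeros(len(edges))      # one multiplier per edge constraint
c = 1.0                         # augmented-Lagrangian penalty parameter

for _ in range(200):
    # Primal step: approximately minimize the augmented Lagrangian in x
    # via gradient descent (each agent only touches its own copy and
    # quantities shared with its graph neighbors).
    for _ in range(50):
        g = x - a
        for k, (i, j) in enumerate(edges):
            g[i] += lam[k] + c * (x[i] - x[j])
            g[j] -= lam[k] + c * (x[i] - x[j])
        x -= 0.1 * g
    # Dual step: method-of-multipliers update on each edge constraint.
    for k, (i, j) in enumerate(edges):
        lam[k] += c * (x[i] - x[j])

print(x)  # all local copies approach mean(a) = 3.0
```

The local copies reach agreement only in the limit; the multipliers absorb the disagreement forced by the differing local costs.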
  • Stability by Fixed Point Theory in Consensus Dynamics
    (2014-08-22) Somarakis, Christoforos; Baras, John; Paraskevas, Evripidis
    We study the stability of linear time-invariant distributed consensus dynamics in the presence of multiple propagation and processing delays. We employ fixed point theory (FPT) methods and derive sufficient conditions for asymptotic convergence to a common value, with emphasis on estimating the rate of convergence. We argue that this approach is novel in the field of networked dynamics, as it is flexible and thus capable of analyzing a wide variety of consensus-based algorithms for which conventional Lyapunov methods are either too restrictive or unsuccessful.
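A minimal simulation sketch of the kind of delayed dynamics studied here (not the paper's fixed-point analysis; the averaging matrix, delay length, and mixing weight are invented): each agent mixes its current state with a network average that is several steps old, and the disagreement still contracts:

```python
import numpy as np

np.random.seed(0)
n, tau, steps = 5, 3, 400
W = np.full((n, n), 1.0 / n)               # doubly stochastic averaging matrix
x_hist = [np.random.rand(n)] * (tau + 1)   # constant initial history of states

for _ in range(steps):
    delayed = x_hist[-(tau + 1)]           # neighbors' states, tau steps old
    x_new = 0.5 * x_hist[-1] + 0.5 * (W @ delayed)
    x_hist.append(x_new)

print(np.ptp(x_hist[-1]))  # spread across agents shrinks toward 0 (consensus)
```

In this toy case the spread halves at every step despite the delay; the paper's contribution is bounding such rates for much more general delayed consensus schemes.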
  • On the dynamics of linear functional differential equations with asymptotically constant solutions
    (2014-08-22) Somarakis, Christoforos; Baras, John; Paraskevas, Evripidis
    We discuss the dynamics of general linear functional differential equations with solutions that exhibit asymptotic constancy. We apply fixed point theory methods to study the stability of these solutions and provide sufficient conditions for asymptotic stability, with emphasis on the rate of convergence. Several examples are provided to illustrate the claim that the derived results generalize, unify and in some cases improve the existing ones.
  • Distributed Opportunistic Scheduling for Wireless Ad-Hoc Networks with Block-Fading Model
    (2013) Chen, Hua; Baras, John
    In this paper, we study a distributed opportunistic scheduling problem that exploits channel fluctuations in wireless ad-hoc networks. In this problem, channel probing is followed by a transmission scheduling procedure executed independently within each link in the network. We study this problem for the popular block-fading channel model, where channel dependencies between different time instances are inevitable during the channel probing phase. Unlike existing works, we explicitly consider this type of channel dependency and its impact on transmission scheduling and hence on system performance. We formulate the problem using optimal stopping theory, but at carefully chosen time instances at which effective decisions are made. The problem can then be recast as a stopping-rule problem in which the observations are independent between different time instances. Since the stopping-rule problem has an implicit horizon determined by the network size, we first characterize the system performance using backward induction. We develop a recursive approach to solve the problem and show that its computational complexity is linear in the network size. To reduce the computational burden, we also present an approximate approach for performance analysis and develop a metric to assess the quality of the approximation. We characterize the achievable system performance if we ignore the finite-horizon constraint and nevertheless design stopping rules based on the infinite-horizon analysis. We further present an improved protocol that reduces the probing costs without requiring additional overhead, and we characterize the resulting performance improvement and energy savings in terms of the probing signals. We present numerical results based on our mathematical analysis under various parameter settings.
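The backward-induction step can be sketched on a toy instance. This is illustrative only: the i.i.d. Uniform(0,1) observations and the probing cost below are invented and are not the paper's block-fading model. With some probes remaining, it is optimal to stop as soon as the current observation exceeds the value of continuing, and all continuation values are computed in one linear backward pass:

```python
# Toy finite-horizon stopping rule solved by backward induction.
# For X ~ Uniform(0, 1) and a threshold t in [0, 1]: E[max(X, t)] = (1 + t^2) / 2.
N, c = 10, 0.05      # horizon (e.g. tied to network size) and per-probe cost
V = 0.0              # value with 0 probes remaining (nothing left to gain)
thresholds = []      # at each step, stop iff the observation exceeds V there

for _ in range(N):
    thresholds.append(V)             # continuation value = stopping threshold
    V = -c + (1.0 + V * V) / 2.0     # Bellman backup: pay c, observe, act optimally

thresholds.reverse()  # thresholds[0] is the (highest) threshold at the first probe
print(round(V, 4))    # expected reward of the optimal N-probe rule
```

The single loop over the horizon mirrors the linear-complexity recursion described in the abstract; thresholds decrease toward zero as the horizon approaches, since a forced stop looms.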
  • A Distributed Learning Algorithm with Bit-valued Communications for Multi-agent Welfare Optimization
    (2013) Menon, Anup; Baras, John
    A multi-agent system comprising N agents is considered, where each agent picks actions from a finite set and receives a payoff that depends on the actions of the whole. The exact form of the payoffs is unknown, and only their values can be measured by the respective agents. A decentralized algorithm was proposed by Marden et al. [1] and in the authors’ earlier work [2] that, in this setting, leads to the agents picking welfare-optimizing actions under some restrictive assumptions on the payoff structure. This algorithm is modified in this paper to incorporate the exchange of certain bit-valued information between the agents over a directed communication graph. The notion of an interaction graph is then introduced to encode known interactions in the system. The restrictions on the payoff structure are eliminated, and conditions that guarantee convergence to welfare-optimizing actions w.p. 1 are derived under the assumption that the union of the interaction graph and communication graph is strongly connected.
  • Distributed algorithms for optimization problems with equality constraints
    (2013-02-28) Matei, Ion; Baras, John
    In this paper we introduce two discrete-time, distributed optimization algorithms executed by a set of agents whose interactions are subject to a communication graph. The algorithms can be applied to optimization problems where the cost function is expressed as a sum of functions, with each function associated to an agent. In addition, the agents can have equality constraints as well. The algorithms are not consensus-based and can be applied to non-convex optimization problems with equality constraints. We demonstrate that the first distributed algorithm results naturally from applying a first-order method to solve the first-order necessary conditions of a lifted optimization problem with equality constraints, an optimization problem whose solution embeds the solution of our original problem. We show that, provided the agents’ initial values are sufficiently close to a local minimizer and the step-size is sufficiently small, under standard conditions on the cost and constraint functions, each agent converges to the local minimizer at a linear rate. Next, we use an augmented Lagrangian idea to derive a second distributed algorithm whose local convergence requires weaker sufficient conditions than those of the first algorithm.
  • A non-consensus based distributed optimization algorithm
    (2013-02-27) Matei, Ion; Baras, John
    In this paper we introduce a discrete-time, distributed optimization algorithm executed by a set of agents whose interactions are subject to a communication graph. The algorithm can be applied to optimization costs that are expressed as sums of functions, where each function is associated to an agent. The algorithm can be applied to continuously differentiable cost functions; it is not consensus-based and is derived naturally by solving the first-order necessary conditions of a lifted optimization problem with equality constraints. We show that, provided the agents’ initial values are sufficiently close to a local minimizer and the step-size is sufficiently small, each agent converges to the local minimizer at a linear rate. In addition, we revisit two popular consensus-based distributed optimization algorithms and give sufficient conditions under which their use extends to non-convex functions as well. We take a closer look at their rate of convergence and also show that, unlike our algorithm, for a constant step-size the consensus-based algorithms do not converge to a local minimizer even when the agents start close enough to it.
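The constant step-size bias of consensus-based schemes can be seen on a toy quadratic instance. This is a sketch with invented data: distributed gradient descent (DGD) here merely stands in for the consensus-based algorithms the paper revisits:

```python
import numpy as np

# Each agent i holds f_i(x) = 0.5 * (x - a_i)^2, so the global minimizer of
# sum_i f_i is mean(a). DGD with a constant step-size alpha first mixes
# neighbor states through W, then takes a local gradient step.
a = np.array([1.0, 2.0, 6.0])
n = len(a)
W = np.full((n, n), 1.0 / n)     # doubly stochastic mixing matrix
alpha = 0.5
x = np.zeros(n)

for _ in range(2000):
    x = W @ x - alpha * (x - a)  # mix, then local gradient step

# The iterates settle, but the agents neither agree nor reach mean(a) = 3:
# the fixed point keeps an O(alpha) disagreement proportional to a - mean(a).
print(x)
```

The average of the limit points does sit at the true minimizer, but each individual agent is biased away from it by an amount that vanishes only as the step-size does, which is the behavior the abstract contrasts with the proposed non-consensus algorithm.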
  • A non-heuristic distributed algorithm for non-convex constrained optimization
    (2013-02-15) Matei, Ion; Baras, John
    In this paper we introduce a discrete-time, distributed optimization algorithm executed by a set of agents whose interactions are subject to a communication graph. The algorithm can be applied to optimization problems where the cost function is expressed as a sum of functions, with each function associated to an agent. In addition, the agents can have equality constraints as well. The algorithm can be applied to non-convex optimization problems with equality constraints; it is not consensus-based and it is not a heuristic. We demonstrate that the distributed algorithm results naturally from applying a first-order method to solve the first-order necessary conditions of an augmented optimization problem with equality constraints, an optimization problem whose solution embeds the solution of our original problem. We show that, provided the agents’ initial values are sufficiently close to a local minimum and the step-size is sufficiently small, under standard conditions on the cost and constraint functions, each agent converges to the local minimum at a linear rate.
  • A Fixed Point Theory Approach to Multi-Agent Consensus Dynamics With Delays
    (2013-01-01) Somarakis, Christoforos; Baras, John
    The classic linear time-invariant multi-agent consensus scheme is revisited in the presence of constant and distributed bounded delays. We construct a fixed point argument and prove exponential convergence with a specific rate that depends both on the topology of the communication graph and on the upper bound of the allowed delay.
  • Compositional Analysis of Dynamic Bayesian Networks and Applications to CPS
    (2012) Yang, ShahAn; Zhou, Yuchen; Baras, John
    Dynamic Bayesian networks (DBNs) can be effectively used to model various problems in complex dynamic systems. We perform an empirical investigation of compositional analysis of DBNs using abstraction. In static systems and hidden Markov models, computation of a metric called treewidth induces a tree decomposition that can be used to perform logical or probabilistic inference and max+ optimizations in time exponential in the treewidth and linear in the overall system size. Intuitively, the linear scaling means that very large systems can be analyzed as long as they are sufficiently sparse and well structured. In these simple cases, summary propagation, which uses two operations, summation (projection) and product (composition), suffices to perform the inference or optimization. Here, we begin an extension of this approach to structured networks of communicating dynamic systems. We define generalizations of the projection and composition operators that treat labeled Markov chains as primitive objects. The projection operation, corresponding to summation, is implemented as label deletion followed by exact state reduction for Markov chains, similar to Hopcroft’s DFA minimization algorithm, with O(n log m) complexity. The composition operation is the product of state machines. We use canonical MDDs, similar to BDDs, to capture logical dependencies symbolically. Combining symbolic representations with Markov chain lumping algorithms is a novel contribution. Using this approach, we have created a tool leveraging model-based systems engineering technologies. The communicating Markov chains are specified using UML Statecharts via Papyrus, extended using an ANTLR-parsed domain-specific language (DSL). The tool reduces the number of states in networks of Markov chains by several orders of magnitude. In one example, a network having a product state space of more than 600 million states is reduced to about 500 states.
A feature of this technique is that the state space is examined incrementally, meaning that the full state space is never explicitly represented, even as an input to the reduction algorithm. The primary reduction appears to come from symmetry, which is surprising because the technique includes no explicit symmetry handling. We note that composition is efficient, at least for systems with high symmetry. We describe applications to a hospital intensive care unit (ICU) from a systems engineering perspective.
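The exact state-reduction step can be illustrated on a small labeled Markov chain. This is a minimal sketch, not the tool's implementation: the chain, its labels, and the naive one-pass-per-iteration refinement below are invented, whereas the paper uses Hopcroft-style O(n log m) refinement. Two states may share a block only if they carry the same label and place identical total transition probability into every current block:

```python
import numpy as np

# A 4-state chain with a symmetry: states 1 and 2 are interchangeable.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.2, 0.0, 0.0, 0.8],
    [0.2, 0.0, 0.0, 0.8],
    [0.0, 0.5, 0.5, 0.0],
])
labels = ["idle", "busy", "busy", "idle"]   # observable labels kept by projection

def lump(P, labels):
    """Coarsest ordinary lumping refining the label partition (naive version)."""
    blocks = {}
    for s, lab in enumerate(labels):
        blocks.setdefault(lab, set()).add(s)
    blocks = list(blocks.values())
    while True:
        refined = []
        for B in blocks:
            sig = {}   # group states in B by their probability mass into each block
            for s in B:
                key = tuple(round(float(P[s, list(C)].sum()), 12) for C in blocks)
                sig.setdefault(key, set()).add(s)
            refined.extend(sig.values())
        if len(refined) == len(blocks):     # fixpoint: no block was split
            return refined
        blocks = refined

print(sorted(sorted(b) for b in lump(P, labels)))  # [[0, 3], [1, 2]]
```

Here the 4-state chain collapses to 2 blocks purely because of its symmetry, a toy-scale analogue of the orders-of-magnitude reductions reported in the abstract.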