Institute for Systems Research

Permanent URI for this communityhttp://hdl.handle.net/1903/4375

Search Results

Now showing 1 - 10 of 11
  • Nonlinear Programming Methods for Distributed Optimization
    (2015-01) Matei, Ion; Baras, John
    In this paper we investigate how standard nonlinear programming algorithms can be used to solve constrained optimization problems in a distributed manner. The optimization setup consists of a set of agents interacting through a communication graph, whose common goal is the minimization of a function expressed as a sum of (possibly non-convex) differentiable functions. Each function in the sum corresponds to an agent, and each agent has an associated equality constraint. By re-casting the distributed optimization problem into an equivalent, augmented centralized problem, we show that distributed algorithms result naturally from applying standard nonlinear programming techniques. Due to the distributed formulation, the standard assumptions and convergence results no longer hold. We emphasize what changes are necessary for convergence to still be achieved for three algorithms: two algorithms based on Lagrangian methods, and an algorithm based on the method of multipliers. The changes in the convergence results are necessary mainly because the (local) minimizers of the lifted optimization problem are not regular, as a result of the distributed formulation. Unlike for the standard algorithm based on the method of multipliers, for the distributed version we cannot show that the theoretical super-linear convergence rate can be achieved.
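    As a minimal sketch of the classical (centralized) method of multipliers that this line of work adapts to the distributed setting, consider the hypothetical toy problem min x² subject to x − 1 = 0 (this example problem and the parameter values are our illustrative assumptions, not taken from the paper):

```python
# Sketch of the classical method of multipliers on a hypothetical toy
# problem: minimize x^2 subject to h(x) = x - 1 = 0.
def method_of_multipliers(c=10.0, iters=50):
    lam = 0.0  # Lagrange multiplier estimate
    x = 0.0
    for _ in range(iters):
        # Minimize the augmented Lagrangian
        #   x^2 + lam*(x - 1) + (c/2)*(x - 1)^2
        # in x; for this quadratic the minimizer has a closed form.
        x = (c - lam) / (2.0 + c)
        # Multiplier update: lam <- lam + c * h(x).
        lam = lam + c * (x - 1.0)
    return x, lam
```

    The iterates converge to the solution x* = 1 with multiplier λ* = −2 (from stationarity, 2x* + λ* = 0); the multiplier error contracts by the factor 2/(2 + c) per iteration.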
  • Distributed algorithms for optimization problems with equality constraints
    (2013-02-28) Matei, Ion; Baras, John
    In this paper we introduce two discrete-time, distributed optimization algorithms executed by a set of agents whose interactions are subject to a communication graph. The algorithms can be applied to optimization problems where the cost function is expressed as a sum of functions, each associated with an agent; in addition, the agents can have equality constraints. The algorithms are not consensus-based and can be applied to non-convex optimization problems with equality constraints. We demonstrate that the first distributed algorithm results naturally from applying a first-order method to solve the first-order necessary conditions of a lifted optimization problem with equality constraints, whose solution embeds the solution of our original problem. We show that, provided the agents’ initial values are sufficiently close to a local minimizer and the step-size is sufficiently small, under standard conditions on the cost and constraint functions, each agent converges to the local minimizer at a linear rate. Next, we use an augmented Lagrangian idea to derive a second distributed algorithm whose local convergence requires weaker sufficient conditions than in the case of the first algorithm.
  • A non-consensus based distributed optimization algorithm
    (2013-02-27) Matei, Ion; Baras, John
    In this paper we introduce a discrete-time, distributed optimization algorithm executed by a set of agents whose interactions are subject to a communication graph. The algorithm can be applied to optimization costs expressed as sums of functions, where each function is associated with an agent. The algorithm applies to continuously differentiable cost functions, is not consensus-based, and is derived naturally by solving the first-order necessary conditions of a lifted optimization problem with equality constraints. We show that, provided the agents’ initial values are sufficiently close to a local minimizer and the step-size is sufficiently small, each agent converges to the local minimizer at a linear rate. In addition, we revisit two popular consensus-based distributed optimization algorithms and give sufficient conditions under which their use extends to non-convex functions as well. We take a closer look at their rate of convergence and also show that, unlike our algorithm, for a constant step-size the consensus-based algorithms do not converge to a local minimizer even when the agents start close enough to it.
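    A hedged sketch of the general idea, not the paper’s exact scheme: apply a first-order primal-dual iteration to the first-order (KKT) conditions of the lifted problem min f1(x1) + f2(x2) subject to the agreement constraint x1 − x2 = 0. The two-agent costs f1(x) = (x − 1)² and f2(x) = (x + 1)², the step size, and the initial values are illustrative assumptions:

```python
# First-order primal-dual iteration on the KKT conditions of a lifted
# two-agent problem with hypothetical costs f1(x) = (x-1)^2, f2(x) = (x+1)^2
# and agreement constraint x1 - x2 = 0.
def lifted_primal_dual(alpha=0.1, iters=500):
    x1, x2, lam = 5.0, -3.0, 0.0
    for _ in range(iters):
        g1 = 2.0 * (x1 - 1.0) + lam   # d/dx1 of the Lagrangian
        g2 = 2.0 * (x2 + 1.0) - lam   # d/dx2 of the Lagrangian
        h = x1 - x2                   # constraint (agreement) residual
        x1, x2 = x1 - alpha * g1, x2 - alpha * g2   # primal descent
        lam = lam + alpha * h                        # dual ascent
    return x1, x2, lam
```

    For these quadratic costs the iteration is a linear map with spectral radius below one, so the agents converge linearly to the common minimizer x1 = x2 = 0 of f1 + f2, with multiplier λ = 2.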
  • A non-heuristic distributed algorithm for non-convex constrained optimization
    (2013-02-15) Matei, Ion; Baras, John
    In this paper we introduce a discrete-time, distributed optimization algorithm executed by a set of agents whose interactions are subject to a communication graph. The algorithm can be applied to optimization problems where the cost function is expressed as a sum of functions, each associated with an agent; in addition, the agents can have equality constraints. The algorithm can be applied to non-convex optimization problems with equality constraints, is not consensus-based, and is not a heuristic. We demonstrate that the distributed algorithm results naturally from applying a first-order method to solve the first-order necessary conditions of an augmented optimization problem with equality constraints, whose solution embeds the solution of our original problem. We show that, provided the agents’ initial values are sufficiently close to a local minimum and the step-size is sufficiently small, under standard conditions on the cost and constraint functions, each agent converges to the local minimum at a linear rate.
  • A performance comparison between two consensus-based distributed optimization algorithms
    (2012-05-04) Matei, Ion; Baras, John
    In this paper we address the problem of multi-agent optimization for convex functions expressible as sums of convex functions. Each agent has access to only one function in the sum and can use only local information to update its current estimate of the optimal solution. We consider two consensus-based iterative algorithms, each a combination of a consensus step and a subgradient descent update. The main difference between the two algorithms is the order in which the consensus step and the subgradient descent update are performed. We show that first updating the current estimate in the direction of a subgradient and then executing the consensus step ensures better performance than executing the steps in the reverse order. In support of our analytical results, we provide numerical simulations of the algorithms as well.
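    The two update orders can be sketched as follows. This is a hedged illustration on a hypothetical two-agent problem with f1(x) = (x − 1)² and f2(x) = (x + 1)² (sum minimized at x = 0), using full averaging as the consensus step and a diminishing step size; the paper’s setting is more general:

```python
# Two consensus-based schemes differing only in the order of the
# subgradient step and the consensus (averaging) step.
def run(gradient_first, iters=2000):
    x = [5.0, -3.0]                        # local estimates
    grads = [lambda t: 2.0 * (t - 1.0),    # subgradient of f1(x) = (x-1)^2
             lambda t: 2.0 * (t + 1.0)]    # subgradient of f2(x) = (x+1)^2
    for k in range(iters):
        alpha = 0.5 / (k + 1)              # diminishing step size
        if gradient_first:
            # subgradient step first, then consensus
            y = [x[i] - alpha * grads[i](x[i]) for i in range(2)]
            m = sum(y) / 2.0
            x = [m, m]
        else:
            # consensus first, then subgradient step
            m = sum(x) / 2.0
            x = [m - alpha * grads[i](m) for i in range(2)]
    return x
```

    Both orders drive the estimates toward the optimum of the sum; comparing the trajectories for the two values of `gradient_first` gives a feel for the performance gap the paper analyzes.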
  • Flow Control in Time-Varying, Random Supply Chains
    (2012-02) Matei, Ion; Gueye, Assane; Baras, John
    Today’s supply chains are increasingly complex. They depend on a network of independent yet interconnected moving parts, rely on critical infrastructures, and exhibit significant time variability and randomness. Designing strategies that cope with such constantly changing supply chains is necessary in an increasingly globalized economy where supply chain disruptions have impacts that propagate not only locally but also globally. In this paper we propose a randomized flow control algorithm for a time-varying, random supply chain network. We formulate a constrained stochastic optimization problem that maximizes the profit function in terms of the long-run, time-average rates of the flows in the supply chain. We show that our algorithm, which is based on queueing theory and stochastic analysis concepts, can get arbitrarily close to the solution of the aforementioned optimization problem. In addition, we describe how the flow control algorithm can be extended to a multiple-firm supply chain setup and present numerical simulations of our algorithm for different supply chain topologies.
  • A randomized gossip consensus algorithm on convex metric spaces
    (2012-02-20) Matei, Ion; Somarakis, Christoforos; Baras, John
    A consensus problem consists of a group of dynamic agents who seek to agree upon certain quantities of interest. This problem can be generalized in the context of convex metric spaces that extend the standard notion of convexity. In this paper we introduce and analyze a randomized gossip algorithm for solving the generalized consensus problem on convex metric spaces. We study the convergence properties of the algorithm using stochastic differential equations theory. We show that the dynamics of the distances between the states of the agents can be upper bounded by the dynamics of a stochastic differential equation driven by Poisson counters. In addition, we introduce instances of the generalized consensus algorithm for several examples of convex metric spaces together with numerical simulations.
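    A minimal sketch of randomized gossip on the real line, the simplest convex metric space, where the convex combination of two points is ordinary averaging (the network size, initial values, and seed below are illustrative assumptions):

```python
import random

# Randomized pairwise gossip: at each tick a random pair of agents wakes
# up and both move to the midpoint (a convex combination) of their states.
def gossip(values, ticks=2000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(ticks):
        i, j = rng.sample(range(n), 2)   # wake a random pair of agents
        avg = 0.5 * (x[i] + x[j])        # midpoint = convex combination
        x[i] = x[j] = avg
    return x
```

    Pairwise averaging preserves the mean of the states while shrinking their spread, so every agent approaches the average of the initial values.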
  • Consensus-Based Distributed Filtering
    (2010-03) Matei, Ion; Baras, John
    We address the consensus-based distributed linear filtering problem, where a discrete-time, linear stochastic process is observed by a network of sensors. We assume that the consensus weights are known, and we first provide sufficient conditions under which the stochastic process is detectable, i.e., for the given choice of consensus weights there exists a set of filtering gains such that the dynamics of the estimation errors (without noise) are asymptotically stable. Next, we provide a distributed sub-optimal filtering scheme based on optimizing an upper bound on a quadratic filtering cost. In the stationary case, we provide sufficient conditions under which this scheme converges, conditions expressed in terms of the convergence properties of a set of coupled Riccati equations. We continue by presenting a connection between the consensus-based distributed linear filter and the optimal linear filter of an appropriately defined Markovian jump linear system. More specifically, we show that if the Markovian jump linear system is (mean square) detectable, then the stochastic process is detectable under the consensus-based distributed linear filtering scheme. We also show that the state estimate given by the optimal linear filter of the appropriately defined Markovian jump linear system can be seen as an approximation of the optimal average state estimate obtained using the consensus-based linear filtering scheme.
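    A heavily simplified, noise-free sketch of the consensus-plus-correction structure of such filters, for a scalar process observed by three sensors; the model, weights, and gain below are our illustrative assumptions and not the paper’s design:

```python
# Noise-free sketch of a consensus-based filter: each sensor averages the
# network's estimates (consensus), predicts through the process model, and
# corrects with its own measurement.
def consensus_filter(steps=50, a=0.9, gain=0.5):
    x_true = 10.0
    est = [0.0, 0.0, 0.0]                      # each sensor's estimate
    for _ in range(steps):
        x_true = a * x_true                    # process evolves
        m = sum(est) / 3.0                     # consensus on estimates
        pred = a * m                           # prediction step
        y = [x_true, x_true, x_true]           # noise-free measurements
        est = [pred + gain * (y_i - pred) for y_i in y]
    return est, x_true
```

    In this noise-free setting the common estimation error contracts by the factor (1 − gain)·a per step, so all sensors track the true state.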
  • The asymptotic consensus problem on convex metric spaces
    (2010-03) Matei, Ion; Baras, John
    A consensus problem consists of a group of dynamic agents who seek to agree upon certain quantities of interest. The agents exchange information according to a communication network modeled as a directed time-varying graph. A convex metric space is a metric space on which we define a convex structure. Using this convex structure we define convex sets and, in particular, the convex hull of a (finite) set. In this paper we generalize the asymptotic consensus problem to convex metric spaces. Under minimal connectivity assumptions, we show that if at each iteration an agent updates its state by choosing a point from a particular subset of the convex hull of the agent's current state and those of its neighbors, asymptotic agreement is achieved. As an application example, we use this framework to introduce an iterative algorithm for reaching consensus of opinion. In this example, the agents take values in the space of discrete random variables, on which we define an appropriate metric and convex structure. In addition, we provide a more detailed analysis of the convex hull of a finite set for this particular convex metric space.
  • Convergence results for the linear consensus problem under Markovian random graphs
    (2009) Matei, Ion; Baras, John
    This note discusses the linear discrete- and continuous-time consensus problem for a network of dynamic agents with directed information flows and randomly switching topologies. The switching is determined by a Markov chain, with each topology corresponding to a state of the Markov chain. We show that, under a doubly stochastic assumption on the matrices involved in the linear consensus scheme, average consensus is achieved in the mean square sense and almost surely if and only if the graph resulting from the union of the graphs corresponding to the states of the Markov chain is strongly connected. The aim of this note is to show how techniques from Markovian jump linear systems theory, in conjunction with results inspired by matrix and graph theory, can be used to prove convergence results for stochastic consensus problems.
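    A hedged sketch of the setting: three agents, two graphs that are each disconnected on their own but whose union is strongly connected, with the active graph selected by a two-state Markov chain. The pairwise-averaging weight matrices used here are doubly stochastic, so the running average is preserved at every step; the topologies, transition probability, and seed are illustrative assumptions:

```python
import random

# Linear consensus under Markovian switching between two doubly stochastic
# pairwise-averaging updates whose union graph is strongly connected.
def markov_consensus(steps=500, seed=1):
    rng = random.Random(seed)
    x = [0.0, 6.0, 12.0]        # agent states; initial average is 6.0
    state = 0                    # Markov chain state (selects the topology)
    for _ in range(steps):
        if state == 0:
            x[0] = x[1] = 0.5 * (x[0] + x[1])   # graph A: edge (0, 1)
        else:
            x[1] = x[2] = 0.5 * (x[1] + x[2])   # graph B: edge (1, 2)
        # two-state Markov chain: switch topology with probability 0.5
        if rng.random() < 0.5:
            state = 1 - state
    return x
```

    Because each weight matrix is doubly stochastic, the average of the states is invariant, and since the union of the two graphs is strongly connected the chain keeps mixing information across the whole network, so all agents converge to the initial average.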