Institute for Systems Research

Permanent URI for this community: http://hdl.handle.net/1903/4375


Search Results

Now showing 1 - 10 of 13
  • Item
    A Primal-Dual Interior-Point Method for Nonconvex Optimization with Multiple Logarithmic Barrier Parameters and with Strong Convergence Properties
    (1998) Urban, T.; Tits, A.L.; Lawrence, Craig T.; ISR
    It is observed that an algorithm proposed in the 1980s for the solution of nonconvex constrained optimization problems is in fact a primal-dual logarithmic barrier interior-point method closely related to methods under current investigation in the research community. Its main distinguishing features are judicious selection and update of the multiple barrier parameters (one per constraint), use of the objective function as merit function, and careful bending of the search direction. As a payoff, global convergence and fast local convergence ensue. The purpose of this short note is to describe the algorithm in the interior-point framework and language and to provide a preliminary numerical evaluation. The latter shows that the method compares well with algorithms recently proposed by other research groups.
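To illustrate the logarithmic-barrier idea underlying such methods, here is a minimal one-constraint Python sketch (illustrative only; the paper's algorithm uses one barrier parameter per constraint, the objective as merit function, and bending of the search direction, none of which appear in this toy):

```python
import math

def barrier_newton(mu, x0=2.0, tol=1e-10, iters=100):
    """Minimize x^2 - mu*log(x - 1) by damped Newton steps.

    Toy problem: min x^2 subject to x >= 1, handled with a single
    logarithmic barrier parameter mu.
    """
    x = x0
    for _ in range(iters):
        g = 2 * x - mu / (x - 1)        # gradient of the barrier objective
        h = 2 + mu / (x - 1) ** 2       # Hessian (positive for x > 1)
        step = g / h
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x - step <= 1:
            step *= 0.5
        x -= step
        if abs(g) < tol:
            break
    return x

# As mu -> 0, the barrier minimizer approaches the true solution x* = 1.
for mu in (1.0, 0.1, 0.001):
    print(mu, barrier_newton(mu))
```

The closed-form barrier minimizer here is x = (1 + sqrt(1 + 2*mu))/2, which makes the drift toward the constrained optimum easy to verify.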
  • Item
    User's Guide for ADIFFSQP Version 0.91
    (1997) Liu, Mingyan D.; Tits, A.L.; ISR
    ADIFFSQP is a utility program that allows a user of the FFSQP [1] constrained nonlinear optimization routines to invoke the computational differentiation (or automatic differentiation, AD) preprocessor ADIFOR 2.0 [2] conveniently.
  • Item
    Feasible Sequential Quadratic Programming for Finely Discretized Problems from SIP
    (1997) Lawrence, Craig T.; Tits, A.L.; ISR
    A Sequential Quadratic Programming algorithm designed to efficiently solve nonlinear optimization problems with many inequality constraints, e.g. problems arising from finely discretized Semi-Infinite Programming, is described and analyzed. The key features of the algorithm are (i) that only a few of the constraints are used in the QP sub-problems at each iteration, and (ii) that every iterate satisfies all constraints.
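The first key feature can be sketched as an eps-active working-set rule: only constraints that are violated or nearly active at the current iterate enter the QP sub-problem. The selection test below is illustrative, not the paper's exact rule:

```python
def working_set(x, constraints, eps=1e-3):
    """Indices of constraints g_i(x) <= 0 that are violated or within
    eps of active at x; only these would enter the QP sub-problem.
    (Illustrative selection rule, not the paper's exact test.)"""
    return [i for i, g in enumerate(constraints) if g(x) >= -eps]

# Finely discretized SIP constraint  x <= 1 + t^2  on a 1001-point grid
# over [0, 1], written as g_t(x) = x - 1 - t^2 <= 0:
ts = [i / 1000 for i in range(1001)]
cons = [lambda x, t=t: x - 1 - t * t for t in ts]

# At x = 1 only the constraints with t near 0 are nearly active, so the
# QP would see a few dozen constraints instead of all 1001.
print(len(working_set(1.0, cons)))
```

This is the source of the reduced per-iteration cost: the QP dimension tracks the number of locally relevant constraints rather than the discretization size.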
  • Item
    Multiobjective Optimization of a Leg Mechanism with Various Spring Configurations for Force Reduction
    (1996) Shieh, W-B.; Azarm, Shapour; Tsai, L-W.; Tits, A.L.; ISR
    In this paper, the design of a two degree-of-freedom leg mechanism is accomplished by a two-stage optimization process. In the first stage, leg dimensions are optimized with respect to three design objectives: minimize (i) leg size, (ii) vertical actuating force, and (iii) peak crank torque for an entire walking cycle. Following the optimization of leg dimensions, in the second stage, spring elements with various placement configurations are considered for further reduction of the actuating force and crank torque. Several tradeoff solutions are obtained and a comparison between various spring configurations is made. It is shown that the inclusion of spring elements can significantly reduce the actuating force and crank torque.
  • Item
    Globally Convergent Algorithms for Robust Pole Assignment by State Feedback
    (1995) Tits, A.L.; Yang, Y.; ISR
    It is observed that an algorithm proposed in 1985 by Kautsky, Nichols and Van Dooren (KNV) amounts to maximizing, at each iteration, the determinant of the candidate closed-loop eigenvector matrix X with respect to one of its columns (with unit length constraint), subject to the constraint that it remains a valid closed-loop eigenvector matrix. This interpretation is used to prove convergence of the KNV algorithm. It is then shown that a more efficient algorithm is obtained if det(X) is concurrently maximized with respect to two columns of X, and that such a scheme is easily extended to the case where the eigenvalues to be assigned include complex conjugate pairs. Variations exploiting the availability of multiple processors are suggested. Convergence properties of the proposed algorithms are established. Their superiority is demonstrated numerically.
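The per-column subproblem has a closed form because det(X) is linear in each column: writing det(X) = c·x_j with c the cofactor vector of column j, the unit vector in the admissible subspace maximizing the determinant is the normalized projection of c onto that subspace. A minimal 3x3 sketch, with the admissible subspace given as an orthonormal basis (the basis representation and the example data are illustrative, not taken from the paper):

```python
import math

def cofactor_column(X, j):
    """Cofactor vector c with det(X) = sum_i c[i] * X[i][j] (3x3 case)."""
    idx = [0, 1, 2]
    cols = [k for k in idx if k != j]
    c = []
    for i in idx:
        r = [k for k in idx if k != i]
        minor = (X[r[0]][cols[0]] * X[r[1]][cols[1]]
                 - X[r[0]][cols[1]] * X[r[1]][cols[0]])
        c.append(((-1) ** (i + j)) * minor)
    return c

def best_unit_column(X, j, basis):
    """Unit vector in span(basis) (orthonormal rows of `basis`)
    maximizing the determinant after replacing column j of X."""
    c = cofactor_column(X, j)
    coeffs = [sum(b[i] * c[i] for i in range(3)) for b in basis]
    # Projection of c onto the subspace, then normalization.
    p = [sum(coeffs[k] * basis[k][i] for k in range(len(basis)))
         for i in range(3)]
    n = math.sqrt(sum(v * v for v in p))
    return [v / n for v in p]

X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
basis = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(best_unit_column(X, 2, basis))
```

Cycling this update over the columns is the determinant-maximization interpretation of the KNV iteration described above.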
  • Item
    User's Guide for CFSQP Version 2.0: A C Code for Solving (Large Scale) Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality Constraints
    (1994) Lawrence, Craig T.; Zhou, J.L.; Tits, A.L.; ISR
    CFSQP is a set of C functions for the minimization of the maximum of a set of smooth objective functions (possibly a single one) subject to general smooth constraints. If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, CFSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by CFSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. When solving problems with many sequentially related constraints (or objectives), such as discretized semi-infinite programming (SIP) problems, CFSQP gives the user the option to use an algorithm that efficiently solves these problems, greatly reducing computational effort. The user has the option of either requiring that the objective function (penalty function if nonlinear equality constraints are present) decrease at each iteration after feasibility for nonlinear inequality and linear constraints has been reached (monotone line search), or requiring a decrease within at most four iterations (nonmonotone line search). He/She must provide functions that define the objective functions and constraint functions and may either provide functions to compute the respective gradients or require that CFSQP estimate them by forward finite differences.

    CFSQP is an implementation of two algorithms based on Sequential Quadratic Programming (SQP), modified so as to generate feasible iterates. In the first one (monotone line search), a certain Armijo type arc search is used with the property that the step of one is eventually accepted, a requirement for superlinear convergence. In the second one the same effect is achieved by means of a "nonmonotone" search along a straight line. The merit function used in both searches is the maximum of the objective functions if there are no nonlinear equality constraints, or an exact penalty function if nonlinear equality constraints are present.
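The nonmonotone idea, "a decrease within at most four iterations", can be sketched as a Grippo-style acceptance test: a trial value is accepted if it lies below the maximum of the last few accepted values plus a fraction of the predicted decrease. This is illustrative of the concept, not CFSQP's exact test or parameters:

```python
def nonmonotone_accept(f_new, history, alpha, d_slope, sigma=0.1, memory=4):
    """Accept the trial value f_new if it is below the maximum of the
    last `memory` accepted values plus a sigma-fraction of the predicted
    decrease alpha * d_slope (d_slope < 0 along a descent direction)."""
    ref = max(history[-memory:])
    return f_new <= ref + sigma * alpha * d_slope

# Minimize f(x) = x^4 - 2x^2 by steepest descent, halving the step
# until the nonmonotone test accepts it.
def f(x): return x ** 4 - 2 * x ** 2
def fp(x): return 4 * x ** 3 - 4 * x

x, history = 1.5, [f(1.5)]
for _ in range(50):
    d = -fp(x)
    alpha = 1.0
    while not nonmonotone_accept(f(x + alpha * d), history, alpha, fp(x) * d):
        alpha *= 0.5
    x += alpha * d
    history.append(f(x))
print(x, f(x))
```

Because the reference value is a running maximum rather than the last iterate, occasional increases are tolerated, which is what lets the unit step (needed for superlinear convergence in the SQP setting) be accepted more often.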

  • Item
    User's Guide for JAKEFSQP Version 1.0
    (1993) Mathur, P.D.; Tits, A.L.; ISR
    JAKEFSQP is a utility program that allows a user of the FSQP constrained optimization routines to invoke the automatic differentiation preprocessor JAKEF with minimal burden.
  • Item
    An SQP Algorithm for Finely Discretized SIP Problems and Other Problems with Many Constraints
    (1992) Zhou, J.L.; Tits, A.L.; ISR
    A common strategy for achieving global convergence in the solution of semi-infinite programming (SIP) problems is to (approximately) solve a sequence of discretized problems, with a progressively finer discretization mesh. Finely discretized SIP problems, as well as other problems with many more constraints than variables, call for algorithms in which successive search directions are computed based on a small but significant subset of the constraints, with ensuing reduced computing cost per iteration and decreased risk of numerical difficulties. In this paper, an SQP-type algorithm is proposed that incorporates this idea. The quadratic programming subproblem that yields the search direction involves only a small subset of the constraints. This subset is updated at each iteration in such a way that global convergence is ensured. Heuristics are suggested that take advantage of possible close relationships between "adjacent" constraints. Numerical results demonstrate the efficiency of the proposed algorithm.
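The progressive-refinement strategy can be shown on a one-dimensional toy SIP: minimize (x - 2)^2 subject to x <= b(t) for all t in [0, 1]. On each grid, the solution is available in closed form, so the sketch isolates the refinement loop itself (the bound b and the problem are illustrative; the paper's algorithm solves each discretized problem with its subset-based SQP):

```python
def solve_discretized(n):
    """Closed-form solution of  min (x-2)^2  s.t.  x <= b(t)  for t on an
    n-point uniform grid over [0, 1], where b(t) = 1 + (t - 0.37)^2.
    The binding t is interior, so the discretized optimum depends on
    how finely the grid resolves t = 0.37."""
    grid = [i / (n - 1) for i in range(n)]
    bound = min(1 + (t - 0.37) ** 2 for t in grid)
    # Unconstrained minimizer is x = 2; project it onto the feasible set.
    return min(2.0, bound)

# Progressive refinement of the discretization mesh:
for n in (5, 50, 500):
    print(n, solve_discretized(n))
# The discretized optima approach the true SIP optimum x* = 1.
```

Each refinement adds constraints near the binding parameter value, which is exactly the regime where restricting the QP to a small constraint subset pays off.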
  • Item
    User's Guide for FSQP Version 3.0c: A FORTRAN Code for Solving Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality and Linear Constraints
    (1992) Zhou, J.L.; Tits, A.L.; ISR
    FSQP 3.0c is a set of FORTRAN subroutines for the minimization of the maximum of a set of smooth objective functions (possibly a single one) subject to general smooth constraints. If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, FSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by FSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. The user has the option of either requiring that the (modified) objective function decrease at each iteration after feasibility for nonlinear inequality and linear constraints has been reached (monotone line search), or requiring a decrease within at most four iterations (nonmonotone line search). He/She must provide subroutines that define the objective functions and constraint functions and may either provide subroutines to compute the gradients of these functions or require that FSQP estimate them by forward finite differences.

    FSQP 3.0c implements two algorithms based on Sequential Quadratic Programming (SQP), modified so as to generate feasible iterates. In the first one (monotone line search), a certain Armijo type arc search is used with the property that the step of one is eventually accepted, a requirement for superlinear convergence. In the second one the same effect is achieved by means of a (nonmonotone) search along a straight line. The merit function used in both searches is the maximum of the objective functions if there is no nonlinear equality constraint.
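The minimax objective that FSQP minimizes, max_i f_i(x), can be handled as the smooth epigraph problem min z s.t. f_i(x) <= z, which is what makes SQP applicable. The toy below only illustrates the objective itself, using one-dimensional ternary search rather than FSQP's SQP machinery (the example functions are made up):

```python
def minimax(fs, lo, hi, iters=200):
    """Minimize max_i f_i(x) over [lo, hi] by ternary search.  Valid
    here because the pointwise maximum of convex functions is convex;
    FSQP instead solves the epigraph form  min z s.t. f_i(x) <= z
    with SQP in many dimensions."""
    F = lambda x: max(f(x) for f in fs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if F(m1) <= F(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x = minimax([lambda x: x ** 2, lambda x: (x - 2) ** 2], -5.0, 5.0)
print(x)  # near x = 1, where the two objectives cross with equal value 1
```

At the minimax solution several f_i are typically active simultaneously, which is why the objective is nonsmooth and a plain gradient method does not apply directly.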

  • Item
    A Measure of Worst-Case H∞ Performance and of Largest Acceptable Uncertainty
    (1991) Fan, Michael K-H.; Tits, A.L.; ISR
    The structured singular value (SSV or μ) is known to be an effective tool for assessing robust performance of linear time-invariant models subject to structured uncertainty. Yet all a single μ analysis provides is a bound β on the uncertainty under which stability as well as an H∞ performance level of k/β are guaranteed, where k is preselectable. In this paper, we introduce a related quantity, denoted by ν, which provides answers to the following questions: (i) given β, determine the smallest level with the property that, for any uncertainty bounded by β, an H∞ performance level of