Robust Control of Set-Valued Discrete Time Dynamical Systems

Files

PhD_95-8.pdf (7.53 MB)

Date

1995

Abstract

This thesis deals with the robust control of nonlinear systems subject to persistent, bounded, non-additive disturbances. Such disturbances may be exogenous signals or internal to the system, as in the case of parametric uncertainty. The problem solved can be viewed as an extension of l1-optimal control to nonlinear systems, though now under very general non-additive disturbance assumptions. We model such systems as inclusions and set up an equivalent robust control problem for the resulting set-valued dynamical system. Because inclusions can also arise from other considerations, we solve the control problem for this general class of systems. The state feedback problem is solved via a game-theoretic approach, wherein the controller plays against the plant. For the output feedback case, the concept of an information state is employed. The information state dynamics define a new infinite-dimensional system and enable us to achieve a separation between estimation and control. This concept is also extended to the case of delayed measurements. For motivational purposes, we formally derive the information state from a risk-sensitive stochastic control problem via small noise limits. In general, the solution to the output feedback case involves solving an infinite-dimensional dynamic programming equation. One way of avoiding this computation in practice is to consider certainty-equivalence-like controllers. We examine this issue and generalize the certainty equivalence controller to obtain other non-optimal, but dissipative, output feedback policies. The approach followed yields both necessary and sufficient conditions for the solvability of the problem. We also present some applications of the theory developed.
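For orientation, the game-theoretic state feedback formulation described in the abstract leads, in broad terms, to a min-max dynamic programming recursion of the kind sketched below. This is an illustrative sketch only; the symbols F, U, \ell, \Phi, and V are generic placeholders and are not taken from the thesis's own notation.

```latex
% Illustrative sketch (not the thesis's notation): state feedback dynamic
% programming for a set-valued (inclusion) discrete-time system
%   x_{k+1} \in F(x_k, u_k),
% where the controller (choosing u) minimizes and the inclusion, standing in
% for the plant and its disturbances, maximizes:
\[
  V_k(x) \;=\; \inf_{u \in U} \; \sup_{x^{+} \in F(x,u)}
    \Bigl[ \ell(x,u) + V_{k+1}(x^{+}) \Bigr],
  \qquad V_N(x) = \Phi(x).
\]
```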
