2000
We consider the synthesis of optimal controls for continuous feedback systems by recasting the problem as a hybrid optimal control problem: to synthesize optimal enabling conditions for switching between locations in which the control is constant. An algorithmic solution is obtained by translating the hybrid automaton to a finite automaton using a bisimulation and formulating a dynamic programming problem with extra conditions to ensure non-Zenoness of trajectories.
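For intuition only, the following Python sketch (not the paper's algorithm or data) illustrates the shape of such a computation: a hypothetical finite quotient obtained from a bisimulation, with nodes (region, location), timed "flow" edges inside a location where the control is constant, and instantaneous "switch" edges between locations; a dynamic-programming value iteration computes the cost-to-go while a just-switched flag forbids back-to-back instantaneous switches, a crude stand-in for the non-Zenoness conditions.

```python
import math

# Hypothetical quotient of a hybrid automaton: nodes are (region, location),
# "flow" edges take time inside a location (constant control), "switch" edges
# are instantaneous location changes with an enabling cost.
EDGES = {
    ("r0", "u_low"):  [("flow",   ("r1", "u_low"),   2.0),
                       ("switch", ("r0", "u_high"),  0.5)],
    ("r0", "u_high"): [("flow",   ("r1", "u_high"),  1.2)],
    ("r1", "u_low"):  [("flow",   ("goal", "u_low"), 3.0),
                       ("switch", ("r1", "u_high"),  0.5)],
    ("r1", "u_high"): [("flow",   ("goal", "u_high"), 1.5)],
    ("goal", "u_low"): [], ("goal", "u_high"): [],
}

def value_iteration(edges, goal_region="goal", sweeps=50):
    # State = (node, just_switched); the flag bans an immediate second switch,
    # a crude stand-in for a non-Zenoness condition.
    V = {(n, f): (0.0 if n[0] == goal_region else math.inf)
         for n in edges for f in (False, True)}
    for _ in range(sweeps):
        for node, outgoing in edges.items():
            if node[0] == goal_region:
                continue
            for flag in (False, True):
                best = math.inf
                for kind, nxt, cost in outgoing:
                    if kind == "switch" and flag:
                        continue
                    best = min(best, cost + V[(nxt, kind == "switch")])
                V[(node, flag)] = best
    return V

V = value_iteration(EDGES)
print("cost-to-go from (r0, u_low):", V[(("r0", "u_low"), False)])  # 3.2 here
```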
2001
We consider the synthesis of optimal controls for continuous feedback systems by recasting the problem as a hybrid optimal control problem: to synthesize optimal enabling conditions for switching between locations in which the control is constant. We provide a single-pass algorithm to solve the dynamic programming problem that arises, with added constraints to ensure non-Zeno trajectories.
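As a hedged illustration of what a single-pass computation can look like on a finite quotient with non-negative costs (toy graph and names invented, not the paper's algorithm), the sketch below settles each (node, just-switched) state exactly once in Dijkstra fashion; the flag again rules out back-to-back instantaneous switches.

```python
import heapq, math

# Toy quotient graph: nodes are (region, location); edges are timed "flow"
# steps or instantaneous "switch" steps with non-negative costs.
EDGES = {
    ("r0", "lo"): [("flow", ("r1", "lo"), 2.0), ("switch", ("r0", "hi"), 0.5)],
    ("r0", "hi"): [("flow", ("r1", "hi"), 1.2)],
    ("r1", "lo"): [("flow", ("goal", "lo"), 3.0), ("switch", ("r1", "hi"), 0.5)],
    ("r1", "hi"): [("flow", ("goal", "hi"), 1.5)],
    ("goal", "lo"): [], ("goal", "hi"): [],
}

def single_pass(edges, goal_region, start):
    # Label-setting: each (node, just_switched) state is settled exactly once,
    # so the value is obtained in a single pass over the priority queue.
    seen = set()
    heap = [(0.0, start, False)]
    while heap:
        cost, node, flag = heapq.heappop(heap)
        if (node, flag) in seen:
            continue
        seen.add((node, flag))
        if node[0] == goal_region:
            return cost
        for kind, nxt, c in edges[node]:
            if kind == "switch" and flag:
                continue   # non-Zeno: forbid consecutive instantaneous switches
            heapq.heappush(heap, (cost + c, nxt, kind == "switch"))
    return math.inf

print(single_pass(EDGES, "goal", ("r0", "lo")))   # 3.2 on this toy graph
```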
2005
We consider the synthesis of optimal controls for continuous feedback systems by recasting the problem as a hybrid optimal control problem: synthesize optimal enabling conditions for switching between locations in which the control is constant. An algorithmic solution is obtained by translating the hybrid automaton to a finite automaton using a bisimulation and formulating a dynamic programming problem with extra conditions to ensure non-Zenoness of trajectories.
We consider the optimal control of switched linear autonomous systems whose switching sequence is determined by a controlled automaton and where the objective is to minimize a quadratic performance index over an infinite time horizon. The quantities to be optimized are the sequence of switching times and the sequence of modes (or "locations"), under the following constraints: the sequence of modes has a finite length; the discrete dynamics of the automaton restricts the possible switches from a given location to an adjacent location, with a cost associated with each switch; the time interval between two consecutive switching times is greater than a fixed quantity. We first show how a state-feedback solution can be computed off-line through a numerical procedure. Then we show how the proposed procedure can be extended to the case of an infinite number of switches.
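For intuition only, the following Python sketch treats a tiny instance of this setting with made-up data: the mode sequence is fixed to a single switch, the dwell time in the first mode is optimized subject to a minimum dwell, each switch is charged a fixed cost, and the infinite-horizon tail cost in the final (stable) mode comes from a Lyapunov equation. It is not the off-line state-feedback procedure of the paper.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov
from scipy.optimize import minimize

A = [np.array([[0.0, 1.0], [-1.0, -0.2]]),     # mode 0 (lightly damped)
     np.array([[-0.5, 0.0], [0.0, -2.0]])]     # mode 1 (stable, used last)
Q = np.eye(2)
SWITCH_COST, DWELL_MIN = 0.1, 0.2
x_init = np.array([1.0, 0.0])

def stage_cost(Ai, x, tau, steps=200):
    # Integral of x(t)' Q x(t) over [0, tau] with x(t) = expm(Ai*t) x (trapezoid rule).
    ts = np.linspace(0.0, tau, steps + 1)
    vals = np.array([(expm(Ai * t) @ x) @ Q @ (expm(Ai * t) @ x) for t in ts])
    J = float(np.sum((vals[:-1] + vals[1:]) * 0.5) * (ts[1] - ts[0]))
    return J, expm(Ai * tau) @ x

def total_cost(dwells, modes=(0, 1)):
    x, J = x_init, 0.0
    for mode, tau in zip(modes[:-1], dwells):
        c, x = stage_cost(A[mode], x, tau)
        J += c + SWITCH_COST
    P = solve_continuous_lyapunov(A[modes[-1]].T, -Q)   # infinite-horizon tail: x' P x
    return J + float(x @ P @ x)

res = minimize(total_cost, x0=[1.0], bounds=[(DWELL_MIN, 10.0)])
print("optimal dwell in mode 0:", res.x, " total cost:", res.fun)
```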
European Journal of Control, 2011
Fifty years ago, control and computing were part of a broader system science. After a long period of separate development within each discipline, embedded and hybrid systems have challenged us to reunite the now sophisticated theories of continuous control and discrete computing on a broader system-theoretic basis. In this paper, we present a framework of system approximation that applies to both discrete and continuous systems. We define a hierarchy of approximation metrics between two systems that quantify the quality of the approximation and capture the established notions from computer science as their zero sections. The central notions in this framework are those of approximate simulation and bisimulation relations and their functional characterizations, called simulation and bisimulation functions, which are defined by Lyapunov-type inequalities. In particular, these functions can provide computable upper bounds on the approximation metrics by solving a static game. We illustrate the approximation framework with applications to problems such as reachability analysis of continuous and hybrid systems, approximation of continuous and hybrid systems by discrete systems, hierarchical control design, and simulation-based approaches to the verification of continuous and hybrid systems.
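As a hedged toy illustration of the simulation/bisimulation-function idea (not the paper's constructions), the sketch below compares two trajectories of one stable linear system observed through the same output and builds a quadratic function V satisfying the two Lyapunov-type conditions, so that V evaluated at the initial states upper-bounds the output distance for all time.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -1.0]])     # Hurwitz
C = np.array([[1.0, 0.0]])

# Solve A'M + MA = -C'C, then scale M up until M >= C'C; scaling preserves the
# decrease condition, so V(x1, x2) = sqrt((x1-x2)' M (x1-x2)) satisfies both
# Lyapunov-type inequalities of a bisimulation function for this pair.
M = solve_continuous_lyapunov(A.T, -(C.T @ C))
while np.min(np.linalg.eigvalsh(M - C.T @ C)) < 0.0:
    M *= 2.0

def V(x1, x2):
    e = x1 - x2
    return float(np.sqrt(e @ M @ e))

x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 0.5])
bound = V(x1, x2)                             # valid for all t >= 0
gap = max(abs((C @ (expm(A * t) @ (x1 - x2)))[0])
          for t in np.linspace(0.0, 20.0, 400))
print(f"precomputed bound {bound:.3f} >= largest observed output gap {gap:.3f}")
```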
IEEE Transactions on Automatic Control, 1998
Complex natural and engineered systems typically possess a hierarchical structure, characterized by continuous-variable dynamics at the lowest level and logical decision-making at the highest. Virtually all control systems today, from flight control to the factory floor, perform computer-coded checks and issue logical as well as continuous-variable control commands. The interaction of these different types of dynamics and information leads to a challenging set of "hybrid" control problems. We propose a very general framework that systematizes the notion of a hybrid system, combining differential equations and automata, governed by a hybrid controller that issues continuous-variable commands and makes logical decisions. We first identify the phenomena that arise in real-world hybrid systems. Then, we introduce a mathematical model of hybrid systems as interacting collections of dynamical systems, evolving on continuous-variable state spaces and subject to continuous controls and discrete transitions. The model captures the identified phenomena, subsumes previous models, yet retains enough structure on which to pose and solve meaningful control problems. We develop a theory for synthesizing hybrid controllers for hybrid plants in an optimal control framework. In particular, we demonstrate the existence of optimal (relaxed) and near-optimal (precise) controls and derive "generalized quasi-variational inequalities" that the associated value function satisfies. We summarize algorithms for solving these inequalities based on a generalized Bellman equation, impulse control, and linear programming.
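A minimal numerical caricature of such a generalized Bellman equation may help fix ideas. The sketch below (all dynamics, costs, and the grid are invented) runs a grid-based value iteration for a one-dimensional plant with two locations, a few continuous control values, a running cost, and a fixed switching cost, taking at each grid point the cheaper of a continuous step and an instantaneous switch.

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 81)          # 1-D continuous state grid
dt, switch_cost, gamma = 0.05, 0.3, 0.99
controls = [-1.0, 0.0, 1.0]
drift = {0: lambda x, u: -x + u,         # location 0 (stable drift)
         1: lambda x, u:  x + u}         # location 1 (unstable drift)

def running_cost(x, u):
    return x * x + 0.1 * u * u

V = np.zeros((2, xs.size))
for _ in range(400):
    V_new = np.empty_like(V)
    for q in (0, 1):
        for i, x in enumerate(xs):
            # continuous evolution under each admissible control value
            flow = min(running_cost(x, u) * dt + gamma *
                       np.interp(np.clip(x + drift[q](x, u) * dt, xs[0], xs[-1]),
                                 xs, V[q])
                       for u in controls)
            # instantaneous switch to the other location, paying switch_cost
            jump = switch_cost + V[1 - q, i]
            V_new[q, i] = min(flow, jump)
    V = V_new
print("V(location 1, x = 1.5) ≈", np.interp(1.5, xs, V[1]))
```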
2005
This paper presents a method for optimal control of hybrid systems. An inequality of Bellman type is considered and every solution to this inequality gives a lower bound on the optimal value function. A discretization of this "hybrid Bellman inequality" leads to a convex optimization problem in terms of finite-dimensional linear programming. From the solution of the discretized problem, a value function that preserves the lower bound property can be constructed. An approximation of the optimal feedback control law is given and tried on some examples.
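To make the lower-bound construction concrete under strong simplifications, here is a hedged sketch on an invented toy problem (minimum time to the origin for dx/dt = u, u in {-1, +1}, on a uniform grid): every V feasible for the discretized Bellman inequality is a lower bound on the true time-to-go |x|, and maximizing the sum of the grid values via linear programming tightens the bound.

```python
import numpy as np
from scipy.optimize import linprog

h = 0.1
xs = np.arange(-1.0, 1.0 + h / 2, h)        # grid aligned with the time step
n = xs.size
target = int(np.argmin(np.abs(xs)))         # index of x = 0

A_ub, b_ub = [], []
for i in range(n):
    if i == target:
        continue
    for j in (i - 1, i + 1):                # one Euler step with u = -1 or +1
        j = min(max(j, 0), n - 1)
        row = np.zeros(n)
        row[i] += 1.0
        row[j] -= 1.0                       # Bellman inequality: V_i - V_j <= h
        A_ub.append(row)
        b_ub.append(h)

bounds = [(0.0, None)] * n
bounds[target] = (0.0, 0.0)                 # V = 0 on the target
res = linprog(c=-np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=bounds)                # maximize sum(V)
V = res.x
print("lower bound at x = 0.7:", V[int(np.argmin(np.abs(xs - 0.7)))])  # ≈ 0.7
```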
Journal of Dynamical and Control Systems, 2003
Hybrid control systems are described by a family of continuous subsystems and a set of logic rules for switching between them. This paper concerns a broad class of optimization problems for hybrid systems, in which the continuous subsystems are modelled as differential inclusions. The formulation allows endpoint constraints and a general objective function that includes "transaction costs" associated with abrupt changes of discrete and continuous states, and terms associated with continuous control action as well as the terminal value of the continuous state. In consequence of the endpoint constraints, the value function may be discontinuous. It is shown that the collection of value functions (associated with all discrete states) is the unique lower semicontinuous solution of a system of generalized Bensoussan-Lions type quasi-variational inequalities, suitably interpreted for nondifferentiable, extended valued functions. It is also shown how optimal strategies and value functions are related. The proof techniques are system theoretic, i.e., based on the construction of state trajectories with suitable properties. A distinctive feature of the analysis is that it permits an infinite set of discrete states.
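Purely as a numerical caricature of this structure (toy data, not the paper's setting), the sketch below runs a finite-horizon backward recursion for two discrete states whose continuous dynamics are velocity intervals (a differential inclusion), with a transaction cost for switching and a terminal endpoint constraint encoded as a very large value outside the target; the minimum between the continuous step and the switch step mirrors the quasi-variational coupling between the two value functions.

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 161)
dt, steps = 0.05, 40                       # horizon T = 2.0
vel_sets = {0: np.linspace(-0.5, 0.5, 5),  # admissible velocities in q = 0
            1: np.linspace(-1.5, 1.5, 5)}  # faster but costlier discrete state
run_cost = {0: 0.2, 1: 1.0}
switch_cost = 0.5
target = np.abs(xs) <= 0.25                # endpoint constraint: end near 0

INF = 1e9
V = {q: np.where(target, 0.0, INF) for q in (0, 1)}   # terminal condition
for _ in range(steps):
    V_new = {}
    for q in (0, 1):
        flow = np.full(xs.size, INF)
        for v in vel_sets[q]:              # minimize over the inclusion F_q
            nxt = np.interp(np.clip(xs + v * dt, xs[0], xs[-1]), xs, V[q])
            flow = np.minimum(flow, run_cost[q] * dt + nxt)
        jump = switch_cost + V[1 - q]      # coupling between discrete states
        V_new[q] = np.minimum(flow, jump)
    V = V_new
# Finite only because a switch to the fast discrete state q = 1 is allowed:
print("V(q=0, x=1.8) ≈", np.interp(1.8, xs, V[0]))
```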
IEEE Transactions on Automatic Control, 2007
A class of hybrid optimal control problems (HOCPs) for systems with controlled and autonomous location transitions is formulated, and a set of necessary conditions for hybrid system trajectory optimality is presented which together constitute generalizations of the standard Maximum Principle; these are given for the cases of open bounded control value sets and compact control value sets. The derivations in the paper employ: (i) classical variational and needle variation techniques; and (ii) a local controllability condition which is used to establish the adjoint and Hamiltonian jump conditions in the autonomous switching case. Employing the hybrid minimum principle (HMP) necessary conditions, a class of general HMP-based algorithms for hybrid systems optimization is presented and analyzed for the autonomous switching case and the controlled switching case. Using results from the theory of penalty function methods and Ekeland's variational principle, the convergence of these algorithms is established under reasonable assumptions. The efficacy of the proposed algorithms is illustrated via computational examples.
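The flavour of an HMP-based iteration can be conveyed on a deliberately small invented example: one controlled switch between two scalar modes, where the derivative of the cost with respect to the switching time is the Hamiltonian gap p(ts)(f1(x) - f2(x)) at the switch, obtained from a forward state pass and a backward costate pass. This is a hedged sketch, not one of the paper's algorithms.

```python
import numpy as np

# Toy data (invented): mode 1 is unstable (dx/dt = x), mode 2 is stable
# (dx/dt = -x); running cost L(x) = (x - 1)^2 over the horizon [0, T].
T, N = 2.0, 4000
dt = T / N
x_init = 0.3
f = [lambda x: x, lambda x: -x]
dfdx = [1.0, -1.0]

def forward(ts):
    xs = np.empty(N + 1)
    xs[0] = x_init
    for k in range(N):
        xs[k + 1] = xs[k] + dt * f[0 if k * dt < ts else 1](xs[k])
    return xs

def cost_and_gradient(ts):
    xs = forward(ts)
    J = float(np.sum((xs[:-1] - 1.0) ** 2) * dt)
    k_s = int(ts / dt)
    # Costate on [ts, T] (mode 2 active): dp/dt = -(dL/dx + p * df2/dx),
    # p(T) = 0, integrated backwards; only p(ts) is needed for the gradient.
    p = 0.0
    for k in range(N, k_s, -1):
        p += dt * (2.0 * (xs[k - 1] - 1.0) + p * dfdx[1])
    x_s = xs[k_s]
    # Switching-time gradient from the Hamiltonian gap: dJ/dts = p(ts)*(f1 - f2).
    return J, p * (f[0](x_s) - f[1](x_s))

ts = 0.5
for _ in range(200):
    J, g = cost_and_gradient(ts)
    ts = float(np.clip(ts - 0.1 * g, dt, T - dt))
print(f"switching time ≈ {ts:.3f}, cost ≈ {J:.4f}")   # interior optimum for these data
```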
IEEE Transactions on Automatic Control, 2001
We present a modeling framework for hybrid systems intended to capture the interaction of event-driven and time-driven dynamics. This is motivated by the structure of many manufacturing environments where discrete entities (termed jobs) are processed through a network of workcenters so as to change their physical characteristics. Associated with each job is a temporal state subject to event-driven dynamics and a physical state subject to time-driven dynamics. Based on this framework, we formulate and analyze a class of optimal control problems for single-stage processes. First-order optimality conditions are derived and several properties of optimal state trajectories (sample paths) are identified which significantly simplify the task of obtaining explicit optimal control policies.
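A toy numerical rendering of this setup (all arrival times and cost shapes are invented) may help: the temporal state follows the event-driven max-plus recursion x_i = max(a_i, x_{i-1}) + s_i, the service times s_i act as controls, and a service-effort cost is traded off against total system time. This is only a sketch of the framework, not the paper's optimality conditions or policies.

```python
import numpy as np
from scipy.optimize import minimize

arrivals = np.array([0.0, 0.5, 0.8, 2.0])    # hypothetical arrival times a_i

def completion_times(s):
    # Event-driven (Lindley-type) dynamics: x_i = max(a_i, x_{i-1}) + s_i.
    x, xs = -np.inf, []
    for a_i, s_i in zip(arrivals, s):
        x = max(a_i, x) + s_i
        xs.append(x)
    return np.array(xs)

def total_cost(s):
    x = completion_times(s)
    # Service effort 1/s_i (faster service is costlier) plus total system time.
    return float(np.sum(1.0 / s) + np.sum(x - arrivals))

s0 = np.ones_like(arrivals)
res = minimize(total_cost, s0, bounds=[(1e-3, None)] * arrivals.size)
print("optimal service times:", np.round(res.x, 3))
print("completion times:     ", np.round(completion_times(res.x), 3))
```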
Discrete Event Dynamic Systems, 2005
European Journal of Control, 2003
Modeling, Control and Optimization of Complex Systems, 2003
49th IEEE Conference on Decision and Control (CDC), 2010
Control Engineering Practice, 2004
42nd IEEE Conference on Decision and Control (CDC), 2003
IEEE Transactions on Automatic Control, 2006
Discrete Event Dynamic Systems, 1998
Nonlinear Analysis: Theory, Methods & Applications, 2006