2013, Journal of Differential Equations
…
We consider a control problem where the state must approach a target C asymptotically while paying an integral cost with a non-negative Lagrangian l. The dynamics f is merely continuous, and no assumptions are made on the zero level set of the Lagrangian l. Through an inequality involving a positive number p_0 and a Minimum Restraint Function U = U(x) (a special type of Control Lyapunov Function), we provide a condition implying that (i) the system is asymptotically controllable, and (ii) the value function is bounded above by U/p_0. The result has significant consequences for the uniqueness issue of the corresponding Hamilton-Jacobi equation. Furthermore, it may be regarded as a first step toward a feedback construction.
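For orientation, the decrease condition behind a Minimum Restraint Function typically looks as follows; this is a hedged sketch in standard notation, not the paper's verbatim hypothesis. Writing A for the control set, one requires, for every x outside the target C,
$$ \inf_{a\in A}\Big\{\langle \nabla U(x),\, f(x,a)\rangle + p_0\, l(x,a)\Big\} < 0, $$
so that along suitably selected trajectories U strictly decreases (giving asymptotic controllability) while the accumulated cost stays below U(x)/p_0.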
Journal of Functional Analysis, 1990
ESAIM: Control, Optimisation and Calculus of Variations, 2013
The paper deals with a deterministic optimal control problem with state constraints and nonlinear dynamics. It is known that for such problems the value function is in general discontinuous, and its characterization by means of a Hamilton-Jacobi equation requires controllability assumptions involving the dynamics and the set of state constraints. Here, we first adopt the viability point of view and study the value function through its epigraph. We then prove that this epigraph can always be described by an auxiliary optimal control problem free of state constraints, whose value function is Lipschitz continuous and can be characterized, without any additional assumptions, as the unique viscosity solution of a Hamilton-Jacobi equation. The idea introduced in this paper bypasses the regularity issues of the value function of the constrained control problem and leads to a constructive way to compute its epigraph by a large panel of numerical schemes. Our approach extends to more general control problems: we study the extension to the infinite horizon problem as well as to the two-player game setting. Finally, an illustrative numerical example shows the relevance of the approach.
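As a hedged sketch of the construction (notation mine, not the paper's): if v denotes the constrained value function, J(x,u) the cost, y_x^u the trajectory, and g ≤ 0 encodes the state constraints, one may introduce the auxiliary, unconstrained value function
$$ w(x,z) \;=\; \inf_{u}\,\max\Big\{\, J(x,u) - z,\ \sup_{t}\, g\big(y_x^u(t)\big) \Big\}, $$
whose zero sublevel set {(x,z) : w(x,z) ≤ 0} describes the epigraph of v; w is Lipschitz continuous and is characterized as the unique viscosity solution of an unconstrained Hamilton-Jacobi equation.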
Funkcialaj Ekvacioj, 1994
Applied Mathematics & Optimization, 1991
We present two convergence theorems for Hamilton-Jacobi equations and apply them to the convergence of approximations and perturbations of optimal control problems and of two-player zero-sum differential games. One of our results is, for instance, the following. Let T and T_h be the minimum time functions to reach the origin for two control systems y' = f(y,a) and y' = f_h(y,a), both locally controllable at the origin, and let K be any compact set of points controllable to the origin. If ||f_h - f||_∞ ≤ Ch, then |T(x) - T_h(x)| ≤ C_K h^α for all x ∈ K, where α is the exponent of Hölder continuity of T(x).
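In display form, the estimate asserted above reads (a reconstruction from the damaged source text):
$$ \|f_h - f\|_{\infty} \le C h \quad\Longrightarrow\quad |T(x) - T_h(x)| \le C_K\, h^{\alpha} \quad \text{for all } x \in K, $$
with K a compact set of points controllable to the origin and α the Hölder exponent of T.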
IFAC Proceedings Volumes, 1992
SIAM Journal on Control and Optimization, 1989
Optimal control problems, with no discount, are studied for systems governed by nonlinear "parabolic" state equations, using a dynamic programming approach. If the dynamics are stabilizable with respect to the cost, we prove that the value function is a generalized viscosity solution of the associated Hamilton-Jacobi equation; this yields the feedback formula. Moreover, uniqueness is obtained under suitable stability assumptions. Key words: optimal control, Hamilton-Jacobi equations, viscosity solutions, evolution equations, unbounded operators. AMS(MOS) subject classifications: 49C20, 34G20
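As a hedged illustration (sign conventions vary by author): for a state equation y' + Ay = f(y,u) with A an unbounded operator and an undiscounted cost ∫ l(y(t),u(t)) dt over the half-line, the stationary Hamilton-Jacobi equation for the value function V takes a form like
$$ \langle Ax,\, DV(x)\rangle + \sup_{u}\big\{ -\langle f(x,u),\, DV(x)\rangle - l(x,u) \big\} = 0, $$
and the feedback formula selects, at each state x, a control attaining the supremum.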
2008
This lecture will highlight the contributions of optimal control theory to geometry and mechanics. The basic objects of study are the reachable sets of families of vector fields parametrized by control functions. We will show how the extremal properties of the reachable sets lead to Hamiltonians, and how these Hamiltonians alter our understanding of the classical calculus of variations, in which the Euler-Lagrange equation is a focal point of the subject.
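For the reader's convenience (standard background, not a quotation from the lecture): for a control system x' = f(x,u), the Hamiltonian in question is
$$ H(x,p,u) = \langle p,\, f(x,u)\rangle, $$
and trajectories reaching the boundary of the reachable set are projections of extremals of the associated Hamiltonian system; the Euler-Lagrange equation of the classical calculus of variations is recovered in the special case x' = u.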
Nonlinear Differential Equations and Applications NoDEA, 2020
We extend the classical concepts of sampling and Euler solutions for control systems associated with discontinuous feedbacks by considering also the corresponding costs. In particular, we introduce the notions of sample and Euler stabilizability to a closed target set C with (p_0, W)-regulated cost, for some continuous, state-dependent function W and some constant p_0 > 0: roughly, we require the existence of a stabilizing feedback K such that all the corresponding sampling and Euler solutions starting from a point z have suitably defined finite costs, bounded above by W(z)/p_0. Then, we show how the existence of a special, semiconcave Control Lyapunov Function W, called a p_0-Minimum Restraint Function, allows us to construct such a feedback K explicitly. When the dynamics and Lagrangian are Lipschitz continuous in the state variable, we prove that a K as above can still be obtained if the...
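A hedged paraphrase of the sampling notion (notation partly mine): given a feedback K and a partition π = {t_i} of the time axis, the sampling solution from z follows x' = f(x, K(x(t_i))) on each interval [t_i, t_{i+1}], and (p_0, W)-regulated cost roughly requires
$$ \int_{0}^{\tau} l\big(x(t),\, K(x(t_i))\big)\, dt \;\le\; \frac{W(z)}{p_0}, $$
with τ the (possibly infinite) horizon over which the target C is approached, uniformly over sufficiently fine partitions; Euler solutions and their costs are then handled as uniform limits.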
2006
In this article, we present and discuss the infinite horizon optimal control problem subject to stability constraints. First, we consider optimality conditions of the Hamilton-Jacobi-Bellman type, and present a method to define a feedback control strategy. Then, we address necessary conditions of optimality in the form of a maximum principle. These are derived from an auxiliary optimal control problem with mixed constraints.
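For context, a hedged sketch of the dynamic-programming ingredients (standard form, not quoted from the article): if V denotes the value function of the infinite horizon problem, the Hamilton-Jacobi-Bellman condition and the induced feedback read
$$ \inf_{u\in U}\big\{\langle DV(x),\, f(x,u)\rangle + l(x,u)\big\} = 0, \qquad k(x) \in \operatorname*{arg\,min}_{u\in U}\big\{\langle DV(x),\, f(x,u)\rangle + l(x,u)\big\}, $$
with the stability constraint restricting the admissible processes over which the infimum is taken.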
Applied Mathematics and Optimization, 1999
In this paper we extend to completely general nonlinear systems the result stating that the H∞ suboptimal control problem is solved if and only if the corresponding Hamilton-Jacobi-Isaacs (HJI) equation has a nonnegative (super)solution. This is well known for linear systems, using the Riccati equation instead of the HJI equation. We do this using the theory of differential games and viscosity solutions.
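In hedged, standard notation (not lifted from the paper): for a plant x' = f(x,u,w) with penalized output z = h(x,u), disturbance w, and attenuation level γ, the HJI condition asks for a nonnegative (super)solution V of
$$ \min_{u}\,\max_{w}\,\Big\{ \langle DV(x),\, f(x,u,w)\rangle + |h(x,u)|^{2} - \gamma^{2}|w|^{2} \Big\} \le 0, $$
which, for linear dynamics and quadratic cost, reduces to the familiar Riccati inequality.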
ESAIM: Control, Optimisation and Calculus of Variations, 2011
Systems & Control Letters, 2007
Advances in Differential Equations, 1999
Journal of Mathematical Analysis and Applications, 2000
Journal of Dynamical and Control Systems, 2016
Proceedings of the 18th IFAC World Congress, 2011
Journal of Convex Analysis
Journal of Differential Equations, 2012
Journal of Differential Equations, 1983
Annales de l'Institut Henri Poincaré C, Analyse non linéaire, 2012
SIAM Journal on Control and Optimization, 2001
Nonlinear Analysis: Theory, Methods & Applications, 1981
Differential and Integral Equations, 1999
IMA Journal of Mathematical Control and Information, 2013
Systems & Control Letters, 2000
Communications in Partial Differential Equations, 2015