2003
Control theory is a young branch of mathematics that has developed mostly in the realm of engineering problems. It is split into two major branches: control theory of problems described by partial differential equations, where controls are exercised through boundary terms and/or inhomogeneous terms and where the objective functionals are mostly quadratic forms; and control theory of problems described by parameter-dependent ordinary differential equations, where it is more common to deal with nonlinear systems and non-quadratic objective functionals [49]. Although control theory can be considered part of the general theory of differential equations, the problems that inspire it and some of the results obtained so far have shaped a theory with a strong and distinct personality that is already offering interesting returns to its ancestors. For instance, the geometrization of nonlinear affine-input control theory problems by introducing Lie-geometrical methods into...
1982
A general formalism is introduced for the optimal control problem on manifolds. It is based on a general formulation of Lagrange's multiplier theorem and recent definitions of nonlinear control systems. It is shown that Pontryagin's maximum principle can be stated in this formalism. We expect that the problem formulation given in this paper is particularly suitable for applying modern results on controllability and related topics in nonlinear control systems.
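For orientation, the maximum principle referenced in this abstract can be sketched in standard coordinate form; the symbols below (dynamics f, running cost L, costate p, control set U) are generic placeholders, not notation taken from the paper itself:

```latex
% Sketch of Pontryagin's maximum principle for
%   \dot{x} = f(x,u), \quad J(u) = \int_0^T L(x(t),u(t))\,dt \to \min.
% Define the (maximized-form) Hamiltonian
H(x,p,u) = \langle p, f(x,u)\rangle - L(x,u).
% If (x^*,u^*) is optimal, there exists a costate p(\cdot) satisfying
\dot{p}(t) = -\frac{\partial H}{\partial x}\bigl(x^*(t),p(t),u^*(t)\bigr),
\qquad
H\bigl(x^*(t),p(t),u^*(t)\bigr) = \max_{u\in U} H\bigl(x^*(t),p(t),u\bigr).
```

On a manifold, as in the formalism described above, the costate p(t) lives in the cotangent space at x*(t) and the adjoint equation is expressed intrinsically on the cotangent bundle.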
1998
This is the second article in the series that began in . Jacobi curves were defined, computed, and studied in that paper for regular extremals of smooth control systems. Here we do the same for singular extremals. The last section contains a feedback classification and normal forms of generic single-input affine-in-control systems on a 3-dimensional manifold.
Springer INdAM Series, 2015
This paper is devoted to second-order necessary optimality conditions for the Mayer optimal control problem with an arbitrary closed control set U ⊂ R^m. Admissible controls are assumed to be measurable and essentially bounded. Using second-order tangents to U, we first show that if ū(·) is an optimal control, then an associated quadratic functional must be nonnegative for all elements in the second-order jets to U along ū(·). We then specialize the obtained results to the case when U is given by a finite number of C²-smooth inequalities with positively independent gradients of active constraints. The novelty of our approach is due, on the one hand, to the arbitrariness of U; on the other hand, the proofs we propose are quite straightforward and do not use embedding of the problem into a class of infinite-dimensional mathematical-programming-type problems. As an application we derive new second-order necessary conditions for a free end-time optimal control problem in the case when the optimal control is piecewise Lipschitz.
2007
A geometric setup for control theory is presented. The argument is developed through the study of the extremals of action functionals defined on piecewise differentiable curves, in the presence of differentiable non-holonomic constraints. Special emphasis is put on the tensorial aspects of the theory. To start with, the kinematical foundations, culminating in the so-called variational equation, are put on geometrical grounds via the introduction of the concept of infinitesimal control. On the same basis, the usual classification of the extremals of a variational problem into normal and abnormal ones is also rationalized, showing the existence of a purely kinematical algorithm assigning to each admissible curve a corresponding abnormality index, defined in terms of a suitable linear map. The whole machinery is then applied to constrained variational calculus. The argument provides an interesting revisitation of the Pontryagin maximum principle and of the Erdmann-Weierstrass corner conditions, as well as a proof of the classical Lagrange multipliers method and a local interpretation of Pontryagin's equations as dynamical equations for a free (singular) Hamiltonian system. As a final, highly non-trivial topic, a sufficient condition for the existence of finite deformations with fixed endpoints is explicitly stated and proved.
Lecture Notes in Mathematics, 2008
These notes are based on the mini-course given in June 2004 in Cetraro, Italy, in the framework of a C.I.M.E. school. Of course, they contain much more material than I could present in the 6-hour course. The goal was to give an idea of the general variational and dynamical nature of nice and powerful concepts and results mainly known in the narrow framework of Riemannian geometry. This concerns Jacobi fields, Morse's index formula, the Levi-Civita connection, Riemannian curvature, and related topics. I tried to make the presentation as light as possible: I gave more details in smooth regular situations and referred to the literature in more complicated cases. There is evidence that the results described in the notes and treated in the technical papers we refer to are just parts of a unified beautiful subject to be discovered at the crossroads of Differential Geometry, Dynamical Systems, and Optimal Control Theory. I will be happy if the course and the notes encourage some young ambitious researchers to take part in the discovery and exploration of this subject.
Springer INdAM Series, 2015
In this paper, we describe a constrained Lagrangian and Hamiltonian formalism for the optimal control of nonholonomic mechanical systems. In particular, we aim to minimize a cost functional, given initial and final conditions where the controlled dynamics is given by nonholonomic mechanical system. In our paper, the controlled equations are derived using a basis of vector fields adapted to the nonholonomic distribution and the Riemannian metric determined by the kinetic energy. Given a cost function, the optimal control problem is understood as a constrained problem or equivalently, under some mild regularity conditions, as a Hamiltonian problem on the cotangent bundle of the nonholonomic distribution. A suitable Lagrangian submanifold is also shown to lead to the correct dynamics. We demonstrate our techniques in several examples including a continuously variable transmission problem and motion planning for obstacle avoidance problems.
2000
In this Note we establish the existence and uniqueness of solutions of optimal control problems for the 2D Navier-Stokes equations in a 2D domain. Our approach is based on infinite-dimensional optimization; the cost functional is shown to be strictly convex. Generalizations to other control problems, as well as a gradient algorithm, are presented.
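A cost functional of the kind whose strict convexity is invoked above typically has quadratic tracking-plus-regularization form; the notation below (target field u_d, control f, regularization weight α, domain Ω, horizon T) is illustrative and not taken from the Note:

```latex
% A typical quadratic cost for distributed control f of a velocity field u:
J(u,f) = \frac{1}{2}\int_0^T\!\!\int_\Omega |u - u_d|^2 \,dx\,dt
       + \frac{\alpha}{2}\int_0^T\!\!\int_\Omega |f|^2 \,dx\,dt,
\qquad \alpha > 0.
```

The regularization term with α > 0 is what makes such a functional strictly convex in the control, which is the usual route to uniqueness of the minimizer.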
Asymptotic Analysis, 2008
We consider an elliptic distributed quadratic optimal control problem with exact controllability constraints on a part of the domain which, in turn, is parametrized by a small parameter ε. The quadratic tracking type functional is defined on the remaining part of the domain. We thus consider a family of optimal control problems with state equality constraints. The purpose of this paper is to study the asymptotic limit of the optimal control problems as the parameter ε tends to zero. The analysis presented is in the spirit of the direct approach of the calculus of variations. This is achieved in the framework of relaxed problems. We finally apply the procedure to an optimal control problem on a perforated domain with holes of critical size. It is shown that a strange term in the terminology of Cioranescu and Murat (Prog. Nonlinear Diff.
2013
In this paper, the multitime optimal control problem consists in devising a control that transfers a completely integrable linear PDE system from some given initial state to a specified target (which may be fixed or moving) in an optimal multitime characterized by a minimum mechanical work. For that we use an appropriate curvilinear integral action. This kind of problem is based on Hamiltonian 1-forms depending linearly on the controls. They exhibit additional features which we now discuss. Firstly, we underline some historical data of interest for optimal problems with curvilinear integral cost. Secondly, our original results concentrate on: (1) the existence of multitime optimal controls for problems associated to a curvilinear integral action and a linear m-flow type PDE system, (2) some properties of the reachable set, (3) the maximum principle for linear multitime optimal control problems fixed by a curvilinear integral action and an m-flow type PDE system, (4) the ...
1983
In this paper we present a differential geometric approach to the infinite-horizon optimal control problem for nonlinear time-invariant control systems. It uses a recently proposed fibre bundle approach for the definition of nonlinear systems. The approach yields a coordinate-free first-order characterization of optimal curves without a priori regularity conditions. The usefulness of the approach is motivated and illustrated with the linear-quadratic optimal control problem.
SIAM Journal on Control and Optimization, 1984
In this paper we present a differential geometric approach to the Lagrange problem and the fixed time optimal control problem for nonlinear time-invariant control systems. We restrict attention to first order conditions for optimality and present a generalized Lagrange multiplier rule for restricted variational problems. Our treatment of the optimal control problem uses a recently proposed fibre bundle approach for the definition of nonlinear systems.
Journal of Mathematical Analysis and Applications, 2003
Dynamic programming identifies the value function of continuous-time optimal control with a solution to the Hamilton-Jacobi equation, appropriately defined. This relationship in turn leads to sufficient conditions of global optimality, which have been widely used to confirm the optimality of putative minimisers. In continuous-time optimal control, the dynamic programming methodology has been used for problems whose state space is a vector space. However, there are many problems of interest in which it is necessary to regard the state space as a manifold. This paper extends dynamic programming to cover problems in which the state space is a general finite-dimensional C^∞ manifold. It shows that, also in a manifold setting, the value function of a free-time optimal control problem can be characterised as the unique lower semicontinuous, lower bounded, generalised solution of the Hamilton-Jacobi equation. The application of these results is illustrated by the investigation of minimum-time controllers for a rigid pendulum.
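For context, the Hamilton-Jacobi equation this abstract refers to has, for a free-time problem with dynamics f and running cost L, the familiar stationary form below; the symbols (value function V, target set S) are generic placeholders rather than the paper's notation:

```latex
% Stationary Hamilton-Jacobi(-Bellman) equation for a free-time problem:
\min_{u\in U}\Bigl\{\bigl\langle \nabla V(x),\, f(x,u)\bigr\rangle + L(x,u)\Bigr\} = 0
\quad \text{on } M \setminus S,
\qquad V = 0 \ \text{on the target set } S.
```

For the minimum-time problem mentioned at the end of the abstract one takes L ≡ 1, so V(x) is the least time to reach S from x; on a manifold, ∇V is read as the differential dV acting on tangent vectors, and "solution" must be understood in the generalised (lower semicontinuous) sense the paper develops.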
Reports on Mathematical Physics, 2003
A general study of symmetries in optimal control theory is given, starting from the presymplectic description of this kind of system. Then, Noether's theorem, as well as the corresponding reduction procedure (based on the application of the Marsden-Weinstein theorem adapted to the presymplectic case) are stated both in the regular and singular cases, which are previously described.
IFAC-PapersOnLine
The object of this paper is to study the uniqueness of solutions of inverse control problems in the case where the dynamics is given by a control-affine system without drift and the costs are length and energy functionals.
Journal of Dynamical and Control Systems, 1997
The aim of this paper is to adapt the general multitime maximum principle to a Riemannian setting. More precisely, we intend to study geometric optimal control problems constrained by the metric compatibility evolution PDE system; the evolution ("multitime") variables are the local coordinates on a Riemannian manifold, the state variable is a Riemannian structure, and the control is a linear connection compatible with the Riemannian metric. We apply the obtained results to solve two flow-type optimal control problems in a Riemannian setting: firstly, we maximize the total divergence of a fixed vector field; secondly, we optimize the total Laplacian (the gradient flux) of a fixed differentiable function. Each time, the result is a bang-bang-type optimal linear connection. Moreover, we emphasize the possibility of choosing at least two soliton-type optimal (semi-)Riemannian structures. Finally, these theoretical examples help us to conclude about the geometric optimal s...