Understanding Adaptive Control Systems

Adaptive control is a method for controlling systems whose parameters may be unknown or time-varying. It involves combining an online parameter estimator with a control law. There are two main approaches: indirect adaptive control uses estimated parameters to calculate controller parameters, while direct adaptive control directly estimates desired controller parameters. Non-identifier based adaptive control replaces the online parameter estimator with search methods or switching between fixed controllers. Gain scheduling is a non-identifier based method that switches controller gains based on detected operating points.
© Attribution Non-Commercial (BY-NC)

ADAPTIVE CONTROL

According to Webster's dictionary, to adapt means to "change (oneself) so that one's behavior will conform to new or changed circumstances." The terms adaptive systems and adaptive control have been in use as early as 1950. This generic definition of adaptive systems has been used to label approaches and techniques in a variety of areas, despite the fact that the problems considered and the approaches followed often have very little in common. The specific definition of adaptive control used here is the following: Adaptive control is the combination of a parameter estimator, which generates parameter estimates online, with a control law, in order to control classes of plants whose parameters are completely unknown and/or could change with time in an unpredictable manner. The choice of the parameter estimator, the choice of the control law, and the way they are combined lead to the different classes of adaptive control schemes covered in this book. Adaptive control as defined above has also been referred to as identifier-based adaptive control, in order to distinguish it from other approaches, referred to as non-identifier-based, where similar control problems are solved without the use of an online parameter estimator.

The design of autopilots for high-performance aircraft was one of the primary motivations for active research in adaptive control in the early 1950s. Aircraft operate over a wide range of speeds and altitudes, and their dynamics are nonlinear and conceptually time-varying. For a given operating point, however, the complex aircraft dynamics can be approximated by a linear model. For example, for an operating point i, the longitudinal dynamics of an aircraft may be described by a linear system in the standard state-space form.
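The standard state-space form referred to above can be sketched as follows; the symbols are generic placeholders rather than values from the text:

```latex
\dot{x} = A_i\,x + B_i\,u, \qquad y = C_i\,x + D_i\,u
```

Here x is the state vector (for longitudinal aircraft dynamics, typically perturbations in forward speed, angle of attack, pitch rate, and pitch angle about trim point i), u is the control input (e.g., elevator deflection), y is the measured output, and A_i, B_i, C_i, D_i are constant matrices that are valid only near operating point i.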

Adaptive Control: Identifier-Based

This class of adaptive control schemes is characterized by the combination of an online parameter estimator, which provides estimates of the unknown parameters at each instant of time, with a control law that is motivated from the known-parameter case. The way the parameter estimator, also referred to as the adaptive law in this course, is combined with the control law gives rise to two different approaches. In the first approach, referred to as indirect adaptive control, the plant parameters are estimated online and used to calculate the controller parameters. In other words, at each time t, the estimated plant is formed and treated as if it were the true plant in calculating the controller parameters. This approach has also been referred to as explicit adaptive control, because the controller design is based on an explicit plant model. In the second approach, referred to as direct adaptive control, the plant model is parameterized in terms of the desired controller parameters, which are then estimated directly without intermediate calculations involving plant parameter estimates. This approach has also been referred to as implicit adaptive control, because the design is based on the estimation of an implicit plant model.
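As a rough illustration of the direct approach, the sketch below regulates the scalar plant xdot = a*x + u, with a unknown, by adapting the feedback gain itself with a Lyapunov-based adaptive law; the plant value, adaptation gain, and step size are illustrative assumptions, not taken from the text:

```python
# Direct adaptive regulation of xdot = a*x + u with a unknown.
# The controller gain k is estimated directly; no estimate of a
# is ever formed. All numerical values are illustrative.
a = 2.0            # unknown plant parameter (open loop is unstable)
x, k = 1.0, 0.0    # initial state and controller-gain estimate
gamma, dt = 5.0, 1e-3

for _ in range(20000):          # simulate 20 seconds with Euler steps
    u = -k * x                  # control law from the known-parameter case
    xdot = a * x + u            # the true plant, unknown to the designer
    kdot = gamma * x * x        # Lyapunov-based adaptive law: k increases
    x += dt * xdot              # while the regulation error is large
    k += dt * kdot

print(x, k)
```

The gain k grows as long as x² is significant and settles once the closed loop x_dot = (a − k)x is stable; that no estimate of a itself appears is exactly what distinguishes the direct from the indirect approach.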

Adaptive Control: Non-Identifier-Based


Another class of schemes that fit the generic structure given in Figure 1.1 but do not involve online parameter estimators is referred to as non-identifier-based adaptive control. In this class of schemes, the online parameter estimator is replaced with search methods for finding the controller parameters in the space of possible parameters; with switching between different fixed controllers, assuming that at least one of them is stabilizing; with multiple fixed models for the plant covering all possible parametric uncertainties; or with a combination of these methods. We briefly describe the main features, advantages, and limitations of these non-identifier-based adaptive control schemes in the following subsections. Some of these approaches are relatively recent, and research on them is still ongoing.

Gain Scheduling
Let us consider the dynamic model of a system for which, at each operating point i, i = 1, ..., N, the parameters characterized by the system matrices A, B, C, and D are known. For each operating point, a feedback controller with constant gains, say K_i, can be designed to meet the performance requirements for the corresponding linear model. This leads to a controller, say C(K_i), with a set of gains K_1, ..., K_N covering all N operating points. Once the operating point is detected, the controller gains can be changed to the appropriate value from this precomputed set. Transitions between different operating points that lead to significant parameter changes may be handled by interpolation or by increasing the number of operating points.

The two elements that are essential in implementing this approach are a lookup table to store the values of K_i and plant measurements that correlate well with changes in the operating point. The approach is called gain scheduling. The gain scheduler consists of a lookup table and the appropriate logic for detecting the operating point and choosing the corresponding controller gains from the lookup table. With this approach, plant parameter variations can be compensated by changing the controller gains as functions of the input, output, and auxiliary measurements. The advantage of gain scheduling is that the controller gains can be changed as quickly as the auxiliary measurements respond to parameter changes. Frequent and rapid changes of the controller gains, however, may lead to instability, so there is a limit to how often and how fast the gains can be changed.
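A minimal sketch of the lookup-table-plus-interpolation logic described above; the scheduling variable (airspeed), the operating points, and the gain values are invented for illustration:

```python
# Gain-scheduling sketch: gains K_i designed offline for a few operating
# points, selected at run time from an auxiliary measurement (airspeed),
# with linear interpolation between points. Values are illustrative.
import bisect

speeds = [100.0, 200.0, 300.0, 400.0]   # operating points (m/s)
gains  = [8.0,   5.0,   3.5,   2.0]     # K_i designed per operating point

def scheduled_gain(v):
    """Return the controller gain for a measured speed v."""
    if v <= speeds[0]:
        return gains[0]                  # saturate below the table
    if v >= speeds[-1]:
        return gains[-1]                 # saturate above the table
    j = bisect.bisect_right(speeds, v)   # first operating point above v
    frac = (v - speeds[j - 1]) / (speeds[j] - speeds[j - 1])
    return gains[j - 1] + frac * (gains[j] - gains[j - 1])

print(scheduled_gain(250.0))
```

The interpolation step is one way of handling transitions between operating points that would otherwise cause abrupt gain changes.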

Multiple Models, Search Methods, and Switching Schemes

A class of non-identifier-based adaptive control schemes has emerged over the years which does not explicitly rely on online parameter estimation. These schemes are based on search methods in the controller parameter space until a stabilizing controller is found, or the search is restricted to a finite set of controllers, one of which is assumed to be stabilizing. In some approaches, after a satisfactory controller is found, it can be tuned locally using online parameter estimation for better performance. Since the plant parameters are unknown, the parameter space is parameterized with respect to a set of plant models, which is used to design a finite set of controllers so that each plant model in the set can be stabilized by at least one controller in the controller set. A switching scheme is then developed so that the stabilizing controller is selected online based on the I/O data measurements. This general structure is often referred to as multiple model adaptive control with switching.
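The switching idea can be sketched as follows for a scalar plant: a finite set of candidate plant models, each paired with a pre-designed stabilizing gain, and a supervisor that selects the controller whose model best explains the measured I/O data. All numbers are illustrative assumptions, not from the text:

```python
# Multiple-model adaptive control with switching (toy sketch).
# Each candidate model of xdot = a*x + u has a gain placing its
# closed-loop pole at -2; the supervisor accumulates each model's
# prediction error and switches to the best-matching model's gain.
import numpy as np

a_true = 1.5                           # unknown plant parameter
models = np.array([-1.0, 0.5, 1.5])    # candidate values of a
gains  = models + 2.0                  # K_i stabilizes model i
errs   = np.zeros(3)                   # accumulated prediction errors
x, dt, active = 1.0, 1e-3, 0           # start with controller 0

for _ in range(5000):                  # simulate 5 seconds
    u = -gains[active] * x
    xdot = a_true * x + u              # measured plant response
    errs += dt * (models * x + u - xdot) ** 2
    active = int(np.argmin(errs))      # select best-matching model
    x += dt * xdot

print(active, abs(x))
```

In this toy example the model matching a_true accumulates zero prediction error, so the supervisor locks onto its controller almost immediately; practical switching schemes add hysteresis and dwell-time logic to prevent chattering between candidates.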

Why Adaptive Control


A number N of controllers are used to control a plant whose parameters θ* are unknown or could change with time. In some approaches, a priori knowledge about the location of the parameters, such as lower and upper bounds, is used to parameterize the plant and generate a finite set of controllers so that for each possible plant there exists at least one stabilizing controller in the set of N controllers. This by itself could be a difficult task in some practical situations where the plant parameters are unknown or change in an unpredictable manner. Furthermore, since there is an infinite number of plants within any given bound of parametric uncertainty, finding controllers to cover all possible parametric uncertainties may also be challenging. In other approaches, it is assumed that a set of controllers with the property that at least one of them is stabilizing is available. Once such a set is available, the problem of finding the stabilizing controller using I/O data has to be resolved. This is achieved by the use of a switching logic that differs in detail from one approach to another. While these methods provide another set of tools for dealing with plants with unknown parameters, they cannot replace the identifier-based adaptive control schemes, where no assumptions are made about the location of the plant parameters. One advantage, however, is that once the switching is over, the closed-loop system is LTI, and it is much easier to analyze its robustness and performance properties. This LTI nature of the closed-loop system, at least between switches, allows the use of the well-established and powerful robust control tools for LTI controller design. These approaches are still in their infancy, and it is not clear how switching affects performance, as it may generate bad transients with adverse effects on performance.

Switching may also increase the controller bandwidth and lead to instability in the presence of high-frequency unmodeled dynamics. Guided by data that do not carry sufficient information about the plant model, the wrong controllers could be switched on over periods of time, leading to internal excitation and bad transients before the switching process settles on the right controller. Some of these issues may also exist in classes of identifier-based adaptive control, as such phenomena are independent of the approach used.

A Brief History
Research in adaptive control has a long history of intense activity that involved debates about the precise definition of adaptive control, examples of instabilities, stability and robustness proofs, and applications.

Starting in the early 1950s, the design of autopilots for high-performance aircraft motivated intense research activity in adaptive control. High-performance aircraft undergo drastic changes in their dynamics when they move from one operating point to another, which cannot be handled by constant-gain feedback control. A sophisticated controller, such as an adaptive controller, that could learn and accommodate changes in the aircraft dynamics was needed. Model reference adaptive control was suggested by Whitaker and coworkers in 1958-1961 to solve the autopilot control problem. Sensitivity methods and the MIT rule were used to design the online estimators, or adaptive laws, of the various proposed adaptive control schemes. An adaptive pole placement scheme based on the optimal linear quadratic problem was suggested by Kalman in 1958. The work on adaptive flight control was characterized in 1983 as "a lot of enthusiasm, bad hardware and nonexisting theory." The lack of stability proofs, the lack of understanding of the properties of the proposed adaptive control schemes, and a disaster in a flight test in 1958 caused interest in adaptive control to diminish.

The 1960s became the most important period for the development of control theory, and adaptive control in particular. State-space techniques and stability theory based on Lyapunov were introduced. Developments in dynamic programming [1957, 1961], dual control [1965], and stochastic control in general, as well as in system identification and parameter estimation [1971], played a crucial role in the reformulation and redesign of adaptive control. By 1966, Parks and others had found a way of redesigning the MIT rule-based adaptive laws used in the model reference adaptive control (MRAC) schemes of the 1950s by applying the Lyapunov design approach. Their work, even though applicable to a special class of LTI plants, set the stage for further rigorous stability proofs in adaptive control for more general classes of plant models. The advances in stability theory and the progress in control theory in the 1960s improved the understanding of adaptive control and contributed to a strong renewed interest in the field in the 1970s. On the other hand, the simultaneous development of and progress in computers and electronics, which made the implementation of complex controllers such as adaptive ones feasible, contributed to an increased interest in applications of adaptive control.

The 1970s witnessed several breakthrough results in the design of adaptive control. MRAC schemes using the Lyapunov design approach were designed and analyzed in [1979, 1980]. The concepts of positivity and hyperstability were used in [1979] to develop a wide class of MRAC schemes with well-established stability properties. At the same time, parallel efforts for discrete-time plants in deterministic and stochastic environments produced several classes of adaptive control schemes with rigorous stability proofs in 1980 and 1984. The excitement of the 1970s and the development of a wide class of adaptive control schemes with well-established stability properties were accompanied by several successful applications around 1980. The successes of the 1970s, however, were soon followed by controversies over the practicality of adaptive control. As early as 1979 it was pointed out by Egardt that the adaptive schemes of the 1970s could easily go unstable in the presence of small disturbances. The non-robust behavior of adaptive control became very controversial in the early 1980s, when more examples of instabilities were published by Ioannou et al. and Rohrs et al. in 1983, 1984, and 1985, respectively, demonstrating lack of robustness in the presence of unmodeled dynamics or bounded disturbances. Rohrs's example of instability stimulated a lot of interest, and the efforts of many researchers were directed towards understanding the mechanisms of such instabilities and finding ways to counteract them. By the mid-1980s, several new redesigns and modifications had been proposed and analyzed, leading to a body of work known as robust adaptive control.

An adaptive controller is defined to be robust if it guarantees signal boundedness in the presence of "reasonable" classes of unmodeled dynamics and bounded disturbances, as well as performance error bounds that are of the order of the modeling error. The work on robust adaptive control continued throughout the 1980s and involved the understanding of the various robustness modifications and their unification under a more general framework during 1988-1991. In discrete time, Praly in 1984-1985 was the first to establish global stability in the presence of unmodeled dynamics, using various fixes together with a dynamic normalizing signal of the kind used in Egardt's work to deal with bounded disturbances. The use of the normalizing signal together with the switching σ-modification led to the proof of global stability in the presence of unmodeled dynamics for continuous-time plants in 1986. The solution of the robustness problem in adaptive control led to the solution of the long-standing problem of controlling a linear plant whose parameters are unknown and changing with time. By the end of the 1980s, several breakthrough results had been published in the area of adaptive control for linear time-varying plants [5, 60-63]. The focus of adaptive control research in the late 1980s to early 1990s was on performance properties and on extending the results of the 1980s to certain classes of nonlinear plants with unknown parameters. These efforts led to new classes of adaptive schemes motivated from nonlinear system theory [64-69], as well as to adaptive control schemes with improved transient and steady-state performance [70-73]. New concepts such as adaptive backstepping, nonlinear damping, and tuning functions were used to address the more complex problem of dealing with parametric uncertainty in classes of nonlinear systems [66].

In the late 1980s to early 1990s, the use of neural networks as universal approximators of unknown nonlinear functions led to the use of online parameter estimators to "train," or update, the weights of the neural networks. Difficulties in establishing global convergence results soon arose, since in multilayer neural networks the weights appear in a nonlinear fashion, leading to "nonlinear in the parameters" parameterizations for which globally stable online parameter estimators cannot be developed. This led to the consideration of single-layer neural networks, where the weights can be expressed in parameterizations convenient for estimation. These approaches are described briefly in Chapter 8, where numerous references are also provided for further reading.

In the mid-1980s to early 1990s, several groups of researchers started looking at alternative methods of controlling plants with unknown parameters [8-29]. These methods avoid the use of online parameter estimators in general and use search methods, multiple models to characterize parametric uncertainty, switching logic to find the stabilizing controller, etc. Research in these non-identifier-based adaptive control techniques is still going on, and issues such as robustness and performance are still to be resolved.

Adaptive control has a rich literature full of different techniques for design, analysis, performance, and applications. Several survey papers [74, 75] and books and monographs [5, 39, 41, 45-47, 49, 50, 66, 76-93] have already been published. Despite the vast literature on the subject, there is still a general feeling that adaptive control is a collection of unrelated technical tools and tricks. The purpose of this book is to present the basic design and analysis tools in a tutorial manner, making adaptive control accessible as a subject to less mathematically oriented readers, while at the same time preserving much of the mathematical rigor required for stability and robustness analysis. Some of the significant contributions of the book, in addition to its relative simplicity, include the presentation of different approaches and algorithms in a unified, structured manner, which helps abolish much of the mystery that existed in adaptive control. Furthermore, up to now continuous-time adaptive control approaches have been viewed as different from their discrete-time counterparts. In this book we show for the first time that continuous-time adaptive control schemes can be converted to discrete time by using a simple approximation of the time derivative.

