Dynamic Systems
Dynamic systems are systems that change or evolve in time according to a fixed rule. For
many physical systems, this rule can be stated as a set of first-order differential equations:
ẋ = dx/dt = f(x(t), u(t), t)        (1)
In the above equation, x(t) is the state vector, a set of variables representing the
configuration of the system at time t. For instance, in a simple mechanical mass-spring-
damper system, the two state variables could be the position and velocity of the mass. u(t) is
the vector of external inputs to the system at time t, and f is a (possibly nonlinear) function
producing the time derivative (rate of change) of the state vector, dx/dt, for a particular
instant of time.
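To make Equation (1) concrete, below is a minimal Python sketch of the state-derivative function f for the mass-spring-damper example. The parameter values m, b, and k are illustrative assumptions, not values from the text.

```python
import numpy as np

# Assumed illustrative parameters: mass, damping coefficient, spring constant.
m, b, k = 1.0, 0.5, 2.0

def f(x, u, t):
    """State derivative dx/dt for a mass-spring-damper, as in Equation (1).

    x[0] is the position of the mass, x[1] its velocity; u is the external
    force applied to the mass. Newton's second law gives
    m*x'' = u - b*x' - k*x.
    """
    pos, vel = x
    return np.array([vel, (u - b * vel - k * pos) / m])

# Example: evaluate the state derivative at one instant.
print(f(np.array([0.1, 0.0]), 1.0, 0.0))  # -> [0.0, 0.8]
```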
The state at any future time, x(t1), may be determined exactly given knowledge of the initial
state, x(t0), and the time history of the inputs, u(t), between t0 and t1 by integrating
Equation (1). Though the choice of state variables is not unique, there is a minimum number of state variables, n, required to capture the 'state' of a given system and to predict its future behavior (that is, to solve the state equations). This minimum number, n, is referred to as the system order and determines the dimensionality of the state space. The system order usually corresponds to the number of independent energy storage elements in the system.
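As a sketch of this in practice, the mass-spring-damper above can be integrated numerically with scipy's solve_ivp to obtain x(t1) from x(t0); its order is n = 2, matching its two independent energy storage elements (the spring stores potential energy, the mass kinetic energy). The parameter values and the unit-step input u(t) = 1 are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 1.0, 0.5, 2.0   # assumed mass-spring-damper parameters

def u(t):
    return 1.0            # assumed input history: a unit step force

def f(t, x):
    # solve_ivp expects the signature f(t, x); the input u(t) is folded in here.
    pos, vel = x
    return [vel, (u(t) - b * vel - k * pos) / m]

x0 = [0.0, 0.0]                        # initial state x(t0): at rest
sol = solve_ivp(f, (0.0, 10.0), x0)    # integrate from t0 = 0 to t1 = 10
print(sol.y[:, -1])                    # the future state x(t1)
```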
The relationship given in Equation (1) is very general and can be used to describe a wide
variety of different systems; unfortunately, it may be very difficult to analyze. There are two
common simplifications that make the problem more tractable. First, if the function f does not depend explicitly on time, i.e., ẋ = f(x, u), then the system is said to be time invariant. This
is often a very reasonable assumption because the underlying physical laws themselves do
not typically depend on time. For time-invariant systems, the parameters or coefficients of
the function f are constant. The state variables, x(t), and control inputs, u(t), however, may
still be time dependent.
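This distinction can be made concrete with a small sketch, again using the assumed mass-spring-damper parameters: a fixed spring constant gives a time-invariant f, while a spring constant that drifts with t makes f depend explicitly on time.

```python
m, b = 1.0, 0.5   # assumed parameters

def f_time_invariant(x, u):
    # The spring constant is fixed, so f depends only on x and u:
    # the system is time invariant, even though x(t) and u(t) vary in time.
    k = 2.0
    pos, vel = x
    return [vel, (u - b * vel - k * pos) / m]

def f_time_varying(x, u, t):
    # A spring that stiffens with age makes f depend explicitly on t,
    # so the time-invariance assumption no longer holds.
    k = 2.0 + 0.1 * t
    pos, vel = x
    return [vel, (u - b * vel - k * pos) / m]
```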
The second common assumption concerns the linearity of the system. In reality, nearly
every physical system is nonlinear. In other words, f is typically some complicated function
of the state and inputs. These nonlinearities arise in many different ways; one of the most common in control systems is 'saturation', in which an element of the system reaches a hard physical limit on its operation. Fortunately, over a sufficiently small operating range (think of the tangent line to a curve), the dynamics of most systems are approximately linear. In
this case, the system of first-order differential equations can be represented as a matrix
equation, that is, ẋ = Ax + Bu.
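The mass-spring-damper is already linear, so it can be written directly in this matrix form; a minimal sketch with the same assumed parameter values is shown below, where the A and B matrices follow from the equation of motion m*x'' = u - b*x' - k*x.

```python
import numpy as np

m, b, k = 1.0, 0.5, 2.0   # assumed parameters

# With state x = [position, velocity] and input u = [force], the
# mass-spring-damper dynamics take the form x_dot = A x + B u.
A = np.array([[0.0,     1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])

x = np.array([[0.1], [0.0]])   # example state: displaced 0.1, at rest
u = np.array([[1.0]])          # example input force
x_dot = A @ x + B @ u          # the right-hand side of the linear model
print(x_dot)                   # -> [[0.0], [0.8]]
```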
Until the advent of digital computers (and to a large extent thereafter), it was only practical
to analyze linear time-invariant (LTI) systems. Consequently, most of the results of control
theory are based on these assumptions. Fortunately, as we shall see, these results have
proven to be remarkably effective and many significant engineering challenges have been
solved using LTI techniques. In fact, the true power of feedback control systems is that they work (are robust) even in the presence of unavoidable modeling uncertainty.