Introduction to Non-linear Control
Dr. Md. Zahurul Haq, Ph.D., CEA, FBSME, FIEB
Professor
Department of Mechanical Engineering
Bangladesh University of Engineering & Technology (BUET)
Dhaka-1000, Bangladesh
http://zahurul.buet.ac.bd/
RME 3204: Control Systems Design
Department of Robotics and Mechatronics Engineering,
University of Dhaka
http://zahurul.buet.ac.bd/RME3204/
© Dr. Md. Zahurul Haq (BUET) Introduction to Non-linear Control RME 3204 (2025) 1 / 22
1 Non-linear Control
2 Adaptive Control
3 Optimal Control
4 Discrete-time System
Non-linear Control
Non-linear control refers to control systems where the relationship
between input and output is not proportional or linear. It deals
with systems that are non-linear, time-variant, or both.
Linear control theory applies to systems made of devices which
obey the superposition principle. These are governed by linear
differential equations. Systems whose parameters do not change
with time are called linear time-invariant (LTI) systems.
Non-linear control theory covers a wider class of systems that do
not obey the superposition principle. It applies to more real-world
systems, because most of the real control systems are non-linear.
These systems are often governed by non-linear differential
equations.
Non-linear control is crucial for addressing complex systems where
linear approximations fail to capture the full behaviour, and it
enables more accurate and effective control in various applications.
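The superposition principle mentioned above can be checked numerically. A minimal sketch (the function names and the saturation limit are illustrative assumptions, with saturation standing in for a typical actuator non-linearity):

```python
# Superposition check: a linear map satisfies f(x1 + x2) = f(x1) + f(x2).
# A saturating actuator (a common non-linearity) does not.
def linear_gain(x, k=2.0):
    return k * x

def saturating_gain(x, k=2.0, limit=1.5):
    # Hypothetical saturation non-linearity: output clipped to +/- limit.
    return max(-limit, min(limit, k * x))

def obeys_superposition(f, x1, x2):
    return abs(f(x1 + x2) - (f(x1) + f(x2))) < 1e-9

print(obeys_superposition(linear_gain, 0.4, 0.5))      # True
print(obeys_superposition(saturating_gain, 0.4, 0.5))  # False: 1.5 != 0.8 + 1.0
```

Because the saturated output cannot exceed the limit, the response to the summed input differs from the sum of the individual responses, so linear analysis tools no longer apply directly.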
Non-linear Control: Key Concepts
Non-linear Systems: Systems where the output is not directly
proportional to the input. This can be due to various factors like
saturation, dead zones, or complex physical phenomena.
Analysis and Design: Non-linear control theory provides tools to
analyse and design controllers for these systems, often involving
specialized techniques like Lyapunov stability theory, feedback
linearisation, or model predictive control.
Improved Performance: By accounting for non-linearities, these
controllers can achieve better performance, especially in systems
with complex dynamics or uncertainties, compared to linear
controllers.
Non-linear Control: Applications
Non-linear control finds applications in diverse fields, including
robotics, aerospace, chemical processes, and power systems.
DC Machine: The magnetization curve of a DC machine is a
classic example of a non-linear system where the relationship
between magnetic field and current is not linear, especially at high
current levels due to saturation.
Temperature Control: Simple on-off temperature controllers in a
heater demonstrate a non-linear control approach, switching the
heater on or off to maintain a desired temperature.
Biological Systems: Many biological processes exhibit non-linear
behaviour, making non-linear control techniques suitable for
modelling and controlling them.
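The on-off temperature controller above can be sketched in a few lines. This is a minimal simulation under assumed numbers (setpoint, hysteresis band, and the first-order thermal plant are all illustrative):

```python
# Minimal sketch of an on-off (bang-bang) temperature controller with
# hysteresis. The plant model and all numbers are illustrative assumptions.
def simulate_on_off(setpoint=60.0, band=2.0, T0=20.0, steps=600, dt=1.0):
    T, heater = T0, False
    history = []
    for _ in range(steps):
        # Switch with hysteresis to avoid rapid chattering at the setpoint.
        if T < setpoint - band:
            heater = True
        elif T > setpoint + band:
            heater = False
        # First-order thermal plant: heating power vs. loss to ambient (20 C).
        heat_in = 1.0 if heater else 0.0
        T += dt * (heat_in - 0.01 * (T - 20.0))
        history.append(T)
    return history

temps = simulate_on_off()
print(min(temps[300:]), max(temps[300:]))  # oscillates within the hysteresis band
```

The temperature never settles exactly at the setpoint; it cycles within the hysteresis band, which is the characteristic (and intentionally non-linear) behaviour of on-off control.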
Non-linear Control: Why Use?
Limitations of Linear Models: Linear models are often simplified
and may not accurately represent real-world systems. Non-linear
control provides a more accurate representation and better control
of complex systems.
Handling Uncertainties: Non-linear control can be more effective
in handling uncertainties and disturbances in the system, leading
to more robust control.
Improved Performance: By considering the non-linear
characteristics of the system, controllers can be designed to achieve
better tracking accuracy, stability, and disturbance rejection.
Cost-Effectiveness: In some cases, non-linear control can be more
cost-effective by allowing the use of simpler, non-linear sensors or
actuators.
Adaptive Control
Adaptive control is a powerful technique for dealing with systems that
are subject to uncertainty and variations, offering improved
performance and reliability, but with its own set of design and analysis
challenges.
Adaptive control is a control method where the controller
automatically adjusts its parameters to accommodate changes in a
system or its environment. This is particularly useful for dealing
with systems whose parameters are uncertain or vary over time.
Unlike robust control, adaptive control doesn’t require precise
knowledge of the bounds of these variations. Instead, it focuses on
actively adjusting the control law itself to maintain desired
performance.
Adaptive Control: Key Concepts
Uncertainty and Variation: Adaptive control is designed for
systems where the parameters are not known precisely or change
over time. This could be due to factors like wear and tear,
environmental changes, or even changing operating conditions.
Automatic Adjustment: The core idea is that the controller
automatically tunes its parameters in real-time to compensate for
these changes.
Performance Maintenance: The goal is to maintain a desired level
of system performance, even when the system’s behaviour changes.
Distinction from Robust Control: While both adaptive and robust
control address uncertainties, adaptive control actively adjusts the
control law, while robust control designs a control law that works
within a certain range of expected variations.
Adaptive Control: How it works
1 Identification: The adaptive control system first identifies the current
state of the system, including any variations or uncertainties.
2 Adaptation: Based on this identification, the controller adjusts its
parameters (gains, etc.) to compensate for the identified changes.
3 Control: The adjusted controller then applies the control signal to
the system to maintain the desired performance.
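The identify-adapt-control loop above can be sketched with the classic "MIT rule" for a single adaptive gain. This is a simplified model-reference example, not a production design; the plant, reference model, and all gains are illustrative assumptions:

```python
# Sketch of the identify-adapt-control loop using the MIT rule for one
# adaptive gain. The plant gain k_p is unknown to the controller; all
# numerical values are illustrative assumptions.
def mrac_gain_tracking(k_p=2.0, k_m=1.0, gamma=0.05, steps=5000, dt=0.01):
    theta = 0.0          # adjustable feedforward gain
    y, y_m = 0.0, 0.0    # plant and reference-model outputs
    r = 1.0              # constant reference command
    for _ in range(steps):
        u = theta * r                   # control: apply the current law
        y += dt * (-y + k_p * u)        # first-order plant  dy/dt = -y + k_p u
        y_m += dt * (-y_m + k_m * r)    # reference model    dy_m/dt = -y_m + k_m r
        e = y - y_m                     # identification: model-following error
        theta += dt * (-gamma * e * r)  # adaptation: MIT rule  dtheta/dt = -gamma e r
    return theta, y, y_m

theta, y, y_m = mrac_gain_tracking()
print(theta, abs(y - y_m))  # theta approaches k_m / k_p = 0.5; error shrinks
```

Note that the controller never measures k_p directly; it drives the model-following error to zero, and the gain theta converges to the value that makes the closed loop match the reference model.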
Adaptive Control: Examples
Adaptive Cruise Control (ACC): In vehicles, ACC systems adjust
speed to maintain a safe distance from the vehicle ahead, even as
the speed of the other vehicle changes.
Aircraft Control: As an aircraft consumes fuel, its weight changes.
Adaptive control systems can adjust the control surfaces to
maintain stability and performance throughout the flight.
Industrial Process Control: Adaptive control can be used in
various industrial processes to maintain desired operating
conditions despite variations in raw materials, temperature, or
other factors.
Adaptive Control: Benefits
Improved Performance: Adaptive control can lead to better
performance compared to fixed-parameter controllers, especially in
dynamic or uncertain environments.
Reduced Initial Costs: By adapting to variations, adaptive control
may allow for more cost-effective designs and reduce the need for
overly conservative or redundant components.
Increased Reliability: Adaptive systems can be more robust to
unexpected changes and disturbances, potentially leading to
increased reliability.
Adaptive Control: Challenges
Complexity: Implementing adaptive control can be more complex
than traditional control methods.
Analysis: Analysing the behaviour of adaptive control systems can
be challenging, especially when considering real-world factors like
noise and non-linearities.
Convergence: Ensuring that the adaptive control system converges
to a stable and optimal solution can also be a challenge.
Optimal Control
Optimal control is a branch of control theory that focuses on
finding the best way to control a dynamic system over time to
achieve a desired objective.
It involves determining a control strategy or "control law" that
minimizes a specific cost or performance index while satisfying
system constraints and boundary conditions.
This field has applications in various domains like robotics,
aerospace, economics, and operations research.
Optimal Control: Key Concepts
Dynamic System: A system whose behaviour changes over time,
often described by differential equations.
Control Law: A rule or strategy that dictates how to adjust the
control inputs to the system based on its current state.
Performance Index (Cost Function): A mathematical expression
that quantifies the desired outcome or the cost associated with a
particular control strategy. It could involve minimizing errors,
energy consumption, or other relevant factors.
Constraints: Limitations on the system’s state or control inputs,
such as maximum allowed velocities, actuator limits, or safety
margins.
Optimization: The process of finding the control law that
minimizes the performance index while satisfying all constraints.
Optimal Control: How it Works
1 System Modelling: The first step is to create a mathematical
model of the dynamic system, often using differential equations.
2 Performance Index Definition: A suitable cost function is defined
that reflects the objectives of the control problem.
3 Constraint Specification: Any limitations on the system’s
behaviour are explicitly stated.
4 Control Law Derivation: Optimal control theory provides methods
to find the control law that minimizes the performance index.
5 Implementation: The derived control law is then implemented to
control the system in real-time.
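The five steps above can be sketched end-to-end for the simplest non-trivial case: a scalar discrete-time linear-quadratic regulator (LQR). The model, cost weights, and iteration count are illustrative assumptions:

```python
# The five steps above for a scalar discrete-time LQR problem.
# All numerical values are illustrative assumptions.
def scalar_lqr(a=1.1, b=0.5, q=1.0, r=0.1, iters=200):
    """Model: x+ = a x + b u.  Performance index: sum(q x^2 + r u^2).
    Control-law derivation: iterate the Riccati recursion to convergence;
    return the optimal state-feedback gain K."""
    P = q                                   # terminal weight of the cost
    for _ in range(iters):
        K = a * b * P / (r + b * b * P)     # candidate feedback gain
        P = q + a * a * P - a * b * P * K   # Riccati (value) recursion
    return K

K = scalar_lqr()

# Implementation: apply u = -K x and check the closed loop is stable,
# even though the open-loop pole a = 1.1 is unstable.
a, b = 1.1, 0.5
x = 1.0
for _ in range(50):
    x = (a - b * K) * x
print(K, abs(x))  # |a - b*K| < 1, so x decays toward zero
```

The constraint-specification step is trivial here (no limits on x or u); with actuator limits or state bounds, methods such as MPC from the next slides would take over.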
Optimal Control: Examples of Applications
Robotics: Optimizing robot movements, trajectory planning, and
force control.
Aerospace: Optimizing the trajectory of a spacecraft to reach a
desired orbit or destination; designing autopilot systems for
aircraft and spacecraft.
Economics: Optimizing investment strategies and resource
allocation.
Operations Research: Optimizing production schedules or resource
allocation.
Control of Power Systems: Improving the stability and efficiency
of power grids.
Finance: Developing trading algorithms and portfolio management
strategies.
Biology: Modelling and understanding animal movement and
coordination.
Optimal Control: Common Techniques
Dynamic Programming: A recursive approach for solving
optimization problems, particularly in discrete-time systems.
Linear Quadratic Regulator (LQR): A widely used method for
linear systems with quadratic cost functions.
Model Predictive Control (MPC): An advanced technique that
uses a model of the system to predict future behaviour and
optimize control actions over a finite time horizon.
Reinforcement Learning: A machine learning approach where an
agent learns to control a system by trial and error, receiving
rewards for desirable actions.
In essence, optimal control provides a framework for designing control
strategies that achieve the best possible performance from a dynamic
system, given a set of objectives and constraints.
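Of the techniques listed, dynamic programming is the easiest to show concretely. A minimal finite-horizon backward recursion over a small chain of states (the state space, actions, and costs are illustrative assumptions):

```python
# Minimal dynamic-programming sketch: finite-horizon control of a small
# chain of states, choosing at each step to stay or move right toward a
# goal. The costs and horizon are illustrative assumptions.
def finite_horizon_dp(n_states=5, horizon=4, move_cost=1.0):
    goal = n_states - 1
    # Terminal cost: zero at the goal state, a large penalty elsewhere.
    V = [0.0 if s == goal else 10.0 for s in range(n_states)]
    policy = []
    for _ in range(horizon):                        # backward recursion
        V_new, pi = [], []
        for s in range(n_states):
            stay = V[s]                             # action 0: stay put
            move = move_cost + V[min(s + 1, goal)]  # action 1: step right
            if move < stay:
                V_new.append(move); pi.append(1)
            else:
                V_new.append(stay); pi.append(0)
        V = V_new
        policy = [pi] + policy                      # stage policies, forward in time
    return V, policy

V, policy = finite_horizon_dp()
print(V)  # optimal cost-to-go: [4.0, 3.0, 2.0, 1.0, 0.0]
```

Each backward pass applies the principle of optimality: the cost-to-go at a state is the best immediate cost plus the already-computed cost-to-go of the successor.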
Discrete-time System
Discrete control is a method of automation that focuses on
controlling systems with a limited number of states, typically two,
like on/off or true/false.
It’s often used in applications involving logical and sequential
control of devices or processes, such as material handling and
automated manufacturing.
Discrete control differs from continuous control, which deals with
systems that have continuous inputs and outputs.
Discrete-time System: Key Characteristics
Limited States: Discrete control systems operate with a finite
number of states for inputs and outputs, often just two, like on/off
for a motor or open/closed for a valve.
Logical and Sequential: It emphasizes logical operations and the
sequence of events to control the system, making it suitable for
tasks that involve specific steps and conditions.
Digital Implementation: Discrete control is frequently
implemented using digital computers or microprocessors, which
inherently work with discrete data points and logical operations.
Sampling: Discrete control systems often involve sampling
continuous signals at specific time intervals, effectively turning
them into discrete data for processing by the digital controller.
Zero-Order Hold: When a digital controller sends a command to a
continuous system, it often uses a zero-order hold, which
maintains the last received value until the next command is issued.
Discrete-time System: Applications
Traffic Lights: A classic example is the control of traffic lights,
where the lights cycle through a sequence of red, yellow, and green.
Automated Doors: Opening and closing doors in a building or
factory based on sensors and programmed logic.
Material Handling Systems: Moving materials from one location
to another in a factory based on defined paths and conditions.
Industrial Processes: Controlling the sequence of operations in a
manufacturing process, such as filling a bottle, capping it, and
labelling it.
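The traffic-light example above is exactly the kind of logical, sequential control these slides describe: a finite-state machine stepping through a fixed cycle on a timer. A minimal sketch (the dwell times are illustrative assumptions):

```python
# The traffic-light example as a tiny sequential (discrete-state)
# controller: a finite-state machine cycling green -> yellow -> red on a
# timer. Dwell times are illustrative assumptions.
DWELL = {"green": 30, "yellow": 5, "red": 25}     # seconds in each state
NEXT = {"green": "yellow", "yellow": "red", "red": "green"}

def traffic_light(total_seconds):
    state, timer, trace = "green", 0, []
    for _ in range(total_seconds):
        trace.append(state)
        timer += 1
        if timer >= DWELL[state]:                 # sequential logic: advance on timeout
            state, timer = NEXT[state], 0
    return trace

trace = traffic_light(65)
print(trace[0], trace[30], trace[35], trace[60])  # green yellow red green
```

The controller has only three states and one event (the timeout), yet the same structure scales to the material-handling and bottling sequences listed above by adding states and sensor-driven transitions.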
Z-transform
The z-transform is a mathematical tool used to convert
discrete-time signals or systems into a complex frequency domain
representation.
It’s a crucial concept in digital signal processing, analogous to the
Laplace transform for continuous-time signals.
The z-transform allows for analysis of frequency response and
stability of discrete-time systems, such as digital filters.
Digital filter design: The z-transform helps in designing and analysing
digital filters, which are used to process discrete-time signals.
System analysis: It allows for analysing the frequency response and
stability of discrete-time systems, including those modelled by difference
equations.
Control systems: The z-transform is also used in the analysis and design
of discrete-time control systems.
Z-transform: Key Concepts
Discrete-time signals: The z-transform works with signals
represented by a sequence of numbers sampled at discrete time
intervals.
Frequency domain representation: It transforms these
discrete-time signals into a complex frequency domain, allowing
for analysis of their frequency content.
z-plane: The z-transform uses a complex variable ’z’, which can be
visualized on the z-plane. The location of poles and zeros in the
z-plane provides information about the system’s stability and
frequency response.
Stability analysis: The z-transform is particularly useful in
determining the stability of discrete-time systems, such as digital
filters, by examining the location of poles in the z-plane.