Advanced Control Unleashed
Terrence L. Blevins
Gregory K. McMillan
Willy K. Wojsznis
Michael W. Brown
The information presented in this publication is for the general education of the reader. Because
neither the author nor the publisher have any control over the use of the information by the reader,
both the author and the publisher disclaim any and all liability of any kind arising out of such use.
The reader is expected to exercise sound professional judgment in using any of the information
presented in a particular application.
Additionally, neither the author nor the publisher have investigated or considered the effect of any
patents on the ability of the reader to use any of the information in a particular application. The
reader is responsible for reviewing any possible patents that may affect any particular use of the
information presented.
Any references to commercial products in the work are cited as examples only. Neither the author
nor the publisher endorse any referenced commercial product. Any trademarks or tradenames
referenced belong to the respective owner of the mark or name. Neither the author nor the publisher
make any representation regarding the availability of any referenced commercial product at any
time. The manufacturer’s instructions on use of any commercial product must be followed at all
times, even if in conflict with the information in this publication.
ISBN 1-55617-815-8
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior writ-
ten permission of the publisher.
ISA
67 Alexander Drive
P.O. Box 12277
Research Triangle Park, NC 27709
ACKNOWLEDGEMENT xiii
FOREWORD xvii
Chapter 1 INTRODUCTION 1
General Procedure 20
Application Detail 26
Rules of Thumb 74
Theory, 76
Process Time Constants and Gains 76
Process Time Delay 79
Ultimate Gain and Period 80
Peak and Integrated Error 82
Feedforward Control 84
Dead Time from Valve Dead Band 84
Nomenclature, 85
References, 86
Chapter 7 FUZZY LOGIC CONTROL 239
Practice, 239
Overview 239
Opportunity Assessment 240
Examples 240
Application, 241
General Procedure 241
Rules of Thumb 242
Guided Tour 242
Theory, 244
Introduction to Fuzzy Logic Control 244
Building a Fuzzy Logic Controller 247
Fuzzy Logic PID Controller 251
Fuzzy Logic Control Nonlinear PI Relationship 254
FPID and PID Relations 257
Automation of Fuzzy Logic Controller Commissioning 258
References, 259
INDEX 431
The authors wish to express their appreciation to Mark Nixon and Ron
Eddie from Emerson Process Management, for their enthusiastic support
and commitment of resources for this book, to Jim Hoffmaster, Bud Keyes,
Duncan Schleiss, John Berra, and Gil Pareja from Emerson Process Man-
agement for their inspiration and support in establishing the DeltaV
advanced control program, to Karl Astrom from Lund University, Tom
Edgar from the University of Texas at Austin, Dale Seborg from the
University of California, Santa Barbara and Tom McAvoy from the Univer-
sity of Maryland for their guidance in the pursuit of new technologies,
Mike Gray and Mark Mennen from Solutia Inc. for the initiation and
sustenance of advanced control applications and innovations, Ken
Schibler from Emerson Process Management for his help in setting the
direction of the book, Robert Cameron, Michael Mansy, Glenn Mertz, and
Gina Underwood from Solutia Inc. for their valuable comments, and
finally, Scott Weidemann from Washington University, and Jim Cahill,
Brenda Forsythe, and Cory Walton from Emerson Process Management
for their essential contributions to the videos and demos on the CD. The
Greg is an ISA Fellow and received the ISA “Kermit Fischer Environmen-
tal” Award for pH control in 1991, the Control Magazine “Engineer of the
Year” Award for the Process Industry in 1994, and was one of the first
inductees into the Control Magazine “Process Automation Hall of Fame”
in 2001. He received a B.S. from Kansas University in 1969 in Engineering
Physics and an M.S. from the University of Missouri – Rolla in 1976 in Control
Theory. Presently, Greg is an affiliate Professor at Washington University
in Saint Louis, Missouri and is a consultant through EDP Contract Services
in Austin, Texas.
Cell Phone: (314) 703-9981
E-mail: gkmcmi@[Link]
There has been a dynamic development of control over the past 50 years.
Many new methods have appeared. The methods have traditionally been
presented in highly specialized books written for researchers or engineers
with advanced degrees in control theory. These books have been very useful
to advance the state of the art. They are, however, difficult for the average
engineer. The reasons are that it is necessary to read many books to get
good coverage of advanced control techniques and that the level of
mathematics used requires substantial preparation. This is a dilemma because
several of the advanced control techniques have indeed been very beneficial
in industry, and more engineers should be aware of them. Even if
many details of the new methods are complicated the basic underlying
ideas are often quite simple. Many methods have also been packaged so
that they are relatively easy to use. It is thus highly desirable to present the
industrially proven control methods to ordinary engineers working in
industry. This book is a first attempt to do this. The book provides a basis
for assessing the benefits of advanced control. It covers auto-tuning,
model predictive control, optimization, estimators, neural networks, fuzzy
control, simulators, expert systems, diagnostics, and performance assess-
ment. The book is written by four seasoned practitioners of control, hav-
ing jointly more than 100 years of real industrial experience in the
development and use of advanced control. The book is well positioned to
provide the bridge over the infamous Gap between Theory and Practice in
control.
Karl J. Astrom
The advent of powerful and friendly integrated software has moved
advanced process control (APC) from the realm of consultants into the
arena of the average process control engineer. The obstacles of infrastruc-
ture and special skill requirements have started to disappear and we are
poised for an accelerated application of APC.
Until recently, most of this knowledge ended up with consultants, and the
success of the application often deteriorated once they departed. There is
now an opportunity for the engineers closest to the process and daily
operations to take a much more active role in the development and sup-
port of APC applications. It is a win-win situation in that the cost of APC
can be reduced by using consultants primarily in a higher-value-added
role of conceptual design and optimization. Even more importantly,
greater understanding, support, and involvement of onsite engineers can
increase the success rate, the on-stream time, and the longevity of an APC
application. This decrease in the cost and increase in the benefits will in
turn lead to a larger number of successful APC installations and a greater
interest in APC as a method of improving process efficiency and capacity.
However, much of the purpose and use of APC has been clouded in the-
ory. The theory is scattered among many books written for graduate
school programs in advanced process control. Application papers typi-
cally concentrate on the benefits of specific APC projects and serve more
as advertisements for particular consulting or software firms than as
implementation guides. Little if anything has been written for the practic-
ing engineer on how to select, design, configure, commission, and tune
APC systems. The purpose of this book is to demystify APC and make it
more accessible. To that end, the book focuses on practice and applications
backed up by enough theory to ensure a deeper understanding.
The THEORY section presents the major facets of selected approaches to the
deployment of each APC technology as part of a state-of-the-art tool set.
For brevity, the section does not survey all the possible methodologies and
techniques, but focuses on those that are innovative and simple enough to
be integrated into a distributed control system.
This book covers a great deal of ground. Each of the technologies dis-
cussed here could easily fill a book in itself. However, users today don’t
have the time or inclination to read a lot of material. Lists, hints, rules of
thumb, and concise explanations are employed to save the reader time and
to provide both a better perspective on the whole picture and an improved
ability to drill down to obtain specific implementation guidance. The book
concentrates on what is most important. Users can quickly get to the heart
of the matter without getting lost in the details associated with a specific
tool or suffering from information overload.
Included with the book is a compact disc that contains a set of examples of
the technologies discussed in the book. They demonstrate, by means of a
step-by-step procedure and a detailed dynamic process model, how to
configure, test, and run each APC application. Configuration and case files
use a virtual plant that has a complete scalable Distributed Control System
(DCS) with a suite of APC tools and a high-fidelity plant simulation.
A companion set of PowerPoint slides that illustrates all of the major
figures, equations, tables, lists, and rules included in the book is on the CD.
These slides and the hands-on exercises make the book practical as a text-
book for courses on both basic and advanced process control. Chapters 2
and 6 receive the most extensive treatment because introductory courses
are most common. Also, students and users alike need to first concentrate
on getting the basic regulatory control system designed correctly and
tuned properly before moving on to more advanced topics. Most of the
material has been tested in an introductory course on process control for
junior and senior chemical engineers at Washington University in Saint
Louis. These students have demonstrated the ability to immediately apply
these APC tools to example problems after a brief tutorial, using their
The tutorials and presentations on the CD do not require any special soft-
ware or hardware beyond a PC with a media player, speakers, and a dis-
play with a screen area of at least 1024 by 768 pixels.
This book with its appendices and CDs should enable the average process
engineer to develop a good understanding of the representative principles
and techniques of APC. This knowledge will be helpful in setting objec-
tives, evaluating potential APC opportunities, and applying the most
appropriate APC technologies. Readers should feel free to contact the
authors at their e-mail addresses if they have any questions about the use
of the book, exercises, demos, slides, or APC tools described.
All royalties from this book will be given directly to universities, consortia,
and educational programs to promote and enhance the development and
use of advanced process control. A beneficiary of each year’s royalties will
be chosen by the authors.
Practice
Overview
The advanced control projects with the largest benefits usually have made
significant improvements in the basic regulatory control system. While
advanced process control (APC) techniques can partially compensate for
such limitations as missing measurements, excessive dead time, and poor
signal-to-noise ratios, a solid foundation will provide the lowest total cost,
greatest total benefit, and the longest lifecycle for the advanced control
system. Deficiencies in the measurement and the final element can
increase the time required for process testing and identification by a factor
of 5 or more and can significantly reduce the improvement in process
capacity and efficiency provided by APC.
The core of a solid foundation for advanced process control is good mea-
surements and final elements. The measurement is the window into the
process and must be able to provide an undistorted view of small changes
in the process. The final element is the means of affecting the process and
must be able to make small changes to the process. This overview pro-
vides a perspective of how these objectives are best met by reducing the
reproducibility error, noise, and interferences in the measurement and
decreasing the stick-slip and dead band in the control valve.
Measurement
Reproducibility is the closeness of agreement of an output for an input
approaching from either direction at the same operating conditions over a
period of time. Repeatability is the closeness of agreement of an output for
successive inputs approaching from the same direction at the same operat-
ing conditions. Reproducibility includes the repeatability as it deteriorates
over time plus drift, and is the better number for control. Another impor-
tant consideration is the interference from changes in process fluids and
operating conditions. Unfortunately, the specifications given by manufac-
turers for such measurements as accuracy, linearity, or rangeability are
extraneous if not misleading because they are either not as important as
reproducibility, drift, and interference or are generated under fixed labora-
tory conditions.
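As a minimal illustration of the distinction above, the sketch below (data layout and function names are assumptions, not from the book) computes repeatability from same-direction readings and reproducibility from calibration checks spread over time, so that drift and both directions of approach are included.

```python
# Hedged sketch: repeatability versus reproducibility from calibration-check data.
import numpy as np

def repeatability(readings_same_direction):
    """Peak-to-peak spread of successive readings at the same input,
    approached from the same direction, at the same operating conditions."""
    r = np.asarray(readings_same_direction, dtype=float)
    return r.max() - r.min()

def reproducibility(readings_by_check):
    """readings_by_check: list of reading arrays from checks spread over time,
    each containing approaches from both directions. The pooled peak-to-peak
    spread captures repeatability plus dead band/hysteresis plus drift."""
    pooled = np.concatenate([np.asarray(r, dtype=float) for r in readings_by_check])
    return pooled.max() - pooled.min()
```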
Final Element
The most common final element is the control valve. Controller outputs
also manipulate the speed of pumps and power to heaters. With final ele-
ments that are totally electronically set, there are no issues of stick and slip
as there are for control valves, and any dead band that exists is purposely
introduced and adjustable to reduce the response to noise. Also, the
response of the manipulated variable (flow for the pump and heat for the
heater) is linear with controller output. Variable-speed drives have essen-
tially no time delay or time constant and rate limiting is normally adjust-
able and not an issue except for surge control. Heaters are inherently slow,
but most temperature processes are also slow.
Usually, a control valve will not move on its own or when the controller
output is constant unless the actuator is undersized or the positioner is
unstable. Also, if the valve were to drift, the positioner and process con-
troller would correct for it. Thus, long-term reproducibility and noise are
not normally issues for control valves. While noise is not generated in the
valve stroke, noise in the process variable can be passed on as rapid
changes in the valve signal, which, if they exceed the resolution limit or
dead band of the control valve, can cause excessive wear and tear and pre-
mature failure of the packing.
Thus, in the normal scheme of things, slip is worse than stick, and stick is
worse than dead band, and dead band is worse than stroking time. For
sliding stem valves, stick-slip and dead band go hand in hand since the
common cause is excessive packing friction. In fact, if the slip is equal to
the stick, it is effectively the same thing as the resolution limit. The resolu-
tion of sliding stem valves can be estimated as half of the dead band [2.4].
In other words, where you have excessive dead band, you tend also to
have excessive stick-slip. However, in rotary valves, there are different
sources of stick-slip and dead band. A rotary valve could have a large
dead band but little stick-slip.
[Figure. Stroke (%) versus signal (%) showing stick-slip and dead band. A pneumatic positioner requires a negative signal to close the valve, whereas a digital positioner will force the valve shut at a 0% signal. The effect of slip is worse than stick, stick is worse than dead band, and dead band is worse than stroking time (except for surge control).]
Until recent years, when you asked a control valve manufacturer to esti-
mate the dynamic response of a control valve, you were given the stroking
time of the actuator. Even now, if you ask for a response time that includes
the valve, it will be for a change of 10% in controller output at 50% posi-
tion so that the effect of stick-slip and dead band is largely removed [2.5].
In actual operation, the change in controller output per scan is typically
less than 0.5% and can occur at positions less than 20% where the friction
of the sealing surfaces increases the stick-slip. Tests done at these condi-
tions will unearth the real response problems. In valves, stick and slip go
together and can be identified while the loop is operating for fast measure-
ments, as shown in Figure 2-2. Here, stick is the amount of change in the
controller output where there is no change in the process variable and slip
is the rapid change in the process variable divided by the product of the
valve and process gain.
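A minimal sketch of this identification, assuming trend data at the loop scan rate and known valve and process gains (the function name and noise-band threshold are hypothetical):

```python
# Hedged sketch: estimating valve stick and slip from closed-loop trend data,
# per the definitions above. Gains and the PV noise band are assumed inputs.
def estimate_stick_slip(co, pv, valve_gain, process_gain, pv_noise_band=0.05):
    """co, pv: controller output (%) and process variable (%) sampled at the
    loop scan rate with the loop in automatic. Returns (stick %, slip %):
      stick = controller output change while the PV does not move
      slip  = PV jump divided by (valve gain * process gain)"""
    stick, slip = 0.0, 0.0
    co_start = co[0]
    for i in range(1, len(co)):
        d_pv = pv[i] - pv[i - 1]
        if abs(d_pv) <= pv_noise_band:
            # valve not yet moving: accumulate the controller output change
            stick = max(stick, abs(co[i] - co_start))
        else:
            # valve broke free: convert the PV jump back to % of valve travel
            slip = max(slip, abs(d_pv) / (valve_gain * process_gain))
            co_start = co[i]
    return stick, slip
```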
[Figure 2-2. Ball travel (stroke %) and controller output trends versus time (seconds) for a 3.25 percent change in controller output, showing backlash plus stiction, with the stick and slip labeled and the dead band indicated as a peak-to-peak amplitude.]
change in flow for a change in rotation and the valve gain approaches
zero.
To summarize, the numbers that traditionally have been cited by the man-
ufacturer for valve performance, such as leakage, stroking time, linearity,
and rangeability, do not provide the information needed to measure con-
trol loop performance. The user needs to know the stick-slip, dead band,
and sensitivity of the installed valve assembly at operating conditions.
[Figure 2-3. Installed gain (% flow/% input) versus valve travel, with the gain model plotted against the EnTech gain specification: (a) valve travel in degrees, suggested throttle range 25 to 45 degrees; (b) valve travel in degrees, suggested throttle range 10 to 60 degrees; (c) valve travel in %, suggested throttle range 5 to 75%.]
Effect on APC
Advanced control tools such as feedforward control, online estimators,
and model predictive controllers can minimize to a significant degree the
effect of measurement deficiencies. Feedforward control can bypass the
irregularities and delay in the controlled variable but still must work
through the manipulated variable. Since the exact size of stick and dead
band is extremely variable, undercorrection is normal and the overall
improvement is minimal. Filters can reduce the effect of noise; and model
predictive control can reduce the adverse effect of noise, resolution, and
reproducibility by minimizing the error between a process vector created
from a model of the process and the set point vector [2.6]. However, its
model is based on the assumption that the control valve actually moved
for the recent past changes in the controller output. Thus, advanced con-
trol algorithms are more vulnerable to deficiencies in the control valve
than measurement. While a kicker algorithm can theoretically reduce the
effect of dead band and stiction, overcorrection will cause excessive move-
ment similar to slip [2.4]. There is no computational correction for valve
slip. The effect of slip is amplified by high valve sensitivity (valve gain)
and high process sensitivity (process gain). The only solution for slip, and
the best solution for stiction and dead band, is a change in the valve type,
assembly, and accessories or the use of a variable-speed drive.
control system from doing its job. This scrutiny involves an analysis of the
type, location, and installation of the instrument and final element. Both
the degree to which deficiencies in the measurement can be compensated
for by advanced control techniques, and the permissible amount of stick-
slip and dead band, must be part of a cost-benefit analysis.
Opportunity Assessment
In this section, some questions are offered that could form an OA to find
improvements in a basic control system. Question (1) deals with the ability
to overdrive the manipulated variable on startup or for a major set point
change in a batch operation to reduce the amount of time it takes for the
controlled variable to reach set point. This question is also important in
performance of an advanced control system to help reduce the time lag in
the manipulated variable for the model predictive controller. There is, of
course, a tradeoff between rise time and degree of permissible overshoot,
but in general, for temperature and composition control of volumes with
mixing and for the start of a continuous or batch process, the output
should initially be saturated high but backed off from this limit before the
controlled variable approaches set point. Various algorithms and tuning
methods are available to provide overdrive. The fraction that the startup
time or a batch cycle can be reduced is proportional to the ratio of the
missing area of overdrive to the total area of the manipulated variable dur-
ing the rise time.
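A minimal sketch of this rule of thumb, assuming a trend of the manipulated variable is available during the rise time (the function name and example numbers are illustrative only):

```python
# Hedged sketch: the achievable fraction of startup or batch cycle time reduction
# is proportional to the missing area of overdrive divided by the total area
# under the manipulated variable during the rise time.
import numpy as np

def overdrive_opportunity(mv_trend_pct, output_high_limit_pct, scan_s):
    """mv_trend_pct: manipulated variable (%) sampled every scan_s seconds from
    the start of the rise until the controlled variable reaches set point."""
    mv = np.asarray(mv_trend_pct, dtype=float)
    total_area = float(np.sum(mv)) * scan_s                            # area delivered
    missing_area = float(np.sum(output_high_limit_pct - mv)) * scan_s  # area given up
    return missing_area / total_area  # proportional to the possible time reduction

# Example: an output that ramps instead of saturating at 100% leaves roughly
# 0.3 of the driving area on the table.
print(overdrive_opportunity([40, 60, 80, 100, 100], 100, scan_s=10))
```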
reduce the time to reach a batch set point and eliminate operator attention
requests.
2. Is the variability less in the controlled variable of the loop when the
controller is in manual?
3. Is the variability less in other loops when the controller is in
manual?
4. Is the variability less in the process variable for an important
constraint if the controller gain or rate setting is decreased or
integral time is increased?
5. Would better reproducibility and less noise in measurement reduce
the variability in a process variable for an important constraint?
6. Have tight shutoff valves, high temperature packing, key lock
shafts, vane actuators, scotch yoke actuators, or valves without
digital positioners been used in control loops that affect important
constraints?
7. Have any of the top 20 mistakes been made in an important loop?
(See Appendix D for a list of the mistakes made every year for the
last forty years.)
8. Are there opportunities to linearize the manipulated variable for a
primary controller by creating a secondary loop that encloses the
nonlinearity?
9. Are there opportunities to attenuate a load upset to a primary loop
by creating a secondary loop that encloses the disturbance?
10. Are there flows that can be ratioed and used as a feedforward
signal to enforce a material balance for startup and to compensate
for changes in flow rate?
11. For batch operations, can phases be eliminated by going from
sequential to parallel actions, such as simultaneous heating, filling,
pressurization, and venting?
12. Can batch cycle time be reduced by a decrease in wait times, hold
periods, operator attention requests, manual actions, or lab sample
analysis time?
13. Can batch end points be automated by the use of a property
estimator, trajectory, or sustained rate of change?
14. Can batch cycle time be reduced by overdrive or an all-out run and
coast?
If the variability in a loop decreases when the loop is in manual, it indi-
cates that the loop was doing more harm than good, due to poor control
valve performance, inappropriate tuning, and/or interaction. If the valve
does not respond to small steps (e.g., 0.25% to 0.5%) in the controller out-
put, the oscillations are probably due to the control valve. If an increase in
the controller gain or a decrease in the integral time increases the variabil-
ity, it is mostly due to incorrect tuning. Lastly, if the variability in other
loops is less when a loop is put in manual, the variability is the result of
interaction.
If the variability in a loop increases when the loop is in manual, there are
load upsets that were being attenuated by the loop and it was doing some
good. If the variability stays the same, the fluctuations are mostly due to
noise or lack of measurement reproducibility.
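The test logic above can be summarized in a short sketch; the 10% comparison tolerance and the return messages are assumptions for illustration:

```python
# Hedged sketch: compare the standard deviation of the controlled variable with
# the loop in automatic versus manual to decide whether the loop helps, hurts,
# or is dominated by noise and reproducibility problems.
import numpy as np

def diagnose_loop(pv_auto, pv_manual, rel_tolerance=0.1):
    s_auto = float(np.std(pv_auto))
    s_man = float(np.std(pv_manual))
    if s_man < s_auto * (1 - rel_tolerance):
        return ("Loop doing more harm than good: suspect valve stick-slip or "
                "dead band, inappropriate tuning, or interaction.")
    if s_man > s_auto * (1 + rel_tolerance):
        return "Loop attenuating load upsets: it is doing some good."
    return "Variability unchanged: mostly noise or poor measurement reproducibility."
```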
There are also some obvious flaws that will stand out from some simple
tests. If there are significant non-uniform fluctuations in the measurement
regardless of the mode of the controller, then the selection and installation
of the transmitter are suspect. These problems are most often associated
with insufficient runs of straight pipe upstream or sensing line problems.
One of the last and most obvious opportunities is the use of cascade con-
trol and ratio control. The most common type of cascade control is a flow
loop that deals with the nonlinearity of the control characteristic and com-
pensates for pressure upsets so that the primary control loop can manipu-
late flow instead of valve position and not see the effect of pressure
swings. The next most common cascade control system uses a jacket, or
coil inlet or output temperature secondary loop, to insulate a primary
crystallizer or reactor temperature control loop from changes in coolant
temperature and the nonlinearity associated with the manipulation of
coolant makeup flow.
The largest and most frequent opportunities in basic control are summa-
rized in Table 2-1 and discussed in detail throughout the rest of Chapter 2.
Simple equations for the fundamental relationship between either the
standard deviation or the peak or integrated error for upsets can be used
for each type of improvement.
Examples
Neutralization Process
Figure 2-4a shows a two-stage neutralization process. The economic vari-
able is yield. The optimum yield is for pH between 6 and 8. A byproduct is
formed that is 1% of the total product when the pH goes above 8. Of
greater concern is the fact that the reaction time increases from 2 minutes
by a factor of ten for each pH unit below 6 pH. The first stage is a static
mixer with a residence time of 2 seconds and the second stage is a well
mixed vessel with a residence time of 20 minutes. The titration curve has a
particularly steep slope between 6 and 8 pH (1 ∆pH per 0.0001 ∆ratio) and
will greatly amplify a valve stick-slip limit cycle, as shown in Figure 2-4b.
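A hedged worked example of this amplification, using the 1 pH per 0.0001 ratio slope quoted above; the reagent-to-feed ratio at the operating point is an assumed value:

```python
# Hedged sketch: estimate the pH limit cycle caused by valve stick-slip on the
# reagent flow, using the steep titration curve slope from the example.
def ph_limit_cycle_amplitude(stick_slip_pct, ratio_at_setpoint=0.01,
                             slope_ph_per_ratio=1.0 / 0.0001):
    """stick_slip_pct: valve stick-slip as % of reagent flow.
    ratio_at_setpoint: reagent-to-feed flow ratio at the operating point (assumed).
    Returns the approximate pH swing of the resulting limit cycle."""
    delta_ratio = (stick_slip_pct / 100.0) * ratio_at_setpoint
    return delta_ratio * slope_ph_per_ratio

# A 0.5% stick-slip at an assumed 0.01 reagent-to-feed ratio gives roughly
# 0.5 pH of swing on the steep part of the curve.
print(ph_limit_cycle_amplitude(0.5))
```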
Distillation Process
Figure 2-5a shows a distillation column, feed tank, and a storage tank for
the distillate product. The series of plots in Figure 2-5b are indicative of the
nonlinear relationship between tray temperature and both the distillate-to-
feed ratio (Fd/Ff) and the impurity in the product. The process gain seen
by the temperature loop is the slope of the plot versus Fd/Ff. The inverse
of the slope of the plot of temperature versus impurity concentration
[Figure 2-4a. Two-Stage Neutralization Process: a static mixer first stage and a neutralizer vessel second stage, each with a reagent flow loop (FT/FC) and a pH loop (AT/AC); a distance of 2 pipe diameters is noted downstream of the static mixer. Figure 2-4b. Amplification of the valve stick-slip limit cycle: pH versus reagent flow and influent flow.]
Thermocouple cards with a 400°C span are used for the temperature measurements.
The distillate control valve has Graphoil™ packing and a
pneumatic positioner. The storage tank residence time is 4 hours and the
[Figure 2-4c. Basic Neutralizer Control System: reagent flow loops on both stages with a flow feedforward summer, a signal characterizer f(x) on the stage 1 pH loop, isolation valves that close when the control valves close, and distances of 20 pipe diameters noted at the pH measurements.]
time delay in the temperature loop is 1 hour. The reflux-to-feed ratio is 10.
If the concentration of impurities in the product in the storage tank exceeds
the spec by more than 0.1%, the product must be recycled. For every 0.1%
reduction in impurity the steam flow to the reboiler must be increased by
0.1%.
4. Move the location of the sensor down into the tray so it always is
immersed in the liquid rather than in the vapor, or even worse, a
splashing liquid.
5. Replace the thermocouple and its DCS input with a 3- or 4-wire
RTD with a smart temperature transmitter.
6. Tune the overhead distillate receiver level controller with a high
controller gain to ensure that the effects of small changes in
distillate flow translate into changes in reflux flow and thus
changes in the column temperature.
7. Tune the feed tank level controller with a low controller gain to
smooth out the changes in feed to the column. Consider the use of
error squared control. For a batch-to-continuous transition in an
undersized feed tank, use an adapted velocity limited feedforward
per Appendix B for optimum smoothing.
8. Add signal characterization to the controlled variable to
compensate for the nonlinearity in the process variable depicted in
Figure 2-5b. Provide a faceplate for the operator that displays the
actual tray temperature rather than the linearized controlled
variable of distillate demand.
9. Add a secondary flow controller to each column loop to
compensate for the nonlinearity associated with the control valve
and to prepare the column loop for feedforward control.
10. Add a flow feedforward signal to the temperature and level
controller outputs and display the actual and desired ratio of
distillate to feed. Make sure the feedforward action is active when
the temperature controller is in manual and the operator can easily
go to flow ratio control and adjust the ratio for the startup of the
column.
The benefits from the reduction in variability afforded by the listed
improvements will be estimated in Chapter 3. The improvements are illus-
trated in Figure 2-5c.
Application
General Procedure
1. Track down and correct the source of sustained oscillations. A
power spectrum analyzer may be required to find the loops with
the common period of oscillation. Beware of a slow scan time of
the I/O and controller that will cause a slower than actual period
and a smaller than actual amplitude from aliasing. For trends or
data obtained from data historians, make sure the data highway
reporting and the time intervals between data points for historical
data are not too slow and the trigger for exception reporting and
compression is not set too high. Also, the data must be saved for at
least a month to catch different process conditions and modes of
operation.

[Figure 2-5a. Distillation column with feed tank, overhead distillate receiver, and distillate product storage tank: pressure loop PC 3-1, level loops LC 3-1 and LC 3-2, reflux flow loop FC 3-3, steam flow loop FC 3-4, and tray 10 thermocouple temperature loop TC 3-2. Figure 2-5b. Plots of tray temperature versus distillate flow, feed flow, and % impurity, with the operating point and impurity errors indicated.]
[Figure 2-5c. Improved Distillation Column Control System: feedforward summers driven by the feed flow (FT 3-3) on the distillate and steam loops, secondary flow controllers (FC 3-1, FC 3-2, FC 3-5) receiving remote set points from the level and temperature controllers, a signal characterizer f(x) on the tray temperature controller TC 3-2, and an RTD temperature measurement.]
reset cycle and how the integral time must be increased [2.3].
The best fix, outlined in the Application Detail section, can be
relatively expensive in that it requires a new control valve
designed to minimize backlash and friction or a variable-speed
drive. An alternative that can mitigate but not eliminate the
limit cycle is a level-to-flow cascade loop.
c. Controllers can create periodic upsets from noise if the reaction
to the noise causes the controller output to exceed the dead
band of the control valve from the gain or rate setting being too
large. This most often happens in level controllers, where
controller gains can be quite large.
d. Controllers can amplify periodic upsets whose period is close
to the natural period of the control loop. Resonance occurs
from the feedback action of the controller being in phase with
the disturbance oscillation. It most often occurs for control
loops in series that have similar loop time delays such as liquid
pressure and flow, and inline equipment in series (heat
exchangers, static mixers, and desuperheaters).
e. Interacting controllers can cause sustained oscillations. Here,
the output of a controller affects another controller and vice
versa. A steady state relative gain analysis (RGA) can reveal
the nature of the interaction. However, the dynamics must be
considered as well since the interaction is particularly severe if
the periods of oscillation of the loops are similar. The best
solution is a change in pairing of the control loops per the
RGA. If this is not feasible, model predictive control (MPC)
should be used.
control band can cause sustained oscillations. With the
disappearance of mechanical sensors, this primarily occurs for
temperature control loops that use thermocouple input cards
instead of narrow-range smart transmitters.
b. Although less common, sustained oscillations can also occur
from controllers tuned so aggressively that they bang back and
forth between output limits, and from nonlinear loops that
have a very high central gain region surrounded by
exceptionally low gain regions. This can occur for control
valves when an insufficient fraction of the system drop has
been allocated as a valve drop, strong acid and base titration
curves, and the temperature response of some monomer and
water distillations. For process nonlinearities, the addition of
signal characterization of the controlled variable and rate
action can eliminate or mitigate the limit cycle. For valve drop
problems, the size of the piping and/or the pump impeller
may need to be increased.
2. Track down the source of long settling times. Here, the oscillations
eventually die out but take too long or cause too much variability.
The most common cause is inappropriate controller tuning, such as
the use of too much reset action (too small an integral time) in
evaporator, reactor, or column temperature or concentration
controllers, or other loops dominated by a large time constant; too
much gain or rate action in level controllers on surge tanks; and too
much gain or rate action in liquid pressure, flow, inline
concentration (blending), or sheet gauge or moisture controllers or
in loops dominated by a large time delay (dead time dominant).
3. Check the sensor selection, installation, and location for
opportunities, per best practices, to improve the reliability,
reproducibility, rangeability, and resolution, and to reduce noise
and decrease loop time delay. Orifice meters and chromatographs
are some of the least reliable measurements and are the biggest
sources of excessively fast and slow noise. Chromatographs are
also the largest source of measurement time delay from sample
transportation and analysis cycle time. Look for ways to eliminate
sensing lines and sample lines by the use of sensors that mount
directly in the pipeline or on the vessel [2.7].
4. Look for ways to reduce the time delay in control loops by changes
in the design of the equipment, piping, instrumentation, final
elements, and the pairing of controlled variables with manipulated
variables.
5. Tune the controllers for the best compromise between robustness
(stability), performance, and smoothness. It is important to realize
that the tuning rules change with the ratio of time delay to time
constant and that all loops will see both load upsets and set point
changes. Methods that focus on set point changes (servo control)
and noise introduced into the measurement are applicable to
aerospace, web, and parts manufacturing but not to processes for
the chemical, petroleum, food, and drug industries and
environmental control. Make sure the tuning method takes into
account the relative degree of dead time and provides the proper
capability for load rejection. Beware of any control loop analysis
that concentrates solely on set point response, and the introduction
of noise or upsets downstream of the process and directly into the
measurement [2.8]. These methods were developed from control
programs in system science or electrical engineering and tend to
ignore the effect of the process and equipment dynamics,
characteristics, and objectives.
6. Find opportunities to employ cascade control. Wherever there is a
reliable flow measurement and a primary loop whose time delay
and time constant are more than five times slower than a flow
loop, a secondary flow controller should be created. There are
some cases where the dynamics are not appropriate for cascade
control. Examples of undesirable choices would be inline pH-to-
reagent flow and liquid pressure–to-flow cascade control because
the primary and secondary loops have about the same time delay.
7. Look for opportunities to add feedforward control, especially flow
feedforward where manipulated flows are ratioed to a feed flow.
Make sure the feedforward signal does not arrive too soon and
cause inverse response (see the sketch after this list).
8. For improvements that cannot be covered by the maintenance
budget, the benefits from the reduction in variability can be
estimated by the calculations in Chapter 3 to justify the project.
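A minimal sketch of the flow-ratio feedforward suggested in step 7, with an assumed dead time and filter added so the feedforward does not arrive too soon; the class and signal names are hypothetical, not a DCS function block:

```python
# Hedged sketch: ratio the manipulated flow to the measured feed flow, let the
# primary (feedback) controller trim the result, and delay/filter the
# feedforward so it does not lead the load and cause inverse response.
from collections import deque

class RatioFeedforward:
    def __init__(self, ratio, delay_scans=0, filter_alpha=1.0):
        self.ratio = ratio                       # desired manipulated-to-feed flow ratio
        self.buffer = deque([0.0] * (delay_scans + 1), maxlen=delay_scans + 1)
        self.alpha = filter_alpha                # 1.0 = no filtering
        self.filtered = 0.0

    def update(self, feed_flow, feedback_trim=0.0):
        """feed_flow: measured feed flow; feedback_trim: output of the primary
        controller, added as a bias to the secondary flow set point."""
        self.buffer.append(feed_flow)
        delayed = self.buffer[0]                                   # dead time
        self.filtered += self.alpha * (delayed - self.filtered)    # lag filter
        return self.ratio * self.filtered + feedback_trim          # remote set point

ff = RatioFeedforward(ratio=0.5, delay_scans=3, filter_alpha=0.3)
print(ff.update(feed_flow=100.0, feedback_trim=2.0))
```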
Application Detail
This section will take a closer look at the methods to improve the response
of valves and measurements, reduce the total loop time delay, tune con-
trollers, employ cascade control, and add feedforward control.
Valve Selection
Control valves are often selected based on the lowest cost valve that has
the required materials of construction. Often tight shutoff is sought.
Nowhere in the valve specification is there a requirement that the control
valve move or respond to a change in signal. Consequently, rotary valves
are chosen because they are the least expensive and offer models with low
leakage rates. They are also thought to offer the highest rangeability. In
reality, the rotary valve has the least usable rangeability because the
installed characteristic gets too flat for small and large controller outputs.
Figure 2-3a shows how the characteristic is too flat below 5 degrees and
above 45 degrees for a butterfly valve. Figure 2-3b shows how the charac-
teristic is too flat below 10 degrees and above 60 degrees for a ball valve. If
you further take into account that the stick-slip significantly increases
when these valves are less than 15 degrees open, the actual usable range-
ability of these valves is less than 20:1, instead of the 200:1 and 400:1 stated
in the literature.
By contrast, the sliding stem valve installed characteristic doesn’t get too
flat until it gets below 5% open or above 75% open as illustrated in Figure
2-3c. Also, its stick-slip is an order of magnitude or more less and usually
doesn’t increase dramatically until the valve is less than 5% open. Also,
unlike the rotary valve, the trim movement of a sliding stem valve closely
matches the actuator shaft movement so that a digital positioner, whose
feedback is typically actuator-stem position, can, by aggressive tuning,
actually compensate for high packing friction. As a result, the real range-
ability of sliding stem valves with digital positioners is 40:1.
Of course, valve manufacturers who offer only rotary control valves will
develop clever ways of diverting attention from these issues or even pitch
the opposite by the use of labels like “high performance” and “high range-
ability” that ignore flat installed characteristics and stick-slip. The user
must realize that “high performance” indicates the ability of the valve to
provide tight shutoff and to withstand high temperatures. These same fea-
tures translate into excessive friction and low performance in terms of con-
trol. The use of a digital positioner cannot correct for the inherent stick slip
problems of rotary valves and can essentially deceive the user into think-
ing it is doing a great job by fancy plots and statistics of the step response
of the actuator stem position. Unfortunately, the ball or disc position does
not track the actuator shaft position, because of gaps in linkages, toler-
New designs of sliding stem (globe) valves, such as that shown in Figure
2-6, reduce the amount of metal used in the body and pockets and crevices
where process material can stagnate and accumulate. This makes the valve
more competitive with the rotary valve for exotic materials, large line
sizes, and fouling or slurry service. Above 6 inches in line size, the cost of
sliding stem (globe) valves can become large enough to warrant further
investigation. If the reduction in stick-slip and loop variability offered by a
sliding stem valve doesn’t provide an acceptable rate of return on the
additional investment, the user should take a closer look at rotary valves,
but avoid any valves originally designed for isolation or interlocks. Sepa-
rate automated “high performance” or “on-off” valves should be used for
isolation or interlocks, and low friction valves for throttling service. Since
many of the rotary valves are flangeless (wafer bodies), the lifecycle cost
should include not only the cost of increased variability, but the increased
difficulty of proper installation and alignment and the increased risk of a
safety incident and reportable release of hazardous materials.
[Figure 2-6. Sliding Stem Valve with Streamlined Passages and Less Metal: integral flange, stem-guided trim, retainer seat ring, and streamlined passages.]
Rotary valves must also pass the checks on the maximum pressure drop.
The rotary valve must meet the maximum pressure drop rating at shutoff
and the maximum allowable pressure drop to avoid choked flow, flashing,
cavitation, and exceeding the noise limit. In general, sliding stem valves
offer higher pressure drop ratings, higher allowable pressure drops to pre-
vent cavitation, and noise reduction trim, and are thus the first choice for
high pressure, boiler-feed water, steam and condensate systems.
If a rotary valve is still the best choice, make sure the connection of the
actuator shaft to the ball or disc stem is a splined connection, as shown in
Figure 2-7, to minimize the tolerance and associated play in the connection
so that the backlash is less than 0.5%. Key lock connections can cause a
backlash of 8%. Also, the shaft diameter should be large and the shaft
length should be short so that shaft windup does not cause a stick-slip
greater than 0.5% [2.10].
low friction packing, with the greatest deterioration found in designs that
employ keyed connections, long slender shafts, and high friction sealing
surfaces for tight shutoff. To help avoid the many traps of creative
advertising, the user should keep in mind the popular myths listed in
Table 2-2.
A piston actuator can reduce the stroking time of large valves once a valve
starts to move. However, the design of most pistons exhibits poor resolu-
tion and dead band that will cause an exceptionally slow response to small
changes in valve position that can get worse if the cylinders are not prop-
erly lubricated. For small changes in valve position, a diaphragm actuator
is generally faster and more precise. Also, a diaphragm actuator does not
require lubrication or as much maintenance unless its temperature rating
is exceeded.
Figure 2-8. Crevice-free Sanitary Valve with High Rangeability and Sensitivity
In all valves, there is a valve prestroke time delay: the time it takes for
enough air to move into or out of the actuator to change the air pressure
enough to start to move the actuator stem. The stroking time is the time
required to complete its transition to the new stem position after the actua-
tor starts to move. The tests to document the prestroke time delay and the
stroking time typically consisted of 10% or larger steps, done with the
valve disconnected. The results depended solely on the type and size of
actuator and the type and flow capacity of the actuator connections and
accessories; they did not include the effect of valve dead band or stick slip.
A ramp (a series of small steps held for the loop scan time) would better
duplicate the actual valve response for closed-loop control. The use of a
ramp is particularly important for pneumatic positioners and boosters
because they exhibit a drastic increase in response time as you approach
the resolution limit of the linkages and flapper nozzle assembly. To quantify
dead band, stick, and slip, a series of steps is used, first for a reversal of
position and then in the same direction. Each step is held for the
prestroke dead time and stroking time identified from a 10% step. The
steps are continued until movement occurs. The actual valve movement in
excess of the size of the last step is indicative of the amount of slip.
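A minimal sketch of this test sequence; the starting position, step size, and number of steps are assumptions for illustration:

```python
# Hedged sketch: controller output targets for a dead band / stick-slip test.
# The first step reverses direction; the rest continue in that same direction,
# each held for the prestroke dead time plus stroking time from a 10% step,
# until the valve finally moves.
def dead_band_test_steps(start_pct=50.0, step_pct=0.25, max_steps=20):
    return [start_pct] + [start_pct - i * step_pct for i in range(1, max_steps + 1)]

# In the field, the total output change before the valve first moves indicates
# dead band plus stick, and travel beyond the size of the last step indicates slip.
print(dead_band_test_steps()[:5])
```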
Figure 2-9 shows how the response time changes as a function of the type
of control valve, shaft connections, actuator, and positioner. Diaphragm
actuators, sliding stem valves, and digital positioners have the fastest
response by far to a range of small step sizes, which is the goal of 99% of
all control valves. The combination of an electrical or hydraulic actuator
and a sliding stem valve can have an even better resolution. The only dead
band is what is introduced in the setup of the positioner to eliminate
dither. The prestroke dead time is essentially zero but the stroking time
increases proportionally to the step size and can become quite large for the
electrical actuator. Hydraulic actuators provide the fastest response for
small and large step sizes but are complex and expensive.
Valve Installation
The installation and location requirements for a control valve are generally
less than for a sensor. Ideally, control valves should have the same straight
run of pipe upstream and downstream as a differential head flow meter,
since both constitute a variable orifice [2.9]. Adherence to the straight-run
requirements rarely occurs in industry but is common in the flow test labs
used to establish flow characteristics and flow coefficients [2.9]. The repro-
ducibility error from an erratic flow profile is not as important for a final
element as it is for a measurement because the control loop will drive the
manipulated variable as necessary to reach set point. However, if the flow
is going to be computed through the control valve based on valve position
and pressure drop, the reproducibility of the resulting flow measurement,
and hence the straight-run requirements, become more important.
[Figure 2-9. Response Times for Different Types of Final Elements (Valves, Actuators, and Positioners): response time (0.4 to 400 seconds) versus step size (0.1 to 100%). All valves look good for about a 10% step. Curves: (1) variable speed drive with dead band adjustment set equal to zero; (2) sliding stem valve with diaphragm actuator and a digital positioner; (3) sliding stem valve with diaphragm actuator and pneumatic positioner; (4) rotary valve with piston actuator and digital positioner; (5) rotary valve (tight shutoff) with piston actuator and pneumatic positioner; (6) large valve or damper with any type of positioner; (7) small valve with any type of positioner.]
For partially filled lines, there is an excessive time delay even when the
valve stays open. A change in valve position causes a crest or valley in the
wave to travel down the pipe or channel. The transportation delay is the
distance divided by the velocity of the wave but the velocity is difficult to
estimate. For a very low flow down a vertical pipe, the velocity of a falling
film can be computed [2.14]. Whenever the control valve closes, manipu-
lated fluid in the downstream piping and injectors or dip tubes slowly
migrates into the equipment or destination and the process fluid backfills
the same volume. The result is a long delay between the closure of the
valve and the end of manipulated fluid flow and a similarly long delay
between the opening of the control valve and the start of the manipulated
fluid flow into the equipment or destination.
For a pressurized, completely full pipeline, dip tube, and injector, the time
delay can be estimated as the volume between the valve and the injection
point divided by flow of the manipulated fluid. For pH control systems
where the manipulated fluid is a reagent, the flow can be as low as one
gallon per hour and just one gallon of volume downstream of the valve
can result in one hour of time delay every time the reagent control valve
closes [2.14].
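This estimate amounts to a one-line calculation; the sketch below simply restates it with the reagent example from the text:

```python
# Delivery dead time ~ downstream volume divided by manipulated-fluid flow.
def delivery_dead_time_hours(volume_gal, flow_gph):
    """Dead time (hours) for the manipulated fluid to refill the piping, dip
    tube, and injector volume downstream of the valve after it closes."""
    return volume_gal / flow_gph

# One gallon of downstream volume at a one gallon-per-hour reagent flow gives
# roughly one hour of dead time every time the reagent valve closes.
print(delivery_dead_time_hours(volume_gal=1.0, flow_gph=1.0))
```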
The control valve should have block and drain valves so that it can be
removed safely and easily. For large continuous processes, it is desirable to
have a manual bypass valve to keep the unit online while the valve is
tested or repaired. Also, the outlet isolation valve can be closed and the
bypass valve, shown in Figure 2-10, opened for inline testing and tuning
of the digital positioner at process pressures and temperatures. For slurry
service, rotary valves can be mounted in a vertical-flow up pipe to pro-
mote self-draining and prevent solids buildup. For sliding stem valves, a
vertical mounting will cause additional wear of the packing from the
weight of the actuator.
[Figure 2-10. Installation Requirements for a Flow Meter and Control Valve: flow transmitter FT 1-1 with upstream straight run A and downstream straight run B, plus flush, drain, block, and bypass valves.]
The time from minimum to maximum speed is adjustable within the limits
imposed by the impeller inertia and the motor horsepower. The factory
setting is conservative. The speed response is a velocity-limited ramp rate
with no time delay or lag. Consequently, the response for small speed
changes is fast. For example, if it takes a drive 9 seconds to go from 10% to
100% speed, it will take only 0.1 second to change the speed by 1%.
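The velocity-limited response can be checked with a one-line calculation; the sketch below reproduces the example above (the 10% to 100% span and 9 second ramp are the stated values):

```python
# Time for a variable speed drive speed change = step size divided by ramp rate.
def vsd_response_time_s(step_pct, full_span_ramp_time_s=9.0, span_pct=90.0):
    ramp_rate = span_pct / full_span_ramp_time_s   # % of speed per second
    return step_pct / ramp_rate

# A 1% speed change with a 9 second ramp over the 10% to 100% span takes 0.1 s.
print(vsd_response_time_s(1.0))
```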
For strong acid and base pH systems, the requirement for precise adjust-
ment would best be met by a VSD, but the flow rates are often too small
for a centrifugal pump and the location of the pump on the ground creates
a huge reagent delivery transportation delay. Instead, an electronically set
metering pump is used with the piping designed to stay full.
Measurement Selection
For flow measurements, inline meters such as coriolis mass flow meters,
magnetic flow meters, vortex shedding meters, and thermal mass flow
meters should be considered because these meters eliminate sensing lines,
external connections, and small holes that are the largest source of errors,
failures, leaks, and maintenance. To reduce the effect of flow profiles and
changes in process fluid, the preferred order of selection is the order listed.
Coriolis mass flow meters require no straight runs, are not affected by Rey-
nolds Number or fluid properties, and have by far the best reliability,
reproducibility, rangeability, and resolution for the measurement of both
mass flow and density. The coriolis meter is the only true mass flow meter.
Magnetic and vortex flow meters are velocity volumetric devices and can
be used to compute mass flow only for a fixed and known composition by
the measurement of temperature and pressure. This is also true for pitot
tubes with differential pressure, pressure, and temperature transmitters
that are advertised as mass flow meters. The thermal mass flow meter is
not a volumetric meter but its reading will depend upon the heat capacity
and thus the composition of the fluid.
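A hedged sketch of this inference for a velocity/volumetric meter; the density correlation is a stand-in assumption (here an ideal-gas estimate for a known molecular weight), not a property package:

```python
# Mass flow from a volumetric meter is only valid for a fixed, known composition,
# with density computed from temperature and pressure.
R = 8.314  # J/(mol K)

def ideal_gas_density(temperature_k, pressure_pa, mw_kg_mol=0.028):
    # Assumed density correlation for illustration only.
    return pressure_pa * mw_kg_mol / (R * temperature_k)

def mass_flow_kg_s(volumetric_flow_m3_s, temperature_k, pressure_pa, density_fn):
    """density_fn(temperature_k, pressure_pa) -> kg/m3 for the assumed fixed
    composition; a change in composition invalidates the inferred mass flow."""
    return volumetric_flow_m3_s * density_fn(temperature_k, pressure_pa)

print(mass_flow_kg_s(0.05, 300.0, 200_000.0, ideal_gas_density))
```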
Coriolis flow meters above 2 inches get expensive, but still may be justi-
fied where the ability to measure and control a mass balance is important.
Often overlooked are the benefits of an accurate density measurement and
an approximate temperature measurement (the temperature sensor is on
the outside surface of the tube and is not in contact with the fluid) to create
online estimators of fluid composition. Unfortunately, the fluid may be
too corrosive for the materials of construction offered, the fluid tempera-
ture or concentration of particles may be too high, or the pipe size too
large.
Coriolis flow meters are so accurate that they can be used to replace load
cells or weigh tanks for batch charges. Also, for flat titration curves and
constant composition feeds and reagents, simple mass ratio control with
coriolis flow meters has proven to be more accurate and more reliable than
a pH control. The hardware cost of a coriolis flow meter is high compared
to a differential head meter, but the installation cost may be less since there
are no straight-run requirements or additional measurements to compen-
sate for pressure and temperature. The project cost may be still higher, but
the life-cycle cost is often significantly better for the coriolis meter, as
shown in Figure 2-11, because the coriolis meter requires less maintenance
and accumulates benefits from tighter control.
[Figure 2-11. Lifecycle cost comparison of coriolis and orifice meters: benefits ($) versus time, showing the lost opportunity after 10 years for the orifice meter. The higher purchase price of the coriolis technology is partially offset by lower installation costs and will still often lead to higher project costs, but can lead to lower lifecycle costs from less maintenance and better yields from more accurate mass balances and control of stoichiometry.]
Next to the coriolis mass flow meter, the magnetic flow meter has the fewest
interferences since it is not affected by either Reynolds Number or viscos-
ity and is relatively insensitive to flow profile. The main limitation is that
the fluid conductivity must be greater than 1 micromho/cm (0.1 for spe-
cial units). For erosive service, ceramic linings are offered.
The flow profile is a big factor for the vortex meter, particularly near the
low end of the meter’s range. If the velocity drops below 1 fps, or Rey-
nolds Number is less than 20,000, or the viscosity is greater than 30 centi-
poises, or the concentration of particles is above 2%, the vortices are not
shed uniformly and the vortex frequency deviates from a proportional
relationship to flow. At low velocities the reading can become very erratic.
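A minimal applicability check based on the limits quoted above; the function name and return format are assumptions:

```python
# Vortex meter applicability screen using the thresholds stated in the text.
def vortex_meter_ok(velocity_fps, reynolds, viscosity_cp, solids_pct):
    problems = []
    if velocity_fps < 1.0:
        problems.append("velocity below 1 ft/s")
    if reynolds < 20_000:
        problems.append("Reynolds number below 20,000")
    if viscosity_cp > 30.0:
        problems.append("viscosity above 30 centipoise")
    if solids_pct > 2.0:
        problems.append("particle concentration above 2%")
    return (len(problems) == 0, problems)

print(vortex_meter_ok(velocity_fps=0.8, reynolds=15_000,
                      viscosity_cp=5.0, solids_pct=0.0))
```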
The hardware and setup costs of the radar level measurement has decreased
and many of the calibration complexities have been automated to the
point where the lifecycle cost is often less than the differential pressure
(d/p) method of level measurement. It is the most accurate of the common
types of level measurement. The main limitation of radar is that the fluid
must have a dielectric constant greater than 2. For tall narrow vessels, the
minimum 8-degree angle of divergence of the beam may result in the
gauge not being able to discern the bottom of the vessel. The gauge must be pro-
grammed to ignore any obstruction, including dip tubes and agitator
blades, and typically requires an empty tank at some point during the cal-
ibration procedure. Pulsed lasers that are not adversely affected by dust or
vapor can potentially become an even more accurate method of level mea-
surement [2.17]. The angle of divergence is less than a degree and there is
no dielectric requirement. However, a relatively clean sight glass window
is required.
The differential pressure level measurement depends upon density of the fluid
and the condition of the sensing and equalization lines. A second d/p
with both connections below the minimum level can be added to provide
a representative measurement of the density if the vessel is well mixed,
although the accuracy is usually good to only two significant digits. The
sensing and equalization lines can be eliminated by the use of capillary
systems or transmitters mounted directly on vessel flanges for both the
bottom total pressure and top equalization pressure. However, the error
introduced by capillary systems can be significant if there are bubbles in
the fill, or differences in the temperature or length of the capillary. The
communication of signals for the computation of level from dual transmit-
ters is best done digitally to eliminate digital-to-analog (D/A) and analog-
to-digital (A/D) converter errors. Even so, the error increases as the vessel
pressure increases and can become unacceptable because the bottom d/p
measures both liquid head and vessel pressure relative to atmospheric
pressure.
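A hedged sketch of the dual d/p computation described above; the tap spacing and example pressures are assumed values, and the subtraction of the top (equalization) pressure is taken as already done by the d/p or digitally between two transmitters:

```python
# Density from a lower d/p spanning a known elevation, then level from the
# liquid-head d/p referenced to the vapor space.
G = 9.81  # m/s2

def density_from_lower_dp(dp_pa, tap_spacing_m):
    """Both taps below minimum level: dp = rho * g * spacing."""
    return dp_pa / (G * tap_spacing_m)

def level_from_dp(head_dp_pa, density_kg_m3):
    """head_dp_pa: bottom pressure minus top equalization pressure, so the
    vessel pressure cancels and only the liquid head remains."""
    return head_dp_pa / (density_kg_m3 * G)

rho = density_from_lower_dp(dp_pa=4905.0, tap_spacing_m=0.5)          # ~1000 kg/m3
print(rho, level_from_dp(head_dp_pa=19620.0, density_kg_m3=rho))      # ~2.0 m
```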
Nuclear level measurements are completely isolated from the process, but are
affected by density unless a second device is added. Strip sources are rec-
ommended to eliminate the need for compensation of the difference in
radiation path length from a point source to the strip detector. The license
procedure is considered a hassle and anything nuclear tends to scare peo-
ple even though the exposure is less than what they receive from the sun.
For analytical measurements, inline meters and probes that do not require
sample systems will pay off by the elimination of sample transportation
delays and the reduction in the life-cycle cost of a sample system. If a den-
sity measurement is sufficient, the coriolis meter offers the fastest and
most accurate and reliable response. For simple water mixture and com-
plex general mixtures, inline meters such as microwave and nuclear mag-
netic resonance (NMR) should be investigated respectively. [Microwave
can be used for simple 2 or 3 component water mixtures, while NMR can
handle aqueous and non-aqueous complex mixtures]. For coal, oil and
mineral slurries, the prompt gamma neutron activation analyzer can
Measurement Installation
Sensing lines should be eliminated wherever possible by direct mounting
a differential pressure (d/p) transmitter for flow measurement or pressure
measurement, as shown in Figure 2-12a and 2-12b, to the pipe connection.
Isolation, flush, drain, and equalization valves are necessary to minimize
the exposure to chemicals during the removal of the transmitter. An equal-
ization valve is used when both the high and low sides are connected to
the process to offer the opportunity for a zero adjustment and to protect
the d/p from “over-range” from just seeing the upstream pressure.
[Figures 2-12a and 2-12b: a d/p flow transmitter (FT 1-1) and a d/p pressure transmitter (PT 1-1) direct mounted to large bore pipe connections, with an equalization valve across the high (H) and low (L) sides and with flush and drain valves.]
An inline flow meter should be installed upstream of the control valve, as
shown in Figure 2-10, to minimize the distortion of the flow profile and to
provide a more constant pressure. For liquid flow, the upstream location
helps prevent a partially filled meter, besides reducing exposure to flash-
ing and cavitation. Bubbles adversely affect the accuracy of all flow meters
and the implosion of bubbles can cause serious damage in addition to
erratic readings. Isolation, flush, and drain valves are again recommended
for safe removal of an inline meter, although some practices for highly haz-
ardous materials seek to minimize the number of connections and leak
points. A bypass valve allows the plant to run on manual while the instru-
ment is repaired. The isolation valves upstream of the flow meter shown
must be wide open when the flow meter is in service for the same reasons
that a control valve is undesirable upstream of a flow meter. If there are
solids, the meter can be installed in vertical pipe with flow up to help
drain the piping. For coriolis meters, a single straight tube is desirable to
eliminate erosion at the bends of a U-tube and unequal distribution of sol-
ids in a dual tube. The meter size should be chosen to provide the opti-
mum velocity to minimize the effect of solids concentration on accuracy.
Figure 2-10 and Table 2-3 show the relative straight run requirements for
different types of flow meters (the A and B values are in Table 2-3). An ori-
fice with a large beta ratio (high orifice bore to inside pipe diameter ratio)
has the greatest upstream straight-run requirement, followed by the vor-
tex meter operating at a low fluid velocity. The upstream requirements
also increase for multiple fittings in different planes or valves upstream
that are not completely open or are partially plugged. The upstream
straight-run requirement can be dramatically reduced by the use of
straightening vanes. The manufacturer should be consulted for actual
requirements based on your piping details and process conditions. The
coriolis flow meter has no upstream or downstream straight-run require-
ments.
Sample lines should be eliminated wherever possible for all types of elec-
trodes by the use of insertion or injector assemblies. Injector electrode
holders with manual or automatic retraction are now offered. Even though
these assemblies have built-in isolation, flush, and drain capability, the
user may choose to have the piping set up as shown in Figure 2-13a and 2-
13b to provide additional protection for hazardous materials. For three
electrodes, a series arrangement, as shown in Figure 2-13b, is favored to
help keep the velocity and concentration the same for all three meters. The
electrodes must be pointed down at a 30- to 60-degree angle, as shown in
Figure 2-13c, to prevent a bubble from becoming lodged in the tip or at an
internal electrode. The first electrode should be at least 20 pipe diameters
from the discharge of a pump or static mixer to reduce pressure pulsation
and facilitate some mixing. The electrodes should also be separated by 10
pipe diameters to help establish a more uniform velocity and they should
be inserted far enough into the line or vessel to get a representative read-
ing. The mounting of electrodes in a pipeline with a 5 to 9 fps velocity is
preferred to a vessel because the higher velocity in the pipe makes the
electrodes respond faster and keeps them cleaner. The bulk fluid velocity
in even highly agitated vessels rarely exceeds 1 fps except near the agitator
blade tip. For solids and high caustic or temperature service, there is a
compromise to keep the velocity low to decrease erosion and chemical
attack and yet high enough to reduce coatings. The slot in the protective
shroud of the electrode tip should be oriented to shield the electrode from
abrasion and chemical attack but provide a sweeping action to decrease
fouling. Finally, the transportation delay between the vessel or the point of
reagent addition and the electrodes should not exceed 5 seconds.
[Figures 2-13a, 2-13b, and 2-13c: parallel and series electrode (AE 1-1, 1-2, 1-3) installations with flush connections and a throttle valve to adjust velocity. In the parallel arrangement, the pressure drop for each branch must be equal to keep the velocities equal. Differences in velocity, concentration, and temperature are less for probes in series, where the electrodes are separated by 10 pipe diameters and inserted 20 to 80 degrees from horizontal.]
[Figure: a temperature element (TE 1-1) installed in a piping elbow 20 pipe diameters downstream of a desuperheater or static mixer, with flush and drain connections. Source: Advanced Temperature Measurement and Control, ISA, 1995, p. 14, Figure 2.3.]
The water purge and the nitrogen purge lines must each have a check valve
before being combined. The sens-
ing and equalization lines for low to moderate vessel pressures can be
eliminated by the use of separate d/ps for the total pressure and equaliza-
tion, direct mounted on the bottom and top nozzles as shown in Figure 2-
16b. A third d/p can be direct mounted at an intermediate nozzle to com-
pute fluid density, and a temperature sensor can be used to compensate
for the changes in the dimensions of the vessel, to provide a more accurate
level measurement. Flush connections or extended diaphragms are used
for the lower nozzles to help keep the diaphragm clean. The signals are
communicated digitally to a computer or a programmable electronic con-
trol system.
Figure 2-16a. Purged Sensing and Equalization Lines for Level Measurement
(d/p level transmitter LT 1-1 with purge, flush, and drain connections on the sensing and equalization lines)
[Figure 2-16b: direct mounted d/p level transmitters LT 1-1 and LT 1-2 on the bottom and top nozzles, with flush and drain connections and digital signals combined in a level computation (LY/LT 1-3).]
Thus, for purposes of this chapter, the open loop response will be a first-
order response that can be characterized by a total time delay, a negative
or positive feedback time constant, and a steady state gain. The time it
takes the process variable to get out of the noise band after a step change
in the controller output (CO), is the observed time delay (τd), or dead time.
It excludes any time delay due to valve dead band or stiction. The Theory
section shows how to estimate the additional time delay from valve dead
band. The time it takes after the time delay for the response of the process
variable (PV) to reach 63% of the final change for a self-regulating
response is the negative feedback open loop time constant (το), or time lag.
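For illustration, a step test record can be reduced to these two parameters with a few lines of Python. This is only a sketch with a synthetic response and hypothetical names, not a recommended identification package; the dead time is taken as the time for the PV to leave the noise band and the time constant as the additional time to reach 63% of the final change, per the definitions above.

    import math

    def fopdt_from_step(t, pv, pv_start, pv_final, noise_band):
        """Observed dead time and open loop time constant from a step test record."""
        dpv = abs(pv_final - pv_start)
        t_dead = next(ti for ti, y in zip(t, pv) if abs(y - pv_start) > noise_band)
        t_63 = next(ti for ti, y in zip(t, pv) if abs(y - pv_start) >= 0.63 * dpv)
        return t_dead, t_63 - t_dead

    # Synthetic self-regulating response: 2 min of dead time, 10 min time constant,
    # 5% change in PV after a step in controller output.
    t = [0.1 * i for i in range(1200)]
    pv = [50 + (0 if ti < 2 else 5 * (1 - math.exp(-(ti - 2) / 10))) for ti in t]
    td, tau = fopdt_from_step(t, pv, pv_start=50, pv_final=55, noise_band=0.1)
    print(f"observed dead time ~ {td:.1f} min, open loop time constant ~ {tau:.1f} min")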
The open loop response of each major component of the plant (control
valve, process, and measurement) in Figure 2-19 has a first-order-plus-
dead-time approximation. The controller also contributes a time delay
from the scan time and time constants from the signal filters. Material and
energy balances are used in the Theory section to show the origin of the pro-
cess time constants and gains and how they change with operating condi-
tions. Equations in the Theory section also estimate the dead time from
mixing, transportation delay, and valve dead band.
[Figure: open loop response of the process variable (%) versus time (min) after a load upset Eo, showing curve 0 = self-regulating, curve 1 = integrating, and curve 2 = runaway (accelerating) responses, with the time delay τd and the time constants τ and τ' marked.]
[Figure 2-19: block diagram of the loop, in which the PID controller (Kc, Ti, Td) output ∆CO moves the manipulated variable (gain Kmv), the process (gain Kpv), and the controlled variable measurement (gain Kcv); the open loop gain is the product Ko = Kmv∗Kpv∗Kcv.]
Equations
Ko = ∆%CV / ∆%MV = Kmv ∗ Kpv ∗ Kcv   (2-1)

Ki = (∆%PV / ∆t) / ∆%CO = Ko / τo   (2-2)
where Ko is the open loop gain, ∆%CV and ∆%MV are the percent changes in the controlled and manipulated variables, Kmv, Kpv, and Kcv are the gains of the final element (manipulated variable), process, and measurement (controlled variable), Ki is the pseudo-integrator gain, ∆%PV/∆t is the rate of change of the process variable, ∆%CO is the change in controller output, and τo is the open loop time constant.
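For example, with hypothetical gains of 1.5 %/% for the valve, 0.8 %/% for the process, and 0.5 %/% for the measurement, and a 10 minute open loop time constant, Equations 2-1 and 2-2 give:

    Kmv, Kpv, Kcv = 1.5, 0.8, 0.5   # hypothetical valve, process, and measurement gains
    tau_o = 10.0                    # open loop time constant (min)

    Ko = Kmv * Kpv * Kcv            # Eq. 2-1: open loop gain = 0.6
    Ki = Ko / tau_o                 # Eq. 2-2: pseudo-integrator gain = 0.06 %/min per % CO
    print(f"Ko = {Ko:.2f}, Ki = {Ki:.3f}")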
The total loop time delay (dead time) is the most important of the three
key variables that describe the open loop response of a control loop. It
delays the ability of a controller to see or react to a disturbance. The
minimum peak error is the maximum excursion of the process variable
during this time delay. The oscillation period is also proportional to the
time delay. Thus the integrated error is proportional to the time delay
squared for self-regulating processes with a large time constant that limits
the excursion within the time delay to less than the full change of the
process variable. The equations to approximate these relationships are
developed in the Theory section.
Perfect control is theoretically possible if the total time delay is zero and
there is just a single process time constant. However, in industrial pro-
cesses the total loop time delay is never negligible because even if the pro-
cess time delay is negligible, the addition of a measurement, valve, and
digital controller adds time delay. For flow, pressure, and level control,
most of the time delay in a control loop comes from the automation system
[2.1]. While the time delay cannot be zero, the objective, particularly for
loops with operating point nonlinearities, like the pH and column temper-
ature control examples, is to reduce the total time delay. This is because the
extent of the excursion on the titration curve or tray temperature curve,
and hence the effect of the nonlinearity, is decreased by a reduction in loop
dead time.
Pure dead times come from transportation delays (pipes, sample lines,
static mixers, coils, jackets, conveyors, sheet lines, and textile fiber lines),
valves (prestroke dead time, dead band and stiction), and anything digital
or with a cycle time (microprocessors and analyzers). Equivalent dead
time comes from time constants in series from instrumentation (sensor
time lags, thermowell time lags, and transmitter filter times and dampen-
ing adjustments), analog input cards (analog filters), and process variable
filter times (digital filters). The exact values are not important, just the rel-
ative sizes. The engineer or technician should work on the largest, most
cost-effective sources of dead time [it is not just the job of the control engi-
neer but also the process engineer who is responsible for equipment
design and the selection of instruments for small automation projects and
the technician who often determines the sensor location and loop scan
times. Some plants don’t have a control engineer].
The largest time constant does not have to be in the process. For processes
not dominated by time delay, an increase in the time constant, no matter
where it appears in the loop, will allow an increase in controller gain, even
though the final effect is not beneficial. For example, a large time constant
in the measurement, such as a large thermowell lag or process variable fil-
ter time setting, will allow a higher controller gain and give the illusion of
better control because the controlled variable is an attenuated version of
the real process variable. Equation 3-2 can be used to estimate the effect.
All time constants much smaller than the largest time constant are con-
verted to equivalent dead time. While the fraction converted to dead time
depends upon the relative size of the small to the large time constant, very
small time constants can be summed up as totally converted to dead time
because it is difficult to find and estimate all the small time constants and
sources of dead time. Thus, the total time delay for a control loop is the
sum of all the pure time delays and the small time constants. Dead time
compensators and model predictive controllers can account for the effect
of time delay on the response to changes in the controller output, but the
minimum peak error for unmeasured load upsets and the initial delay
before the start of the set point response is still fixed by the total time
delay.
The open loop time constant is approximately the largest of the time con-
stants plus the portion of all of the small time constants not converted to
time delay. If each of the small time constants is less than 10% of the larg-
est time constant, so that each is essentially converted to equivalent time
delay, the largest time constant can be considered to be the open loop time
constant. The purpose here is to show the relative sizes and sources of
time delay. There are too many unknowns to calculate an exact value. In
industry, the open loop time constant, total time delay, and open loop gain
cannot be accurately calculated, except possibly for flow and level, and
must be obtained from plant tests.
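The bookkeeping described above can be illustrated with a short sketch (the delay and lag values below are hypothetical): the pure delays and the small time constants are summed to approximate the total loop dead time, and the largest time constant is taken as the open loop time constant.

    # Minimal sketch (hypothetical values, consistent time units):
    pure_delays = {"transportation": 3.0, "valve prestroke": 0.5, "controller scan": 0.25}
    time_constants = {"process": 20.0, "thermowell": 1.5, "transmitter damping": 0.4, "PV filter": 0.3}

    largest = max(time_constants.values())
    small = [tc for tc in time_constants.values() if tc < largest]

    total_dead_time = sum(pure_delays.values()) + sum(small)  # small lags treated as dead time
    open_loop_time_constant = largest

    print(f"total loop dead time ~ {total_dead_time:.2f}")
    print(f"open loop time constant ~ {open_loop_time_constant:.1f}")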
The open loop gain (sensitivity) can be too low or too high. If the valve
gain is too low (valve sensitivity is too low), the controller has little effect
on the process and the controlled variable will wander and be at the mercy
of upsets [2.5]. If the process or measurement gain (sensitivity) is too low,
the controlled variable is not representative of the process performance. If
the valve gain (sensitivity) is too high, the effect of stick-slip is excessive
and just the act of putting a controller in automatic can cause unacceptable
oscillations. If the process or measurement gain (sensitivity) is too high,
the nearly full-scale oscillations will scare most people even if the actual
performance of the process is acceptable. The classic example of this prob-
lem is the pH control of a strong acid and base system with a static mixer.
Try explaining to the operator that the actual change in hydrogen ion con-
centration is tiny for a system that oscillates between 2 and 12 pH.
The total loop time delay, open loop time constant, and open loop gain are
rarely constant but rather a function of operating conditions. For example,
the process gain for composition and temperature control is inversely pro-
portional to feed rate, and the process time constant is inversely propor-
tional to feed flow for back-mixed volumes, whereas the time delay is
inversely proportional to feed flow for plug flow. The Theory section
shows the source of these process nonlinearities.
Increasingly, the control loop set point is changed by either a master loop
for cascade control or a model predictive controller for advanced control,
or by a unit operation for an automated startup sequence, product transi-
tion, and batch operation. Even for those loops whose set point is not
changed from a remote source, the local set point is a handle used for star-
tup, sweet spots, and to relieve boredom. Operators are notorious for
moving set points despite claims to the contrary. Plus, the operator has to
start up the unit, which may consist of a series of set point changes as he
walks the unit up to operating conditions.
On the other hand, if there were no process upsets, you wouldn’t need a
controller: You could find and manually set an output to a final element
that would be good indefinitely.
The integral of absolute error (IAE) of the controlled variable is a common
measure of accumulated deviation. The integrated error (Ei) can be estimated from the
tuning settings and is equivalent to the IAE if the response is not oscilla-
tory. Of increasing importance is the settling time (Ts), which is the time it
takes for a loop to stay within a specified band around the set point after a
set point change or load upset; it is also used to detect sustained oscillations. Since pro-
cesses and equipment have limits that can trigger interlocks, violate envi-
ronmental constraints, or initiate side reactions, the overshoot (E1) for set
point changes and the peak error for load upsets (Ex) are also important.
The decay ratio is the amplitude of the second peak (E2) divided by the first
peak (E1 or Ex). Finally, the rise time (Tr) (time it takes the controlled
variable to first reach a specified band around the set point) is important
for reducing batch cycle, startup, and transition time and decreasing the
open loop response time (T98) (time to reach 98% of the final value) for
master and model predictive controllers. Figures 2-20a and 2-20b show the
closed-loop performance indices for a set point change and a load upset,
respectively. In the Theory section, equations are developed to estimate Ei
and Ex for load upsets.
Figure 2-20a. Closed Loop Performance Indices for a 10% Setpoint Change
(overshoot = A, decay ratio = B/A, rise time, settling time)
Figure 2-20b. Closed Loop Performance Indices for a 40% Load Upset (peak
error = A, decay ratio = B/A, settling time)
An increase in the controller gain will greatly reduce the peak error, the
rise time, and the return time, but may increase the overshoot and the set-
tling time. High controller gains will amplify noise, increase interaction,
and pass on more variability from the controlled variable to the manipu-
lated variable. Figures 2-22a and 2-22b show the effect of the controller
gain setting on the response of an effectively proportional-only controller
to a load upset and a set point change, for a process with a time constant
that is 5 times larger than the time delay. An increase in controller gain
dramatically speeds up the initial rate of approach to set point by over-
driving the controller output. It will also start to back off the controller
output from the output limit as soon as the controlled variable comes
within the proportional band, which is the percent change in the control
error (difference between the measurement and set point) necessary to
cause a full-scale (100%) change in the controller output.
[Figure: controller output and process variable versus time (seconds), showing the offset between the process variable and set point and the ramping or driving action from reset (seconds/repeat) that makes ∆%CO2 = ∆%CO1; the offset is inversely proportional to gain but is only completely eliminated by integral action.]
High controller gain by itself does not cause a classic overshoot but instead
an oscillatory approach to set point. It is high controller gain combined with
integral action that causes overshoot.
Figure 2-23a. Response of an Integral-only Controller to a 10% Set point
Change
The derivative mode does not produce a sharp output spike for a step change
because of a built-in filter whose time constant is typically 1/8 of the
derivative time. The contribution will decay to zero since the offset has a
slope of zero. The direction of the change in controller output depends
upon the sign of the change or the acceleration of the error, so the deriva-
tive mode has a definite sense of direction and rate of approach to set
point.
The PID algorithm can have the derivative action on either the control
error or on the controlled variable. The latter method was developed to
reduce the bumps to the controller output from rapid set point changes
made by the operator. Unfortunately, for set point changes made by
sequences, cascade, and advanced control systems, the derivative mode
works against the change requested because it only knows that the mea-
surement is starting to move and that any movement is undesirable. For
this reason, derivative action on the control error is preferred when the loop
is dominated by a time constant and set point velocity limits are readily
available. An increase in derivative time
will decrease the peak error and reduce overshoot and the period of oscil-
lation. Too much derivative action can increase the rise, or return, time and
the settling time. The derivative mode can respectively speed up or slow
down the initial approach for a set point change by acting on the control
error or the controlled variable.
Figures 2-24a and 2-24b show the effect of the derivative time setting on
the response of a proportional-plus-derivative controller to a load upset
and a set point change, for a process with a time constant that is 4 times
larger than the time delay. The derivative mode is even more likely than
the gain mode to amplify noise, increase interaction, and transfer variabil-
ity to the manipulated variable. It can decrease or increase the response
time of master or model predictive controllers depending on how it is
used. It should not be used on dead time–dominant systems or any system
with abrupt or erratic changes. It works best on processes with large time
constants, low noise, and good measurement repeatability and resolution
so that the response of the controlled variable is smooth.
Temperature and composition loops are the prime candidates for PID con-
trollers.
Nearly all temperature controllers should use PID controllers because the
derivative mode provides a phase lead that compensates for the phase lag
of the thermal lags in the thermowell and process. Wherever the process
variable can accelerate, whether due to positive feedback or nonlinearity,
derivative is helpful since it reacts to a change in the rate of change. In the
distillation-column example, the control point is on the knee of a plot of
tray temperature versus distillate-to-feed ratio. An increase in feed can
cause the drop in temperature to accelerate on the steep slope. However, if
thermocouple or RTD input cards are used instead of a smart transmitter,
the steps from hitting the resolution limit seriously reduce the amount of
derivative action that can be used. A temporary fix is to add a filter that
smooths out the steps. If there is an inverse response, where the initial
response is opposite to the final response, the derivative mode cannot be
used. This can occur in furnace temperature control where the controller
output is the firing demand that works within a cross limit to make air
lead fuel on a load increase.
For tight control of processes with a time constant much larger than the
dead time, there is benefit in aggressive preemptive and anticipatory
action. Thus, these processes should maximize the gain and derivative set-
ting and overdrive the output to reduce the rise and return time. The inte-
gral time is increased (reset action decreased) since it has no sense of
direction and increases overshoot. If the controller gain is larger than 5,
there is enough muscle from the proportional mode, and the derivative
setting can be small or zero. Derivative mode is a necessity regardless of
gain setting for a process whose control variable can significantly acceler-
ate, whether due to a nonlinearity (pseudo-runaway) or positive feedback
(real runaway), such as a polymerization reactor or a fermentor in the
exponential growth phase. For these processes, the integral time setting is
increased to about 10 times the ultimate period so that gain and rate action
dominate the response.
Conversely, for tight control of processes with a time delay much larger
than the largest time constant, gain and rate action must be minimized
and integral (reset) action maximized, which means the integral (reset)
time must be minimized. The integral mode adds smoothness not inherent
in the process. The integral time factor is decreased and can be as small as
1/8 of the ultimate period for a pure dead time process.
Figures 2-25a through 2-25f show the effect of the three mode settings on a
PID controller for a process with a time lag 4 times larger than the loop
time delay. Note that a gain setting too large increases overshoot due to
the presence of reset action. An integral (reset) time setting that is too
small causes a greater overshoot and increases the period of oscillation.
Too much derivative action causes an oscillatory response and a shorter
period but no real overshoot. Figures 2-26a through 2-26d show the effect
of two mode settings on a PID controller for a process with a time lag 4
times smaller than the loop time delay. The doubling of the gain setting is
much more disruptive.
Figure 2-25a. Effect of Gain on Set point Response of PID for a Large Time
Lag–to–Time Delay Ratio (base case, gain halved, gain doubled)
Figure 2-25b. Effect of Gain on 40% Load Upset to PID for a Large Time Lag–
to–Time Delay Ratio
Figure 2-25c. Effect of Reset on Set point Response of PID for a Large Time
Lag–to–Time Delay Ratio
Figure 2-25d. Effect of Reset on 40% Load Upset to PID for a Large Time Lag–
to–Time Delay Ratio
Figure 2-25e. Effect of Rate on Set point Response of PID for a Large Time
Lag–to–Time Delay Ratio
Figure 2-25f. Effect of Rate on 40% Load Upset to PID for a Large Time Lag–
to–Time Delay Ratio
Figure 2-26a. Effect of Gain on Set point Response of PID for a Small Time
Lag–to–Time Delay Ratio (base case, gain halved, gain doubled)
Figure 2-26b. Effect of Gain on a 20% Load Upset to PID for a Small Time Lag–
to–Time Delay Ratio
Figure 2-26c. Effect of Reset on Set point Response of PID for a Small Time
Lag–to–Time Delay Ratio
Figure 2-26d. Effect of Reset on a 20% Load Upset to PID for a Small Time
Lag–to–Time Delay Ratio
Some loops will fail during an auto tuner pretest because the loop
response is too small or too large within the allowable time frame of the
pretest, or because the valve has too much stick-slip. Also, fast runaway
and integrating loops cannot be safely taken out of the auto mode. For
these loops, a closed-loop tuning method is best because it is the fastest
method for a large time constant, it keeps the controller in service with
maximum gain (safest for processes that can get into trouble quickly or
have a non-self-regulating response), and it includes the effect of poor
valve response in the tuning [2.29].
Tight Liquid Level 5 (1.0-30) 5.0 (0.5-25)* 600 (120-6000) 0 (0-60) CLM
Gas Pressure (psig) 0.2 (0.02-1) 5.0 (0.5-20) 300 (60-600) 3 (0-30) CLM
* An error square algorithm or gain scheduling should be used for gains < 5
Methods: λ - Lambda, CLM - Closed-loop Method, SCM - Shortcut Method
11. Return the output limits to their proper values if narrowed for testing.
Set point changes are used because they are more likely to cause an oscilla-
tion than a change in the controller output: A step change in a set point is a
step change in the error seen by the controller, whereas a step change in
the controller output is smoothed out by the time constants in the loop.
After the new tuning settings are entered, the loop should be checked with a
step change in the controller output, and the load rejection capability of the
new settings monitored and compared to historical data for the old set-
tings.
Equations
Tu = 0.7*To (2-3)
Equation 2-4 extends the utility of the closed loop by compensating for
large and small time delay–to–time constant ratios. It estimates an integral
(reset) time of 1/8 and 1/2 times the oscillation period for
a self-regulating process with a pure time delay and with a large time
constant, respectively. [The 1/8 factor is for a pure time delay and the 1/2 factor is for a
large time constant.] For integrating and runaway processes, Equation 2-4
yields an integral (reset) time that approaches 10 times the ultimate period
as the negative-feedback time constant becomes large compared to the
time delay or positive-feedback time constant. For pure time delays, the
Lambda tuning method provides a more accurate calculation of the tuning
settings for a desired closed-loop time constant. For very slow loops, the
shortcut tuning method, whereby the user only needs to see the time delay
and the initial rate of change of the PV, can be used to save time, as
detailed in Reference 2.4.
The Theory section shows that a self-regulating process with a pure time
delay and a large time constant will oscillate with a period of 2 and 4 times the time
delay, respectively. See Reference 2.1 for equations to estimate how the
ultimate period increases from 4 times the time delay for integrating and
runaway processes.
Cascade Control
A cascade control system consists of a primary controller that manipulates
the set point of a secondary controller that in turn manipulates a final ele-
ment. The secondary controller can be operated in the automatic mode
with a local set point or in the cascade mode with a remote set point.
Industrial systems are designed to make sure the remote set point of the
secondary controller (output of the primary controller) is equal to the local
set point when the secondary controller is switched from the automatic to
cascade mode so that there is a bumpless transfer.
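As a minimal sketch (not a vendor algorithm, and with hypothetical tuning), the structure can be expressed in a few lines: the primary controller's output is written as the remote set point of the secondary controller, whose output goes to the final element. Scaling, output limits, anti-reset windup, and the bumpless transfer logic mentioned above are omitted.

    class PI:
        """Bare-bones proportional-plus-integral controller for illustration only."""
        def __init__(self, kc, ti):
            self.kc, self.ti, self.integral = kc, ti, 0.0
        def update(self, sp, pv, dt):
            e = sp - pv
            self.integral += self.kc * e * dt / self.ti   # integral (reset) contribution
            return self.kc * e + self.integral            # proportional plus integral

    primary = PI(kc=2.0, ti=10.0)    # e.g., slow reactor temperature loop
    secondary = PI(kc=1.0, ti=0.5)   # e.g., fast coolant temperature or flow loop

    def cascade_step(primary_sp, primary_pv, secondary_pv, dt):
        remote_sp = primary.update(primary_sp, primary_pv, dt)  # primary output = secondary set point
        return secondary.update(remote_sp, secondary_pv, dt)    # secondary output drives the valve

    valve_signal = cascade_step(primary_sp=150.0, primary_pv=148.0, secondary_pv=95.0, dt=1.0)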
For optimum performance, the time delay and time constant for the sec-
ondary controller should be 5 times smaller (faster) than the respective values for
the open loop response of the primary controller. If this is not possible and
the periods start to approach each other in value, the loops will fight and
may resonate. The interaction can be reduced by artificially slowing down
the primary controller by an increase in its process variable filter and scan
time or by a decrease in its gain. Unfortunately, this also slows down the
ability of the primary controller to react to load upsets that originate in the
primary loop. It may still be worthwhile to go to cascade control, however,
because most of the upsets are in the secondary loop and/or the lineariza-
tion of the primary loop is beneficial.
If the time delay of the secondary (inner) loop is small, its ultimate period
is small and any oscillation is effectively attenuated, per Equation 3-2, by
the primary (outer) loop time constant. If the inner open loop time con-
stant is large compared to its time delay, the secondary controller gain can
be relatively high, which gives this controller muscle to rapidly correct for
inner loop upsets. Also, the maximum excursion (peak error) for the inner
loop is reduced. Furthermore, by going to cascade control, one of the two
largest time constants that created equivalent time delay and was detri-
mental to performance of the single loop is now a beneficial term for the
inner loop [2.30]. In this scenario, the inner loop can correct for inner loop
upsets before they are even seen by the outer loop, so it doesn’t matter that
the primary controller may need to be detuned if the inner loop time con-
stant is not much smaller than the outer loop time constant. The plot of
simulation results in Figure 2-27 shows how the ratio of the peak error for
a cascade loop to the peak error for a single loop decreases as the inner-to-
outer loop ratio of time delays decreases and the ratio of time constants
increases [2.30]. The results shown in the figure are for self-regulating
inner and outer loops. The improvement is greater for integrating and
runaway loops.
Figure 2-27. Reduction in Peak Error for Inner Loop Upsets by Cascade Con-
trol (Source: Tuning and Control Loop Performance: A Practitioner's Guide, 3rd Edition, ISA, 1994, p. 241, Figure 11.2)
[Figure 2-28: reactor temperature cascade with temperature transmitters TT 2-1 and TT 2-2, cooling tower makeup and cooling tower return streams, and the reactor product stream.]
changes and the lack of straight runs results in poor repeatability of any
calculation. While it is true that the nonlinearity of an equal per cent flow
characteristic compensates for the inverse relationship between process
gain and feed flow for a temperature loop, this compensation is far from
exact and can be better done by output signal characterization or a feed-
forward multiplier instead of a summer.
Cascade control can do more harm than good if the signal-to-noise ratio,
rangeability, or reliability of the secondary measurement is poor or the sec-
ondary loop is actually slower than the primary loop. This is the case for a
static mixer pH-to-reagent flow cascade loop, particularly when the
reagent flow measurement is an orifice meter. If the reagent flow loop
used a coriolis flow meter, and its scan time was small, the stiction and
dead band of the valve were small, and the digital positioner and flow con-
troller were tuned for a fast response, it would be a different story; cascade
control might be beneficial and facilitate reagent-to-feed ratio control.
The secondary controller should be tuned for fast response. If the inner
loop is much faster than the outer loop, it is permissible for the closed-
loop response of the secondary controller to be quite oscillatory because
the amplitude of the oscillations is effectively attenuated by the time con-
stant of the outer loop. Also, an offset in the inner loop is theoretically of
little consequence to the outer loop since the inner loop only exists for the
purpose of the outer loop. This implies that the secondary controller
should use mostly gain and rate action. This is true for valve positioners.
These are secondary controllers whose remote set point (desired valve
position) is the output of the primary process controller. Pneumatic posi-
tioners were high-gain proportional-only controllers. Digital positioners
often use some form of proportional-plus-rate algorithm. Reset is never
used in a positioner because it would cause overshoot and the offset from
proportional-only control is small due to the high gain action. The second-
ary coolant temperature controller in Figure 2-28 should be tuned with
mostly gain and rate action. However, in many situations reset is used in
the secondary controller because operators get concerned about offsets,
secondary set point limits may need to be enforced, and the ratioing of
flows needs to be exact, especially for inline blending. Reset action is used
in flow controllers to not only eliminate offsets but to also help make the
set point response of the flow loops match up for better timing.
Feedforward Control
In feedforward control, a controller output is calculated to compensate for
a measured disturbance. It provides a preemptive action to enforce a mate-
rial or energy balance. The block diagrams in Figures 2-29a and 2-29b
show the use of feedforward, with and without cascade control. The feed-
forward signal must arrive at the same point in the process simulta-
neously with the load upset and must be equal but opposite in sign to the
load upset. Feedforward control is never perfect and should be corrected
by feedback control if a suitable feedback measurement or estimator is
available. The total error can be approximated as the root mean square of
the errors in the feedforward measurement, feedforward gain, and feed-
forward timing. In general, the feedforward measurement goes through a
dead time block and a lead-lag block for proper timing and is finally
biased by a process controller output for feedback correction. The feedfor-
ward gain can be the gain in the lead-lag block or a separate gain. If the
controller output goes directly to a control valve, the feedforward calcula-
tion must also go through a signal characterizer that computes the valve
position for a desired flow from the installed valve characteristic. If the
valve nonlinearity is beneficial for feedback control, the characterization is
done before the bias from the feedback controller is added to the feedfor-
ward signal; otherwise it is done after the feedback correction as shown in
Figure 2-29a. It is vastly preferred that the controller output be the remote
set point of a flow controller, as depicted in Figure 2-29b, so that valve
nonlinearity, dead band, and pressure upsets are not issues.
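A discrete-time sketch of this structure (with hypothetical parameters and a simple backward-difference lead-lag, not any particular vendor block) might look like the following: the measured disturbance passes through a dead time and a lead-lag with a gain, and the feedback controller output is added as a bias.

    from collections import deque

    class DeadTime:
        def __init__(self, delay_steps):
            self.buf = deque([0.0] * delay_steps)
        def update(self, u):
            self.buf.append(u)
            return self.buf.popleft()        # value delayed by delay_steps executions

    class LeadLag:
        def __init__(self, lead, lag, dt):
            self.lead, self.lag, self.dt = lead, lag, dt
            self.y, self.u_prev = 0.0, 0.0
        def update(self, u):
            # backward-Euler approximation of (lead*s + 1)/(lag*s + 1)
            self.y = (self.lag * self.y + self.dt * u + self.lead * (u - self.u_prev)) / (self.lag + self.dt)
            self.u_prev = u
            return self.y

    dt = 1.0                                  # execution interval (s)
    ff_delay = DeadTime(delay_steps=5)        # feedforward dead time ~ 5 s (hypothetical)
    ff_leadlag = LeadLag(lead=8.0, lag=12.0, dt=dt)
    Kff = -0.7                                # feedforward gain, opposite in sign to the load effect

    def controller_output(disturbance, feedback_co):
        ff = Kff * ff_leadlag.update(ff_delay.update(disturbance))
        return feedback_co + ff               # feedback output biases the feedforward signal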
Figure 2-29a. Feedforward Control Block Diagram (the load upset ∆DV passes through a delay τdL, lag τL, and gain KL; the feedforward measurement passes through a delay τdff and gain Kff and is summed with the output ∆CO of the PID controller (Kc, Ti, Td); the controller and measurement contribute additional delays and lags)
[Figure 2-29b: feedforward control block diagram with the same load upset path (∆DV through delay τdL, lag τL, and gain KL) and feedforward path (delay τdff), but with the controller output applied as the remote set point of a flow controller.]
The feedforward gain is often a flow ratio, such as one computed from an
energy balance for a heat exchanger. Usually, the ratio
is entered by the operator but there is an opportunity to put the energy or
material balance calculation online if there are reliable temperature or
composition measurements.
A common feedforward control system for sheets, webs, films, and con-
veyors is speed ratio control, where the roll speed is ratioed to another roll
speed or an extruder speed. The ratio is corrected by a controller of aver-
age gauge thickness. The timing requirement is very tight: Speed ratio
control of one roll to another must be done within milliseconds.
Normally, the feedforward time delay and lag are adjusted to make sure
the feedforward doesn’t arrive too early due to a time delay or time lag in
the path of the load upset. The lead is adjusted to compensate for a time
lag in the path of the manipulated variable. The lag is increased as neces-
sary to make sure that noise from the feedforward measurement doesn't
show up as dither in the final element.
[Figure: PV trends versus time comparing the uncorrected load upset, no feedforward correction, feedforward with the gain and timing just right (a nearly perfect correction), and cases where the feedforward gain or timing is off.]
The greatest benefits of feedforward control occur in loops with large time
delays and large time constants because the integrated error and the tim-
ing window are large. Distillation columns are the prime candidates, fol-
lowed by reactors, crystallizers, and evaporators. About the toughest
application in which to get the timing right is liquid pressure control,
because the process time constant is so small. Feedforward control is
essential for relatively fast periodic upsets, although the better solution is
to eliminate the root cause, which is typically another control loop. If the
period of the disturbance is less than twice the ultimate period of the loop,
the feedback controller cannot correct for the upset within the settling time
and may amplify the upset and do more harm than good.
When feedforward control is used, there are some important best practices
to consider that are outlined in Table 2-7.
flow or speed ratio control for feedback measurement problems and maintenance.
4. Display any feedforward timing errors, particularly when the feedforward time delay
is too short, and automatically calculate and correct for any known transportation
delays for sheet lines and pipelines.
5. Make sure the transfer from feedforward to feedback control or vice versa is bump-
less.
6. For dead time–dominant processes, such as sheet lines and pipelines, make sure
the feedforward timing is accurate. For flow or speed ratio control, make sure the
manipulated variable response is in unison with the disturbance variable response.
can slowly optimize the feedforward gain to account for drift and
unknown parameters.
If there are unmeasured load upsets, the feedforward can ask for the
manipulated variable to go in the wrong direction. For example, if the col-
umn temperature makes a sharp turn downward below the set point due
to an unmeasured load upset, an increase in feed that would increase
reflux flow by flow feedforward would make the unmeasured upset
worse. Experienced operators will decrease the reflux flow. Just one of
these situations is enough for the operator to lose confidence in the feed-
forward. An adaptive strategy could compensate for this feedforward mis-
take by looking at trajectories of feedback and feedforward measurements.
It would be easier to do this with model predictive control because a better
feedforward trajectory is available, although a trajectory of the bias correc-
tion would need to be generated to indicate the path of the unmeasured
load upset.
Rules of Thumb
Rule 2.1. — The largest opportunity for final elements is to eliminate stick-slip
and dead band. The effect of slip is worse than stick and stick is worse than
dead band.
Rule 2.2. — The control valve with the best response is a sliding stem control
valve with a diaphragm actuator and a digital positioner. If the sliding stem
valve has high temperature or environmental packing, the digital
positioner must be aggressively tuned. If a rotary valve must be used,
make sure it has a splined connection between the disc and actuator shaft
and a short, large-diameter shaft. For some extremely fast critical applica-
tions, such as polymer pipeline and incinerator pressure control, a vari-
able-speed drive is essential. The dead band and rangeability limits in the
variable-speed drive must be relaxed.
Rule 2.3. — The largest opportunity for measurements is the selection and instal-
lation of a sensor for better reproducibility, less noise, and minimal interference.
The reproducibility can be estimated as the repeatability and drift. The
need for reliability and resolution is a given and less of a problem today
than reproducibility and noise.
Rule 2.4. — The best flow measurement is the coriolis flow meter. The main lim-
itations are that the coriolis meter may not be offered in suitable materials
or size, or may not have the temperature rating needed.
Rule 2.5. — The best temperature measurement is a smart transmitter connected
to a 3- or 4-wire RTD sensor installed in a piping elbow. The main limitation is
that the RTD may not be suitable for very high temperatures.
Rule 2.6. — The best level measurement is a radar gauge. The main limitations
are a fluid with a dielectric constant of less than 2.0 and a tall, narrow ves-
sel.
Rule 2.7. — Check the life-cycle cost, including the cost of variability, before
choosing a less expensive control valve or measurement. The hardware cost is
generally a small part of the life-cycle cost.
Rule 2.8. — Use smart transmitters. The improved accuracy and diagnostics
are well worth the extra cost.
Rule 2.9. — Use Fieldbus for major upgrades and new installations. The
reduced cost of commissioning and wiring, the expanded diagnostics, and
improved accuracy from the elimination of A/D error are significant.
Rule 2.10. — Use a closed-loop method if an auto tuner pretest fails or is not safe.
The closed-loop method keeps the loop in auto and includes the effects of
valve stick-slip and dead band.
Rule 2.11. — For a process with a large time constant, use more gain and rate
action to overdrive the manipulated variable, to decrease rise time, peak error, and
return time. If the measurement is smooth, you can use rate action to
reduce overshoot.
Rule 2.12. — For dead time–dominant loops, significantly decrease the integral
(reset) time setting. It can be as small as 1/4 of the time delay or 1/8 of the
ultimate period for a pure dead time process.
Rule 2.14. — Go for the largest and least expensive ways to reduce loop dead
time. The automation system is the largest source of time delay for flow,
level, and pressure loops.
Rule 2.15. — Use cascade control to correct for secondary loop disturbances
before they affect a primary loop, or to linearize the manipulated variable for feed-
forward or ratio control. If the secondary loop is not 5 times faster than the
primary loop, the scan time or filter time must be increased or the gain
decreased for the primary controller to slow down the primary loop.
Rule 2.16. — Use feedforward control for loops with a large time delay, time lag,
or periodic disturbance, to provide a preemptive correction for load upsets. The
timing and gain must both be right and the feedforward signal must not
arrive too soon.
Theory
If dQr /dTr < Fo∗Cr + U∗A, and dQr /dTr > Fo∗Cr + U∗A, then we have a
negative-feedback time constant and positive-feedback time constant,
respectively. The time constant is not constant but is proportional to
reaction mass and inversely proportional to the outlet flow and the
product of the heat transfer coefficient and area for the jacket.
For a batch reactor (no outlet flow) with a negligible heat release (dQr /dTr
= 0), we have the general form of the equation to approximate the thermal
time lag of a closed volume. It can also be used for a thermowell by substi-
tution of the proper parameter values.
The process gain depends upon the input under consideration. If the
manipulated or disturbance variable is feed flow:
In either case, we see that the process gain is not constant and is inversely
proportional to the outlet flow and the product of the heat transfer
coefficient and area for the jacket.
For a jacket coolant system with recirculation where there is equal volume
displacement of coolant return flow by coolant makeup flow, so that jacket
flow is constant, we have the following equation for the temperature of a
mixture of makeup and recirculation flow:
The process gain (Kp) for the control of the jacket inlet temperature (Ti) by
manipulation of coolant makeup flow (Fc) in a secondary loop is the
partial derivative dTm / dFc:
For a total mass balance, the rate of accumulation of mass is equal to the
mass flow in minus the mass flow out:
ρ∗Ar∗dL/dt = Fi - Fo (2-19)
Here we see that the integrator gain (Ki) for the level response is inversely
proportional to the product of the fluid density (ρ) and the cross sectional
area of the reactor (Ar).
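As a quick worked example of Equation 2-19 rearranged into a ramp rate (the numbers are hypothetical), a 2,000 lb/hr imbalance in a vessel with a 12 ft² cross section holding a water-like fluid ramps the level at roughly 2.7 ft/hr:

    rho = 62.4                 # fluid density (lb/ft3)
    Ar = 12.0                  # cross sectional area (ft2)
    Fi, Fo = 40000.0, 38000.0  # inlet and outlet mass flows (lb/hr)

    ramp_rate = (Fi - Fo) / (rho * Ar)   # Eq. 2-19 rearranged: dL/dt in ft/hr
    print(f"level ramp rate ~ {ramp_rate:.1f} ft/hr")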
For plug flow, the entire residence time is a transportation delay. The time
delay is the volume divided by the flow. The flow in pipelines, sample
lines, static mixers, coils, and heat exchanger tubes can be considered to be
essentially plug flow.
For perfect mixing, the entire residence time is a process time constant. In
well-mixed volumes of proper geometry and with baffles, the portion of
the residence time that shows up as time delay can be estimated as half the
turnover time, as shown in Equations 2-23 and 2-24, and most of the resi-
dence time becomes a process time constant [2.14].
Lastly, time constants in series create time delay. When the flow of
material or energy can reverse direction depending upon the sign of the
driving force, the time constants are interactive. Conductive heat transfer,
gas flow in pipelines, and the tray response in columns all have interactive
time constants. As the number of equal interactive and non-interactive
time constants in series increases, the time delay increases, from 0.02 to
0.16 and 0.14 to 0.88 times the sum of the time constants respectively. The
portion of the sum not converted to time delay is the process time constant
for a first order–plus–dead time approximation. Thus, interactive time
constants do not create much time delay and the time delay–to–time
constant ratio is always rather nice. A large number of non-interactive
time constants creates an extremely difficult to control dead-time-
dominant system.
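The growth of equivalent dead time with the number of non-interactive lags can be illustrated with a short sketch. The tangent (reaction curve) fit used below is an assumed method for this illustration, so the fractions will not exactly reproduce the figures quoted above, but the trend is the same: the more equal lags in series, the larger the fraction of the summed lags that behaves as dead time.

    import math

    def step_response(n, tau, t):
        """Unit step response of n equal, non-interacting first-order lags in series."""
        x = t / tau
        return 1.0 - math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n))

    def equivalent_dead_time(n, tau):
        """Apparent dead time from a tangent drawn at the inflection point of the response."""
        t_inf = (n - 1) * tau
        slope = math.exp(-(n - 1)) * (n - 1)**(n - 1) / (math.factorial(n - 1) * tau)
        return t_inf - step_response(n, tau, t_inf) / slope

    tau = 1.0
    for n in (2, 4, 8):
        td = equivalent_dead_time(n, tau)
        print(f"n={n}: equivalent dead time ~ {td:.2f}, or {td / (n * tau):.2f} of the summed lags")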
Ku = 1 / (Ko ∗ AR-180)   (2-25)

AR-180 = 1 / [1 + (τo ∗ ωn)^2]^(1/2)   (2-26)

Ku = [1 + (τo ∗ ωn)^2]^(1/2) / Ko   (2-27)
Since the natural frequency in radians per minute (ωn) is 2π divided by the
ultimate period (Tu), we can express the ultimate gain (Ku) as a function of
the ultimate period.
Ku = [1 + {(τo ∗ 2 ∗ π) / Tu}^2]^(1/2) / Ko   (2-28)
For a time constant much larger than the time delay (τo >> τd), the ultimate
gain is:
Ku = (2 ∗ π ∗ τo) / (Ko ∗ Tu)   (2-29)
Since for this case the ultimate period is about 4 times the time delay (Tu ≅
4 ∗ τd), the ultimate gain can be simplified to a ratio of the time constant to
time delay.
Ku = 1.6 ∗ τo / (Ko ∗ τd)   (2-30)
Since the controller gain is a factor of the ultimate gain (Kc = 0.25∗Ku), the
controller gain is proportional to the time constant and inversely
proportional to the time delay and the open loop gain.
Kc = 0.4 ∗ τo / (Ko ∗ τd)   (2-31)
If the time delay is much larger than the time constant (τd >>το), it can be
shown that Equation 2-27 reduces to the ultimate gain being the inverse of
the open loop gain. This relationship can also be realized from the
amplitude ratio being 1 for a pure time delay.
Kc = 0.25 ∗ (1 / Ko)   (2-32)
Tu = (-360/φ) ∗ τd (2-34)
For a time constant much larger than the time delay there is a -90 phase
shift from the time constant, which leaves only -90 phase shift (φ) needed
from the time delay to reach the -180 total phase shift; the ultimate period
becomes simply 4 times the time delay.
If, on the other hand, the time constant is so much smaller than the time
delay that essentially all -180 phase shift (φ) comes from the time delay, the
ultimate period approaches 2 times the time delay.
For τd >>το:
Tu = 2 ∗ τd (2-36)
The following curve fit shows how the ultimate period changes from a
multiple of 2 to 4 of the time delay and as a function of the relative sizes of
the time constant and time delay.
Tu = 2 ∗ [1 + (τo / (τo + τd))^0.65] ∗ τd   (2-37)
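For example, with a hypothetical open loop gain of 1.0, a 10 minute time constant, and a 2 minute time delay, Equation 2-37 gives an ultimate period of about 7.6 minutes, Equation 2-28 an ultimate gain of about 8, and a controller gain of about 2 using the Kc = 0.25∗Ku factor mentioned above, which agrees with the simplified Equation 2-31 (Kc = 0.4∗10/2 = 2).

    import math

    Ko, tau_o, tau_d = 1.0, 10.0, 2.0   # hypothetical open loop gain, time constant, time delay (min)

    Tu = 2 * (1 + (tau_o / (tau_o + tau_d))**0.65) * tau_d   # Eq. 2-37
    Ku = math.sqrt(1 + (tau_o * 2 * math.pi / Tu)**2) / Ko   # Eq. 2-28
    Kc = 0.25 * Ku

    print(f"Tu ~ {Tu:.1f} min, Ku ~ {Ku:.1f}, Kc ~ {Kc:.1f}")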
If the time delay is less than the time constant, we can simplify the
relationship.
Ex = [τd / (τo + τd)] ∗ Eo   (2-39)
If the time delay is much less than the time constant, we end up with the
ratio of the time delay to the time constant.
Ex = (τd / τo) ∗ Eo   (2-40)
The minimum integrated error (Ei) is approximately the peak error (Ex)
multiplied by the time delay and is thus proportional to the time delay
squared.
Ei = (τd / τo) ∗ τd ∗ Eo   (2-41)
Ex = (1 / Kc) ∗ Eo   (2-42)
If we further realize that the integral time (Ti) is a factor of the ultimate
period that is a multiple of the time delay, we have a relationship where
the integrated error (Ei) is proportional to the integral time and inversely
proportional to the controller gain.
Ei = (1 / Kc) ∗ Ti ∗ Eo   (2-43)
For dead time–dominant loops, the peak error approaches the open loop
error (Eo), and the integrated error approaches the product of the open
loop error and the integral time.
For τd >>το:
Ei = Ti∗Eo (2-44)
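As a worked example with hypothetical numbers (a 10 e.u. open loop error, a 2 minute time delay, a 10 minute time constant, a controller gain of 2, and a 4 minute integral time), the dynamics-based and tuning-based approximations can be evaluated side by side; they are separate rough estimates and are not expected to agree exactly.

    Eo = 10.0                  # open loop error from the load upset (e.u.)
    tau_d, tau_o = 2.0, 10.0   # time delay and open loop time constant (min)
    Kc, Ti = 2.0, 4.0          # controller gain and integral (reset) time (min)

    Ex_dyn = (tau_d / tau_o) * Eo           # Eq. 2-40: peak error from the loop dynamics
    Ei_dyn = (tau_d / tau_o) * tau_d * Eo   # Eq. 2-41: integrated error ~ dead time squared
    Ex_tune = (1 / Kc) * Eo                 # Eq. 2-42: peak error from the controller gain
    Ei_tune = (1 / Kc) * Ti * Eo            # Eq. 2-43: integrated error from the tuning

    print(f"peak error ~ {Ex_dyn:.1f} or {Ex_tune:.1f} e.u.")
    print(f"integrated error ~ {Ei_dyn:.1f} or {Ei_tune:.1f} e.u.*min")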
Feedforward Control
The equations for feedforward control are derived from the steady-state
material or energy balance at the point of control. For a heat exchanger
where the bypass is throttled for temperature control, the feedforward
equation for the hot bypass flow (Fhb) is: [2.19]
τdv = DB / (∆CO/∆t)   (2-47)
The rate of change of controller output depends upon the controller tuning
and the error. If we consider the effect of just controller gain (Kc) for a loop
dominated by a large time lag so that the amount of reset action used is
small [2.19]:
If we use Equation 2-31 for the controller gain with a detuning factor (Kx)
and realize that the rate of change of the controlled variable (∆CV / ∆t ) is
simply the pseudo-integrator gain (Ki = Ko / τo) for the large open loop
time lag (τo) multiplied by the change in the actual valve position (∆AVP),
we have: [2.19]
∆CO/∆t = [Kx / (Ki ∗ τdo)] ∗ Ki ∗ ∆AVP   (2-49)
If we cancel out the integrator gains and approximate the change in actual
valve position on the average as the controller output minus one half of
the dead band (∆AVP = ∆CO – DB/2), we end up with an expression to
estimate the valve time delay from the valve dead band and the observed
loop time delay for a step change in controller output. [2.19]
τdv = [DB / (Kx ∗ (∆CO – DB/2))] ∗ τdo   (2-50)
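For example (hypothetical numbers), a 0.25% dead band, a 5% step in controller output, a detuning factor of 0.4, and a 30 second observed loop dead time give roughly 4 seconds of additional dead time from the dead band:

    DB = 0.25       # valve dead band (%)
    dCO = 5.0       # step change in controller output (%)
    Kx = 0.4        # detuning factor on the controller gain (hypothetical)
    tau_do = 30.0   # observed loop time delay (s)

    tau_dv = DB / (Kx * (dCO - DB / 2)) * tau_do   # Eq. 2-50
    print(f"additional dead time from dead band ~ {tau_dv:.1f} s")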
Nomenclature
A = heat transfer area (ft²)
Ar = cross sectional area of reactor (ft²)
AVP = actual valve position (%)
AR-180 = amplitude ratio at a phase shift of -180 degrees
CAo = mass fraction of component A in reactor outlet
CAf = mass fraction of component A in reactor feed
Cc = heat capacity of cold fluid (btu/lb*°F)
Cf = heat capacity of feed to reactor (btu/lb*°F)
Ch = heat capacity of hot fluid (btu/lb*°F)
Cr = heat capacity of liquid in reactor (btu/lb*°F)
Cj = heat capacity of liquid in jacket (btu/lb*°F)
CO = controller output (%)
CV = controlled variable (%)
DB = dead band (%)
Ei = integrated error (e.u.)
Eo = open loop error (e.u.)
Ex = peak error (e.u.)
fn = natural frequency (cycles/hr)
Fa = agitator pumping rate (lb/hr)
Fc = coolant makeup flow (lb/hr)
Fci = cold fluid inlet flow to exchanger (lb/hr)
Ff = feed flow to reactor (lb/hr)
Fhi = hot fluid inlet flow to exchanger (lb/hr)
Fho = hot fluid outlet flow from exchanger (lb/hr)
Fhb = hot fluid bypass flow around exchanger (lb/hr)
Fj = jacket coolant flow (lb/hr)
Fo = outlet flow from reactor (lb/hr)
k = reaction rate constant (btu/lb*hr)
Kc = controller gain
Kp = process gain (e.u./e.u.)
Ki = integrator gain (e.u./hr)
Ko = open loop static gain
Ku = ultimate gain
L = liquid level in reactor (ft)
Mr = mass of liquid in reactor (lbs)
References
1. McMillan, Gregory K., Tuning and Control Loop Performance, 3rd Edition, 1994,
ISA.
2. Ruel, Michael, “Stiction: The Hidden Menace,” Control, November 2000, pp.
71-76.
3. Ruel, Michael, “Control Valve Health Certificate,” Chemical Engineering,
November 2001, pp. 62-65.
Practice
Overview
The objectives for this chapter are to help the user identify advanced pro-
cess control (APC) opportunities, estimate benefits, select the best technol-
ogy, sustain the solution, and track the benefits. Industry is driven by cost
and benefits analysis and is generally not interested in a great technology
looking for an application. This chapter combines the methods that have
been used extensively to identify opportunities for APC in the chemical
and the pulp and paper industries [3.1] [3.2] [3.3] [3.4] [3.5].
The strongest advocates of maintaining the status quo are often the most
experienced people in operations. For batch operations this corresponds to
fixed feed rates, hold times, and valve positions.
Process control deals with change. If there were no variations in raw mate-
rials, utilities, process and ambient conditions, equipment performance,
production rates, or desired operating points, there would be little need
for control loops. In actual plant operations nothing is completely con-
stant. Thus, most of the opportunities for process control deal with
change.
Capital expenditures for most plants have come under increased scrutiny.
In order to make the improvements that provide the biggest bang for the
buck, the engineer needs to know what is important despite an over-
whelming number of choices of control technologies and an explosion of
information and data. This chapter provides a perspective on how to
choose the most appropriate technology.
The time frame, pattern, and sequence of changes are important for track-
ing down the sources and assessing the impact of variability. The faster the
change, the more difficult it is for a controller to correct. However, changes
with a short period are more effectively attenuated by process variable fil-
ters and back-mixed volumes. Table 3-1 summarizes the most common
sources of change, and their relative speeds.
Noise and repeatability errors in a measurement are fast and are passed on
to the manipulated variable. Analyzers are a notable exception as they are
a source of slow frequency variability from a long sample transportation
delay and processing (cycle) time.
Market changes are relatively slow, but the implementation is fast. Once a
rate change is decided upon, operators tend to make large step changes in
feed set points that are not simultaneous or coordinated. Transitions are
accomplished even more quickly to minimize the cross contamination of
products. As inventories decrease, market volatility increases, and the
demand for more flexible manufacturing increases, the frequency of rate
and product changes will increase to the point where steady-state opera-
tion may be a distant memory.
changes are often a major point of disruption. Operators also tend to get
bored and actually love being in charge of inventory control where they
have to change flow set points to keep woefully undersized surge or feed
tank volumes from overflowing or running dry. Unfortunately, the set
point changes made by operators are fast, inconsistent, impatient, and do
not take into account the effect of time delays, time lags, and interactions.
Figure 3-1 shows the pyramid of technologies for advanced control. The
base is the solid foundation of basic process control discussed in Chapter
2. The next layers of loop and process performance monitoring provide
the tools to quantify the opportunities, sustain the performance, and track
the benefits of APC. Loop monitors can identify when and where an auto
tuner needs to be run again. Rules can be added on top of the
performance-monitoring layers to provide better explanations and
automatic corrections. Expert systems can be developed to deal with
abnormal situation management, such as equipment and automation
system failures and degradation.
Figure 3-1 (pyramid of advanced control technologies, top to bottom): TS, RTO, LP/QP,
ramper or pusher, property estimators, fuzzy logic, and auto tuner, where TS is the
tactical scheduler, RTO is the real-time optimizer, LP is a linear program, and QP is
a quadratic program.
The advanced control technology with the best track record to date for
increasing plant efficiency and capacity is the model predictive controller
[3.7]. The built-in knowledge of the process dynamics and constraints
from extensive plant testing, the ability to readily control the amount of
variability transferred to the manipulated variables by move suppression,
plus the ability to readily add some optimization, are the features respon-
sible for its success. The objective of any advanced control program should
be to get at least to the level of model predictive control with a constraint
pusher for a simple optimization of a single variable. It is expected that as
process modeling tools become easier and less expensive to use and main-
tain, real-time optimization will start to deliver more consistent benefits
and applications that vie for the top of the pyramid will be more common
[3.8].
Opportunity Assessment
Figures 3-2a and 3-2b show how reducing the variability in a process vari-
able associated with a constraint translates to the opportunity to move the
variable closer to the constraint. The traditional way of depicting the
improvement is to show the current peak at the optimum location for the
given degree of variability, as illustrated by Figure 3-2a. The benefits are
then based on how much the peak can be moved if the standard deviation
is reduced and still provide the same small number of violations of the
constraint. In practice, the peak is typically set by the operator much more
conservatively so that even if the variability is not reduced, just a better
understanding and automation of the positioning of the peak relative to
the constraint affords an additional opportunity, as shown in Figure 3-2b.
Manual set points are chosen based on opinions and war stories rather
than data. The operator is usually the largest and most active constraint to
optimum operation.
Figures 3-2a and 3-2b: PV distributions (2-sigma bands) for the original control and
for improved control, shown relative to the set point (LOCAL and RCAS) and the upper
limit.
The tools and procedures for identification of this economic gain are similar to
those used for identification of the process gain for property estimators
(Chapter 8). The process must start with an online measurement or calcu-
lation of capacity, yield, and utilities to provide the economic variables. In
some cases, missing or inaccurate measurements will be found in the pro-
cess of putting the economic variables online. It also provides a head start
for true performance monitoring (Chapter 4) and real-time optimization
(Chapter 10).
control system. Questions (1) through (20) ascertain how important it is to
reduce variability in an operating limit (constraint) and what process vari-
ables are involved. The questions generally apply to both batch and con-
tinuous unit operations. Appendix A presents the more extensive list of
questions that were used by Monsanto and Solutia for opportunity assess-
ments in the last decade in all of the major processes, OAs that led to pro-
cess control improvements worth $60 million a year in benefits. ICI has
cited 2% to 6% of operating costs as the benefits achieved through
improved process control. Besides the cost savings, ICI is convinced it has
increased the safety and reduced the environmental impact of its plants
[3.9].
19. Could the amount of scrap be reduced in a sheet line by operation
closer to the cross and machine direction constraints on gauge?
20. Could the roll speed be increased on a sheet line by operation
closer to the sheet moisture and extruder speed constraints?
21. Could the amount of sheet giveaway to the customer be reduced
by tighter control of average thickness and profile near the edges?
22. Can the production rates of unit operations be coordinated to take
maximum advantage of surge volumes to increase plant capacity?
23. Could a reflux flow be reduced to decrease the steam flow to a
reboiler?
24. Could a recycle flow be reduced to decrease utility use and
pressure drops and to increase residence time and surge volume
capacity?
Most reactors operate with an excess of one or more reactants because the
downside of a deficiency due to less than ideal mixing or variability in the
feeds or the composition measurement is low reaction rates and the forma-
tion of byproducts that can lead to waste treatment problems and hazard-
ous operation, besides a loss in production. The amount of excess reactant
is often based on operating practice and how many times an operator has
had actual or perceived problems from operating too close to the con-
straint.
In order to estimate the benefits from reducing feed variability, the reduc-
tion in variability in the excess component of interest must be computed
from an online analyzer, estimator, or attenuation calculation. The online
estimator must include the time constant associated with mixed volume to
show the smoothing effect. It does not need to have the time delay associ-
ated with an analyzer. The cycle time of chromatographs and lab analysis
is too slow and will alias the relatively fast reactor concentration oscilla-
tions from feed variability.
AR = Ao / Ai   (3-1)

AR = 1 / [1 + (τ*ω)^2]^0.5   (3-2)

For τ*ω >> 1:

AR = 1 / (2*π*fo*τ)   (3-3)

Since fo = 1/To:

AR = To / (2*π*τ)   (3-4)
where:
Ai = amplitude of the oscillations at the inlet to the volume
(e.u.)
Ao = amplitude of the oscillations at the outlet of the volume
(e.u.)
AR = amplitude ratio
fo = frequency of oscillation (cycles/minute)
ω = frequency of oscillation (radians/minute)
To = period of oscillation (minutes)
τ = time constant for back-mixed volume (minutes)
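As a quick illustration of Equations 3-1 through 3-4, the short Python sketch below estimates how much a back-mixed volume attenuates an inlet oscillation. The residence time, oscillation period, and inlet amplitude are hypothetical values chosen only for this example, not data from the text.

```python
import math

def amplitude_ratio(period_min, tau_min):
    """Amplitude ratio of a back-mixed (first order) volume, Equation 3-2,
    written in terms of the oscillation period To (omega = 2*pi/To)."""
    omega = 2.0 * math.pi / period_min
    return 1.0 / math.sqrt(1.0 + (tau_min * omega) ** 2)

# Hypothetical case: a 1 minute limit cycle entering a volume with a
# 20 minute residence time (time constant).
To, tau, A_in = 1.0, 20.0, 0.5           # minutes, minutes, e.u.
AR = amplitude_ratio(To, tau)            # exact form, Equation 3-2
AR_approx = To / (2.0 * math.pi * tau)   # approximation of Equation 3-4
A_out = AR * A_in                        # Equation 3-1 rearranged: Ao = AR * Ai

print(f"AR = {AR:.4f} (approx {AR_approx:.4f}), outlet amplitude = {A_out:.4f} e.u.")
```

For oscillation periods much shorter than the residence time, the exact and approximate forms agree closely, which is why the simpler Equation 3-4 is adequate for screening calculations.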
Unfortunately, the user does not usually know the time delay and it is
rarely if ever constant. The time delay can be measured online as the time
it takes for the controlled variable to get beyond the noise band whenever
there is a change in the set point or manual output. Alternatively, the dead
time can be captured whenever an auto tuner is run on the loop. For flow
and liquid pressure, the time delay can be estimated as one half of the loop
scan time plus the valve time delay, which is the time between a change in
the controller output and the actual valve position as measured by a smart
positioner for a sliding stem valve. In practice to date, accurate estimates
and identification of the time delay have not normally been available. The
key point is that the calculation interval should be slowed down for the
process capability calculation to be more representative of the actual capa-
bility of feedback control.
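The online dead time measurement described above can be prototyped in a few lines: watch for the first time the controlled variable leaves its noise band after a set point or output change. The sketch below is a generic illustration with synthetic trend data; the sample time, noise band, and step time are assumptions, not values from the text.

```python
import numpy as np

def estimate_dead_time(t, pv, t_change, noise_band):
    """Time from a set point (or manual output) change until the controlled
    variable first moves outside the noise band around its starting value."""
    pv0 = np.interp(t_change, t, pv)        # PV at the moment of the change
    after = t >= t_change
    outside = np.abs(pv[after] - pv0) > noise_band
    if not outside.any():
        return None                         # PV never left the noise band
    return t[after][outside.argmax()] - t_change

# Synthetic 1 s trend: change at t = 10 s, true dead time of 5 s, then a
# first order rise of 2 e.u. with a 10 s time constant plus noise.
t = np.arange(0.0, 60.0, 1.0)
pv = np.where(t < 15.0, 50.0, 50.0 + 2.0 * (1.0 - np.exp(-(t - 15.0) / 10.0)))
pv = pv + np.random.normal(0.0, 0.02, t.size)

print("estimated dead time (s):", estimate_dead_time(t, pv, t_change=10.0, noise_band=0.2))
```

Note that the estimate includes the time needed to climb out of the noise band, so it slightly overstates the true dead time; that bias is conservative for the capability calculation.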
between the limit and the mean value. This shift provides the same distri-
bution of data points beyond the limit. In most cases, the mean value is
more than two standard deviations away from the limit so that the actual
number of data points beyond the limit is quite small.
Equations 3-10 and 3-11 express the mean and shift in terms of an old and
new set point. It is important to realize that these are estimates for the low-
est possible variability in the controlled variable. In some cases, it may be
undesirable to transfer all this variability from the controlled to the manip-
ulated variable. Model predictive control excels at determining how much
variability is transferred and getting the most out of both feedback and
feedforward control. Thus, the full benefit offered by these calculations is
only approached by the application of model predictive control built on a
solid foundation of good valves and measurements.
Figure: savings ($$) from moving the set point closer to the limit or spec target.
Equations

Sffc = [(Sffm)^2 + (Sffg)^2 + (Sffd)^2]^0.5   (3-6)
Figure: production increase ($$) from operating the limiting process input closer to
the constraint limit over time; the throughput at the limit is the maximum possible,
and the operator set point sits below the value of the limiting input or constraint.
SPold = CVM   (3-10)
where:
Kp = process gain (∆CV/∆MV)
SPnew = new set point
SPold = old set point
Sapc = standard deviation possible for advanced process control
Sfbc = standard deviation possible for feedback control
Sffc = standard deviation for feedforward control
Sffd = standard deviation of feedforward dynamics error
Sffg = standard deviation of feedforward gain error
Sffm = standard deviation of feedforward measurement error
Stot = total standard deviation
∆CVm = shift in mean value of controlled variable
CVM = mean value of controlled variable
CVL = limit (constraint) for controlled variable
MVM = mean value of manipulated variable
MVL = limit (constraint) for manipulated variable
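Since only Equations 3-6 and 3-10 survive here in full, the sketch below is limited to two pieces: the root-sum-square combination of feedforward error sources in Equation 3-6, and an assumed (illustrative, not the book's Equation 3-11) estimate of the set point shift as the reduction in a 2-sigma distance to the limit. All numbers are hypothetical.

```python
import math

def s_ffc(s_ffm, s_ffg, s_ffd):
    """Equation 3-6: standard deviation for feedforward control from the
    measurement, gain, and dynamics error standard deviations."""
    return math.sqrt(s_ffm**2 + s_ffg**2 + s_ffd**2)

def setpoint_shift(s_old, s_new, n_sigma=2.0):
    """Assumed rule for illustration: if the mean was held n_sigma standard
    deviations from the limit, reduced variability lets the set point move
    closer to the limit by n_sigma * (s_old - s_new)."""
    return n_sigma * (s_old - s_new)

# Hypothetical error standard deviations (engineering units of the CV).
s_old = s_ffc(0.8, 0.5, 0.4)
s_new = s_ffc(0.2, 0.2, 0.1)
print(f"Sffc old = {s_old:.2f} e.u., new = {s_new:.2f} e.u., "
      f"possible set point shift = {setpoint_shift(s_old, s_new):.2f} e.u.")
```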
The questions listed do not cover any reduction in down time or penalties
associated with exceeding equipment and environmental limits. For exam-
ple, the life of glass linings and rupture disks depends upon the number
of temperature and pressure cycles. Rupture disks can cause reportable
emissions and a brief excursion in an effluent stream below 2 pH or above
12 pH can classify a whole volume as hazardous waste.
Examples
As illustrated in Figure 3-4 for the bed #1 inlet temperature of an ammonia
converter, any variation in this temperature from the best operating point will
result in less production at a given feedstock input. The slope is the pro-
cess gain used to calculate the benefits as you approach the optimum
operating point.
Figure 3-4. Ammonia production (mol/hr, roughly 3100 to 3500) versus bed #1 inlet
temperature (350 to 500 deg C), with the tabulated effect of the mean and variation
of the temperature:

Mean (Deg C)   Variation (Deg C)   Production Loss (%)
415            0                   0
415            25                  0.103
415            50                  0.415
460            25                  1.102
460            50                  1.412
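The production losses in Figure 3-4 come from two effects: operating with the mean away from the optimum inlet temperature and allowing variation about that mean. Assuming a locally quadratic production curve, the average loss for a sinusoidal variation can be estimated as below; the curvature, optimum, and amplitudes in this sketch are hypothetical and are not fitted to the figure, so the printed losses will not match the tabulated values exactly.

```python
import numpy as np

def avg_production_loss(prod_curve, t_mean, amplitude, t_opt):
    """Average production loss (%) for a sinusoidal temperature variation of
    the given amplitude about t_mean, relative to steady operation at t_opt."""
    theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
    temps = t_mean + amplitude * np.sin(theta)
    return 100.0 * (1.0 - prod_curve(temps).mean() / prod_curve(t_opt))

# Hypothetical quadratic curve with a maximum of 3500 mol/hr at 415 deg C.
def curve(T):
    return 3500.0 - 0.01 * (np.asarray(T, dtype=float) - 415.0) ** 2

for mean, variation in [(415, 0), (415, 25), (415, 50), (460, 25), (460, 50)]:
    loss = avg_production_loss(curve, mean, variation, 415.0)
    print(f"mean {mean} deg C, variation {variation} deg C: loss = {loss:.3f}%")
```

The same averaging can be done with a measured or simulated production curve in place of the assumed quadratic, which is how the process gain mentioned in the text would normally be obtained.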
Shifting Bottlenecks
Stressed-out plants with old equipment, difficult processes, and recycle
streams often have a shifting and confusing bottleneck. The proper APC
pathway can solve and demystify the problem. Often overlooked are the
significant side benefits of process knowledge gained from the implemen-
tation and day-to-day operation of an APC system.
Figure 3-6 shows the major unit operations for the production of a nylon
intermediate chemical. Since the solids content of the streams is high, the
plant is over 40 years old, and the plant is sold out and running at more
than four times the original nameplate capacity, one or more pipelines,
reactors, evaporators, crystallizers, and centrifuges are always shut down
for washout or maintenance. Centrifuges periodically trip on vibration or
spill over solids (slop) into the recycle.
Figure 3-6 (major unit operations for the nylon intermediate): feed enters the
reactors (Rx) and passes through evaporators (Ev), crystallizers (Cx), hydroclones
(Hc), and centrifuges (Cs), with undersized surge and recycle tanks (Tk) between
stages; wash water additions are unmeasured, the centrifuges slop solids into the
recycle, the reactor, evaporator, and crystallizer heat transfer surfaces get coated
with solids and must be periodically shut down and manually defrosted, the operator
sets the crystallizer feed rate based on a visual inspection of solids in a sample of
the hydroclone overflow, and a purge is taken from the recycle.
The equipment and operators are
so stressed that the control room is always in crisis mode with incessant
alarms and flipping of screens. Most of the controller outputs are satu-
rated high. War stories rule. Engineer burnout and turnover are high. The
young engineers thrown into the fray defer to operator opinion rather
than applying chemical engineering principles. Management is afraid to
do anything because capacity has actually decreased after each debottle-
necking project.
A large amount of water is introduced into the process from washout con-
nections but is largely not metered. The equipment must be cleaned and
started up manually, so the operator is naturally reluctant to push capac-
ity. The surge volumes that are used as feed and recycle tanks are seriously
undersized for the present production rates, the additional water load,
and centrifuge slopping. The operators continually adjust feed rates or
add water to keep tanks from running dry. Sometimes the recycle tanks
overflow from the additional water load and product is lost to the sewer.
The feed rate to each of the final-stage crystallizers is set by the operator
based on a visual inspection of a sample from the recirculation line for the
amount of solids. The interpretation of the concentration and particle size
is subjective and the main goal of the operator is to reduce how often the
crystallizer must be taken down and the crystal buildup on the walls and
heat-transfer surfaces removed. A supervisory control system is periodi-
cally turned on that finds the lowest production rate and drags the whole
plant down to match it. Since the feed to the last stage of crystallizers is
typically set low from the margin of operator error and lack of confidence,
the tail wags the dog, and the plant capacity decreases each year despite
capital projects to replace and add equipment to reduce bottlenecks.
Some pipeline and pump sizes are increased so control valves are not wide
open, and some on-off valves and magnetic flow meters must be installed
to measure and control water addition. Coriolis mass flow meters are
installed not only to provide more constant solids loading of the feed to
the unit operations but also to provide estimators of solids concentration
and crystal buildup on walls. Nuclear gauges are added to each centrifuge
to measure the cake mass in the basket. The combination of mass feed flow
and basket solids load control improves the production rate and on-stream
time of the centrifuges. The scheduling and the sequencing of valves for
automated cleaning and startup makes the performance of the equipment
more reproducible and predictable. The use of variable speed drives to
throttle the feed to the hydroclones increases the pressure drop available
for separation of solids in the hydroclones and eliminates significant slip-
stick since sliding stem valves are not suitable for the high solids concen-
tration. Model predictive control is used to control the solids concentration
in each reactor, evaporator, and crystallizer and to manage the overall
inventory control and purge rate. Rampers and pushers are used to maxi-
mize feed rates without a projected violation of a level or process con-
straint. Finally, real-time optimization is used to track bottlenecks,
coordinate feeds, optimize recycle and purge flows, reach an understand-
ing of the root causes, separate fact from fiction, and provide data to lead
to successful and viable projects.
Application
General Procedure
1. Install a control loop and process performance monitoring
systems.
2. Ask marketing to identify the key business drivers.
3. Develop management sponsors, onsite advocates, and resources.
4. Determine the percentage of time sold out, scheduled and
unscheduled downtime, profit per pound, and variable costs per pound.
Application Detail
Neutralizer
The benefits from the reduction in variability afforded by the improve-
ments to the basic control system can be estimated for the pH example in
Chapter 2. The largest sources of variability in the existing system are the
ball valves with excessive stick-slip caused by excessive capacity, high
friction, and a missing positioner. The limit cycle from the reagent valves in
terms of a reagent-to-feed ratio is translated to a pH limit cycle as illus-
trated in Figure 2-4b. However, the limit cycle from the first-stage valve is
first multiplied by the amplitude ratio based on the period of oscillation of
the first stage and the residence time of the second stage as shown in
Equation 3-12.
Equation 3-13b is a curve fit for the distribution of data points in neutral-
izer pH for a limit cycle. The standard deviation of the limit cycle for the
new precise control valves is estimated and the calculations are repeated.
To see the improvement in yield by other improvements, the decrease in
the fractional integrated error (∆Xp) can be estimated for a typical set of
feed rate changes. For the benefits in terms of less reagent use for a more
optimum pH set point, Equations 3-15 and 3-16 can be used to estimate the
shift in the mean (∆Xm) from reduced variability. Whether it is best to not
move the pH set point and reduce the fraction of product below the low
constraint, or to lower the pH set point to the point where there is no
change in the fraction of product below the low constraint, depends upon
the values of ∆Bp and ∆Bm.
Sni = (Smi / 100%) * (Fr / Ff) * Pn * ARn   (3-12)

zi = (XL - XM) / Sni   (3-13a)

φi = -1.4743*zi^3 + 14.488*zi^2 - 46.888*zi + 50   (3-13b)

∆Xp = (φi - φj) / 100%   (3-13c)

∆Bp = ∆Xp * ∆Yp * Ct * Ff * Ef   (3-14)

∆Xm = ∆Sni   (3-15)

∆Bm = (∆Xm / Pn) * Ct * Ff * Er   (3-16)
where:
ARn = amplitude ratio for neutralizer
∆Bp = delta benefits from less product below low constraint ($/
hr)
∆Bm = delta benefits from shift of mean closer to low constraint
($/hr)
∆Yp = yield loss from distribution below low constraint
Ct = conversion factor for time (24hr/day)
Ef = economic factor for cost of feed ($/lb)
Er = economic factor for cost of reagent ($/lb)
Ff = feed flow (lb/hr)
Fr = raw material (reagent) flow (lb/hr)
φi = percent of data points below low constraint for limit cycle i
(%)
Pn = process gain in neutralizer from titration curve slope
(∆pH/∆ratio)
Smi = standard deviation in mixer reagent valve position for limit
cycle i (%)
Sni = standard deviation in neutralizer pH for limit cycle i (pH)
∆Xp = delta fractional product below low limit
∆Xm = shift in mean (pH)
XM = mean (pH)
XL = low constraint (pH)
zi = number of standard deviations from the mean to the low
constraint (see Equation 3-13a)
By the use of a sliding stem valve with a digital positioner, the standard
deviation of a limit cycle in the first reagent valve is reduced from 10%
(Sm1) to 0.1% (Sm2). The process gain is 10,000 pH per flow ratio (Pn), the
flow ratio at set point is 0.1 (Fr/Ff), the set point is currently 7 pH (XM), the
feed flow is 100 kpph (Ff), the limit cycle period is 1 minute (To), the
residence time in the well-mixed neutralizer is 20 minutes (τ), the costs of
feed and reagent are both $0.1/lb (Ef and Er), and the conversion decreases
linearly from 98% at 6 pH (XL) to 88% at 5 pH.
The benefits in reduced feed costs from a better conversion for a 7 pH set
point can be estimated as follows with Equations 3-17 through 3-25:
ARn = 1 / (2*π*20) = 0.008   (3-17)

φ1 = -1.4743*(1.25)^3 + 14.488*(1.25)^2 - 46.888*(1.25) + 50 = 11%   (3-20)

φ2 = 0%   (3-23)

∆Bp = 0.11 * 0.04 * 24 hr/day * 100,000 pph * 0.1 $/lb = $1,056/day   (3-25)
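The arithmetic above can be strung together as in the sketch below, which follows the ∆Bp path (Equations 3-12 through 3-14) with the example numbers from the text. Treating the cubic fit of Equation 3-13b as zero when the mean is many standard deviations above the constraint is an assumption made here because the fit is only meaningful near the constraint; the distance to the constraint is used as a positive number of standard deviations, matching the 1.25 in Equation 3-20.

```python
import math

def s_n(s_m, ratio, p_n, ar_n):
    """Equation 3-12: standard deviation of neutralizer pH for a limit cycle."""
    return (s_m / 100.0) * ratio * p_n * ar_n

def phi(z):
    """Equation 3-13b: percent of data points below the low constraint,
    taken as 0% when the mean is far above the limit (assumption)."""
    if z > 3.0:
        return 0.0
    return -1.4743 * z**3 + 14.488 * z**2 - 46.888 * z + 50.0

ar_n = 1.0 / (2.0 * math.pi * 20.0)         # Equation 3-17: To = 1 min, tau = 20 min
ratio, p_n = 0.1, 10000.0                   # reagent-to-feed ratio and pH process gain
x_m, x_l = 7.0, 6.0                         # set point and low constraint (pH)

s1 = s_n(10.0, ratio, p_n, ar_n)            # old ball valve, 10% limit cycle
s2 = s_n(0.1, ratio, p_n, ar_n)             # new sliding stem valve, 0.1% limit cycle
phi1, phi2 = phi((x_m - x_l) / s1), phi((x_m - x_l) / s2)
dx_p = (phi1 - phi2) / 100.0                # Equation 3-13c
db_p = dx_p * 0.04 * 24.0 * 100000.0 * 0.1  # Equation 3-14: dYp, Ct, Ff, Ef
print(f"phi1 = {phi1:.1f}%, delta Bp = ${db_p:,.0f}/day")
```

Run with these numbers, the sketch reproduces the roughly $1,000/day benefit of Equation 3-25 to within rounding of the intermediate values.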
The benefits in reduced reagent costs from a shift of the set point toward
the low constraint can be estimated as follows with Equations 3-26 and
3-27:
From the above analysis, it is clear that it is better to leave the set point at
7 pH and reduce the fraction of product that is below 6 pH.
Distillation Column
The benefits from the reduction in variability afforded by the improve-
ments to the basic control system can be estimated for the distillation
example in Chapter 2 with equations similar to those given for the pH
example. The limit cycle before and after the addition of the aggressively
tuned digital positioner is multiplied by an amplitude ratio to determine
the attenuation from the storage tank volume. The period of the limit cycle
is about 6 times the temperature loop time delay, and the time constant of
the storage tank with just an eductor for mixing is about half the residence
time. The attenuated limit cycles are next translated from an oscillation in
the distillate-to-feed ratio to temperature oscillation as illustrated in Fig-
ure 3-5c. The amplitudes of the temperature oscillations are then multi-
plied by the process gain to translate from temperature to impurity
concentration and finally by the economic gain to translate from impurity
concentration to the cost of extra recycle and steam.
The change in total temperature loop dead time affects the loop
period and hence the amplitude ratio.
The value of flow feedforward and better feed tank level controller tuning
can be estimated by the reduction in fractional integrated error (∆Xp) in
product impurity concentration for a typical set of flow upsets. Case stud-
ies have shown a reduction of 2:1 from better control valves and a 5:1
reduction from an improved control scheme in product variability for a
distillation process [3.10].
The value of reduced valve stick-slip, although real and significant, is one
of the most difficult improvements to quantify. The estimation of benefits
from advanced control is usually much simpler because higher value-
added variables are more directly affected or optimized.
Paper Machine
The benefits from the use of model predictive control for the basis weight
of a paper machine shown in Figure 3-7a can be rather easily estimated by
Equations 3-28 and 3-29. Figure 3-7b illustrates how a decrease in the 2
sigma standard deviation can result in a shift of the mean much closer to
the optimum.
Figure 3-7a: paper machine basis weight measurement (AT 1-1). Figure 3-7b: histogram
of basis weight (number of samples versus basis weight) relative to the product spec
minimum weight, showing the shift in average (mean) value made possible by reduced
variability.
Reactor
A key operation in a plant is the catalytic reaction system. In this example,
the catalyst concentration is critical to maximize the conversion of raw
material to product, to improve yield and to minimize costs. Every pound
of the product is sold so it is also important to avoid operating conditions
that could lead to downtime and lost production [3.12].
XL = low constraint for catalyst concentration (%)
Popular Excuses
The following is a compilation of popular excuses used by people who
want to maintain the status quo and stay in their comfort zones. Most
reflect a process-design and steady-state viewpoint and a lack of under-
standing of dynamics and the advanced control goal of building and
incorporating process knowledge by extensive process monitoring, test-
ing, and modeling. In case these excuses are used as a means to delay an
APC opportunity, the APC counterpoint is given.
Rules of Thumb
Rule 3.1. — Measurements of present variability and estimates of reduced vari-
ability, attenuation factors for back-mixed volumes, conversion factors, and an
economic gain factor can be used to provide a more accurate estimate of the bene-
fits from advanced control. The gains and factors must trace the path from
the controlled variable to the process variable in the product that leaves
the plant. Dynamic property estimators may be useful to find process
gains and put the calculations online.
Rule 3.2. — Find the best practical production rate or yield from the best periods
of operation or batches from cost sheets and the best theoretical rate or yield from
steady-state and dynamic simulation of new operating conditions. The opportu-
nity assessment questions should all be oriented to find how to eliminate
the gap between the actual and practical or theoretical performance.
Rule 3.3. — Loop and process performance monitoring systems and online plant
economic performance calculations are essential to maximize and sustain benefits.
Without these calculations in place, the benefits will get lost in the noise or
attributed to other activities.
Rule 3.4. — If the benefits are not documented and reported, advanced process
control will not get the recognition needed to insure resource availability and com-
mitment. After any initial enthusiasm, the effort will rapidly fade. Due to
downsizing and the information explosion, the average user is faced with an
overwhelming number of initiatives and supposedly neat ideas.
Rule 3.6. — Insure that at least a model predictive control (MPC) with some
degree of optimization is implemented. MPC has the best track record for ben-
efits.
Rule 3.7. — APC technologies must be employed in closed loop control to achieve
most of the benefits. Advanced control improvements that stay in an advi-
sory mode achieve a small fraction of the benefits possible, because of
operator inattention and bias.
Rule 3.10. — Plants with changing economic objectives, complex recycle effects,
shifting bottlenecks, and complex nonlinear relationships in several unit opera-
tions need a real-time optimization. High-fidelity steady-state models are
used for the real-time optimization of continuous constant conversion pro-
cesses, whereas dynamic models are needed for batch operations and for
processes where reaction and polymerization rates are important for opti-
mization.
References
1. Bialkowski, William L., “Process Control Audits Have Major Impact on
Uniformity,” American Papermaker, September 1990, pp. 50-57.
2. McMillan, Gregory K., “Benchmarking Studies in Process Control,” InTech,
November 1992, pp. 44-46.
Control, Prentice Hall, Inc., 1995.
5. Bialkowski, William L., “Process Analysis and Diagnostics,” Fisher-
Rosemount Systems Users Group Meeting, November 1996.
6. Luyben, Michael L. and Luyben, William L., Essentials of Process Control,
McGraw Hill Chemical Engineering Series, 1997.
7. Chia, T. L., and Lefkowitz, I., “Add Efficiency with Multivariable Control,”
InTech, September 1997, pp. 85-88.
8. Mansy, Michael M., McMillan, Gregory K., and Sowell, Mark S., “Step Into the
Virtual Plant,” Chemical Engineering Progress (CEP), February 2002, pp. 56-61.
9. Tinham, Brian, “Control in the Chemical Industry,” Control and
Instrumentation, January 1993, pp. 34-35.
10. Beal, James F., “Process Control Analysis, Improvements and Results,” ISA
Expo/Conference, Houston, September 10-13, 2002.
11. Bialkowski, William L., “Process Control Related Variability and the Link to
End Use Performance,” Control Systems Conference, Helsinki, September 17-
20, 1990.
12. Shunta, Joseph P., “Assessing & Implementing Control Improvement
Opportunities,” ISA Short Course SC05(Du), 1996, Instructor’s Notes.
Practice
Overview
Target product quality and production levels can be maintained by contin-
uously evaluating the performance of the process and the control system.
The maximum throughput and operating efficiency of a plant are ulti-
mately determined by the process design and equipment selection. How-
ever, in many cases a plant's operation is far from achieving the ultimate
capability inherent in the plant design and equipment. For example,
numerous studies done in the pulp and paper industry show that loop uti-
lization ranged from 55% to 76%, depending on production area.
The reasons for process variation and poor control utilization can be attrib-
uted to one or more of the following:
One of the main reasons such conditions exist is that the downsizing of
support services in many plants has resulted in plants operating with a
minimal staff for process control and instrumentation maintenance. Often
there is only enough manpower available to fix the critical problems found
today that are limiting production and affecting product quality. Under
these circumstances, there is little time to study the process operation to
determine if an abnormal condition exists that could soon be a major
source of process disturbance if not addressed. A problem may not
become visible until the situation has deteriorated to the point where it
affects product quality or production. Operating in this “firefighting”
mode may lead to variations in operation that result in less than maximum
production in a sold-out market, or to operating at less than best efficiency,
or to wider variation in product quality, regardless of production rate.
One of the key factors is that the performance of control loops decays with
time as a result of the wearing of control valves, loss of calibration of trans-
mitters, or changes in the operation of the process. Some plants have real-
ized the impact that control and field devices are having on their
operation.
Efforts to achieve best plant performance must address both the areas of
analysis and diagnostics.
In some cases, additional staff has been added and a performance moni-
toring tool has been purchased to periodically evaluate control loop and
field device performance. However, this solution is costly and often hard
to justify in the short term. Tools layered on top of a traditional Distributed
Control System (DCS) to detect abnormal operation have had limited suc-
cess in the detection of problems in instrumentation and fast processes
Figure: analysis and diagnostics work process for an operation problem - analysis
(measure process variation and control utilization, identify areas for improvement)
followed by diagnostics (determine the root cause of variation, eliminate the source
of variation), leading to a work order.
Opportunity Assessment
The justification for investment in tools and people for process and control
analysis and diagnostics is the reduction in process variability and the
associated improvement in plant profitability. A process parameter’s effect
on a particular plant’s production depends greatly on that plant’s limita-
tions and operating conditions. If the answer is yes to some of the following
questions, there is enough of an economic incentive to implement a perfor-
mance monitoring system in a plant.
The built-in diagnostic and analysis tools of modern scalable control sys-
tems provide the unprecedented capability to automatically identify pro-
cess variation caused by under-performing loops. This is done by
continuously calculating the improvement in control that is possible and
comparing this to the baseline loop performance. This approach to perfor-
indication may in some cases only require that the PID control block be re-
tuned to correct a change in operating condition. An uncertain indication
on a transmitter may sometimes be quickly resolved using the online diag-
nostics provided by the manufacturer of the device.
Unfortunately, in some cases the source of the problem may not be appar-
ent and further analysis is required. In particular, resolving the source of
high variability may not be as easy as just re-tuning the loop. Even though
the loop may be correctly tuned, valve stick-slip may cause the loop to
continually oscillate, as discussed in Chapter 2 and illustrated in Figure
4- 2.
Figure 4-2: trends of the set point (SP) and valve stem position during a stick-slip
limit cycle.
In today’s world, data drives the decision-making process, and the ability to
make use of the vast amounts of process data is essential. In many plants,
the rate at which we are accumulating this data surpasses our ability to
extract knowledge from data and use it for better decision making. Multi-
variate statistical analysis is a technique that allows us to analyze plant
data in order to extract underlying themes in the behavior of the data, and
to then use these themes to monitor the state of the process.
Traditionally, the task of monitoring plant data has fallen into the statisti-
cal process control (SPC) or the univariate statistical analysis world. The
manufacturing industries have made great progress by focusing on key
quality variables and monitoring these variables with univariate analysis
techniques such as SPC. But are these techniques still relevant now that
the volumes of data being extracted by our automation systems have
increased by orders of magnitude? The answer is, Not always! In previous
years we had 10 engineers per quality parameter being monitored; today a
single engineer is asked to monitor 10 quality parameters. The drive to
understand quality coupled with the vast amounts of data can present a
formidable task, one well suited to the application of multivariate tech-
niques.
The multivariate techniques that are showing success today are called
Principal Component Analysis (PCA) and Partial Least Squares or Projec-
tion to Latent Structures (PLS). Multivariate statistical analysis is not a
new technology, but the application of PCA and PLS has gained attention
due mainly to the data explosion. The manufacturing and process indus-
tries have invested heavily in real-time data acquisition systems or Process
Information Management Systems (PIMS) and there is now a strong desire
by manufacturing directors to see this accumulation of data put to use to
improve plant operation; hence the strong interest in multivariate moni-
toring techniques. Other multivariate monitoring techniques include Fac-
tor Analysis, Eigen-vector Analysis and Singular Value Decomposition.
The field that uses multivariate statistical analysis tools for real-time anal-
ysis of process data is called Multivariate Statistical Process Control
(MSPC). The attraction of MSPC is in the ability to rapidly develop behav-
ioral models of your process, akin to fingerprinting normal process behav-
ior, and then to continuously compare current plant behavior to the
normal fingerprint. If a deviation from normal plant behavior is identified,
MSPC will allow you to identify which plant variables are the major con-
tributors to the cause of the deviation.
The ability to monitor, identify and diagnose the cause of process variabil-
ity is a task every engineer will recognize as important and one that will
improve the performance of our manufacturing facilities. The technologies
Examples
Control Variability
The response of a control loop in its designed mode of operation may not
be adequate to compensate for process disturbances and changes in set
point. The root cause of poor control performance may be traced to the
control design, tuning, measurement or actuator performance. Many mod-
ern control systems provide a means to automatically quantify the varia-
tion seen while on automatic control. Using this embedded feature, it is
possible to quickly identify control loops that require maintenance. If such
capability is not available in the control system, it may be possible to uti-
lize the statistical calculations included in a plant historian or to add loop
performance-monitoring packages that connect directly to the DCS or to
the plant historian through OPC connectivity. Both methods will monitor
control loop variability and provide insight into changes in control perfor-
mance.
When the variation seen for the same operating condition and timeframe is
compared against a baseline, it is possible to identify loops whose performance
is degraded. Through the investigation of
problems that cause increased variability, it is often possible to find the
root cause and significantly improve plant operation. As discussed in
Chapter 3, this reduction in variation often makes it possible to shift oper-
ating points closer to operating constraints or to maintain the best operat-
ing set point, and thus provide greater throughput or improved quality.
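A minimal sketch of the comparison just described is shown below: a simple variability index that contrasts the observed standard deviation with a capability (baseline) standard deviation for the same operating condition. Both the index definition and the synthetic data are illustrative assumptions, not the calculation of any particular monitoring package.

```python
import numpy as np

def variability_index(pv, s_capability):
    """Illustrative index: percent of the observed variation that better
    control could in principle remove (0% means already at capability)."""
    s_total = np.std(pv, ddof=1)
    return max(0.0, 100.0 * (1.0 - s_capability / s_total))

rng = np.random.default_rng(0)
baseline = 50.0 + rng.normal(0.0, 0.3, 500)                               # healthy period
current = 50.0 + rng.normal(0.0, 0.3, 500) + 0.6 * np.sin(np.arange(500) / 10.0)  # added cycle

s_cap = np.std(baseline, ddof=1)      # treat the baseline variation as the capability
print(f"baseline index = {variability_index(baseline, s_cap):.1f}%")
print(f"current index  = {variability_index(current, s_cap):.1f}%")
```

A loop whose index climbs well above its baseline value for the same production rate and grade is a candidate for the root cause investigation described in the text.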
Figure: scatter plot of two variables (X1 versus X2) with an outlier highlighted.
Caster Monitoring
The next sample industrial process we will consider for MSPC is a caster
monitoring application in a steel mill. The process is a continuous, straight
mold, Demag slab caster, shown in Figure 4-4. Liquid steel enters the
caster process where it is partially solidified in a water-cooled copper
mould. The steel exits the mould, where the steel is contained within a
thin solidified shell or skin, and cools as it moves along the process line
towards the run-out. Upon leaving the mould, there is potential for the
loss of containment of the molten steel. This loss of containment and
release of molten liquid steel is called a breakout. A casting breakout is
an extremely hazardous occurrence that causes equipment damage and
loss of production. An MSPC monitoring system was installed to provide
real-time monitoring of the casting process variables in order to predict
potential breakout conditions and provide operators with information to
help diagnose casting process problems.
Figure 4-4. Demag slab caster: 60-tonne tundish, mould, bending, unbending, and
runout.
The MSPC casting application has a few multivariate SPC control charts
representing the stable mould operation and alarms on detection of any
significant anomalies in the behavior of the process variables. At any time,
the operator can view how the measured plant values are contributing to
these alarms, to help diagnose process problems. The final PCA model of
the process contained over 240 inputs, collected selectively over several
months of operation in order to cover many different operating scenarios.
The PCA models were further summarized into control charts, using
Hotelling’s T2 statistic, to be monitored along with the prediction-error
control chart. T2 statistics provide an indication of variability. The T2 is
named for Harold Hotelling, a pioneer in multivariate statistical analysis.
These three charts form the basis of the real-time MSPC system, providing
continuous information to the operator on the stability of the mould oper-
ation. This example of an MSPC system, implemented in a steel plant,
delivered benefits in the areas of increased production, improved safety
and much improved process understanding—the ultimate goal of any
monitoring system [4.12].
Figure 4-5. Display Interface for Slab Caster Example (Courtesy of Dofasco,
Inc.) [4.12]
Application
General Procedure
1. Decide on the objectives of performance monitoring in terms of the
relative importance of diagnosing and predicting the following:
a. Measurement problems
b. Rotating equipment problems
c. Hydraulic resonance and shock waves
d. Pressure regulator and steam trap problems
e. Interlock and alarm system sequence of events
f. Control valve problems
g. Improper controller tuning
h. Improper controller modes
i. Interaction between control loops
j. Feedforward and disturbance analysis
k. Weeping and flooding of column trays
l. Distribution and mixing of feeds, components, phases,
polymers, and particles in fluidized beds, columns, fermentors,
reactors, and crystallizers
m. Analyzer and sample system problems
constraint has been active, and the percent of time that each
constraint has been violated.
b. Provide trending and statistics of the bias correction to each
trajectory.
c. Provide trending and statistics of predicted values for each
trajectory.
d. Provide trending and statistics of optimization set points (set
points from rampers and pushers).
Application Details
With a modern control system, all control loops are automatically moni-
tored on a continuous basis and any degradation in loop performance or
the detection of an abnormal condition in a measurement, actuator, or con-
trol block is automatically flagged. These systems may thus identify prob-
lem areas sooner than can audits done with portable PC-based tools. By
using automated system performance monitoring and built-in diagnostic
tools, the typically limited resources of plant maintenance can be used to
advantage to resolve measurement, actuator and control problems. As a
result, maintenance costs may be reduced or a higher overall level of sys-
tem performance may be maintained and process variability reduced. Any
reduction in variability can lead directly to greater plant throughput,
greater operating efficiency and/or improved product quality.
Continuous Control
In evaluating the performance of a continuous process, control systems
may calculate indices that quantify loop utilization, measurements with a
“bad,” “uncertain,” or “limited” status, limitations in control action, and
process variability. In addition, for control loops, the systems show the
potential improvement possible in control loop performance. The follow-
ing are a few practical scenarios that show how such information might be
used to determine an operating problem.
Limited Output
An instrument technician for the power plant notices that the system
shows the oil-flow control loop is limited in operation. Having been
alerted to this problem, the technician determines that the set point for the
oil header pressure control has been lowered below the design pressure,
forcing the oil valve to go fully open under heavy load conditions. When
the pressure control is readjusted to its designed target, the oil valve can
meet its set point without going fully open.
Bad I/O
A key temperature measurement is flagged by the system as having been
“bad” more than 1% of the time over the last day. Having been alerted to
the fact, the instrument technician re-examines the transmitter calibration
and finds that the device has been calibrated for a temperature range that
is too low. Re-calibrating the transmitter restores the accuracy of the mea-
surement and improves the operation of the process.
High Variability
During normal operation of the plant, the plant engineer sets all variabil-
ity limits to the current value plus 5 percent. After a few weeks, he notices
that the system has flagged a critical flow loop as having excessive vari-
ability. Upon further investigation, he discovers that the valve positioner
connection to the valve stem is loose, causing the control loop to cycle
severely. Fixing the valve positioner returns the variability index to its nor-
mal value.
All these cases are quite typical. Individually they may seem to have mini-
mal impact on plant operation. However, the net effect of these problems
could be significant if they were not addressed in a timely manner. An
automated analysis system to evaluate performance can detect abnormal
situations as they happen and thus play a key role in preventing produc-
tion losses and product quality problems.
1. Trends
A high-speed trend provides the ability to trace any parameter
associated with measurement or control actions at the same rate at
which these values change. A trend resolution of 100 msec is usu-
ally sufficient to analyze most measurement and control problems.
2. Histograms
A histogram allows the distribution of the variation in a measure-
ment value or actuator position to be analyzed. A bell shaped dis-
tribution indicates that the source of variation is random in nature.
3. Power Spectrums
A power spectrum is a frequency distribution of the components
that make up a measurement or actuator signal over a selected
period of time. Such information may be helpful in determining
the magnitude and frequency of process noise or the frequency of
disruptions caused by loop interaction or upstream processes.
4. Cross Correlations
The influence of other control loops and upstream conditions on
the variability of a control loop may be determined through cross
correlation; a sketch of these calculations follows this list.
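For readers who want to reproduce these calculations offline from historian or trend data, the hedged sketch below shows generic histogram, power spectrum, and cross-correlation calculations on synthetic 100 ms trend data; the signals and routines are illustrative, not those of any particular monitoring system.

```python
import numpy as np

dt = 0.1                                    # s, a 100 ms trend resolution
t = np.arange(0.0, 120.0, dt)
rng = np.random.default_rng(1)
x = np.sin(2.0 * np.pi * t / 15.0) + 0.2 * rng.normal(size=t.size)   # 15 s oscillation
y = np.roll(x, int(2.0 / dt)) + 0.2 * rng.normal(size=t.size)        # follows x by ~2 s

# Histogram: a roughly bell-shaped distribution suggests random variation.
counts, edges = np.histogram(x, bins=30)

# Power spectrum: the dominant frequency of the oscillation.
freqs = np.fft.rfftfreq(x.size, d=dt)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
print(f"dominant frequency ~ {freqs[power.argmax()]:.3f} Hz (expect ~0.067 Hz)")

# Cross correlation: the lag of the peak indicates how far y trails x.
xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
lag = (xc.argmax() - (x.size - 1)) * dt
print(f"cross-correlation peak at lag ~ {lag:.1f} s (expect ~2 s)")
```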
Diagnostic tools for process and control analysis are available as embed-
ded features in some control systems. These tools may be used to analyze
both fast and slow processes. If an integrated tool set is not available, diag-
nostic applications can be layered on the control system for diagnostics,
although they are often limited to analysis of slow processes. Figure 4-6
illustrates the diagnostic information that is presented to the user by a typ-
ical performance monitoring system.
Figure 4-6. Typical performance-monitoring analysis display for loop FT101/PID1:
stored analyses (e.g., ANAL_FT101PID020602, ANAL_FT101PID020802) with power spectrum,
histogram, autocorrelation, and trend views, and cross-correlation entries (e.g.,
CORR_FT101PID020802) against related signals such as PI103/PID1/OUT and AI321/AI/PV.
Batch Control
The application of online performance monitoring is most often associated
with continuous processes. However, the percentage improvement from
its application to batch control could be larger because the state of the con-
trol loops typically receives less attention than the sequence of the batch
operation. The emphasis to date in batch control has been on event sched-
uling rather than on process control. The benefits of monitoring can be
especially significant when you consider the higher profit margin and
value-added opportunities for specialty chemical and pharmaceutical
products, which are almost always batch processes. In such cases, the
answers to Opportunity Assessment questions (4) or (5) are frequently
Yes.
Figure 4-7 shows a batch reactor for a sold-out high profit–margin spe-
cialty chemical plant. One of the feeds and a byproduct can be recovered
from the vent system if there is sufficient condensing to reflux unreacted
feed back to the reactor and there is no liquid carryover from high level or
foaming. Periodically there are interruptions due to high reactor pressure,
level, and exchanger temperature. The cooling water pressure is some-
times too low and its temperature too high. The feed profile is scheduled
to minimize the activation of interlocks and excessive riding of the con-
denser, vent system, and exchanger limits. The efficiency and safety of the
operation depends upon a uniform quality, flow rate, and distribution of
the feeds within the batch. There is no online composition measurement of
the product. The yield varies significantly with each batch.
Figure 4-7. Batch Reactor 1 control system: feed A and feed B flow loops (FC/FT 1-1,
1-2), an anti-foam flow loop (FC/FT 1-3), reactor level (LC/LT 1-2), reactor pressure
(PC/PT 1-3) to the vent system with an eductor and its own pressure loop (PC 1-1),
and condenser and exchanger temperature loops (TC/TT 1-1 through 1-3, with remote set
points) on the coolant, with the batch discharged from the reactor.
The standard deviation and variability indices are high and the control
output blocks are limited at times during the batch for the reactant B and
antifoam feed loops, the condenser and exchanger temperature loops, and
the reactor pressure loop.
The low-limit flag for the output function block of the reactant B flow con-
troller is periodically activated throughout the batch. After some investi-
gation, it is discovered that the original sliding stem control valve was
replaced with a ball valve that has too much capacity and too much fric-
tion near the closed position. The reactant B flow controller ramps up its
output but there is no appreciable increase in flow above the leakage flow
until its output reaches 6 percent. Then there is a burst of flow so high
above the set point that the controller must close the valve. This square
wave of flow from the stick-slip of the oversized control valve continues
through most of the batch.
The antifoam controller output during some batches ramps up to its high-
output limit. A correlation analysis shows that this coincides with the
simultaneous demand for antifoam by other batch operations and a dip in
supply pressure. An undersized antifoam pump is pinpointed as the cul-
prit. A further look at other variables shows there is a significant cross cor-
relation between high antifoam flow controller output and high reactor
pressure controller output and high condenser temperature controller out-
put. This indicates the possibility of carryover into the vent system that
can result in contamination of the byproduct and a corresponding loss in
yield of the main product.
A power spectrum shows a frequency with significant power. This is a clue that there is a reset cycle
from excessive integral action from the eductor pressure loop. During hot
summer days the process gain and time constant are less, making the reset
cycle more severe.
The above analysis reveals the importance of extending the scope of per-
formance monitoring to other unit operations, utility systems, vent sys-
tems, and ambient conditions. It also shows the value of being able to
trend performance indices, status flags, events, and variables to look for
coincident events. It is important that the trend be fast enough to deter-
mine what happened first and that the oscillation period or waveform not
be distorted by aliasing. Finally, there is obviously a need for some basic
understanding of the system to determine the actual cause-and-effect rela-
tionships.
Once the basic system has been improved, the use of performance moni-
toring can be extended to include online multivariate principal component
analysis (PCA). A PCA worm plot can provide rapid recognition of
batches that start to trend away from normal operation to warn of abnor-
mal conditions before they cause significant problems of reduced yield or
capacity. It can also lead to Partial Least Squares (PLS), Neural Network,
and dynamic linear estimators of batch end time and product concentra-
tion based on various peaks during the batch cycle. For the above reactor,
it turned out that the amount of fresh feed added to reach the first peak in
temperature was an indication of the composition of the recycled feed and
a sustained low condenser and vent valve position could be used to pre-
dict the batch end point.
While the above analysis was for a specialty chemical, the opportunities
for fermentors could be even greater due to the need for tight dissolved-
oxygen and pH control despite interactions and interferences and the
advantages of having online estimators to detect deviations of batch con-
ditions and cell and product concentration. Similarly, the analysis should
be extended to substrate and nutrient feed systems, antifoam systems,
vent systems, and utility systems. Figure 4-8 shows the main control loops
for a batch fermentor.
Figure 4-8. Batch Fermentor Control System: air flow (FC/FT 1-1), substrate flow
(FC/FT 1-2), and anti-foam flow (FC/FT 1-3) loops, vent pressure (PC/PT 1-1),
dissolved oxygen (AC/AT 2-2) and pH (AC/AT 2-1, with reagent addition) loops, a
variable speed drive (VSD), and cascaded temperature control (TC/TT 2-1, 2-2) using
steam, coolant, and tempered water, with remote set points (RSP) linking the loops.
Multivariate Analysis
Although the methods for monitoring plant data have traditionally been
limited to statistical process control (SPC) or univariate statistical analysis,
manufacturing industries have made good use of these methods to iden-
tify and monitor key quality variables. Today, the volume of data being
extracted by automation systems has increased by orders of magnitude.
The drive to understand quality coupled with the vast amounts of data
can present a formidable task.
What are the factors that make process data analysis a challenge?
And what are the elements that make PCA and PLS good multivariate
techniques for modeling plant behavior from real-time operating data?
4. Identify which of the original plant variables contribute to the
deviation, and how
One of the principal strengths of the MSPC approach is the ability
to drill down and identify how the original plant variables contrib-
ute to the abnormal behavior in plant performance. Knowing how
the original plant variables contribute allows action to be taken to
bring the plant back within control.
Figure: example dataset of three plant variables - temperature (X1), pressure (X2),
and flow (X3).
The first step in MSPC is to draw a line through the data in the direction of
maximum variability. This line is called the first principal component and
projecting the data points onto this component defines our first latent vari-
able. A second line can now be drawn through the data, orthogonal to the
first (linearly independent), and in the direction defining the second great-
est variability as shown in Figure 4-10. This is known as the second princi-
pal component, and again the data points can be projected onto this
principal component to generate latent variable 2.
These 2 new principal components now form a new plane in our data
space, which is commonly referred to as a scores plot. The scores plot for
our temperature, pressure and flow data is shown in Figure 4-11; it cap-
tures over 92% of the variability in the dataset. The scores plot has reduced
the order of our data from 3 dimensions to 2 dimensions. Monitoring how
the data points move on this plot is one of the graphical tools used in iden-
tifying abnormal behavior.
Figure 4-10: the first and second principal components (PC1, PC2) drawn through the
temperature/pressure/flow (X1, X2, X3) data. Figure 4-11: the scores plot, with PC-2
scores (roughly -5 to +5) plotted against PC-1 scores.
For example, the temperature, pressure and flow data show significant
deviation from model norm, behavior not easily detected from the time
series trends. Although we have limited our analysis to reducing 3 vari-
ables down to 2 principal components, we can often reduce hundreds of
variables into only several principal components and still capture a signif-
icant amount of the variability in the dataset.
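The mechanics just described can be seen in a few lines by computing principal components of a small synthetic temperature/pressure/flow dataset directly from the singular value decomposition. The data and scaling below are assumptions for illustration, not the MSPC package used in the caster example.

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))                                  # one underlying driver
X = np.hstack([60 + 5 * latent, 3 + 0.2 * latent, 40 + 2 * latent])  # T (X1), P (X2), F (X3)
X = X + rng.normal(scale=[0.5, 0.05, 0.8], size=(200, 3))           # measurement noise

# Mean-center and scale each variable, then take the SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

loadings = Vt.T                      # directions of the principal components
scores = Z @ loadings                # latent variables (projections of the data)
explained = 100.0 * s**2 / np.sum(s**2)
print("variance captured by PC1 + PC2: "
      f"{explained[0]:.1f}% + {explained[1]:.1f}% = {explained[:2].sum():.1f}%")
```

Plotting the first two columns of the scores against each other gives the scores plot described in the next subsection.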
Scores Plot
The scores plot allows the observations to be viewed in the new co-ordi-
nate system of principal components. The new principal components will
form a sub-space, and projecting the observations down onto this sub-
space will allow us to visualize how our observations relate to our PCA
model of the process. A common technique for real-time monitoring is to
connect the observations in the scores plot, forming a worm of a preset
length. Every new observation adds a value to the head of the worm,
while an older observation is dropped from the tail. Following the direc-
tion of the worm can indicate when the process is trending towards abnor-
mal behavior.
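The worm is simply a fixed-length window of the most recent score pairs; a minimal sketch using a deque is shown below, with an illustrative (assumed) worm length.

```python
from collections import deque

worm = deque(maxlen=20)              # preset worm length (illustrative)

def update_worm(score_pc1, score_pc2):
    """Append the newest observation at the head; once maxlen is reached,
    the oldest point automatically drops off the tail."""
    worm.append((score_pc1, score_pc2))
    return list(worm)                # points to connect on the scores plot

# Feed a slowly drifting trajectory into the worm.
for k in range(50):
    update_worm(0.05 * k, 0.02 * k)
print("worm tail:", worm[0], "worm head:", worm[-1])
```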
Hotelling T2
The Hotelling T2 is a statistical parameter indicating variability of the mul-
tivariable process. It can be used to classify when an observation in the
data is a strong outlier. A strong outlier in the data indicates abnormal
behavior. The Hotelling T2 statistic is most often represented as an ellipse
on the scores plot with 95% or 99% confidence intervals. Thus, any obser-
vation outside of these limits would indicate a strong outlier that does not
conform to the normal correlation model of the process data. Figure 4-11
illustrates the result of the Hotelling T2 classification for a process with
two principal components.
Prediction Error or Model Residual Plot
Contribution Plot
One of the many advantages of the PCA approach is that the information
in the model is open for inspection. If a situation is detected the system can
present how the original variables are contributing to the detected fault.
These are most often presented as a contribution difference from model
center or contribution difference between two observations.
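Continuing the sketch above (reusing the standardized data Z, the eigenvalues, eigenvectors, and scores), the Hotelling T2 statistic and a simple contribution calculation might be computed as follows. The 95% limit uses the F-distribution form commonly quoted for PCA monitoring and is an assumption for illustration, not the vendor implementation.

import numpy as np
from scipy import stats

n_obs, n_pc = scores.shape

# Hotelling T2 for each observation: squared scores scaled by the component variances
T2 = np.sum(scores**2 / eigvals[:n_pc], axis=1)

# 95% confidence limit (F-distribution form commonly used for PCA monitoring)
a, n = n_pc, n_obs
T2_lim = a * (n - 1) * (n + 1) / (n * (n - a)) * stats.f.ppf(0.95, a, n - a)
outliers = np.where(T2 > T2_lim)[0]

def contributions(i):
    # Each score is a sum of terms Z[i, j] * loading[j, k]; the terms are the
    # contributions of the original variables to that observation's scores
    return Z[i, :, None] * eigvecs[:, :n_pc]

if outliers.size:
    print("First strong outlier:", outliers[0])
    print(contributions(outliers[0]))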
Although this treatment of the MSPC topic has been limited to the monitoring
Rules of Thumb
Rule 4.1. — Make sure the scan time of the I/O, module, and performance moni-
toring system is faster than 1/5 of the dead time for fault diagnostics. This will
prevent misalignment of events and aliasing of loop oscillations because it will ensure at least 10 samples per oscillation for dead time–dominant loops and 20 samples per oscillation for self-regulating processes with a large process time constant.
Rule 4.2. — To analyze items 1a through 1e in the General Procedure, the scan
time should be as fast as the input card or field device to avoid improper identifica-
tion of periods of oscillation (aliasing) and the sequence of events. The speed and
richness of diagnostics from Fieldbus devices can be an important advan-
tage.
Rule 4.3. — To track down the cause of oscillating control loops, locate the loops
with significant power at the same frequency on a process flow diagram (PFD)
and trace the direction of manipulated flows between the loops. The loop from
which the manipulated flow originates is the source of the oscillations. It is
generally, but not necessarily, the furthest loop upstream.
Rule 4.4. — Watch out for periodic disturbances. Aggressive tuning of level
loops, valve stick-slip, too small a reset time on loops dominated by a time
lag (reactors, fermentors, evaporators, crystallizers, and columns), and too
high a controller gain for loops dominated by a time delay (webs, sheets,
and pipelines) are the most common causes of a periodic disturbance. If
the loop manipulates an inlet or outlet flow to the volume, the oscillating
loops will be upstream or downstream, respectively.
Level loops on surge and feed tanks tend to be tuned too tightly. Level
measurement noise is a frequent problem, particularly if a high gain or
any rate action is mistakenly used. Valve stick-slip often appears as a
square wave limit cycle. If the reset time is less than the oscillation period
for a loop dominated by a large time constant, the reset time is probably
too small. These loops need to overdrive the manipulated variable and
require more gain than reset action (see Chapter 2). Conversely, the pulp
and paper industry tends to have loops that are dominated by large time
delay, and consequently the use of a gain or reset time that is set too high
is more of an issue.
Rule 4.5. — If the oscillation period is less than 4 seconds, it probably originates
from rotating equipment, pressure regulators, actuator instability, burner insta-
bility, resonance, or measurement noise. Incipient surge and buzzing due to improperly sized actuators and springs can cause large rapid fluctuations in flow.
Burner instability can cause rapid oscillations in fuel flow and furnace
pressure.
Rule 4.6. — Items 1a through 1i in the General Procedure can usually be handled
by performance monitoring systems that provide relative statistical measures of
variability, capability, utilization, sustained cycling, oscillation frequency and
power (Power Spectrums), saturated controller outputs, and limited process vari-
ables.
Rule 4.7. — The diagnosis and prediction of process equipment, operation,
sequence, allocation, and raw material problems (items 1j through 1s in the Gen-
eral Procedure) generally require cross correlation analysis and multivariate prin-
cipal component analysis (PCA).
Rule 4.8. — Online property estimators should be created to enhance the repeat-
ability and reliability of online and lab analysis systems by the addition of Partial
Least Squares (PLS) analysis, dynamic linear estimators, or neural networks.
Time delays and time constants must be used to align the inputs with out-
puts. Some of the need for data alignment can be reduced by the use of
long scan times but this slows down the calculation for estimation and
fault detection to a point where it may be too late or unable to resolve
what occurred first. This is a more important issue for continuous opera-
tions than batch operations since batch outputs can be tied to a batch, step,
and phase identification number and time (see Chapter 8).
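As a sketch of the alignment issue raised in Rule 4.8, the code below shifts each input by its process dead time, expressed in scan periods, so that input and output rows line up before being passed to a PLS, dynamic estimator, or neural network identification step. The scan period, dead times, and tag names are hypothetical.

import numpy as np

SCAN_PERIOD = 2.0                                        # seconds per sample (assumed)
DEAD_TIMES = {"feed_flow": 30.0, "reflux_flow": 60.0}    # seconds (assumed)

def align(inputs: dict, output: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Shift each input by its dead time (in scans) so each row lines up with the output."""
    shifts = {k: int(round(DEAD_TIMES[k] / SCAN_PERIOD)) for k in inputs}
    max_shift = max(shifts.values())
    cols = [inputs[k][max_shift - shifts[k]: len(output) - shifts[k]] for k in inputs]
    return np.column_stack(cols), output[max_shift:]

# Hypothetical records, one value per scan
n = 1000
u1, u2, y = np.random.rand(n), np.random.rand(n), np.random.rand(n)
X, Y = align({"feed_flow": u1, "reflux_flow": u2}, y)    # X rows now aligned with Y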
Rule 4.9. — Control loops that are intentionally not in service due to batch oper-
ations, product or grade produced, or trains of equipment that are shut down to be
maintained or cleaned, must be flagged as such and other diagnostics automati-
cally suppressed. Otherwise, true problems are camouflaged by false alerts
and alarms. Also, the “Normal Mode” of loops used in such batch opera-
tions should be automatically updated to reflect the needs of the batch
sequence.
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for a performance monitor embedded in
an industrial control system. The following areas are addressed:
From the explorer view at the left of the interface, the user may select the
entire plant or an individual process area to examine. Based on this selec-
tion, a summary is provided that shows the number of modules that are
being utilized for control, monitoring, and calculation within the selected
area. Also, a summary is provided of the modules that have an I/O or con-
trol block with abnormal conditions. The individual modules that have an
abnormal condition are listed in the right portion of the view in the Sum-
mary tab. The types of problems that have been detected are shown using
a frown face.
When a module is selected in the summary list, the individual I/O and
control blocks in that module are listed in the bottom right portion of the
interface. The percent time that an abnormal condition existed is shown
for each block. If this time exceeds the defined limit, then this condition is
flagged as an abnormal condition in the interface; that is, a frown face is
shown for the module. Through the Filter selections, it is possible to view
information for the previous or current hour, shift or day. Also, the user
may select the type of blocks and whether all modules or just those with
abnormal conditions are displayed.
By selecting the Print icon, the user may choose to print a module sum-
mary report or a detail report. Also, to allow the operator to view the per-
formance and utilization information from his displays, a standard
function block and dynamo are used to include this information in the
operator interface. Through this dynamo an operator may also enable and
disable the monitoring in the associated area. This dynamo appears in an
operator display as shown in Figure 4-15.
Theory
The ability to quickly inspect control and measurement loops has a pri-
mary importance in industrial applications. Both poorly tuned loops and
malfunctioning field devices jeopardize product quality and production.
In the last decade this problem has received significant attention from both
academia and practitioners. Much of this work has focused on an assess-
ment of control loop performance using a minimum variance controller as a reference; see Harris [4.1] and Desborough and Harris [4.2], [4.3]. Bea-
verstock et al. used heuristic definitions of unit production performance
tailored to specific applications, rather than loop performance [4.4].
Numerous other researchers have advanced performance monitoring. In
particular, Rhinehart [4.5] explored simple ways of computing standard
deviation from measurement-to-measurement deviations and filtering.
Harris et al. [4.6] and Huang et al. [4.7] provided guidelines for perfor-
mance assessment of multivariable controllers. Qin reviewed recent works
on performance monitoring in [4.8]. Shunta gave an excellent practical summary of performance monitoring in his monograph [4.9].
Figure 4-16. Block Parameter Status and Mode are Utilized in Performance Monitoring
The status associated with each function-block output gives a direct indi-
cation of the quality of the measurement or control signal. Quality is
defined by describing the measurement as suitable for control (Good),
questionable for use in control (Uncertain) or not suitable for use in con-
trol (Bad). In addition, the status provides an indication of whether a mea-
surement or control signal is high or low limited. For example, if a
measurement is operating above its calibration range, the quality of the
measurement may be shown as “Uncertain.” Downstream blocks may use this status in determining their response. The conditions monitored for each block include:
• Bad I/O — the status of the block process variable (PV parameter) is “bad,” “uncertain,” or “limited.” A sensor failure, an inaccurate calibration, or a condition detected by the measurement diagnostics requires attention by maintenance.
• Limited (control action) — a downstream condition exists that
limits the control action taken by the block. Such limitations may
prevent the loop from achieving or maintaining set point.
• Mode not Normal — The actual mode of the block does not match
the normal mode configured for the block. An operator may change
the target mode from normal because of equipment malfunction.
The percent time that these conditions exist over an hour, a shift, and a day
is computed for every block and compared to a configured global limit for
each condition. When one of these limits is exceeded, the associated
module is displayed in the main summary display.
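A minimal sketch of this bookkeeping, assuming one update per block execution and a hypothetical limit table, is shown below; the actual performance monitoring application maintains these counts separately for the hour, shift, and day.

from collections import defaultdict

# Hypothetical global limits (percent time) for each monitored condition
LIMIT_PCT = {"bad_io": 1.0, "limited": 5.0, "mode_not_normal": 5.0}

class BlockMonitor:
    def __init__(self):
        self.counts = defaultdict(int)
        self.executions = 0

    def update(self, bad_io: bool, limited: bool, mode_not_normal: bool):
        # Called once per block execution with the current condition flags
        self.executions += 1
        for name, flag in (("bad_io", bad_io), ("limited", limited),
                           ("mode_not_normal", mode_not_normal)):
            if flag:
                self.counts[name] += 1

    def abnormal_conditions(self):
        """Return the conditions whose percent time exceeds the configured limit."""
        return [name for name, limit in LIMIT_PCT.items()
                if self.executions and 100.0 * self.counts[name] / self.executions > limit]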
It is possible that the status of all inputs to a control block is normal and
the mode of the block is correct and yet the control provided is poor. No
indication is included in the standard block defined by Fieldbus that
directly indicates this problem. However, statistical techniques exist that
allow the quality of control to be determined in a reliable manner. Leading
companies in the process industry use such techniques to evaluate the per-
formance of control loops [4.9]. Based on knowledge of the total and capability standard deviations of the control measurement, illustrated in Figure 4-17, it
is possible to compute a variability index for control measurements that
compares current control performance to the best achievable for the pro-
cess dynamics.
Figure 4-17. Capability and Total Standard Deviation for Process Variable
$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$   (4-2)
When loop performance is of concern, standard deviation alone may not provide sufficient information to evaluate loop tuning. To gain an objective judgment, a reference value for the process control performance is required. The theoretical best-performing feedback control is minimum variance control, with standard deviation Sfbc. This value may be calculated [4.9] directly from the total, Stot, and capability, Scap, standard deviations of the process measurement, as shown in Formulas 4-3 and 4-4.
$S_{fbc} = S_{cap}\sqrt{2 - \left(\frac{S_{cap}}{S_{tot}}\right)^{2}}$   (4-3)

$S_{cap} = \sqrt{\frac{\sum_{i=2}^{n}\left(X_i - X_{i-1}\right)^{2}}{2(n-1)}}$   (4-4)
Based on the value of the total standard deviation and the standard deviation for minimum variance control, it is possible to define a variability index for the control loop that reflects how close control performance comes to minimum variance.
$\text{Variability Index } VI = 100\left(1 - \frac{S_{fbc} + s}{S_{tot} + s}\right)$   (4-5)
where s is the sensitivity factor.
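Formulas 4-2 through 4-5 can be applied directly to a record of the process variable. The sketch below assumes a one-dimensional array of samples and a small sensitivity factor; it illustrates the published formulas rather than the embedded implementation.

import numpy as np

def variability_index(x: np.ndarray, s: float = 0.1) -> float:
    """Variability index (%) from the total and capability standard deviations (Formulas 4-3 to 4-5)."""
    n = len(x)
    s_tot = np.std(x, ddof=1)                                   # total standard deviation
    s_cap = np.sqrt(np.sum(np.diff(x) ** 2) / (2 * (n - 1)))    # Formula 4-4
    s_fbc = s_cap * np.sqrt(2.0 - (s_cap / s_tot) ** 2)         # Formula 4-3, best feedback control
    return 100.0 * (1.0 - (s_fbc + s) / (s_tot + s))            # Formula 4-5

For a loop that is already close to minimum variance, Scap approaches Stot and the index approaches zero.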
Finally, for control blocks, overall control performance is as follows:
The sums required for these statistics are accumulated each execution of the function blocks. The parameter values are then updated every n executions of the function block. For a typical implementation, the update is done after 120 executions of the function block.
where:

$MAE = \frac{1}{N}\sum_{t=1}^{N}\left| y(t) - \bar{y} \right|$   (4-8)
The measurement value is used in I/O blocks to calculate the mean value.
In control blocks, either the working set point or the measurement value is
used depending on block mode.
The relation between the standard deviation and the mean absolute error can be verified by computing the mean absolute value for the normalized Gaussian distribution, p(x):

$p(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}$   (4-9)

$\overline{|x|} = \int_{-\infty}^{\infty} |x|\, p(x)\, dx = \frac{1}{\sqrt{2\pi}}\left[-\int_{-\infty}^{0} x\, e^{-x^{2}/2}\, dx + \int_{0}^{\infty} x\, e^{-x^{2}/2}\, dx\right] = \frac{2}{\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}} \approx 0.7978845$   (4-10)

Thus, for a normally distributed signal, the total standard deviation can be estimated from the mean absolute error as Stot ≈ MAE/0.7979 ≈ 1.25 MAE.
$S_{cap} = \frac{MR}{1.128}$   (4-12)

where:

$MR = \frac{1}{N-1}\sum_{t=2}^{N}\left| y(t) - y(t-1) \right|$   (average moving range)   (4-13)
Only the summing components associated with the MAE and MR are computed each execution. The division of the sums by N or N-1 is done as part of the Stot and Scap calculations only once every N executions (default N=120). The capability standard deviation calculation requires that the sampling rate be fast enough. The requirements for the sampling rate are similar to those for the scan rate of a control loop; a practical estimate for control loops is sampling five or more times per time constant [4.9].
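The incremental update described above might be organized as in the following sketch: only running sums are kept each execution, and the statistics are finalized every N executions (N = 120 here). The class and the use of the Gaussian relation Stot ≈ MAE/0.7979 are illustrative assumptions, not the actual function block code.

class OnlineLoopStats:
    """Accumulate MAE and moving-range sums each execution; finalize every N executions."""

    def __init__(self, n: int = 120):
        self.n = n
        self.reset()

    def reset(self):
        self.count = 0
        self.sum_abs_err = 0.0        # for MAE -> Stot
        self.sum_moving_range = 0.0   # for MR  -> Scap (Formulas 4-12, 4-13)
        self.prev = None

    def update(self, y: float, target: float):
        # Called once per block execution; returns (Stot, Scap) every N executions
        self.count += 1
        self.sum_abs_err += abs(y - target)
        if self.prev is not None:
            self.sum_moving_range += abs(y - self.prev)
        self.prev = y
        if self.count == self.n:
            mae = self.sum_abs_err / self.n
            mr = self.sum_moving_range / (self.n - 1)
            s_tot = mae / 0.7979      # Gaussian relation from Formula 4-10
            s_cap = mr / 1.128        # Formula 4-12
            self.reset()
            return s_tot, s_cap
        return None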
Figure 4-18. Advanced Control Applications in a Process Area: Adaptive Tuning, Neural Network Control, Model Predictive Control, Multivariable Fuzzy Logic Control, Real Time Optimization, and Blending
Abnormal Inputs
For a single input control block such as the PID or FLC control block, the
status of the primary input (PV) of the block may be used to determine if
the input is abnormal. For example, if the PV status is uncertain or “bad,” the input is counted as abnormal.
where:
∀ = a logical sum (OR) of the i = 1, …, N conditions
Control Limited
Where advanced control is providing multiple process inputs, the control
interface to the process will be done through I/O or control blocks. Both
provide an output that will be used as a back-calculation input to the
advanced control application. If the control action taken is limited down-
stream, then this is reflected in the status of the associated back-calculation
input. Since the control objective will not, in general, be met if any of the
control outputs becomes limited, then the indication that control is limited
must consider the back-calculation input status associated with all control
outputs. This may be calculated as follows:
Incorrect Mode
For single input/output blocks, the status of the primary input, cascade
input, back-calculation input, and target mode must be used in determin-
ing achievable mode of operation. This mode of operation is reflected in
the mode parameter as the actual mode attribute. When a block is config-
ured, the customer may indicate the normal mode that the block is
designed to operate in. By comparing the actual mode attribute to the nor-
mal mode attribute for the block, it is possible to determine whether the
block is operating in its designed mode. Advanced control applications
may be engineered to use the standard status and mode definition. To cal-
culate mode, the status of all inputs used in the control and the status of all
back-calculation inputs will be utilized. Each output provided by the
advanced control application will be designed to support handshaking,
bumpless transfer, and windup prevention based on the back-calculation
inputs. Thus, mode will be treated exactly the same as other control blocks
in the control system. Incorrect Mode may be determined as follows:
Control Index
For single loop PID and FLC control, the variability index is calculated
based on the total and capability standard deviations calculated in the
control block. In an advanced control application involving multiple
inputs and targets, the measure of control performance must consider
each controlled input compared to its target value. For advanced control
applications, such as MPC or multi-variable fuzzy logic, this concept may
be extended to calculate an average index or minimum value of the index:
$CI_{A} = \frac{1}{L}\sum_{i=1}^{L} CI(i)$
Diagnostic Tools
Insight into the source of variation may be obtained using tools that sup-
port the calculation of power spectrum and cross correlation:
The power spectrum may be calculated from the Fourier series coefficients
as illustrated in Figure 4-19.
Figure 4-19. Power Spectrum Calculated from the Fourier Series Coefficients

$X(t) = \sum_{i=1}^{n}\left(A_i\cos(\omega_i t) + B_i\sin(\omega_i t)\right)$, where $\omega_i = \frac{2\pi\, i}{N\,\Delta T}$, $N$ = number of points collected, and $n = N/2$.

The power amplitude $P_i$ at frequency $\omega_i$ is $P_i = A_i^{2} + B_i^{2}$.
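In practice the Fourier coefficients are usually obtained with a fast Fourier transform. The sketch below computes the power at each frequency for a uniformly sampled record; the example signal and sample period are hypothetical.

import numpy as np

def power_spectrum(x: np.ndarray, dt: float):
    """Power P_i = A_i^2 + B_i^2 at each frequency w_i = 2*pi*i/(N*dt), i = 1..N/2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    coeffs = np.fft.rfft(x - x.mean())          # complex Fourier coefficients
    a = 2.0 * coeffs.real / n                   # cosine coefficients A_i
    b = -2.0 * coeffs.imag / n                  # sine coefficients B_i
    power = a**2 + b**2
    freq_hz = np.fft.rfftfreq(n, d=dt)
    return freq_hz[1:], power[1:]               # drop the mean (zero-frequency) term

# Example: a 0.05 Hz oscillation sampled every 1 s shows a power peak near 0.05 Hz
t = np.arange(0, 600, 1.0)
x = np.sin(2 * np.pi * 0.05 * t) + 0.1 * np.random.randn(len(t))
freq, power = power_spectrum(x, dt=1.0)
print(freq[np.argmax(power)])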
(Figure: cross correlation Cxy of X(t) and Y(t) plotted versus time shift k)

$C_{xy}(k) = \frac{\frac{1}{N}\sum_{i=1}^{N-k}\left(X_i - \bar{X}\right)\left(Y_{i+k} - \bar{Y}\right)}{\sigma_x\, \sigma_y}$, where $N$ = number of samples and $k = 0, 1, 2, \ldots, N-1$.
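The cross correlation can be computed directly from two sampled records, and the time shift at which it peaks gives an estimate of the delay between the signals. A minimal sketch, with hypothetical data:

import numpy as np

def cross_correlation(x: np.ndarray, y: np.ndarray, max_shift: int):
    """C_xy(k) for k = 0..max_shift, following the definition above."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    xm, ym = x - x.mean(), y - y.mean()
    sx, sy = x.std(), y.std()
    return np.array([np.sum(xm[: n - k] * ym[k:]) / (n * sx * sy)
                     for k in range(max_shift + 1)])

# Example: y lags x by 12 samples, so the maximum appears near k = 12
x = np.random.randn(2000)
y = np.roll(x, 12) + 0.2 * np.random.randn(2000)
c = cross_correlation(x, y, max_shift=50)
print(int(np.argmax(c)))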
The status and actual mode attributes used by the performance monitor-
ing system to calculate loop utilization, limited condition, and bad mea-
surement normally do not change in value. Thus, communication
requirements may be minimized by reporting parameters only on a
change in these attribute values [4.10], [4.11]. If this approach is taken in
the system design, then the communication load for reporting these
parameters will normally be close to zero. Performance statistics may be
calculated over a specified period of time, e.g., 120 executions of the con-
trol or measurement block, and then reported to the performance monitor-
ing application. Thus, these parameters are reported every 60 seconds for
a block with an execution rate of 0.5 seconds.
(Figure: scalable system with a main workstation server application and workstation client applications; process/performance monitoring function block (PM-FB) parameters in the scalable controllers are reported by exception)
When the server is first placed online, the current state of the required
attributes is reported once and subsequent updates are sent by exception
reporting.
The support of high-speed trends for diagnostics places special requirements on the control system design. Inaccuracies may be introduced by jitter and aliasing as a result of measurement sampling: the sample rate must be at least twice as fast as the highest frequency in the sampled signal. If this is not possible, then aliasing occurs, as illustrated in Figure 4-23.
(Figure: measurement values plotted assuming uniform sampling when there is variation in the time each sample is taken, illustrating jitter)

Figure 4-23. Measurement Values from Periodic Samples Plotted Assuming Uniform Sampling, Illustrating Aliasing
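The sampling requirement can be demonstrated numerically: a 0.4 Hz oscillation sampled at 0.5 Hz (less than twice the signal frequency) appears in the collected data as a much slower 0.1 Hz oscillation. The frequencies in the sketch below are hypothetical.

import numpy as np

signal_hz = 0.4                  # true oscillation frequency
sample_hz = 0.5                  # sampling rate below the 0.8 Hz requirement
t = np.arange(0, 100, 1.0 / sample_hz)
x = np.sin(2 * np.pi * signal_hz * t)   # plotting x shows a slow apparent oscillation

# The apparent (aliased) frequency folds back toward zero
alias_hz = abs(signal_hz - round(signal_hz / sample_hz) * sample_hz)
print(f"A {signal_hz} Hz signal sampled at {sample_hz} Hz appears at {alias_hz} Hz")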
Historically, the only way to collect high-speed data for diagnostics was to use dedicated tools attached at the terminal strip of the control system, as illustrated in Figure 4-24.
Figure 4-24. PC-Based Diagnostic Tool with I/O Temporarily Wired to the I/O File Terminations of the Control System
However, many Fieldbus devices introduced in the last few years support
the collection of high-speed data for diagnostics. Also, some modern con-
trollers allow measurement and control parameters to be collected in the
controller for diagnostic support. This trend information collected in the
Fieldbus device and controllers may be accessed without aliasing or jitter
and thus used within the control system for diagnostic support, as illus-
trated in Figure 4-25.
Figure 4-25. Trend Data Collected in Fieldbus Devices and in Trend Blocks in the Controller, Accessed by a Diagnostic Application over the Control System Communication Network
References
1. Harris, T., “Assessment of Control Loop Performance,” Can. J. Chem. Eng.,
1989, 67(10):856-861.
2. Desborough, L., and Harris, T.J., “Performance Assessment Measures for Univariate Feedback Control,” Can. J. Chem. Eng., 1992, 70:1186.
3. Desborough, L., and Harris, T.J., “Performance Assessment Measures for Univariate Feedforward/Feedback Control,” Can. J. Chem. Eng., 1993, 71:605.
4. Beaverstock, C. Malcolm, and Martin, Peter G., “Performance Control Apparatus and Method in a Processing Plant,” US Patent Number 5,134,574, July 28, 1992.
5. Rhinehart, R. Russell, “A CUSUM Type On-line Filter,” Process Control and Quality, 1992, 2:169-179.
6. Harris, T., Boudreau, F., and Macgregor, J. F., “Performance Assessment of
Multivariable Feedback Controllers,” Automatica, 1996, 32(11):1505-1518.
7. Huang, B., Shah, S.L., and Kwok, K.Y., “Good, Bad or Optimal? Performance
Assessment of MIMO Processes,” Automatica, 1997, 33(6): 1175-1183.
8. Qin, S.J., “Control Performance Monitoring – A Review and Assessment,”
NSF/NIST Workshop, New Orleans, March 6-8, 1998.
Practice
Overview
In modern control systems, expert system technology is playing an ever-
increasing role in assisting the operator in the detection and management
of abnormal situations in a plant. With the introduction of distributed con-
trol systems in the late 1970s, the basic control systems of many process
plants went through major changes in organization and operation. In
many cases, the introduction of distributed control allowed control func-
tions to be concentrated into a few control rooms. The traditional control
panels for operator interface to the process were replaced with keyboards
and monitors. There were few limits on the amount of information that
could be accessed and displayed at these operator stations. These systems
allowed an operator to make changes and see the process alarms associ-
ated with his area of responsibility. In some cases, the system was
designed to allow all information about the plant to be accessed from any
terminal within the system [5.1]. As a result of this technology change, and
the increasing pressure on companies to increase productivity, the scope of
control that an operator was responsible for changed dramatically.
In one pulp and paper mill, the introduction of a distributed control system allowed three control rooms to be consolidated into one and one operator to do the job formerly done by three [5.2]. In some process areas an operator is responsible for thousands of measurements and hundreds of motors and control loops, in addition to various subsystems in a process area. Thus, it has become increasingly difficult for an operator to be aware of all conditions in the plant. During normal operations, there is insufficient time for an operator to examine all measurements in his area.
Opportunity Assessment
Expert systems have been successfully applied in a variety of applications.
Within the process industry, there is significant benefit in using this tech-
nology for abnormal situation management. In assessing the potential
benefits of this technology, the following questions should be asked:
Examples
Alarm Screening
Expert systems are used in areas of the refining industry for abnormal sit-
uation management. One such use is in the prevention of alarm flooding.
For example, the regeneration unit of the hydrocracking process is vital to
plant operation and production. The interactive nature of the large num-
ber of measurements and control loops associated with this unit means
that a failure of one piece of equipment may result in the operator being
presented with many alarms: the original failure plus the alarms associ-
ated with measurements that the equipment affects. Under these condi-
tions, it is of great help to the operator if the alarm associated with the
failure is clearly presented and other alarms resulting from this event are
only logged, not presented to the operator. An expert system is used to
look for specific equipment failures and to automatically suppress other
alarms triggered by the failure. Under normal operating conditions, all
alarms would be active; the expert system only suppresses alarms when
specific operating conditions are detected.
When a piece of equipment in this unit trips, a number of other upstream and downstream alarms are also generated that would make it difficult for the operator to quickly identify the problem areas, as illustrated in Figure 5-1.
Fault Detection
An oil field may contain hundreds of wells. The early detection of an
abnormal condition such as blocked flow in the wellhead may avoid dam-
age to the associated pump. Because such conditions are often indicated
by a combination of measurement values, traditional value or deviation alarming cannot be used to alert the operator to them. However, by
using an expert system to monitor the wells, the conditions that indicate
abnormal operation are detected and brought to the operator’s attention.
To implement the expert system, facts would be defined for the measure-
ments that are included in a production control system. To detect blocked
flow, one rule would be written that examines the conditions that indicate
blocked flow; e.g., oil flow and the pump pressure. By using the variable
definitions in both the left and right portions of the expert rule, it is possi-
ble to monitor all the wells with one rule. As wells are added to or
removed from the system, the only change required would be to update
the facts; the rule would not have to be modified. When a blocked-flow
condition is detected, the rule is designed to write to parameters in the
control system that cause the detected condition to be alarmed and dis-
played at the operator interface. An example of how this would be dis-
played to the operator is shown in Figure 5-3.
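The structure of such a rule can be sketched in a few lines. The following is a simplified illustration in Python rather than an actual expert system shell; the facts, tag names, and limits are hypothetical, and the single rule is applied to every well in the fact list, as described above.

# Hypothetical facts: one entry per well, values read from the production control system
facts = [
    {"well": "W-101", "oil_flow": 2.0, "pump_pressure": 310.0},
    {"well": "W-102", "oil_flow": 55.0, "pump_pressure": 180.0},
]

# One rule covers all wells: low oil flow together with high pump pressure
# indicates blocked flow at the wellhead
def blocked_flow_rule(fact, flow_limit=5.0, pressure_limit=300.0):
    return fact["oil_flow"] < flow_limit and fact["pump_pressure"] > pressure_limit

def evaluate(facts):
    for fact in facts:
        if blocked_flow_rule(fact):
            # Right-hand side of the rule: write back to the control system so the
            # condition is alarmed and displayed at the operator interface
            print(f"Blocked flow suspected at {fact['well']}")

evaluate(facts)   # -> Blocked flow suspected at W-101

Adding or removing a well changes only the fact list; the rule itself does not have to be modified.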
Application
General Procedure
1. Identify the areas to be addressed by the expert system:
a. Select applications that have significant impact on plant
operations.
Application Details
The following details were developed for the application of expert sys-
tems for abnormal situation management. To achieve the best results, the
user should adhere to the guidelines specified in this section.