The Control Room Operator
When you enter a control room, the first thing you will notice is the quiet. The
work in control rooms during routine operations is silent. Once you shut the door,
the sounds of producing steel, food, or pharmaceutical products, refining oil, or
generating energy are kept outside. The tranquillity is intensified by the dim
lighting of the room, in which PC screens flicker in black, blue and green, showing
filigree displays of pipes, valves and numbers. Workers alone, in pairs or in teams
watch the displays arranged on one, two, three or more screens in a focused
manner, talk to each other in soft tones, pointing to a certain part of the
displayed plant, moving the computer mouse to a detail, perhaps altering a value. In
most control rooms I have visited, the outside world, the outside weather, the
technical construction of the production process, the converted materials, the
physical, chemical or biological process steps as well as the workers operating
the plant are viewed through the lens of the PC screens (Fig. 2.1).
On the surface, the job of a control room operator in routine situations does not
appear to be very spectacular. Compared to the jobs examined over the last century
by industrial psychologists and human factors and ergonomics specialists, with
their emphasis on physical ergonomics (anthropometric, biomechanical and
physiological factors, factors related to posture such as sitting and standing, and
the manual handling of material), a control room is clean, silent and tidy, and the
work in a control room does not require hard physical labour or coping with heat,
cold, dangerous substances, assembly-line time pressure or motor dexterity.
Nevertheless, process control plants are considered to pose notable challenges for
human factors research (Moray 1997).
A control room can be defined as a location designed for an operator to be in
control of a process (Hollnagel and Woods 2005). In the case of process industries,
the location is a physical room in a physical building (in contrast to a cockpit that is
moving). The meaning of control in this context is to minimise or eliminate
unwanted process variability; the process is a continuous activity. The process
has its own dynamics and hence changes if left alone (Hollnagel and Woods 2005).

Fig. 2.1 Control room at a German NPP (Photo courtesy of GfS/KSG, Essen, Germany)
The control room is a room with a view to the past, present and the future
(Hollnagel and Woods 2005). The view to the past is necessary to understand the
current situation, to build up expectations, and to anticipate what may lie ahead
(Hollnagel and Woods 2005).
Vicente’s (2007) and Vicente et al.’s (2004) description of a control room in a
Canadian NPP is a fairly representative example of a control room in general. The
control room for the plant has four control units (each controlling its own reactor).
A single operator runs each unit together with other personnel serving in support
roles. Each control unit occupies a demarcated workspace within a single, large
room that is completely open and has no barriers to visibility. The operator of each
unit can see the panels and alarms of all other units, allowing him/her to follow and
monitor activities on other units and maintain an overall awareness of plant
activity (Vicente 2007, p. 91). An example of a German NPP that illustrates
Vicente’s (2007) description is displayed in Fig. 2.2.
Not only in an NPP control room but also in the control rooms of refineries, the
units include control panels, an operator desk with one or more telephones, a
printer, and bookshelves on which to place procedure documents and other
operating documents. Alarms are presented on computer screens, which light up
and provide an audio signal (buzzer) if an alarm condition occurs. In many control
rooms, an operator monitors 3–4 screens placed on his desk, on which physical
schematics, trend displays, bar chart displays etc. are presented. In some systems,
the screens offer 1,000 detailed displays and 20 system-oriented overview displays
(Veland and Eikas 2007).
Fig. 2.2 NPP control room in Germany (Photo by GfS/KSG, Essen, Germany). Because the room
is windowless, the control room teams have hung up a poster with the outside view (in the back).
Files with standard operating procedures are on the shelves
Fig. 2.3 Coker plant in the Gelsenkirchen Horst refinery at night ([Link], retrieved April 8th 2013)
Fig. 2.4 Control room (with window) in a steel plant at HKM (Hüttenwerke Krupp Mannesmann)
(Photo courtesy of HKM Duisburg)
The telephones support information sharing (Grauel et al. 2012) between control
room operators and maintenance personnel in the plant for collaborative
troubleshooting.
In contrast, the control rooms for controlling continuous casting in the steel
industry are much closer to the production process, which is extremely hot, noisy
and dangerous for the workers, and which is not under moment-to-moment manual
control. Along the length of the process, there is a series of local control stations
for different tasks along the line (Moray 1997), and operators can directly see the
casting process and the molten steel. There is a superordinate control room,
located considerably above the floor of the plant, enabling the controller to directly
inspect and oversee the entire plant through its window (Figs. 2.4 and 2.5). In
Fig. 2.6, the window does not allow the process to be monitored, but does allow
the outside weather conditions to be monitored in order to proactively consider
weather impacts on the process.
The more the control room is isolated from the plant to be monitored and
controlled, the more the operator has to rely on the information presented by the
screens and displays. Non-transparency, as in the case when operators are isolated
from the operations being controlled, is also due to the keyhole effect (Woods
et al. 1990; Woods 1984). The operator might get lost in the large number (up to
thousands) of displays which he/she is able to call up, rendering him/her unable to
maintain a broad overview, and becoming disoriented, fixated or lost in the display
structure (Kim and Seong 2009; Woods et al. 1990).
Fig. 2.5 Control room in a steel plant (HKM) with window, casting operation (Photo courtesy of
HKM Duisburg)
Fig. 2.6 Control room at BP Gelsenkirchen/Ruhr Oel GmbH (Photo courtesy of BP
Gelsenkirchen/Ruhr Oel GmbH)
What do control room operators control? As introduced above, control room
operators control material and energy flows, with which the operator interacts via a
Human Machine Interface (HMI) that informs him/her about the states of the plant.
Only part of the relevant information is made available to the operator, who
controls the ‘outer-loop’ variables, for example by setting the set point for the
desired temperature of a blast furnace, whereas automated feedback loops control
the ‘inner loop’, for example by providing the amount of energy to the furnace
required to reach the desired temperature (Wickens and Hollands 2000). The
operator monitors the result produced by the automated process, adjusts the set
point as required and may “trim” the control characteristics for optimum efficacy
(Crossman 1974).
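This division of labour between the operator’s outer loop and the automated inner loop can be illustrated with a small simulation. The sketch below is purely illustrative and not taken from the book: the first-order furnace model, the proportional gain and all numbers are my own assumptions. The operator’s task corresponds to choosing (and later adjusting) the set point; the inner loop runs automatically.

```python
# Minimal sketch of set-point control: the operator sets the "outer-loop"
# variable (a temperature set point), while an automated "inner-loop"
# proportional controller decides how much heating power to supply.
# The toy furnace model and all constants are illustrative assumptions.

def simulate(set_point_c: float, steps: int = 500, dt: float = 1.0) -> float:
    temp = 20.0   # current furnace temperature (deg C), starts at ambient
    k_p = 0.8     # proportional gain of the inner-loop controller
    loss = 0.02   # heat-loss coefficient of the toy furnace model
    for _ in range(steps):
        error = set_point_c - temp          # inner loop: deviation from set point
        power = max(0.0, k_p * error)       # controller output (heating power)
        temp += dt * (0.05 * power - loss * (temp - 20.0))  # plant dynamics
    return temp

# Outer loop: the operator monitors the result and adjusts the set point.
print(simulate(set_point_c=900.0))  # settles below 900 due to steady-state offset
```

A purely proportional inner loop like this one settles with a steady-state offset below the requested temperature, which hints at why the operator may still have to “trim” the control characteristics or nudge the set point to reach the actually desired value.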
Additionally, the automated process might also be non-transparent in itself.
Although some process control plants involve rather simple operations with
comparatively transparent processes, such as baking or pasteurisation, other
industrial systems are among the most complex (interconnected, dynamic) systems
ever built, in which the physics and chemistry are only imperfectly understood and
in which unforeseen events can therefore occur under special conditions of
abnormal operations, with the risk of potentially catastrophic releases of toxic
material and energy (Moray 1997, p. 1945; Perrow 1984).
With regard to non-transparency in terms of the physical visibility of the process,
the process in an NPP is the least visible, followed by petrochemical refineries and
then steel production, which is assumed to be the most visible of the three (Moray
1987).
The combination of dynamic effects and non-transparency is also apparent in
the fact that the process variables that are controlled and regulated react slowly and
have long time constants (Wickens and Hollands 2000), leading to delayed
feedback with regard to the actions taken by the operator. The control action taken
may not produce a visible system response for seconds or minutes. In contrast,
dynamic effects and non-transparency can become immediately apparent in cases
in which a warning indicates the existence of a system failure. The warning can
quickly lead to an exponentially growing number of hundreds of subsequent
warnings which – although they transparently indicate a problem – taken together
lead to non-transparency in the current moment. As outlined by Wickens and
Hollands (2000), from the operator’s point of view, one warning alone is often not
interpretable. “This unfortunate state of affairs” (Wickens and Hollands 2000,
p. 530) occurs because the vast interconnectedness means that one primal failure
will drive conditions at other parts of the plant out of their normal operating range
so rapidly that within seconds or minutes, scores of warning lights and buzzers
create a buzzing-flashing condition. A severe failure in an NPP can potentially
cause 500 annunciators to change status in the first minute and more than
800 within the first 2 min (Wickens and Hollands 2000).
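The exponential character of such an alarm flood can be made concrete with a toy model. The sketch below is only an assumption for illustration (the plant graph, the fan-out of three and the time steps are invented); it merely shows how a single primal failure in an interconnected system produces rapidly multiplying alarms.

```python
# Toy alarm-cascade model: each disturbed component drives a few random
# neighbours out of their normal operating range in the next time step,
# so the set of active annunciators grows roughly geometrically.
import random

random.seed(1)
n_components = 1000
# Hypothetical plant topology: every component disturbs 3 random neighbours.
neighbours = {i: random.sample(range(n_components), 3) for i in range(n_components)}

alarmed = {0}  # one primal failure
for step in range(1, 5):
    alarmed |= {n for c in alarmed for n in neighbours[c]}
    print(f"step {step}: {len(alarmed)} annunciators active")
```

Real plants differ in topology and timing, but the qualitative lesson matches the figures cited above: the growth from one alarm to scores of alarms takes only a few propagation steps.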
Additionally, the human operator must simultaneously pursue multiple and
even contradictory objectives, so-called conflicting goals, such as achieving
production and safety goals in parallel (Kluge et al. 2008; Reason 2008; Verschuur
et al. 1996; Wickens and Hollands 2000). A human operator in a control room is
confronted with a number of different goal facets to be weighted and coordinated
(Funke 2010). As Crossman (1974) formulates it, what the operator is trying to
achieve is what the management wants him/her to achieve, and this represents the
characteristics of multiple goals. The operator
• has to keep the process running as closely as possible to a given condition
(regulation or stabilisation),
• has to adjust the process to give the best results according to criteria such as
yield, quality, minimum use of power, least lost time (optimisation),
• has to avoid breakdowns as far as possible,
• has to regain normal running as soon as possible, and minimise loss of material
or risk of serious damage if a breakdown has occurred (Crossman 1974, p. 7).
With regard to conflicting goals, Hansez and Chmiel (2010) address the general
problem that production and safety are often not valued equally in practice, for
example “the visibility of production over safety, imbalances in the resources
allocated to each, and the rewards available, such as praises or bonuses for
achieving production targets” (Hansez and Chmiel 2010, p. 268). Especially
when the pressure for production is on, there is potential for safety to be
compromised. Particularly in cases of non-routine/normal and abnormal situations
(see below), the operator is faced with the choice of what to do, taking three not
always compatible goals into consideration (Wickens and Hollands 2000):
1. Actions have to ensure system safety,
2. Actions should not jeopardise system economy and efficacy,
3. Actions should be taken that localise and correct the fault.
Goals might be incompatible because, for example, taking a plant offline to
ensure safety entails a potential sacrifice of economy, mainly because of the costly
loss of production while the plant is offline and the costly start-up of the plant after
a shutdown undertaken to localise the failure correctly and in a timely manner.
This shows that the growing technological potential is seized upon and exploited
to meet performance goals or efficiency pressures (Hollnagel and Woods 2005), for
example reduced production costs and improved product quality. But, once the
technology potential is exploited, this generally leads to an increase in system
complexity, subsequently leading to increased task complexity (Hollnagel and
Woods 2005; Perrow 1984). Increased system complexity together with an
increased task complexity results in more opportunities for malfunctions and
more cases in which actions have unexpected and adverse consequences (Hollnagel
and Woods 2005). Additionally, the striving for higher efficiency brings the system
closer to the limits of safe performance, which leads to a higher risk. In turn, higher
risks are countered by applying various kinds of automated safety and warning
systems, which in turn lead to an even greater risk (Hollnagel and Woods 2005).
Finally, in many HROs, small crews are responsible for overall system operations,
in terms of controlling multiple systems and decision making concerning system
functioning (Carvallo et al. 2005; Reinartz 1993; Reinartz and Reinartz 1992;
Vicente et al. 2004). In continuous process systems too, these systems are
controlled by multiple agents, such as the control room operators and plant floor
personnel (see Table 2.1).
Fig. 2.7 Field operators discussing issues with the control room crew (Photo courtesy of BP
Gelsenkirchen/Ruhr Oel GmbH)
Fig. 2.8 BP employee in the Emsland crude oil refinery on his tour during the night shift
([Link], retrieved April 8th 2013)
Table 2.1 Plant operations team roles (based on Bullemer et al. 1997)

Console operator: Is responsible for controlling the process via the DCS; monitors and controls
the plant; is responsible for coordinating the actions of field operators and for keeping abreast of
the maintenance activities in the field. He/she is the focal point of communication between the
various distributed operations personnel throughout the complex task, because he/she has the
central view and control via the DCS.

Field operator: Is responsible for his/her own plant area and is often also qualified for other areas
(to rotate between areas and monitor other areas); supports maintenance activities in the field;
serves as a human sensor who checks or validates the correctness of the sensors to ensure that the
view of the process is accurate. Field operators identify potential problems with the process
equipment, initiate preventive maintenance, take periodic product samples, prepare and warm up
equipment, and are responsible for directing maintenance personnel to the appropriate worksite.
In a disturbance, they are the first “on the scene” and play a critical diagnosis and mitigation role
by assessing the situation (e.g. confirming/refuting DCS data) or by taking actions (e.g. fire
fighting); they can also assist the console operator.

Shift leader: Is responsible for overseeing the field and console operators in the detailed
monitoring of the process and for ensuring the execution of the relevant preventive maintenance
(daily routine duties); is a senior operations staff member who is also in charge of the field,
e.g. noting equipment problems and verifying sensor readings; is responsible for filling out the
shift log book. During non-routine/normal and abnormal situations, the shift leader supports the
console operator and calls for backup.

Operations superintendent: Is responsible for the productive and safe operations of the complex
(a complex is typically run by multiple shift teams). Responsibilities: monitoring and reporting of
budget and costs, safety reporting and documentation, environmental compliance, incident
reporting, training, production reporting to upper plant management, and tracking and meeting
higher-level plant objectives.

Shift coordinator: Plays the role of operations team coordinator and management interface
between the operations superintendent and the operations staff.

Site planner: Is responsible for tracking possible market opportunities (e.g. high demand, high
prices, scheduled shipments, weather conditions) that may arise, along with planning maintenance
and turnarounds.

Process engineer: Is responsible for generating the daily production orders for each process unit
(developed by the site planner); troubleshoots process unit problems.

Control engineer: Maintains control tuning and objectives and develops improved control; often
troubleshoots process and control-related problems after operations have been stabilised by the
operators.

Maintenance coordinator: Is responsible for the coordination of maintenance activities for the
plant units; coordinates periodic preventive maintenance and requests put in by the operations
team; orders material; determines whether contractors need to be hired.

Maintenance technician: Is responsible for maintaining and repairing all process equipment.

DCS distributed control system
In an NPP, for example, a typical control room crew is comprised of two licensed
senior reactor operators, one of whom is the supervisor, and two licensed reactor
operators who share the duties of monitoring and controlling the plant (Gaddy and
Wachtel 1992, p. 383). Additionally, a shift technical advisor with an engineering
background is available but is not directly involved in the team (Gaddy and
Wachtel 1992, p. 383).
Control rooms are therefore called multi-agent systems (Woods et al. 1990).
Consequently, added to the features of technical complexity described so far is the
complexity of relationships, which is called social complexity (Dörner 1989/2003)
or crew coordination complexity, which results from the interconnectedness
between multiple agents through coordination requirements. The dynamic control
aspect of the continuous process is coupled with the need to coordinate multiple
highly interactive processes imposing high coordination demands (Hagemann
et al. 2012; Roth and Woods 1988; Waller et al. 2004).
A high level of communication between the multiple agents is required to
coordinate activities and to avoid working at cross purposes (Roth and Woods
1988, p. 54; Stachowski et al. 2009, see Fig. 2.7). The human operators who are
responsible for separate but strongly coupled units of the plant also need to be aware
of their own actions with regard to the consequences they will bring about in
another operator’s units. Breakdowns in coordination across these units of respon-
sibility may contribute to unnecessary trouble, near shutdowns or complete shut-
downs (Roth and Woods 1988, p. 59).
If one looks, for example, at refineries, the console operator, who is controlling
the process via the distributed control system (DCS), works as a team member in a
plant operation team (Bullemer et al. 1997). The plant operation team in refineries
and petrochemical plants consists of several plant roles as listed in Table 2.1. A
prototypical operations shift team consists of a shift leader, a console operator, and
two to five field operators (Bullemer et al. 1997). During the weekdays, many
maintenance projects are going on, and the engineers, craftsmen, and management
personnel are all available to interact with the shift team.
To sum up, the characteristics of a complex technical system are listed in Table 2.2.
A complex technical system is characterised by the interconnectedness of a large
number of variables and system parts, in which variables can change dynamically in
terms of their own state, and in which structure and dynamics of the system are only
partly disclosed to the operator (non-transparency), who is confronted with multiple
goals that need to be weighted and coordinated (conflicting goals), and who has to
coordinate his/her activities with other interconnected agents (crew coordination
complexity).
At a higher level of abstraction, the mental model requires thinking about the
appropriate balance between mass and energy. Finally,
the mental model must enable thinking on an even more abstract level, defined in
terms of concepts like plant safety, human risk and company profits (Wickens and
Hollands 2000). These thoughts will be taken up in Chap. 3 and taken a step further
for the derivation of knowledge and skill requirements.
After having described the physical workplace as well as the plants which are
usually controlled in process control, in the following, we look at what a control
room operator does.
As Woods et al. (1987) describe, one of the earliest processes under human control
was the making of and tending to fire: “Those responsible for a fire had to add
chunks of wood of an acceptable size and condition, at the correct time and in the
proper amount, to maintain the fire so that heating and cooking could take place”
(Woods et al. 1987, p. 1725). Control of this process was considered to be an art,
relying on the operator’s skills to sense process conditions directly and to perform
appropriate control actions in order to adapt to the requirements. Over time, and
affected by industrialisation, processes became larger and products and processes
had to meet predefined standards, leading to the introduction of regulators or
feedback controllers and a decrease in the direct sensing and experiencing of
process states. The human operator has progressed from direct sensing and control
of the process (the fires) to the situation in control rooms today, which is
characterised by indirect knowledge of the process through instruments fed by
sensors and computed measurements and computer control of most elements of
the process (Woods et al. 1987).
In Table 2.3, the operator’s tasks in process control are listed and grouped
according to the categorisation introduced by Ormerod et al. (1998) for task
analysis; the tasks are based on the work of Kragt and Landweert (1974), Woods
et al. (1987), Moray (1997, p. 1948), Wickens and Hollands (2000) and Vicente
(2007), as well as on our own interviews in continuous process industries (Kluge
et al. 2008).
I personally often find it very helpful to contrast the activity one specifically
wants to look at with another activity in order to clarify the differences, for
example to compare the tasks of a control room operator with the tasks upon which
industrial and organisational psychology has concentrated over the decades,
namely mass production. In comparison to work in mass production, control room
operators do not work according to a definite work cycle, there is usually no need
for physical exertion and no emphasis on speed, and, because of the continuous
flow of production, it is inappropriate to apply financial incentive schemes based
on piecework measurement (Crossman 1974). Although the operator’s tasks are
less physically effortful, the mental effort occasionally increases during start-ups,
shutdowns and breakdowns.
Table 2.3 The operator’s tasks grouped according to sub-goal template method categories
Monitoring
During normal operation, the process must be monitored.
Decision
Disturbances must be detected and their consequences must be predicted.
Any such disturbances must be counteracted.
If faults occur, they must be detected.
Diagnose process problems: the causes of faults must be diagnosed.
Appropriate countermeasures to control the effects of the faults must be selected.
Communication
Read: operating procedures must be consulted as needed.
Receive information/read: databases of information about possible options may need to be
consulted.
Record: a record must be kept of significant events.
Give information: significant events must be communicated to other members of the crew and
where appropriate to management and maintenance, so that operations may be coordinated and
required maintenance operations are undertaken at appropriate times.
Action
Scheduled testing of routine equipment to ensure that backup and safety systems are in an
acceptable state.
Changes may be made to the system either during normal or abnormal operations in the light of
observations of the system state in order to prevent or compensate for drifts and faults.
Changes may be made manually or by changing the program of automated controllers.
Perform emergency shutdown or other control actions to avoid dangerous accidents, or cooperate
with automated system for this purpose.
Combining action and communication
Special actions may be needed during the handover at the end of the shift, or during special
conditions such as start-up or shutdown.
Combining monitoring and action
Appropriate strategies must be adopted to support both safety and productivity.
Introduce long-term changes and adjustments to the system so that it will tend to evolve toward a
more efficient system.
Combining monitoring, action and communication
After detecting disturbances or irregularities, the operator asks (calls) a maintenance worker
(on the telephone) to go to a particular component of the plant for a special inspection and to
give feedback.
Skill maintenance (a)
Undertake training and retraining to ensure the retention and improvement of skills.
Take a walk through the unit to maintain a “process feel” by directly observing plant components
(if applicable, Fig. 2.8).
(a) Skill maintenance is not included by Ormerod et al. (1998) but is listed in several publications
Due to the greater distances between workplaces and the remote control, the
operator is under less close supervision, for example by supervisors, but has more
direct contact with technical staff and managers, who ask for status information
about the plant in order to integrate the activities of many people at many levels of
the plant, from management to maintenance workers (Moray 1997). Shift work is
common because of the high financial costs to the plant or the waste of material
involved if the plant is shut down, for example during the night or at weekends.
This also means more
responsibility for the operators on night shifts when the engineering staff are less
available on site (Crossman 1974).
Digression: Macroergonomics – task-relevant differences in process industries
The list of tasks for which the operator is responsible includes monitoring and
controlling, in terms of action taking. But what does the operator actually control
when “everything is automated”? In this digression, I would like to describe the
particularities of production in the process industry, which in turn provide
important hints regarding knowledge and skill acquisition and the subsequent
development of training, because here, fine differences can be highly relevant to
training.
The process industries range from continuous facilities in the petrochemical
industry (Fig. 2.9) to large-batch manufacturing in steel production and glass
manufacturing, to small-batch manufacturing in the food and pharmaceutical
industry (van Donk and Fransoo 2006). Process industries share the characteristic
that they handle non-discrete materials (Dennis and Meredith 2000b). “Process
industries are businesses that add value to materials by mixing, separating, forming,
or chemical reactions. Processes may be either continuous or batch (bold type
added by author) and generally require rigid process control and high capital
investment” (Wallace 1984, p. 28). Process industries often initiate their flows
with only a few raw materials and subsequently carry out a variety of blending and
resplitting operations, which means that many products are produced from a few
kinds of raw material (Fransoo and Rutten 1994, p. 49).
The mixing, separating, forming and chemical reactions are operations that are
usually performed on non-discrete products and materials. Commercial chemical
processing involves chemical conversions and physical operations, and operators
have to run the process in such a way that the plant is also kept from corroding
(Austin 1984), which is why maintenance and servicing play a very important role
in these processes.
These processes can only be performed efficiently using large installations, as
introduced above, which tend to be an immense investment. If large quantities are
demanded, this justifies continuous production. If demand is low, the investment
in a large installation is not worthwhile, and batchwise production is used (Fransoo
and Rutten 1994).
Harmful impurities in raw materials must be controlled and product purities
monitored (Austin 1984). Materials might take the form of gases, liquids, slurries,
pulps, crystals, powders, pellets, films, and/or semi-solids, which can only be
tracked by weight and volume (Dennis and Meredith 2000a). Process industries
often obtain their raw materials from the mining or agriculture industries (Fransoo
and Rutten 1994). These raw materials have natural variations in quality; for
example, crude oils from different oil fields have different sulphur contents and
different proportions of naphtha, distillates, and fuel oils (Figs. 2.10 and 2.11).
Production plans and operating schedules need to account for this variability
(Dennis and Meredith 2000a).
Fig. 2.9 BP operates the second largest refinery system in Germany (pictured: cracker plant of
the Ruhr oil refinery in Gelsenkirchen; [Link], retrieved April 8th 2013)
Fig. 2.10 In the aromatics and olefin plant of the Ruhr oil refinery in Gelsenkirchen, e.g. plastic
is produced ([Link], retrieved April 8th 2013)
Fig. 2.11 In the distillation plant in the refinery, crude oil is further processed, e.g. into petrol
([Link], retrieved April 8th 2013)
Second, material variability associated with natural raw materials results in
uncertainty about the yield and potency until the process has started, for example
in the chemical industry. Yield is the fraction of raw material recovered as the
main or desired product (e.g. in the synthesis of ammonia, the yield is above
approx. 98 %), and conversion is the fraction changed into something else, for
example by-products or other products (Austin 1984); for instance, the conversion
of ammonia is limited to about 14 % per pass, which means that 86 % of the charge
does not react and must be recirculated. Conversion is also used to indicate the
amount changed by a single pass through a technical subsystem when multiple
passes are used (Austin 1984).
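The difference between per-pass conversion and overall yield becomes tangible with a little arithmetic. The snippet below is only a worked illustration of the numbers quoted above; a constant 14 % per-pass conversion with full recirculation of the unreacted charge is an idealising assumption:

```python
# With 14 % conversion per pass, the unreacted fraction after n passes is
# (1 - 0.14) ** n; recirculation is what drives the cumulative conversion up.
per_pass = 0.14
unreacted = 1.0
for n in range(1, 6):
    unreacted *= (1.0 - per_pass)
    print(f"after pass {n}: {1.0 - unreacted:.1%} of the original charge converted")
# after pass 1: 14.0 %; after pass 5: about 53.0 %
```

This is why a low per-pass conversion does not contradict a high overall yield: the recycle loop gives the unreacted charge further chances to react.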
The variability in the quality of raw materials might determine which products
will be produced (Rice and Norback 1987). Variations in raw material quality, for
example in moisture content, acidity, colour, viscosity or the concentration of the
active ingredient, can also lead to variations in the recipes used for production, for
example in the food industry.
Fig. 2.12 Typology for process industries by Fransoo and Rutten (1994, p. 52)
Table 2.4 Characteristics of process/flow versus batch/mix industries (Fransoo and Rutten 1994,
p. 53)

Process/flow businesses:
• High production speed, short throughput time
• Clear determination of capacity, one routing for all products, no volume flexibility
• Low product complexity
• Low added value
• Strong impact of changeover times
• Small number of production steps
• Limited number of products

Batch/mix businesses:
• Long lead times, much work in process
• Capacity is not well defined (different configurations, complex routings)
• More complex products
• High added value
• Less impact of changeover times
• Large number of production steps
• Large number of products
In batch/mix businesses, although the same general type of equipment is used for a
range of products, routings are more diverse. Series of installations are rebuilt and
reconnected to make a certain type of process possible (retrofitting), lead times are
longer and the work in progress is higher (Fransoo and Rutten 1994). Typically,
batch processes are used to manufacture a large number of different products, with
a number of grades with minor differences. Frequent product and process changes
are a constituent characteristic of batch/mix processes, which allow relatively
flexible process adjustments (ASM Consortium 2012).
Austin (1984) explains that early chemical processing was usually done in
batches, and much continues to be done in that way. With only some exceptions,
continuous processes require smaller, less expensive installations and less material
in process than batch processes, and have more uniform operating conditions and
products (Austin 1984). Continuous processes require precise control of flows and
conditions, for which computer control has proven to be most valuable (Austin
1984). Small quantities of chemicals are usually made by batch/mix processes.
When markets enlarge, operations change to continuous processing, as the
reduction in plant costs per unit of production is often the major force behind the
change. In summary, process/flow and batch/mix industries are contrasted in
Table 2.4.
End of digression
When employing the term complex task, I was confronted with the issue of working
out the central features of a complex task from the psychological literature of
cognitive psychology, cognitive engineering psychology and human factors,
because the term complex task is predominantly used without a clear definition.
Frequently, the terms complex task and complex skill are also used synonymously
(e.g. Lee and Anderson 2001).
For this book, which addresses issues of knowledge and skill acquisition in an
applied organisational setting for HROs, the definition of a complex task from an
instructional perspective by Sweller (2006) is additionally valuable. A complex
task, as defined by Sweller (2006), is characterised by a single construct called
“element interactivity”. An element is assumed to be everything that needs to be
understood or learned (Sweller 2006, p. 13), for example the parts and elements of
a refinery as well as the chemical processes involved.
To understand the meaning of element interactivity, it is helpful to briefly
address mental models here. As briefly introduced above, these are generally used
to describe a person’s representation of some physical system, and are based on an
analog representation of causal relationships and interactions between plant
components. Mental models are defined as “mechanisms whereby humans are able
to generate descriptions of system purpose and form, explanations of system
functioning and observed system states, and predictions of future states” (Rouse and
Morris 1985, p. 7; Endsley 2006). As will be explained in Chap. 3, mental models
play a fundamental role in controlling complex technical systems (e.g. Kragt and
Landweert 1974; Wickens and Hollands 2000), because performance in an
organisational context is supposed to be goal-directed (see above “conflicting
goals”), for example goals such as production maximisation with the least possible
resources. Mental models can help to internally visualise performance strategies
and their consequences in relation to the organisational goals. Mental models
embody stored long-term knowledge about the system represented, which can be
called upon to direct action, for example in non-routine/normal and non-routine/
abnormal situations (see below).
When the concern is with acquiring mental models, if the elements that need to be
understood and learned, for example the process in a refinery unit, interact greatly
with each other, they have to be processed and considered simultaneously. In cases
of high element interactivity, the demands therefore exceed the limits of human
working memory capacity (Sweller 2006). Working memory holds only the most
recently activated, or conscious, portion of long-term memory, and it moves these
activated elements in and out of brief, temporary memory storage (Dosher 2003;
Sternberg 2009).
The complexity in terms of high element interactivity is not synonymous with
task difficulty, although it does affect task difficulty. According to Sweller (2006),
for instance, for an apprentice in a refinery, learning a large number of chemical
elements in the periodic table is probably difficult in the sense that it is effortful,
because many elements must be learned. However, it does not involve high element
interactivity, as the elements do not need to be considered simultaneously, and it is
therefore not a complex task.
Furthermore, a complex task according to Fisch (2004) needs to be distinguished
from a complicated task. Playing chess is a complicated task, because one has to
learn and apply the rules for each piece in the game, but it is not considered
complex, as it is
• not characterised by non-transparency and is in turn considered as transparent
(the playing field is visible to everyone, the number of figures is clearly defined,
the rules are known by both players in advance),
• not characterised by interconnectivity (the rule on how the knight is allowed to
move does not depend on where the queen is, nor does it change because a pawn
has been eliminated) and is
• not characterised by dynamic effects (the chess figures do not move around of
their own accord while the player is still thinking about his next move).
Looking at the manifold occupations in HROs, it becomes clear that there is no such
thing as “the” complex task. One complex task, such as process control, can be
quite different from another complex task, such as piloting.
What we can say overall, as a generalised lowest common denominator across
different applications of complex tasks, is that a complex task is composed of
various part-tasks. This does not emerge explicitly from the precise definition of a
complex task, but rather implicitly from the descriptions above as well as from the
training approaches examined to date, in which a distinction was drawn between
part-task and whole-task training (e.g. Patrick 1992). One assumes that a complex
task (as a whole task) can be broken down into parts, for example by means of a
task decomposition (Frederiksen and White 1989).
A part-task frequently consists of several steps or sequences. Mostly, the
part-tasks are performed in parallel and have to be integrated into a joint flow of
action. Coordination of the part-tasks ensues through attention selection, attention
switching and attention sharing (Wickens and McCarley 2008). Finally, in HROs,
which form the focus of this book, workers performing complex tasks work in
teams and also have to coordinate and orchestrate their individual tasks into an
interdependent team task (Roth and Woods 1988), as outlined in the section on
collaborative complex problem solving (Sect. 4.4.1) in non-routine/abnormal
situations. The characteristics of a complex task are listed in Table 2.5.
In summary, a complex task can be decomposed into part-tasks that include
sequences of steps, which need to be integrated and coordinated based on
attentional processes, and which need to be orchestrated, based on the simultaneous
processing of knowledge elements (mental model), into an interdependent team
task in order to meet the organisational goals.
In the following chapter, the concern is with the situational conditions under
which the control room operator performs his or her tasks. These situational
conditions, the routine, non-routine/normal and non-routine/abnormal situations,
belong on the one hand to organisational and task analysis (see Preface), but
equally provide indications of which conditions need to be considered for transfer,
which are in turn important for the derivation of training objectives and evaluation
criteria.
In this book, I will distinguish between routine and non-routine situations as well
as between non-routine/normal and non-routine/abnormal situations, where in the
latter case it is no longer possible to continue operating a plant using normal
procedures (Fig. 2.13). Although widely used, the terms routine, non-routine,
normal and abnormal are not well defined in the human factors and ergonomics
publications.
Based on the often-used distinction between the two poles of routine and
non-routine/abnormal situations, process control tasks are characterised as “hours of
intolerable boredom punctuated by a few minutes of pure hell” (Wickens and Hollands
2000, p. 517), or “99 % boredom and 1 % sheer terror” (Vicente et al. 2004, p. 362).
The “hours of intolerable boredom” (although a little overstated) are seen as the
times in which the human operator is monitoring a plant that is automatically
controlled. This is the routine situation: routine control and regulation of the
process, which is well handled by Standard Operating Procedures (SOPs). The
“pure hell” refers to the task of timely detection, diagnosis and corrective action in
situations in which infrequent malfunctions occur that can either be fixed by using
SOPs (non-routine/normal) or for which operators have no procedures at hand
(non-routine/abnormal).
In terms of deriving strategies for learning and instruction later on, it is relevant
to distinguish routine from non-routine tasks as well as normal from abnormal
situations as the conditions under which the operator has to perform his/her tasks
(Fig. 2.13).
Conditions for knowledge and skill application in routine situations
Routine situations as defined by Wickens and Hollands (2000) require normal
control and regulation of the process which is well handled by Standard Operating
Procedures (SOPs). Normal situations include tasks such as process monitoring, or
scheduled testing of routine equipment. Routine tasks involve rule-based behaviour
(Rasmussen and Jensen 1974; Rasmussen 1990). Most of the time, routine
situations occur, in which the automation works well and the process is well
handled by the operator through SOPs. The main task is to monitor system
instruments and periodically adjust control settings to maintain production
quantities within certain boundaries (Reinartz 1993; Wickens and Hollands 2000).
In this book, routine stands for a property of the task, in the sense of the frequency
with which it is performed. Routine therefore stands for the number of repetitions
per day, week or year. Moreover, routine stands for a defined, unchanging process.
Additionally, from an organisational point of view, Ahuja and Carley (1999) define
the degree of routineness as a function of the extent to which the task contains no
or low variety (Perrow 1967) and a small number of exceptions over time (Daft and
Macintosh 1981), and therefore represents predictability and sameness (Ahuja and
Carley 1999). Organisational routines in terms of SOPs develop in response to
recurring questions (Gersick and Hackman 1990).
Non-routine/abnormal situations are situations for which operators have no
procedures at hand (a dramatic example is the case of the tsunami that swept over
the NPP of Fukushima). In such cases, knowledge-based behaviour is required
(Rasmussen and Jensen 1974; Rasmussen 1990), which expresses itself in complex
problem solving (Funke and Frensch 2007; Fischer et al. 2012; Reinartz 1993) and
dynamic decision making
(Brehmer 1992). An abnormal situation is considered to be a problem because the
human operator has several goals (see definition of “multiple goals” above) but
does not know how these goals can be reached. If the operator cannot go from the
given situation to the desired situation simply by predefined actions (e.g. SOPs),
“there has to be a recourse to thinking” (Duncker 1945, p. 1; Fischer et al 2012).
Based on the work by Brehmer (1992) and Edwards (1962), dynamic decision
making (DDM) “has been characterized by multiple, interdependent, and real-time
decisions, occurring in an environment that changes independently as a function of
a sequence of actions” (Gonzales et al. 2003, p. 591).
In this book, abnormal situations are what Stachowski et al. (2009, p. 1536) and
Gladstein and Reilly (1985), in line with Hermann (1963), define as a “crisis
situation”, which is (a) ambiguous and includes (b) unanticipated major
(c) threats to system survival coupled with (d) limited time to respond (Hermann
1963). Non-routine/abnormal tasks are less predictable and require creativity
(Ahuja and Carley 1999). Abnormal situations “are low-probability, high-impact
events that threaten the reliability and accountability of organizations and are
characterized by ambiguity of cause, effect, and means of resolution”
(Yu et al. 2008, p. 452 based on Pearson and Clair 1998). They are unusual,
out-of-the-ordinary, or atypical (Weinger and Slagle 2002, p. 59). Ambiguity is
correlated with uncertainty, incomplete and noisy information (Vicente et al. 2004).
Grote (2009) distinguishes between several types of uncertainty, such as:
• Source of uncertainty: Incomplete information, inadequate understanding,
undifferentiated alternatives
• Content of uncertainty: State uncertainty, effect uncertainty, response
uncertainty
• Lack of control: Lack of transparency, lack of predictability and lack of
influence.
The main problem in this respect is that when the system state is uncertain
(Vicente et al. 2004), it is unclear which SOPs even apply and, if there is no SOP,
which actions lead to a suitable solution.
Looking at the disasters and accidents of the past few years, such as the
“Deepwater Horizon” in 2010 and Fukushima in 2011, it becomes clear that such
non-routine/abnormal situations contain the aforementioned uncertainties, which
can also occur simultaneously. A dramatic example of these requirements is provided
by the disaster management in Fukushima in 2011. The plant personnel had to
handle the situation with “loss of all the safety systems, loss of practically all the
instrumentation, necessity to cope with simultaneous severe accidents on four
plants, lack of human resources, lack of equipment, lack of light in the installations,
and general conditions of the installation after the tsunami and after damage of the
fuel resulted in hydrogen explosions and high levels of radiation” (IAEA Report
2011, p. 43).
Table 2.6 Summary and delimitation of the terms routine, non-routine/normal and non-routine/
abnormal situation

Routine situations: Require routine control and regulation of the process; based on rule-based
behaviour; the situation is well handled by Standard Operating Procedures (SOPs). E.g. “daily
business” such as plant monitoring and control.

Non-routine/normal situations: Require drawing on skills which have not been used for a longer
period of time; rule-based behaviour; the situation is well handled by Standard Operating
Procedures (SOPs). E.g. “exceptional business” such as fault repair or start-up of the plant, which
is still rule-based behaviour.

Non-routine/abnormal situations: Require problem-solving skills and knowledge-based
behaviour. The situation is (a) ambiguous and includes (b) unanticipated major (c) threats to
system survival coupled with (d) limited time to respond. E.g. a low-probability, high-impact
situation such as an explosion in a subunit of the plant caused by a safety-related rule violation,
or natural disasters such as earthquakes or tsunami.
Table 2.7 Operational modes and critical systems perspective defined by the ASM (Bullemer and
Laberge 2010)

Emergency mode – Plant state: disaster; critical systems: area emergency response system;
operational goal: minimise impact; plant activities: fire fighting. Plant state: accident; critical
systems: site emergency response system; plant activities: first aid rescue.

Abnormal mode – Plant state: out of control; critical systems: physical and mechanical
containment system, safety shutdown, protective systems, hardwired emergency alarms;
operational goal: bring to safe state; plant activities: evacuation. Plant state: abnormal; critical
systems: DCS alarm system, decision support system, process equipment; operational goal:
return to normal; plant activities: manual control & troubleshooting.

Normal mode – Plant state: normal; critical systems: DCS, automatic controls, plant management
systems; operational goal: keep normal; plant activities: preventative monitoring & testing.

DCS distributed control system
and performance level (Bjork and Bjork 2006; Burke and Hutchins 2007; Farr
1987).
The distinction between normal and abnormal is equally a psychological one and
refers not to the plant state (as in the ASM or IAEA definitions in Tables 2.7 and
2.8), but rather to the familiarity to the human operator. It refers to whether a task
has, in principle, already been trained and executed and whether there is an SOP
which one could use (= normal), which requires a so-called temporal transfer, or
whether there was no training for this task and also no SOPs (= abnormal), which
then requires an adaptive transfer (Kluge et al. 2010).
From a continuous flow operations perspective (e.g. that of refineries and
petrochemical plants), the distinction between normal and abnormal is a different
one and is drawn in terms of plant states, critical systems, operational goals and
plant activities, as displayed in Table 2.7.
The consequences of abnormal situations, for example in a chemical plant,
depend on the nature of the materials, for example hazardous vs. non-hazardous
chemicals, solids, liquids or gases; flammable vs. non-flammable substance being
processed (ASM Consortium 2012). The definition in nuclear safety is different
(IAEA 2007) and deviates from the ASM definition. The IAEA (2007)
distinguishes between “operational states” and “accident conditions” (Table 2.8).
Normal operation in an NPP is defined as operation within specified operational
limits and conditions, which includes start-up, power operation, shutting down,
maintenance, testing and refuelling. Accident conditions are defined as deviations
from normal operation that are more severe than anticipated operational
occurrences, including design basis accidents and severe accidents, for example a
major fuel failure or a loss of coolant accident. Accident management includes
prevention of escalation of the event into a severe accident, mitigation of the
consequences of a severe accident and achieving a long-term safe and stable state,
and is defined as the taking of actions during the evolution of a beyond design
basis accident (IAEA 2007, p. 145).
Table 2.8 Plant states defined by the IAEA (2007) for NPPs

Operational states:
• Normal operation: operation within specified operational limits and conditions (includes
start-up, power operation, shutting down, maintenance, testing and refuelling)
• Anticipated operational occurrences (a): the operational process deviates from normal
operations, which is expected to occur at least once during the operating lifetime of a facility but
which, in view of appropriate design provisions, does not cause any significant damage to items
important to safety or lead to accident conditions (e.g. loss of normal electrical power, faults such
as a turbine trip, malfunction of individual items of a normally running plant, failure of function
of single items of control equipment, loss of power to the main coolant pump)

Accident conditions:
• Within design basis accidents: design basis accidents (against which a facility is designed and
for which the damage to the fuel and the release of radioactive material are kept within authorised
limits), as well as accidents that are not design basis accidents but are encompassed by them
• Beyond design basis accidents: severe accidents (more severe than design basis accidents), as
well as beyond design basis accidents without severe accidents

(a) Some organisations use the term abnormal situation instead of anticipated operational
occurrences (IAEA 2007, p. 145)
In summary, this means that the terms routine, non-routine, normal and
abnormal are viewed and defined differently in the human factors and the
operations perspectives, depending on the respective branch. In this book, the
starting point is the consideration of the required knowledge and skills and of the
situations and conditions under which they need to be applied.
To give some examples and an outlook on the coming chapters: it is important
that, as a training designer, one is, or becomes, aware of what routine, non-routine/
normal and non-routine/abnormal situations are for the organisation for which the
training is conceived. Which SOPs exist? Which processes are rather frequent, and
which rather rare? In batch/mix processes, the start-up, for instance, is more
routine than in continuous/flow industries. Which tasks are performed every day,
every week, or only once a year or once every 10 years? And what serious
consequences can arise if a procedure is not correctly mastered?
Answers to these questions and the distinction between routine, non-routine/
normal and non-routine/abnormal are important, for example, in order to later
conduct a so-called DIF analysis (Difficulty-Importance-Frequency analysis;
Buckley and Caple 2007), which, in turn, is important in order to define the
training method, duration and repetition (see Chaps. 4 and 5).
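To make the idea concrete, here is a hypothetical sketch of how a DIF-style prioritisation could be computed. The rating scales, the example tasks and the scoring formula are my own illustrative assumptions, not part of Buckley and Caple’s (2007) description of the method:

```python
# Each task is rated for Difficulty, Importance and Frequency (1-5 scales,
# assumed here). Tasks that are difficult and important but rarely practised
# score highest and are prime candidates for (refresher) training.

tasks = {
    # task: (difficulty, importance, frequency; high frequency = performed often)
    "monitor process values":        (1, 3, 5),
    "start up plant after revision": (4, 5, 1),
    "emergency shutdown":            (5, 5, 1),
}

def training_priority(d: int, i: int, f: int) -> int:
    return d * i * (6 - f)  # rarity enters as the inverted frequency rating

for name, (d, i, f) in sorted(tasks.items(),
                              key=lambda kv: -training_priority(*kv[1])):
    print(f"{training_priority(d, i, f):3d}  {name}")
```

Run as is, the rare but critical procedures (emergency shutdown, start-up) come out on top; note that the same start-up task would receive a higher frequency rating, and thus a lower priority, in a batch/mix plant than in a continuous/flow plant, in line with the point made above.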
Moreover, from the distinction between routine, non-routine/normal and
non-routine/abnormal, it can be derived under which mental workload conditions
an operator has to perform his/her task. Waller et al. (2004) assume routine tasks
to be moderate-workload and non-routine tasks to be high-workload situations.
Additionally, I assume non-routine/abnormal situations to be situations with high
mental workload under stress. Therefore, the answers to the question of what
non-routine/normal and non-routine/abnormal situations are also need to be used
to consider particular training methods such as stress exposure training (Driskell
and Johnston 1998; Driskell et al. 2008; see Chaps. 4 and 5).
In addition to the cognitive aspects of dealing with abnormal situations on a
knowledge-based level as introduced above, the handling of abnormal situations
requires coping with high stress. The purpose of Stress Exposure Training based on
Driskell et al. (1998, 2001, 2008) is to provide the operator with the skills and tools
necessary to maintain effective performance when operating in high-stress situa-
tions (Salas et al. 2006). This training is especially important when the conse-
quences of errors are high, as stress increases the likelihood of errors.
After “setting the scene” by introducing and describing complex technical
systems, the task, duties and responsibilities of operators and operator crews and
conditions under which performance has to be shown, in Chap. 3, I go into detail
regarding the aspects which I have so far only touched on by way of example, by
deriving knowledge and skills that need to be acquired for performing complex
tasks in routine, non-routine/normal and non-routine/abnormal situations.
References
Ahuja, M. K., & Carley, K. M. (1999). Network structure in virtual organizations. Organization
Science, 10, 741–757.
Arthur, W., Bennett, W., Stanush, P. L., & McNelly, T. L. (1998). Factors that influence skill
decay and retention: A quantitative review and analysis. Human Performance, 11, 57–101.
ASM Consortium. (2012). Process factors. Retrieved November 12, 2012, from [Link]
Austin, G. T. (1984). Shreve’s chemical process industries (5th ed.). New York: McGraw-Hill.
Bainbridge, L. (1983). Ironies of automation. Increasing levels of automation can increase, rather
than decrease, the problems of supporting the human operator. Automatica, 19, 775–779.
Bainbridge, L. (1995). Processes underlying human performance: Complex tasks. Retrieved
September 7, 2012, from [Link]
Bainbridge, L. (1998). Planning the training of a complex skill. Retrieved January 11, 2012, from
[Link]
Bjork, R. A., & Bjork, E. L. (2006). Optimizing treatment and instruction: Implications of the new
theory of disuse. In L. G. Nilsson & N. Ohta (Eds.), Memory and society. Psychological
perspectives (pp. 109–134). Hove, UK: Psychology Press.
Blech, C., & Funke, J. (2005). DYNAMIS review: An overview about applications of the DYNAMIS
approach in cognitive psychology (Research Report). Heidelberg: Department of Psychology,
University of Heidelberg.
Brehmer, B. (1992). Dynamic decision making: Human control of complex systems. Acta
Psychologica, 81, 211–241.
Brehmer, B., & Dörner, D. (1993). Experiments with computer-simulated microworlds: Escaping
both the narrow straits of the laboratory and the deep blue sea of the field study. Computers in
Human Behavior, 9, 171–184.
Buckley, R., & Caple, J. (1990/revised 5th edition 2007). The theory and practice of training.
London: Kogan Page.
Bullemer, P., & Laberge, J. (2010, November 30). Abnormal situation management and the
human side of process safety. Paper presented at the ERTC 15th annual meeting, Istanbul,
Turkey. Retrieved November 12, 2012, from [Link]/HumanSideofSafety_ERTC%20Forum_Bullemer_30Nov10.pdf
Bullemer, P. T., Cochran, T., Harp, S., & Miller, C. (1997). Managing abnormal situations II:
Collaborative decision support for operations personnel. ASM Consortium. Retrieved
November 11, 2012, from [Link]
Burke, L. A., & Hutchins, H. M. (2007). Training transfer: An integrative literature review. Human
Resource Development Review, 6, 263–296.
Carvalho, P. V. R., dos Santos, I. L., & Vidal, M. C. R. (2005). Nuclear power plant shift
supervisors’ decision making during microincidents. International Journal of Industrial
Ergonomics, 35, 619–644.
Connor, S. J. (1986). The process industry thesaurus. Falls Church: American Production and
Inventory Control Society.
Craik, K. (1943). The nature of explanation. Cambridge: Cambridge University Press.
Crossman, E. R. F. W. (1974). Automation and skill. In E. Edwards & F. P. Lees (Eds.), The human
operator in process control (pp. 1–25). London: Taylor & Francis.
Daft, R. L., & Macintosh, N. B. (1981). A tentative exploration into the amount and equivocality of
information processing in organizational work units. Administrative Science Quarterly, 26,
207–224.
De Keyser, V. (1995). Time in ergonomics research. Ergonomics, 38, 1639–1661.
Dennis, D. R., & Meredith, J. R. (2000a). An analysis of process industry production and inventory
management systems. Journal of Operations Management, 18, 683–699.
Dennis, D. R., & Meredith, J. R. (2000b). An empirical analysis of process industry transformation
systems. Management Science, 46, 1085–1099.
Dörner, D. (1989/2003). Die Logik des Misslingens. Strategisches Denken in komplexen
Situationen [The logic of failure. Strategic thinking in complex situations] (11th ed.).
Reinbek: rororo.
Dosher, B. A. (2003). Working memory. In L. Nadel (Ed.), Encyclopedia of cognitive science
(pp. 569–577). London: Nature Publishing Group.
Driskell, J. E., & Johnston, J. H. (1998). Stress exposure training. In J. A. Cannon-Bowers &
E. Salas (Eds.), Making decisions under stress. Implications for individual and team training
(pp. 191–217). Washington, DC: APA (Reprinted in 2006).
Driskell, J. E., Copper, C., & Moran, A. (1994). Does mental practice enhance performance?
Journal of Applied Psychology, 79, 481–492.
Driskell, J. E., Johnston, J. H., & Salas, E. (2001). Does stress training generalize to novel
situations? Human Factors, 43, 99–110.
Driskell, J. E., Salas, E., Johnston, J. H., & Wollert, T. N. (2008). Stress exposure training: An
event-based approach. In P. A. Hancock & J. L. Szalma (Eds.), Performance under stress
(pp. 271–286). Aldershot: Ashgate.
Duncker, K. (1945). The structure and dynamics of problem-solving processes. Psychological
Monographs, 58(5), 1–112.
Edwards, W. (1962). Dynamic decision theory and probabilistic information processing. Human
Factors: The Journal of the Human Factors and Ergonomics Society, 4, 59–74. doi:10.1177/
001872086200400201.
Emery, F. E. (1959). Characteristics of socio-technical systems (Tavistock Documents # 527),
London. Abridged in F. E. Emery, The emergence of a new paradigm of work. Canberra:
Center for Continuing Education.
Endsley, M. R. (2006). Expertise and situation awareness. In K. A. Ericsson, N. Charness, P. J.
Feltovich, & R. R. Hoffmann (Eds.), The Cambridge handbook of expertise and expert
performance (pp. 633–652). Cambridge: Cambridge University Press.
Farr, M. J. (1987). The long-term retention of knowledge and skills. A cognitive and instructional
perspective. New York: Springer.
Fisch, R. (2004). Was tun? – Hinweise zum praktischen Umgang mit komplexen Aufgaben und
Entscheidungen [What to do? Guidelines for the practical management of complex tasks and
decisions]. In R. Fisch & D. Beck (Eds.), Komplexitätsmanagement. Methoden zum Umgang
mit komplexen Aufgabenstellungen in Wirtschaft, Regierung und Verwaltung (pp. 319–345).
Wiesbaden: VS Verlag für Sozialwissenschaften.
Fischer, A., Greiff, S., & Funke, J. (2012). The process of solving complex problems. The Journal
of Problem Solving, 4, 19–42.
Ford, J. K., Quiñones, M. A., Sego, D. J., & Speer-Sorra, J. (1992). Factors affecting the
opportunity to perform trained tasks on the job. Personnel Psychology, 45, 511–527.
Fransoo, J. C., & Rutten, W. G. M. M. (1994). A typology of production control situations in
process industries. International Journal of Operations and Production Management, 18,
47–57.
Frederiksen, J. R., & White, B. Y. (1989). An approach to training based upon principled task
decomposition. Acta Psychologica, 71, 89–146.
Funke, J. (1985). Steuerung dynamischer Systeme durch Aufbau und Anwendung subjektiver
Kausalmodelle [Control of dynamic systems via construction and application of subjective
causal models]. Zeitschrift für Psychologie, 193, 443–465.
Funke, J. (2010). Complex problem solving: A case for complex cognition? Cognitive Processing,
11, 133–142.
Funke, J., & Frensch, P. A. (2007). Complex problem solving: The European perspective – 10 years
after. In D. H. Jonassen (Ed.), Learning to solve complex scientific problems (pp. 25–47).
Mahwah: Lawrence Erlbaum Associates.
Gaddy, C. D., & Wachtel, J. A. (1992). Team skills training in nuclear power plant operations. In
R. W. Swezey & E. Salas (Eds.), Teams: Their training and performance (pp. 379–396).
Norwood: Ablex.
Gersick, C. J., & Hackman, R. (1990). Habitual routines in task performing groups. Organizational
Behavior and Human Decision Processes, 47, 65–97.
Gladstein, D. L., & Reilly, N. P. (1985). Group decision making under threat: The Tycoon game.
Academy of Management Journal, 28, 613–627.
Gonzalez, C., Lerch, J. F., & Lebiere, C. (2003). Instance-based learning in dynamic decision
making. Cognitive Science, 27, 591–635.
Grauel, B., Kluge, A., & Adolph, L. (2012). Analyse vorausgehender Bedingungen für die
Unterstützung makrokognitiver Prozesse in Teams in der industriellen Instandhaltung
[Analysis of antecedent conditions for supporting macrocognitive processes in industrial
maintenance teams]. Paper presented at the 2nd workshop “Kognitive Systeme”, Universität
Duisburg-Essen, 18–20 September 2012.
Grote, G. (2009). Management of uncertainty. Theory and application in the design of systems and
organizations. Dordrecht: Springer.
Hagemann, V., Kluge, A., & Ritzmann, S. (2012). Flexibility under complexity: Work contexts,
task profiles and team processes of high responsibility teams. Employee Relations, 34,
322–338.
Hammerton, M. (1967). Measures for the efficiency of simulators as training devices. Ergonomics,
10, 63–65.
Hansez, I., & Chmiel, N. (2010). Safety behavior: Job demands, job resources, and perceived
management commitment to safety. Journal of Occupational Health Psychology, 15, 267–278.
Hermann, C. F. (1963). Some consequences of crisis which limit the viability of organizations.
Administrative Science Quarterly, 8, 61–82.
Hollnagel, E., & Woods, D. D. (2005). Joint cognitive systems. Foundations of cognitive systems
engineering. Boca Raton: Taylor & Francis.
IAEA. (2007). IAEA safety glossary. Terminology used in nuclear safety and radiation protection
(2007 ed.). Retrieved November 13, 2012, from [Link]/PDF/Pub1290_web.pdf
IAEA. (2011). IAEA international fact finding expert mission of the Fukushima Dai-ichi NPP
accident following the great east Japan earthquake and tsunami. Report to the IAEA member
states. Retrieved December 3, 2012, from [Link]/2011/cn200/documentation/cn200_-final-fukushima-mission_report.pdf
Johnson, A. (in press). Procedural memory and skill acquisition. In A. F. Healy & R. W. Proctor
(Vol. Eds.), & I. B. Weiner (Ed.-in-Chief), Handbook of psychology: Vol. 4. Experimental
psychology (2nd edn). Hoboken: Wiley.
Johnson-Laird, P. N. (1983). Mental models. Towards a cognitive science of language, inference,
and consciousness. Cambridge: Cambridge University Press.
Kerstholt, J. H., & Raaijmakers, J. G. W. (1997). Decision making in dynamic task environments.
In R. Ranyard, W. R. Crozier, & O. Svenson (Eds.), Decision making. Cognitive models and
explanations (pp. 205–217). London: Routledge.
Kim, J. H., & Seong, P. H. (2009). Human factors engineering in large-scale digital control
systems. In P. H. Seong (Ed.), Reliability and risk issues in large scale safety-critical digital
control systems (Springer series in reliability engineering, III, pp. 163–195). London: Springer.
doi:10.1007/978-1-84800-384-2_8.
Kluge, A., Schüler, K., & Burkolter, D. (2008). Simulatortrainings für Prozesskontrolltätigkeiten
am Beispiel von Raffinerien: Psychologische Trainingsprinzipien in der Praxis [Simulator
training for process control tasks using the example of refineries: Psychological training
principles in practice]. Zeitschrift für Arbeitswissenschaft, 62(2), 97–109.
Kluge, A., Sauer, J., Burkolter, D., & Ritzmann, S. (2010). Designing training for temporal and
adaptive transfer: A comparative evaluation of three training methods for process control tasks.
Journal of Educational Computing Research, 43, 327–353.
Kluge, A., Burkolter, D., & Frank, B. (2012). “Being prepared for the infrequent”: A comparative
study of two refresher training approaches and their effects on temporal and adaptive transfer in
a process control task. In Proceedings of the Human Factors and Ergonomics Society 56th
annual conference (pp. 2437–2441), Boston. Thousand Oaks: SAGE.
Kluge, A., Grauel, B., & Burkolter, D. (2013). Job aids: How does the quality of a procedural aid
alone and combined with a decision aid affect motivation and performance in process control?
Applied Ergonomics, 44, 285–296.
Kluwe, R. H. (1997). Acquisition of knowledge in the control of a simulated technical system. Le
Travail Humain, 60, 61–85.
Kragt, H., & Landeweerd, J. A. (1974). Mental skills in process control. In E. Edwards & F. P. Lees
(Eds.), The human operator in process control (pp. 135–145). London: Taylor & Francis.
Lee, F. J., & Anderson, J. R. (2001). Does learning a complex task have to be complex? A study in
learning decomposition. Cognitive Psychology, 42, 267–316.
Loukopoulos, L. D., Dismukes, R. K., & Barshi, I. (2009). The multitasking myth. Handling
complexity in real-world operations. Farnham: Ashgate.
Moray, N. (1987). Intelligent aids, mental models, and the theory of machines. International
Journal of Man-Machine Studies, 27(5), 619–629.
Moray, N. (1996, October). A taxonomy and theory of mental models. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 40(4), 164–168. Sage.
Moray, N. (1997). Human factors in process control. In G. Salvendy (Ed.), Handbook of human
factors and ergonomics (pp. 1944–1971). New York: Wiley.
Ormerod, T. C., Richardson, J., & Shepherd, A. (1998). Enhancing the usability of a task analysis
method: A notation and environment for requirements specification. Ergonomics, 41(11),
1642–1663.
Patrick, J. (1992). Training: Research and practice. San Diego: Academic.
Pearson, C. M., & Clair, J. A. (1998). Reframing crisis management. Academy of Management
Review, 23, 59–76.
Perrow, C. (1967). A framework for the comparative analysis of organizations. American Socio-
logical Review, 32, 194–208.
Perrow, C. (1984). Normal accidents: Living with high risk technology. New York: Basic Books
(Reprint 1999, by Princeton University Press, Princeton).
Proctor, R. W., & Dutta, A. (1995). Skill acquisition and human performance. Thousand Oaks:
Sage.
Proctor, R. W., & van Zandt, T. (2008). Human factors in simple and complex systems (2nd ed.).
Boca Raton: CRC Press.
Proctor, R. W., & Vu, K.-P. L. (2006/reprint 2009). Laboratory studies of training, skill acquisi-
tion, and retention. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The
Cambridge handbook of expertise and expert performance (pp. 265–286). Cambridge: Cam-
bridge University Press.
Rasmussen, J. (1990). Mental models and the control of action in complex environments. In
D. Ackermann & M. J. Tauber (Eds.), Mental models and human computer-interaction 1
(pp. 41–69). Amsterdam: North-Holland.
Rasmussen, J., & Jensen, A. (1974). Mental procedures in real-life tasks: A case study of electronic
troubleshooting. Ergonomics, 17, 293–307.
Reason, J. (2008). The human contribution. Unsafe acts, accidents, and heroic recoveries. Surrey:
Ashgate.
Reinartz, S. J. (1993). An empirical study of team behaviour in a complex and dynamic problem-
solving context: A discussion of methodological and analytical aspects. Ergonomics, 36,
1281–1290.
Reinartz, S. J., & Reinartz, G. (1992). Verbal communication in collective control of simulated
nuclear power plant incidents. Reliability Engineering and System Safety, 36, 245–251.
Rice, J. W., & Norback, J. P. (1987). Process industries production planning using matrix data
structures. Production and Inventory Management Journal, 28, 15–23.
Rippin, D. W. T. (1991). Batch process planning. Chemical Engineering, 98, 100–107.
Roth, E. M., & Woods, D. D. (1988). Aiding human performance I: Cognitive analysis. Le Travail
Humain, 51, 39–64.
Rouse, W. B., & Morris, N. M. (1985). On looking into the black box: Prospects and limits in the
search for mental models (No. DTIC#AD-A159080). Atlanta: Center for Man-Machine Systems
Research, Georgia Institute of Technology.
Rußwinkel, N., Urbas, L., & Thüring, M. (2011). Predicting temporal errors in complex task
environments: A computational and experimental approach. Cognitive Systems Research, 12,
336–354.
Salas, E., Wilson, K. A., Priest, A., & Guthrie, J. W. (2006). Design, delivery, and evaluation of
training systems. In G. Salvendy (Ed.), Handbook of human factors and ergonomics
(pp. 472–512). Hoboken: Wiley.
Schneider, W. (1999). Automaticity. In R. A. Wilson & F. C. Keil (Eds.), The MIT encyclopedia of
the cognitive sciences (pp. 63–64). Cambridge, MA: MIT Press.
Stachowski, A. A., Kaplan, S. A., & Waller, M. J. (2009). The benefits of flexible team interaction
during crisis. Journal of Applied Psychology, 94, 1536–1543.
Sterman, J. D. (1994). Learning in and about complex systems. System Dynamics Review, 10,
291–330.
Sternberg, R. J. (2009). Cognitive psychology. Belmont: Wadsworth Cengage Learning.
Sweller, J. (2006). How the human cognitive system deals with complexity. In J. Elen & R. E.
Clark (Eds.), Handling complexity in learning environments. Theory and research (pp. 13–27).
Amsterdam: Elsevier.
Tesluk, P. E., & Jacobs, R. R. (1998). Toward an integrated model of work experience. Personnel
Psychology, 51, 321–355.
Van Donk, D. P., & Fransoo, J. C. (2006). Operations management research in process industries.
Journal of Operations Management, 24, 211–214.
Veland, O., & Eikas, M. (2007). A novel design for an ultra-large screen display for industrial
process control. In M. J. Dainoff (Ed.), Ergonomics and health aspects. HCII 2007 (LNCS
4566, pp. 349–358). Berlin: Springer.
Verschuur, W., Hudson, P., & Parker, D. (1996). Violations of rules and procedures: Results of
item analysis and test of the behavioural model. Field study NAM and Shell Expro Aberdeen
(SIP report). Leiden: Leiden University.
Vicente, K. J. (2007). Monitoring a nuclear power plant. In A. F. Kramer, D. A. Wiegmann, &
A. Kirlik (Eds.), Attention: From theory to practice (pp. 90–99). Oxford: Oxford University
Press.
Vicente, K. J., Mumaw, R. J., & Roth, E. M. (2004). Operator monitoring in a complex dynamic
work environment: A qualitative cognitive model based on field observations. Theoretical
Issues in Ergonomics Science, 5, 359–384.
Vidulich, M. A. (2003). Mental workload and situation awareness. Essential concepts in aviation
psychology. In P. S. Tsang & M. A. Vidulich (Eds.), Principles and practice of aviation
psychology (pp. 115–147). Mahwah: Lawrence Erlbaum.
Walker, G. H., Stanton, N. A., Salmon, P. M., Jenkins, D. P., & Rafferty, L. (2010). Translating the
concepts of complexity to the field of ergonomics. Ergonomics, 53, 1175–1186.
Wallace, T. F. (1984). APICS dictionary (5th ed.). Falls Church: American Production and
Inventory Control Society.
Waller, M. J., Gupta, N., & Giambatista, R. C. (2004). Effects of adaptive behaviors and shared
mental models on control crew performance. Management Science, 50, 1534–1544.
Weinger, M. B., & Slagle, J. (2002). Human factors research in anesthesia patient safety:
Techniques to elucidate factors affecting clinical task performance and decision making.
Journal of the American Medical Informatics Association, 9, 58–63.
Wickens, C. D., & Hollands, J. G. (2000). Engineering psychology and human performance (3rd
ed.). Upper Saddle River: Prentice Hall.
Wickens, C. D., & McCarley, J. S. (2008). Applied attention theory. Boca Raton: CRC Press.
Wilson, J. R., & Rutherford, A. (1989). Mental models: Theory and application in human factors.
Human Factors: The Journal of the Human Factors and Ergonomics Society, 31(6), 617–634.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person
and computer. International Journal of Man-Machine Studies, 21, 229–244.
Woods, D. D., O’Brien, J. F., & Hanes, L. F. (1987). Human factors challenges in process control:
The case of nuclear power plants. In G. Salvendy (Ed.), Handbook of human factors
(pp. 1725–1770). New York: Wiley.
Woods, D. D., Roth, E. M., Stubler, W. F., & Mumaw, R. J. (1990). Navigating through large
display networks in dynamic control applications. In Proceedings of the Human Factors and
Ergonomics Society 34th annual meeting (pp. 396–399). doi:10.1177/154193129003400435.
Woodward, J. (1965). Industrial organization: Theory and practice. London: Oxford University
Press.
Yu, T., Sengul, M., & Lester, R. H. (2008). Misery loves company: The spread of negative impacts
resulting from organizational crisis. Academy of Management Review, 33, 452–472.
First-shot performance is crucial because non-routine tasks often occur under high-stakes conditions in which errors can lead to significant safety risks, operational disruptions and financial losses. Operators have no margin for error and no second attempt, so their initial response must be as accurate as possible to prevent adverse outcomes.
HROs face challenges such as ensuring effective communication, attention switching and coordination among team members. Each worker must integrate his/her part-tasks seamlessly into a larger, interdependent task structure, making it crucial to align individual contributions with the organizational goal in order to mitigate errors and enhance performance.
Non-routine situations require operators to draw upon rarely used skills and procedures that are executed with less automaticity, demanding greater attentional resources, conscious control and problem-solving capability. These tasks often necessitate “first-shot” performance with no room for error, significantly increasing mental workload compared to routine tasks.
Complex tasks in HROs are composed of various part-tasks that need to be integrated and coordinated, which heavily burdens working memory. Operators must process multiple interconnected variables simultaneously, requiring effective mental models and instructional support to prevent cognitive overload and to optimize performance.
Distinguishing these situations is crucial for developing effective training and response strategies. Routine tasks are repetitive and managed by standard procedures, while non-routine/normal tasks are less frequent but still supported by existing SOPs. Non-routine/abnormal situations lack such procedures, necessitating adaptive skills and quick decision-making, which are critical for managing unexpected malfunctions and ensuring safety in high-stakes environments.
SOPs are fundamental in defining routine tasks, as they provide structured guidelines for monitoring, controlling and regulating processes that are repetitive and predictable. This standardization ensures consistency and efficiency in handling frequent tasks, allowing operators to perform with high automaticity.
Mental models facilitate task performance by enabling operators to build representations of complex, dynamic systems, allowing them to predict outcomes, plan actions and adapt to changes effectively. These models help operators understand interdependencies and streamline decision-making in high-stakes environments.
Attentional processes contribute to complex task performance by enabling the coordination and integration of part-tasks into a joint flow of action. This involves attention selection, switching and sharing, ensuring that workers can manage and synchronize individual tasks into interdependent team tasks in order to accomplish organizational goals.
Non-routine situations are economically significant because they interrupt regular production, resulting in capacity loss and financial cost. In the petrochemical industry, such situations can cause up to 10 billion dollars in annual production losses owing to their potential to disrupt operations and the resource-intensive response they require.
Theories of automatization illustrate why routine tasks, processed with high automaticity, require minimal conscious effort. In contrast, non-routine tasks are less automatic, requiring more cognitive resources and deliberate processing because they are infrequent and less predictable, which affects error rates and performance under pressure.