
Chapter · January 2005
DOI: 10.1007/0-8176-4404-0_28


Network Protocols for Networked Control
Systems

F.-L. Lian,1 J. R. Moyne,2 and D. M. Tilbury2

1 Electrical Engineering Department, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, 106, Taiwan. fengli@[Link]
2 Mechanical Engineering Department, University of Michigan, 2350 Hayward St., Ann Arbor, MI 48103, U.S.A. {moyne,tilbury}@[Link]

1 Introduction
Control systems with networked communication, called networked control sys-
tems (NCSs), provide several advantages over point-to-point wired systems
such as improvement in reliability through reduced volume of wiring, sim-
pler systems integration, easier troubleshooting and maintenance, and the
possibility for distributed processing. There are two types of communication
networks. Data networks are characterized by large data packets, relatively
infrequent bursty transmission, and high data rates; they generally do not
have hard real-time constraints. Control networks, in contrast, must shuttle
countless small but frequent packets among a relatively large set of nodes to
meet the time-critical requirements. The key element that distinguishes con-
trol networks from data networks is the capability to support real-time or
time-critical applications [19].
The change of communication architecture from point-to-point to common-
bus, however, introduces different forms of time delay uncertainties between
sensors, actuators, and controllers. These time delays come from the time
sharing of the communication medium as well as the extra time required for
physical signal coding and communication processing. The characteristics of
time delays may be constant, bounded, or even random, depending on the
network protocol adopted and the hardware chosen. Such delays can degrade a
system's performance and may even cause instability.
Thus, the disadvantages of an NCS include the limited bandwidth for com-
munication and the delays that occur when sending messages over a network.
In this chapter, we discuss the sources of delay in common communication
networks used for control systems, and show how they can be computed and
analyzed.
Several factors affect the availability and utilization of the network band-
width: the sampling rates at which the various devices send information over
the network, the number of elements that require synchronous operation, the
method of synchronization between requesters and providers (such as polling),
the data or message size of the information, physical factors such as network
length, and the medium access control sublayer protocol that controls the
information transmission [7]. There are three main types of medium access
control used in control networks: random access with retransmission when
collisions occur (e.g., Ethernet and most wireless mechanisms), time-division
multiplexing (such as master-slave or token-passing), and random access with
prioritization for collision avoidance (e.g., Controller Area Network (CAN)).
Within each of these three categories, there are numerous network protocols
that have been defined and used. For each type of protocol, we study the
key parameters of the corresponding network when used in a control situ-
ation, including network utilization, magnitude of the expected time delay,
and characteristics of time delays. Simulation results are presented for several
different scenarios, and the advantages and disadvantages of each network
type are summarized. The focus is on one of the most common protocols in
each category; the analysis for other protocols in the same category can be
addressed in a similar fashion.
A survey of the types of control networks used in industry shows a wide
variety of networks in use; see Table 1. The networks are classified according
to type: random access (RA) with collision detection (CD) or collision avoid-
ance (CA), or time-division multiplexed (TDM) using token-passing (TP) or
master-slave (MS).

Table 1. Worldwide most popular fieldbuses [18]. Note that the totals are more
than 100% because many companies use more than one type of bus. Wireless was
not included in the original survey, but its usage is growing quickly.

Network Type Users Application domain


Ethernet RA/CD 50% Various
Profibus TDM/(TP and MS) 26% Process control
CAN-based RA/CA 25% Automotive, process
Modbus TDM/MS 22% Various
ControlNet TDM/TP 14% Plant bus
ASI TDM/MS 9% Building systems
Interbus-S TDM/MS 7% Manufacturing
Fieldbus Foundation TDM/TP 7% Chemical industry
Wireless (e.g., IEEE 802.11) RA/CA Unknown Various

2 Control Network Basics


In this section, we discuss the medium access control (MAC) sublayer protocol
of three types of control networks. We focus our discussion on one of the
common networks of each type: Ethernet (including hub, switch, and wireless
varieties, which will be defined later), ControlNet (a token-passing network),
and DeviceNet (a CAN-based network).3 The MAC sublayer protocol, which
describes the protocol for obtaining access to the network, is responsible for
satisfying the time-critical/real-time response requirement over the network
and for the quality and reliability of the communication between network
nodes [8]. Our discussion and comparison thus focus on the MAC sublayer
protocols.
For control network operation, the message connection type must be spec-
ified. Practically, there are three types of message connections: strobe, poll,
and change of state (COS)/cyclic. In a strobe connection, the master device
broadcasts a strobed message to a group of devices and these devices respond
with their current condition. In this case, all devices are considered to sam-
ple new information at the same time. In a poll connection, the master sends
individual messages to the polled devices and requests update information
from them. Devices only respond with new signals after they have received a
poll message. COS/cyclic devices send out messages either when their status
is changed (COS) or periodically (cyclic). Although the COS/cyclic connec-
tion seems most appropriate from a traditional control systems point of view,
strobe and poll are commonly used in industrial control networks [4].

2.1 Ethernet networks (CSMA)

Ethernet generally uses the carrier sense multiple access (CSMA) with CD
or CA mechanisms for resolving contention on the communication medium.
There are three common flavors of Ethernet: (1) hub-based Ethernet, which
is common in office environments and is the most widely implemented form
of Ethernet, (2) switched Ethernet, which is more common in manufacturing
and control environments, and (3) wireless Ethernet.

Hub-based Ethernet (CSMA/CD)

Hub-based Ethernet uses one or more hubs to interconnect the devices on a
network. When a packet comes into one hub interface, the hub simply broadcasts
the packet to all other hub interfaces. Hence, all of the devices on the same
network receive the same packet simultaneously, and message collisions are
possible. Collisions are dealt with using the CSMA/CD protocol as specified in
the IEEE 802.3 network standard [1, 2, 21].

3 Note that Ethernet is not a complete protocol solution but only a MAC sublayer
definition, whereas ControlNet and DeviceNet are complete protocol solutions.
Following popular usage, we use the term "Ethernet" to refer to Ethernet-based
complete network solutions; these include industrial Ethernet solutions such as
Modbus/TCP, PROFINET, and EtherNet/IP.

This protocol operates as follows: when a node wants to transmit, it lis-
tens to the network. If the network is busy, the node waits until the network is
idle; otherwise it transmits immediately. If two or more nodes listen to the idle
network and decide to transmit simultaneously, the messages of these trans-
mitting nodes collide and the messages are corrupted. While transmitting, a
node must also listen to detect a message collision. On detecting a collision
between two or more messages, a transmitting node stops transmitting and
waits a random length of time to retry its transmission. This random time
is determined by the standard binary exponential backoff (BEB) algorithm:
the retransmission time is randomly chosen between 0 and (2i − 1) slot times,
where i denotes the ith collision event detected by the node and one slot time
is the minimum time needed for a round-trip transmission. However, after
10 collisions have been reached, the interval is fixed at a maximum of 1023
slots. After 16 collisions, the node stops attempting to transmit and reports
failure back to the node microprocessor. Further recovery may be attempted
in higher layers [21].
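The backoff computation described above can be sketched as follows (a simplified model of the standard BEB rules quoted above, not an implementation of any particular Ethernet controller):

```python
import random

def beb_backoff_slots(i: int) -> int:
    """Slots to wait after the i-th consecutive collision (1-indexed),
    per the standard binary exponential backoff (BEB) algorithm:
    choose uniformly from 0..(2^i - 1) slots, freeze the window at
    1023 slots after 10 collisions, and give up after 16."""
    if i > 16:
        raise RuntimeError("16 collisions: frame dropped, failure reported")
    window = 2 ** min(i, 10)         # 2, 4, ..., up to 1024 possible values
    return random.randrange(window)  # uniform in 0 .. window - 1
```

With one slot time of 51.2 µs on 10-Mbps Ethernet, a node on its final attempts may thus wait up to 1023 slots, roughly 52 ms, before retrying.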

Preamble (7 bytes) | Start of Delimiter (1) | Destination Address (6) | Source Address (6) | Data Length (2) | Data (0-1500) | Pad (0-46) | Checksum (4)
Overhead = 22 bytes before the data field plus the 4-byte checksum; the data and pad fields together occupy 46-1500 bytes.

Fig. 1. Ethernet (CSMA/CD) frame format

The Ethernet frame format is shown in Fig. 1 [21]. The total overhead is
26 (=22+4) bytes. The data packet frame size is between 46 and 1500 bytes.
There is a nonzero minimum data size requirement because the standard states
that valid frames must be at least 64 bytes long, from destination address to
checksum (72 bytes including preamble and start of delimiter). If the data
portion of a frame is less than 46 bytes, the pad field is used to fill out
the frame to the minimum size. There are two reasons for this minimum size
limitation. First, it makes it easier to distinguish valid frames from “garbage.”
When a transceiver detects a collision, it truncates the current frame, which
means that stray bits and pieces of frames frequently appear on the cable.
Second, it prevents a node from completing the transmission of a short frame
before the first bit has reached the far end of the cable, where it may collide
with another frame. For a 10-Mbps Ethernet with a maximum length of 2500
m and four repeaters, the minimum allowed frame time or slot time is 51.2
μs, which is the time required to transmit 64 bytes at 10 Mbps [21].
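The 51.2-µs figure follows directly from the frame length and bit rate; a quick check, using the 64-byte minimum valid frame and the 10-Mbps rate given above:

```python
def frame_time_us(frame_bytes: int, bit_rate_bps: float) -> float:
    """Time, in microseconds, to transmit a frame of the given size
    at the given bit rate."""
    return frame_bytes * 8 / bit_rate_bps * 1e6

# The 64-byte minimum valid frame at 10 Mbps gives the 51.2-us slot time.
slot_time = frame_time_us(64, 10e6)  # 51.2
```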

Advantages: Because of its low medium access overhead, Ethernet uses a simple
algorithm for operation of the network and has almost no delay at low network
loads [24]. No communication bandwidth is used to gain access to the network
compared with the token bus or token ring protocol. Ethernet used as a control
network commonly uses the 10 Mbps standard (e.g., Modbus/TCP); high-
speed (100 Mbps or even 1 Gbps) Ethernet is mainly used in data networks
[21].
Disadvantages: Ethernet is a nondeterministic protocol and does not sup-
port any message prioritization. At high network loads, message collisions
are a major problem because they greatly affect data throughput and time
delays may become unbounded [24]. The Ethernet capture effect existing in
the standard BEB algorithm, in which a node transmits packets exclusively
for a prolonged time despite other nodes waiting for medium access, causes
unfairness and substantial performance degradation [20]. Based on the BEB
algorithm, a message may be discarded after a series of collisions; therefore,
end-to-end communication is not guaranteed. Because of the required mini-
mum valid frame size, Ethernet uses a large message size to transmit a small
amount of data.
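The last point can be quantified. With 26 bytes of overhead and a 46-byte minimum data-plus-pad field, the fraction of transmitted bytes that carry useful data is small for short control messages (a rough estimate that ignores the preamble's interframe gap effects):

```python
OVERHEAD_BYTES = 26     # 22 bytes of header fields plus the 4-byte checksum
MIN_PAYLOAD_BYTES = 46  # shorter data is padded up to this size

def ethernet_efficiency(data_bytes: int) -> float:
    """Fraction of on-the-wire bytes carrying application data."""
    payload_field = max(data_bytes, MIN_PAYLOAD_BYTES)  # pad short frames
    return data_bytes / (OVERHEAD_BYTES + payload_field)

# An 8-byte sensor reading travels in a 72-byte frame: about 11% efficiency.
```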
Several solutions have been proposed for using this form of Ethernet in con-
trol applications. For example, every message could be time-stamped before it
is sent. This requires clock synchronization, however, which is not easy to ac-
complish, especially with a network of this type [6]. Various schemes based on
deterministic retransmission delays for the collided packets of a CSMA/CD
protocol result in an upper-bounded delay for all the transmitted packets.
However, this determinism comes at the expense of inferior delay-throughput
performance relative to standard CSMA/CD at low to moderate channel utilization [8]. Other
solutions also try to prioritize CSMA/CD (e.g., LonWorks) to improve the re-
sponse time of critical packets [14]. To a large extent these solutions have been
rendered moot with the proliferation of switched Ethernet as described below.
On the other hand, many of the same issues reappear with the migration to
wireless Ethernet for control.

Switched Ethernet (CSMA/CA)

Switched Ethernet utilizes switches to subdivide the network architecture,
thereby avoiding collisions, increasing network efficiency, and improving
determinism. It is widely used in manufacturing applications. The main difference
between switch-based and hub-based Ethernet networks is the intelligence of
forwarding packets. Hubs simply pass on incoming traffic from any port to all
other ports, whereas switches learn the topology of the network and forward
packets to the destination port only. In a star-like network layout, every node
is connected with a single cable to the switch. Thus, collisions can no longer
occur on any network cable.
Switches employ the cut-through or store-and-forward technique to for-
ward packets from one port to another, using per-port buffers for packets
waiting to be sent on that port. Switches with cut-through first read the
MAC address and then forward the packet to the destination port accord-
ing to the MAC address of the destination and the forwarding table on the
switch. On the other hand, switches with store-and-forward examine the com-
plete packet first. Using the cyclic redundancy check (CRC) code, the switch
will first verify that the frame has been correctly transmitted before forward-
ing the packet to the destination port. If there is an error, the frame will be
discarded. Store-and-forward switches are slower, but will not forward any
corrupted packets.
Although there are no message collisions on the networks, congestion may
occur inside the switch when one port suddenly receives a large number of
packets from the other ports. Three main queueing principles are implemented
inside the switch in this case. They are first-in-first-out (FIFO) queue, pri-
ority queue, and per-flow queue. The FIFO queue is a traditional method
that is fair and simple. However, if the network traffic is heavy, the quality of
service cannot be guaranteed. In the priority queueing scheme, the network
manager examines fields of the data frames to determine which queues are more
important. The packets can thus be classified into different levels of
queues. Queues with high priority are processed first, followed by queues
with low priority until the buffer is empty. With the per-flow queueing oper-
ation, queues are assigned different levels of priority (or weights). All queues
are then processed one by one according to priority; thus, the queues with
higher priority will generally have higher performance and could potentially
block queues with lower priority.
Examples of timing analysis and performance evaluation of switched Eth-
ernet can be found in [9, 23].
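The strict-priority discipline described above can be sketched with a toy model (not the firmware of any particular switch):

```python
from collections import deque

class PriorityPort:
    """Strict-priority output queueing on one switch port: frames are
    classified into numbered queues (0 = highest priority) and the port
    always serves the highest-priority non-empty queue, so low-priority
    queues can be blocked under sustained high-priority load."""

    def __init__(self, levels: int):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, frame, priority: int):
        self.queues[priority].append(frame)

    def dequeue(self):
        for q in self.queues:   # scan from highest priority down
            if q:
                return q.popleft()
        return None             # port idle: all queues empty
```

A per-flow (weighted) scheduler would instead visit every queue in proportion to its weight rather than draining the highest level first.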

Wireless Ethernet (CSMA/CA)

Wireless Ethernet, based on the IEEE 802.11 standard, can replace wired
Ethernet in a transparent way since it implements the two lowest layers of
the International Standards Organization (ISO)/Open Systems Interconnec-
tion (OSI) model. Besides the physical layer, the biggest difference between
802.11 and 802.3 is in the medium access control. Unlike wired Ethernet nodes,
wireless stations cannot “hear” a collision. A collision avoidance mechanism
is used but cannot entirely prevent collisions. Thus, after a packet has been
successfully received by its destination node, the receiver sends a short ac-
knowledgment packet (ACK) back to the original sender. If the sender does
not receive an ACK packet, it assumes that the transmission was unsuccessful
and retransmits.
The collision avoidance mechanism in 802.11 works as follows. If a network
node wants to send while the network is busy, it sets its backoff counter to a
randomly chosen value. Once the network is idle, the node waits first for an
interframe space and then for this backoff time before attempting to send. If
another node accesses the network during that time, it must wait again for
another idle interval. In this way, the node with the lowest backoff time sends
first. Certain messages (e.g., ACK) may start transmitting after a shorter
interframe space, thus they have a higher priority. Collisions may still occur
because of the random nature of the backoff time; it is possible for two nodes
to have the same backoff time.
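The contention rule — the lowest backoff counter transmits first, and equal counters collide — can be sketched as follows (hypothetical node names; real 802.11 counters are drawn from a contention window that doubles after each collision):

```python
def contention_round(backoffs: dict) -> list:
    """Given each node's remaining backoff count, return the node(s)
    whose counters expire first; more than one winner means a collision,
    since those nodes begin transmitting simultaneously."""
    first = min(backoffs.values())
    return [node for node, b in backoffs.items() if b == first]

# contention_round({"A": 7, "B": 3, "C": 3}) -> ["B", "C"]: a collision
```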
Several refinements to the protocol also exist. Nodes may reserve the net-
work either by sending a request to send (RTS) message or by breaking a
large message into many smaller messages (fragmentation); each successive
message can be sent after the smallest interframe time. If there is a single
master node on the network, the master can poll all the nodes and effectively
create a TDM contention-free network.

2.2 TDM networks

Time-division multiplexing can be accomplished in one of two ways. In a
master-slave network, a single master polls multiple slaves. Slaves can only
send data over the network when requested by the master. In this way, there
are no collisions, since the data transmissions are carefully scheduled by the
master. In a token-passing network, there are multiple masters, or peers. The
token bus protocol (e.g., IEEE 802.4) allows a linear, multidrop, tree-shaped,
or segmented topology [24]. The node that currently has the token is allowed
to send data. When it is finished sending data, or the maximum token holding
time has expired, it “passes” the token to the next logical node on the network.
If a node has no message to send, it just passes the token to the successor node.
The physical location of the successor is not important because the token is
sent to the logical neighbor. Collision of data frames does not occur, as only
one node can transmit at a time. Most token-passing protocols guarantee a
maximum time between network accesses for each node, and most also have
provisions to regenerate the token if the token holder stops transmitting and
does not pass the token to its successor. In many cases, nodes can also be
added dynamically to the bus and can request to be dropped from the logical
ring.
ASI, Bitbus, and Interbus-S are typical examples of master-slave networks,
while Profibus and ControlNet are typical examples of token-passing networks.
Each peer node in a Profibus network can also behave like a master and
communicate with a set of slave nodes during the time it holds the token.
These are deterministic networks because the maximum waiting time before
sending a message frame can be characterized by the token rotation time.
The nodes in the token bus network are arranged logically into a ring, and, in
the case of ControlNet, each node knows the address of its predecessor and its
successor. During operation of the network, the node with the token transmits
data frames until either it runs out of data frames to transmit or the time it
has held the token reaches the limit. The node then regenerates the token and
transmits it to its logical successor on the network.

ControlNet

Preamble (2 bytes) | Start Delimiter (1) | Source MAC ID (1) | LPackets (0-510) | CRC (2) | End Delimiter (1)
Overhead = 4 bytes before the LPackets field plus 3 bytes after it.
The LPackets field carries one or more LPackets, each consisting of: Size (1 byte) | Control (1 byte) | Tag (2 or more bytes) | Data (0-506 bytes).

Fig. 2. The message frame of ControlNet (token bus)

The ControlNet protocol is used here as a case study of the operation of
a typical token-passing network. The message frame format of ControlNet is
shown in Fig. 2 [3]. The total overhead is 7 bytes, including preamble, start
delimiter, source MAC ID, CRC, and end delimiter. The data packet frame,
also called the link packet (Lpacket) frame, may include several Lpackets that
contain size, control, tag, and data fields with total frame size between 0 and
510 bytes. The individual destination address is specified within the tag field.
The size field specifies the number of byte pairs (from 3 to 255) contained in
an individual Lpacket, including the size, control, tag, and link data fields.
The ControlNet protocol adopts an implicit token-passing mechanism and
assigns a unique MAC ID (from 1 to 99) to each node. As in general token-
passing buses, the node with the token can send data; however, there is no
real token passing around the network. Instead, each node monitors the source
MAC ID of each message frame received. At the end of a message frame, each
node sets an “implicit token register” to the received source MAC ID + 1. If
the implicit token register is equal to the node’s own MAC ID, that node may
now transmit messages. All nodes have the same value in their implicit token
registers, preventing collisions on the medium. If a node has no data to send,
it just sends a message with an empty Lpacket field, called a null frame.
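The implicit token rotation can be sketched as follows (a simplified model that assumes MAC IDs 1 through 99 are all in use; in a real ControlNet network the register wraps at the highest configured node address):

```python
MAX_MAC_ID = 99  # ControlNet assigns each node a unique MAC ID from 1 to 99

def implicit_token_register(received_src_mac_id: int) -> int:
    """Value every node loads at the end of a received frame: source
    MAC ID + 1, wrapping back to 1 past the highest ID. The node whose
    own MAC ID equals this value may transmit next; since all nodes
    compute the same value, no collisions can occur."""
    nxt = received_src_mac_id + 1
    return 1 if nxt > MAX_MAC_ID else nxt
```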
The length of a cycle, called the network update time (NUT) in ControlNet
or the token rotation time (TRT) in general, is divided into three major parts:
scheduled, unscheduled, and guardband, as shown in Fig. 3. During the sched-
uled part of an NUT, each node can transmit time-critical/scheduled data by
obtaining the implicit token from 0 to S. During the unscheduled part of an
NUT, nodes 0 to U share the opportunity to transmit non-time-critical data
in a round-robin fashion until the allocated unscheduled duration has expired.
When the guardband time is reached, all nodes stop transmitting, and only
the node with the lowest MAC ID, called the “moderator,” can transmit a
maintenance message, called the “moderator frame,” which accomplishes the
synchronization of all timers inside each node and the publishing of critical
link parameters such as NUT, node time, S, U , etc. If the moderator frame is
not heard for two consecutive NUTs, the node with the lowest MAC ID will
begin transmitting the moderator frame in the guardband of the third NUT.
Moreover, if a moderator node notices that another node has a lower MAC
ID than its own, it immediately cancels its moderator role.

[Figure: three successive NUT cycles on a time axis. In each cycle, nodes 0 through S transmit in turn during the scheduled part, nodes up to U share the unscheduled part in round-robin order, and the guardband closes the cycle.]

Fig. 3. Medium access during scheduled, unscheduled, and guardband time

Advantages: The token bus protocol is a deterministic protocol that provides
excellent throughput and efficiency at high network loads [8, 24]. During
network operation, the token bus can dynamically add nodes to or remove
nodes from the network. This contrasts with the token ring case, where the
nodes physically form a ring and cannot be added or removed dynamically
[24]. Scheduled and unscheduled segments in each NUT cycle make Control-
Net suitable for both time-critical and non-time-critical messages.
Disadvantages: Although the token bus protocol is efficient and determinis-
tic at high network loads, at low channel traffic its performance cannot match
that of contention protocols. In general, when there are many nodes in one
logical ring, a large percentage of the network time is used in passing the
token between nodes when data traffic is light [8].

3 CAN-Based Networks: DeviceNet


CAN is a serial communication protocol developed mainly for applications in
the automotive industry but also capable of offering good performance in other
time-critical industrial applications. The CAN protocol is optimized for short
messages and uses a CSMA/arbitration on message priority (AMP) medium
access method. Thus, the protocol is message oriented, and each message
has a specific priority that is used to arbitrate access to the bus in case of
simultaneous transmission. The bit stream of a transmission is synchronized
on the start bit, and the arbitration is performed on the following message
identifier, in which a logic zero is dominant over a logic one. A node that
wants to transmit a message waits until the bus is free and then starts to
send the identifier of its message bit by bit. Conflicts for access to the bus
are solved during transmission by an arbitration process at the bit level of
the arbitration field, which is the initial part of each frame. Hence, if two
devices start to send messages at the same time, each continues to transmit
its message frame while listening to the network. If a device receives a
bit different from the one it sent out, it loses the right to continue sending
its message, and the other device wins the arbitration. With this method, an ongoing
transmission is never corrupted.
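Bitwise arbitration can be emulated directly: at each bit position the bus carries the logical AND of all transmitted bits (a dominant 0 overrides a recessive 1), and any node that reads back a bit it did not send withdraws. A sketch over 11-bit identifiers:

```python
def arbitration_winner(identifiers):
    """Return the identifier that wins CAN arbitration: at each bit a
    dominant 0 overrides a recessive 1, so contenders sending a 1 while
    the bus carries a 0 drop out. The lowest identifier, i.e., the
    highest-priority message, always wins."""
    contenders = list(identifiers)
    for bit in range(10, -1, -1):                      # MSB first
        bus = min((i >> bit) & 1 for i in contenders)  # AND of all sent bits
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]  # identifiers are unique on a CAN bus

# arbitration_winner([0x123, 0x100, 0x7FF]) -> 0x100
```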
In a CAN-based network, data are transmitted and received using message
frames that carry data from a transmitting node to one or more receiving
nodes. Transmitted data do not necessarily contain addresses of either the
source or the destination of the message. Instead, each message is labeled
by an identifier that is unique throughout the network. All other nodes on
the network receive the message and accept or reject it, depending on the
configuration of mask filters for the identifier. This mode of operation is known
as multicast.
DeviceNet is an example of a technology based on the CAN specification
that has received considerable acceptance in device-level manufacturing ap-
plications. The DeviceNet specification is based on the standard CAN (11-bit
identifier only)4 with an additional application and physical layer specification
[4, 17].
The frame format of DeviceNet is shown in Fig. 4 [4]. The total overhead
is 47 bits, which includes start of frame (SOF), arbitration (11-bit identifier),
control, CRC, acknowledgment (ACK), end of frame (EOF), and intermission
(INT) fields. The size of a data field is between 0 and 8 bytes. The DeviceNet
protocol uses the arbitration field to provide source and destination addressing
as well as message prioritization.
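From these numbers, the nominal time on the wire for a DeviceNet frame is easy to bound. The sketch below is a lower bound: CAN bit stuffing can add up to one extra bit per four bits of the identifier, data, and CRC fields, and is ignored here.

```python
def devicenet_frame_time_us(data_bytes: int, bit_rate_bps: float = 500e3) -> float:
    """Nominal transmission time of a standard CAN frame carrying
    0-8 data bytes: 47 overhead bits plus 8 bits per data byte."""
    assert 0 <= data_bytes <= 8
    bits = 47 + 8 * data_bytes
    return bits / bit_rate_bps * 1e6

# A full 8-byte frame at the 500-Kbps maximum rate: 111 bits = 222 us.
```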
Advantages: CAN is a deterministic protocol optimized for short messages.
The message priority is specified in the arbitration field. Higher priority mes-
sages always gain access to the medium during arbitration. Therefore, the
transmission delay for higher priority messages can be guaranteed.
4 The CAN protocol supports two message frame formats: standard CAN (version 2.0A, 11-bit identifier) and extended CAN (version 2.0B, 29-bit identifier).
Bus Idle | SOF | Arbitration Field (11-bit identifier + RTR) | Control (r1, r0, DLC) | Data Field (0-8 bytes) | CRC Field (15 bits + delimiter) | ACK (slot + delimiter) | EOF | Int | Bus Idle

Fig. 4. The message frame format of DeviceNet (standard CAN format)

Disadvantages: The major disadvantage of CAN compared with the other
networks is the slow data rate (a maximum of 500 Kbps in DeviceNet). Thus,
the throughput is limited compared with other control networks. The bit synchronization
requirement of the CAN protocol also limits the maximum length of a De-
viceNet network. CAN is also not suitable for transmission of messages of
large data sizes, although it does support fragmentation of data that is more
than 8 bytes.

4 Timing Components
The important time delays that should be considered in an NCS analysis
are the sensor-to-controller and controller-to-actuator end-to-end delays. In
an NCS, message transmission delay can be broken into two parts: device
delay and network delay. The device delay includes the time delays at the
source and destination nodes. The time delay at the source node includes the
preprocessing time, Tpre , and the waiting time, Twait . The time delay at the
destination node is only the postprocessing time, Tpost . The network time
delay includes the total transmission time of a message and the propagation
delay of the network. The total time delay can be expressed by the following
equation:

Tdelay = Tpre + Twait + Ttx + Tpost . (1)

The key components of each time delay are shown in Fig. 5 and will be dis-
cussed in the following subsections.
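Equation (1) composes directly; for instance, with hypothetical component delays (values in microseconds, chosen only for illustration):

```python
def total_delay(t_pre: float, t_wait: float, t_tx: float, t_post: float) -> float:
    """Eq. (1): end-to-end delay of one message, the sum of the
    source-node preprocessing and waiting times, the network
    transmission time, and the destination-node postprocessing time."""
    return t_pre + t_wait + t_tx + t_post

# total_delay(150, 50, 222, 130) -> 552.0 us for one message
```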

4.1 Pre- and postprocessing times at source and destination nodes

The preprocessing time at the source node is the time needed to acquire
data from the external environment and encode it into the appropriate net-
work data format. There may be one processor performing both functions, or
multiple processors; we define the total elapsed time required as the pre- or
postprocessing time. This time depends on the device software and hardware
characteristics. In many cases, it may be assumed that the preprocessing time
Fig. 5. A timing diagram showing the time spent sending a message from a source node (Node A) to a destination node (Node B): Tpre and Tpost accrue in the application layers, Twait in the data link layer, and Ttx on the physical medium between them

is constant or negligible. However, this assumption is not true in general; in


fact, there may be noticeable differences in processing time characteristics
between similar devices, and these delays may be significant.
The postprocessing time at the destination node is the time taken to de-
code the network data into the physical data format and output it to the
external environment.

4.2 Experimental investigation of pre- and postprocessing times

In practical applications, it is very difficult to identify each individual
timing component discussed above. Instead, by monitoring the time-stamped traf-
fic of the request-response messaging on a DeviceNet network, we can show
the characteristics of processing times, i.e., the sum of the preprocessing and
postprocessing times of one device.
In the experimental setup, there is only one master and one slave connected
to the network and the master continuously polls this slave. Referring to
Fig. 5, let Node A be the master and Node B the slave. Here, there is no
network traffic other than the request-response messages between the
master and slave, i.e., Twait = 0, and the request-response frequency is set low
enough that no messages are queued up at the sender buffer. By monitoring
the message traffic on the network medium and time-stamping each message,
we can further calculate the processing time of each request-response, i.e.,
Tpost + Tpre , after subtracting the transmission time.
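This calculation can be sketched in a few lines; the timestamp convention (each message stamped when its frame completes on the wire) and all numeric values are illustrative assumptions, not the actual experimental data.

```python
def processing_time_us(t_request_us, t_response_us, response_tx_us):
    """T_pre + T_post of the polled device: the gap between the request's
    and response's wire timestamps minus the response transmission time
    (assuming each message is time-stamped when its frame completes)."""
    return (t_response_us - t_request_us) - response_tx_us

# Hypothetical poll at DeviceNet's 500 Kbps: an 8-byte response occupies
# (8 + 47/8) bytes * 8 bits * 2 us/bit = 222 us on the wire.
proc = processing_time_us(0.0, 400.0, 222.0)   # -> 178.0 us
```

Repeating this over many polls yields histograms like those in Fig. 6.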
Fig. 6 shows the histogram of 400 samples of four typical DeviceNet de-
vice processing times [11]. The devices are standard I/O types, such as those
used for limit switches. The (right) solid and (left) dashed lines are the maxi-
mum and minimum values of the processing times, respectively. The histogram
plots indicate the nondeterministic processing times of different network de-
vices and their variance. Devices 1 and 3 have a similar functionality of discrete
inputs/outputs, but different numbers of input/output modules. Device 3 pro-
vides several augmentable modules and hence has more processing units and
a higher computation load. Device 1, on the other hand, has only one unit.
Device 2 has a fairly consistent processing time, i.e., a low variance. Note that

the smallest time that can be recorded is 1 μs. The uniform distribution of
processing time at Device 4 is due to the fact that it has an internal sampling
time which is mismatched with the request frequency. Hence, the processing
time recorded here is the sum of the actual processing time and the waiting
time inside the device. Device 4 also provides more complex functionality and
has a longer processing time than the others.

Fig. 6. Processing time histogram of four typical DeviceNet devices (400 samples
each; minimum/maximum processing times: Device 1, 134/174 μs; Device 2,
206/545 μs; Device 3, 399/948 μs; Device 4, 1550/11005 μs)

A key point that can be taken from the data presented in Fig. 6 is that the
device processing time can be substantial in the overall calculation of Tdelay .
In fact, this delay often dominates over network delays. Thus, in designing
NCSs, device delay and delay variability should be considered as important
factors when choosing components.

4.3 Transmission time on network channel

The transmission time is the most deterministic parameter in a network system
because it only depends on the data rate, the message size, and the
distance between two nodes. The formula for transmission time can be described
as follows: Ttx = Tframe + Tprop. Tframe is the time required to send
the packet across the network, and Tprop is the propagation time between any
two devices. Since the typical transmission speed in a communication medium
is 2 × 10^8 m/s, the propagation time Tprop is negligible on a small scale. In
the worst case, the propagation delays from one end to the other of the net-
work cable for these three control networks are Tprop = 25.6 μs for Ethernet
(2500 m), Tprop = 10 μs for ControlNet (1000 m), and Tprop = 1 μs for De-
viceNet (100 m). The length in parentheses represents the typical maximum

cable length used. The propagation delay is not easily characterized because
the distance between the source and destination nodes is not constant among
different transmissions. For comparison, we will assume that the propagation
times of these three network types are the same, say, Tprop = 1 μs (100 m).
Note that Tprop in DeviceNet is generally less than one bit time because De-
viceNet is a bit-synchronized network. Hence, the maximum cable length is
used to guarantee the bit synchronization among nodes.
The frame time, Tframe, depends on the size of the data, the overhead,
any padding, and the bit time. Let Ndata be the size of the data in terms of
bytes, Novhd be the number of bytes used as overhead, Npad be the number
of bytes used to pad the remaining part of the frame to meet the minimum
frame size requirement, and Nstuff be the number of bytes used in a stuffing
mechanism (on some protocols).^5 The frame time can then be expressed by the
following equation:

Tframe = [Ndata + Novhd + Npad + Nstuff] × 8 × Tbit. (2)

The values Ndata, Novhd, Npad, and Nstuff can be explicitly described for the
Ethernet, ControlNet, and DeviceNet protocols; see [10].
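Equation (2) is straightforward to evaluate. The sketch below plugs in byte counts for an 8-byte payload; the overhead and padding values are the ones implied by Table 2 and its footnotes (a 72-byte minimum Ethernet frame, a 47-bit DeviceNet overhead) and should be treated as illustrative, since [10] gives the exact per-protocol breakdowns.

```python
def t_frame_us(n_data, n_ovhd, n_pad, n_stuff, t_bit_us):
    """Equation (2): frame time = total bytes * 8 bits/byte * bit time."""
    return (n_data + n_ovhd + n_pad + n_stuff) * 8 * t_bit_us

# Illustrative frame times for an 8-byte payload (no bit stuffing assumed):
ethernet = t_frame_us(8, 26, 38, 0, 0.1)      # padded up to the 72-byte minimum
devicenet = t_frame_us(8, 47 / 8, 0, 0, 2.0)  # 47-bit overhead -> 222.0 us
```

The 222 μs DeviceNet value is the per-message transmission time that reappears in the blocking-time examples later in this section.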

4.4 Waiting time at source nodes

A message may spend time waiting in the queue at the sender’s buffer and
could be blocked from transmitting by other messages on the network. De-
pending on the amount of data the source node must send and the traffic on
the network, the waiting time may be significant. The main factors affect-
ing waiting time are network protocol, message connection type, and network
traffic load. For example, consider the strobe message connection in Fig. 7. If
Slave 1 is sending a message, the other 8 devices must wait until the network
medium is free. In a CAN-based DeviceNet network, it can be expected that
Slave 9 will encounter the most waiting time because it has a lower priority
on this priority-based network. However, in any network, there will be a non-
trivial waiting time after a strobe, depending on the number of devices that
will respond to the strobe.
The blocking time, which is the time a message must wait once a node is
ready to send it, depends on the network protocol and is a major factor in the
determinism and performance of a control network. It includes waiting time
while other nodes are sending messages and the time needed to resend the
message if a collision occurs.

^5 The bit-stuffing mechanism in DeviceNet is as follows: after 5 consecutive
bits of '1', a '0' is inserted, and vice versa. Ethernet and ControlNet use
Manchester biphase encoding and therefore do not require bit stuffing.

Fig. 7. Waiting time diagram: after the master's strobe (Ttx), each of Slaves 1–9
incurs Tpre, then Twait for the shared medium, then its own Ttx, while the master
incurs Tpost for each response

Ethernet blocking time

We first consider the blocking time for Ethernet, which includes the time taken by
collisions with other messages and the subsequent time waiting to be retransmitted.
The BEB algorithm described in Section 2.1 implies a probabilistic
waiting time. An exact analysis of the expected blocking time delay for Ethernet
is very difficult [10]. At a high level, the expected blocking time can be
described by the following equation:

E{Tblock} = Σ_{k=1}^{16} E{Tk} + Tresid, (3)

where Tresid denotes the residual time until the network is idle, and E{Tk}
is the expected time of the kth collision. E{Tk} depends on the number of
backlogged and unbacklogged nodes as well as the message arrival rate at each
node. At the 16th collision, the node discards the message and reports an
error to the higher-level processing units [21]. It can be seen that
Tblock is not deterministic and may be unbounded due to the discarding of
messages.
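The probabilistic character of (3) can be illustrated with a small Monte Carlo sketch of the truncated BEB rule (wait a uniform random number of slot times in [0, 2^min(k,10) − 1] after the kth collision; discard after the 16th). The constant per-attempt collision probability used here is a simplifying assumption; in reality it depends on the backlog.

```python
import random

def beb_delay_us(p_collision, slot_us=51.2, rng=None):
    """Total backoff delay (us) for one frame under truncated BEB, or
    None if the frame is discarded after the 16th collision [21].
    slot_us = 51.2 us is the 10 Mbps Ethernet slot time (512 bit times)."""
    rng = rng or random.Random(0)
    delay = 0.0
    for k in range(1, 17):               # up to 16 collisions
        if rng.random() >= p_collision:  # this attempt succeeds
            return delay
        if k == 16:                      # 16th collision: give up
            return None
        # back off a uniform number of slots in [0, 2^min(k,10) - 1]
        delay += rng.randrange(2 ** min(k, 10)) * slot_us
    return None
```

With p_collision = 0 the delay is zero; as p_collision approaches 1 the delay grows without bound until the frame is dropped, which is exactly the nondeterminism noted above.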

ControlNet blocking time

In ControlNet, if a node wants to send a message, it must wait to receive the
token from the logically previous node. Therefore, the blocking time, Tblock,
can be expressed in terms of the transmission times and token rotation times of
previous nodes. The general formula for Tblock can be described by the following
equation:

Tblock = Tresid + Σ_{j∈Nnoqueue} Ttoken^(j) + Σ_{j∈Nqueue} min(Ttx^(j,nj), Tnode) + Tguard, (4)

where Tresid is the residual time needed by the current node to finish
transmitting, Nnoqueue and Nqueue denote the sets of nodes without messages and
with messages in their queues, respectively, and Tguard is the time spent on
the guardband period, as defined earlier. For example, if node 10 is waiting
for the token, node 4 is holding the token and sending messages, and
nodes 6, 7, and 8 have messages in their queues, then Nnoqueue = {5, 9} and
Nqueue = {4, 6, 7, 8}. Let nj denote the number of messages queued in the
jth node and let Tnode be the maximum possible time (i.e., token holding
time) assigned to each node to fully utilize the network channel. For example,
in ControlNet Tnode = 827.2 μs, which is a function of the maximum data
size, overhead frame size, and other network parameters. Ttoken is the token
passing time, which depends on the time needed to transmit a token and the
propagation time from node i − 1 to node i. ControlNet uses an implicit token,
and Ttoken is simply the sum of Tframe with zero data size and Tprop. If
a new message is queued for sending at a node while that node is holding the
token, then Tblock = Ttx^(j,nj), where j is the node number. In the worst case,
if there are N master nodes on the bus and each one has multiple messages
to send, which means that each node uses the maximum token holding time,
then Tblock = Σ_{i∈Nnode\{j}} min(Ttx^(i,ni), Tnode), where the min function is used
because, even if it has more messages to send, a node cannot hold the token
longer than Tnode (i.e., Ttx^(j,nj) ≤ Tnode). ControlNet is a deterministic network
because the maximum time delay is bounded and can be characterized by (4).
If the periods of each node and message are known, we can explicitly describe
the sets Nnoqueue and Nqueue and the counts nj. Hence, Tblock in (4) can be
determined explicitly.
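Because every term in (4) is known once the queue states are known, the bound can be evaluated directly. The sketch below uses the example node sets from the text; the transmission and token times are hypothetical, with Tnode = 827.2 μs taken from above.

```python
def controlnet_tblock_us(t_resid_us, n_noqueue, queue_tx_us,
                         t_token_us, t_node_us=827.2, t_guard_us=0.0):
    """Equation (4): residual transmission + one token pass per empty
    node + min(T_tx, T_node) per queued node + the guardband time."""
    return (t_resid_us
            + len(n_noqueue) * t_token_us
            + sum(min(tx, t_node_us) for tx in queue_tx_us.values())
            + t_guard_us)

# Example from the text: node 10 waits, nodes 5 and 9 are empty, and
# nodes 4, 6, 7, 8 have queued messages (all times here are made up):
tb = controlnet_tblock_us(
    t_resid_us=100.0,
    n_noqueue={5, 9},
    queue_tx_us={4: 400.0, 6: 300.0, 7: 900.0, 8: 250.0},  # node -> T_tx^(j,n_j)
    t_token_us=10.0)
# tb = 100 + 2*10 + (400 + 300 + 827.2 + 250) + 0 = 1897.2 us
```

Note how node 7's 900 μs request is clipped to Tnode = 827.2 μs by the min term, which is what keeps the bound finite.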

DeviceNet blocking time

The blocking time, Tblock, in DeviceNet can be described by the following
equation [22]:

Tblock^(k) = Tresid + Σ_{∀j∈Nhp} ⌈(Tblock^(k−1) + Tbit) / Tperi^(j)⌉ Ttx^(j), (5)

where k is the iteration index for obtaining the steady-state Tblock, Tresid is the
residual time needed by the current node to finish transmitting, Nhp is the
set of nodes with higher priority than the waiting node, Tperi^(j) is the period
of the jth node, and ⌈x⌉ denotes the smallest integer that is greater
than or equal to x. The summation denotes the time needed to send all the
higher priority messages. While a low priority node is waiting for the channel
to become available, it is possible for other high priority nodes to be queued,
in which case the low priority node loses the arbitration again. This situation
accumulates the total blocking time. The worst-case Tresid under a low traffic
load is

Tresid = max_{∀j∈Nnode} Ttx^(j), (6)

where Nnode is the set of nodes on the network. However, because of the
priority-arbitration mechanism, low priority message transmission may not
be deterministic or bounded under high loading.
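Equation (5) is a fixed-point iteration in the style of the CAN response-time analysis of [22]. A compact sketch, with hypothetical periods and transmission times and Tbit = 2 μs at 500 Kbps:

```python
import math

def devicenet_tblock_us(t_resid_us, hp_msgs, t_bit_us=2.0, max_iter=1000):
    """Iterate (5) to its steady state. hp_msgs is a list of
    (period_us, tx_us) pairs, one per higher-priority message stream.
    Returns None if the iteration does not settle, reflecting the
    unbounded delay a low-priority message can see under high load."""
    t = t_resid_us                                   # k = 0 seed
    for _ in range(max_iter):
        t_next = t_resid_us + sum(
            math.ceil((t + t_bit_us) / period) * tx  # releases within t
            for period, tx in hp_msgs)
        if t_next == t:
            return t                                 # steady state reached
        t = t_next
    return None

# One higher-priority stream: 5000 us period, 222 us frames.
tb = devicenet_tblock_us(222.0, [(5000.0, 222.0)])   # -> 444.0 us
```

Each pass re-counts how many higher-priority releases fit inside the current blocking estimate, so the estimate can only grow until it stabilizes (or diverges under overload).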

Fig. 8 shows experimental data of the waiting time of nine identical devices
on a DeviceNet network. These devices have a very low variance of processing
time. We collected 200 pairs of messages (request and response). Each sym-
bol denotes the mean, and the distance between the upper and lower bars
equals two standard deviations. If these bars are over the limit (maximum
or minimum), then the value of the limit is used instead. It can be seen in
Fig. 8 that the average waiting time is proportional to the node number (i.e.,
priority). Also, the first few devices have a larger variance than the others,
because the variance of processing time occasionally allows a lower priority
device to access the idle network before a higher priority one.

Fig. 8. Nine identical devices with strobed message connection (mean waiting time
± two standard deviations, in μs, versus node number)

5 Network Comparisons

In this section, comparisons are drawn between the three types of control net-
works using the three networks that have been discussed in detail: Ethernet,
ControlNet (token bus), and DeviceNet (CAN). The parameters for these net-
works are shown in Table 2. After summarizing the theoretical and simulation
results for these three networks, we show some experimental results for time
delays and throughput in wireless Ethernet.

5.1 Data transmission

One method for comparing control networks is by the time taken to transmit
data and the efficiency of the data transmission.
As shown in Fig. 9(a), the transmission time for DeviceNet is longer than
the others because of the lower data rate (500 Kbps). Ethernet requires less
transmission time on larger data sizes (>20 bytes) compared with the others.

Table 2. Typical system parameters of control networks

                             Ethernet    ControlNet   DeviceNet
Data rate (Mbps)^a           10          5            0.5
Bit time (μs)                0.1         0.2          2
Max. length (m)              2500        1000         100
Max. data size (byte)        1500        504          8
Min. message size (byte)     72^b        7^c          47/8^d
Max. number of nodes         >1000       99           64
Typical Tx speed (m/s)       coaxial cable: 2 × 10^8

a: typical data rate; b: zero data size;
c: including the preamble and start of delimiter fields;
d: DeviceNet overhead is 47 bits.

Although ControlNet uses less time to transmit the same amount of data, it
needs some time (NUT) to gain access to the network.
The data coding efficiency (see Fig. 9(b)) is defined by the ratio of the data
size and the message size (i.e., the total number of bytes used to transmit the
data). For small data sizes, DeviceNet is the best among these three types
and Ethernet is the worst (due to its large minimum message size). For large
data sizes, ControlNet and Ethernet are better than DeviceNet (DeviceNet
is only 58% efficient due to its small maximum message size, but ControlNet
and Ethernet are near 98% efficient). For control systems, the data size is
generally small. Therefore, the above analysis suggests that DeviceNet may
be preferable in spite of the slow data rate. Before making that decision,
however, the average and total time delay and the throughput of the network
must be investigated.
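The efficiency curves in Fig. 9(b) follow from a simple ratio of data size to total message size. The sketch below uses the minimum message sizes of Table 2; the Ethernet overhead split (26 bytes of framing, padding up to the 72-byte minimum) is an illustrative assumption.

```python
def coding_efficiency(n_data, n_ovhd, n_min_frame):
    """Data size / total message size, padding short frames up to the
    protocol's minimum message size."""
    return n_data / max(n_data + n_ovhd, n_min_frame)

# 8 bytes of control data on each network:
eth = coding_efficiency(8, 26, 72)           # small data drowns in the frame
dnet = coding_efficiency(8, 47 / 8, 47 / 8)  # ~0.58 at DeviceNet's 8-byte max
```

The DeviceNet value reproduces the 58% figure quoted above; the Ethernet value is roughly 11%, which is why Ethernet fares worst at small data sizes.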

5.2 Case study of 10-node NCS

In this section, we use a case study of an NCS to compare the three different
control networks. The system has 10 nodes, each with 8 bytes of data to send
every period. MATLAB^6 is used to simulate the MAC sublayer protocols of
the three control networks. Network parameters such as the number of nodes,
the message periods, and message sizes can be specified in the simulation
model. In our study, these network parameters are constant. The simulation
program records the time delay history of each message and calculates network
performance statistics such as the average time delay seen by messages on the
network, the efficiency and utilization of the network, and the number of
messages that remain unsent at the end of the simulation run.
^6 MATLAB is technical computing software developed by The MathWorks, Inc.
Fig. 9. A comparison of (a) transmission time and (b) data coding efficiency versus
the data size for three control networks (Ethernet, ControlNet, and DeviceNet)

Based on the three different types of message connections (poll, strobe,
and cyclic), we consider the following three releasing policies. The first policy,
which we call the “zero releasing policy,” assumes that every node tries to
send its first message at t = 0 and sends a new message every period. This
type of situation occurs when a system powers up and there has been no
prescheduling of messages or when there is a strobe request from the master.
The second policy, the “random releasing policy,” assumes a random start
time for each node; each node still sends a new message every period. The
typical situation for this releasing policy is cyclic messaging, where no
prescheduling is done. In the third policy, called the "scheduled releasing policy,"
the start-sending time is scheduled to occur (to the extent possible) when the
network is available to the node; this occurs in a polled connection or with a
well-scheduled cyclic policy.
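The three policies differ only in the first-release times they assign to the nodes; later releases repeat every period. A toy sketch of the three cases (the 222 μs stagger in the scheduled case is an assumed per-message transmission time, not a value from the MATLAB model):

```python
import random

def release_times_us(n_nodes, period_us, policy, tx_us=222.0, seed=1):
    """First-message release time of each node under the three policies."""
    rng = random.Random(seed)
    if policy == "zero":          # everyone contends at t = 0
        return [0.0] * n_nodes
    if policy == "random":        # uncoordinated starts within one period
        return [rng.uniform(0.0, period_us) for _ in range(n_nodes)]
    if policy == "scheduled":     # staggered so the medium is free per node
        return [i * tx_us for i in range(n_nodes)]
    raise ValueError(f"unknown policy: {policy}")
```

Feeding these release times into a MAC-layer model and recording each message's delay reproduces the qualitative ordering seen in Fig. 10: zero worst, random intermediate, scheduled best.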
In addition to varying the release policy, we can also change the period of
each node to demonstrate the effect of traffic load on the network. For each
releasing policy and period, we simulate the system and calculate the average
time delays of these 10 nodes. We then compare the simulation results to the
analytic results described in Section 4. For ControlNet and DeviceNet, the
maximum time delay can be explicitly determined. For Ethernet, the expected
value of the time delay can be computed using the BEB algorithm once the
releasing policy is known.
The simulation results for a message period of 5000 μs are summarized in
Fig. 10. The zero releasing policy has the longest average delay in every net-
work because all nodes experience contention when trying to send messages.
Although the Ethernet data rate is much faster than that of DeviceNet, the
delays due to collisions and the large required message size combine to in-
crease the average time delay for Ethernet in this case. For a typical random
releasing policy, average time delays are reduced because not all nodes try to
Fig. 10. Message time delay associated with three releasing policies (10-node case)
for (a) Ethernet (estimated mean 1091 μs, mean + 2×STD 2896 μs), (b) ControlNet
(estimated max 472 μs, min 32 μs), and (c) DeviceNet (estimated max 2442 μs,
min 222 μs). The estimated mean, maximum, and minimum values are computed
from the network analysis for the zero and scheduled releasing policies.

send messages (or experience network contention) at the same time, although
some contention still exists. The scheduled releasing policy makes the best use
of each individual network; the time delay of this releasing policy is only the
transmission time.
In Ethernet, shown in Fig. 10(a), the zero and random releasing policies
demonstrate its nondeterministic time delay, even though the traffic load is
not saturated. Fig. 10(b) shows that the message time delay of ControlNet is
bounded for all releasing policies; we can estimate the lower and upper bounds
based on the formulae derived in Section 4. Due to the asynchronicity between
the message period and the token rotation period, these time delays exhibit a
linear trend with respect to the message number. The simulation results for
DeviceNet, shown in Fig. 10(c), demonstrate that every node in DeviceNet has
a constant time delay which depends only on the node number. The estimated
mean time delay (1091 μs) for Ethernet in Fig. 10(a) is computed for the case
of the zero releasing policy from (3), and the variance is taken as twice the

standard deviation. The maximum and minimum time delays for ControlNet
and DeviceNet are computed from (4) and (5).

5.3 Wireless Ethernet throughput and delays

In addition to time delays, the difference between the theoretical data rate
and the practical throughput of a control network should be considered. For
example, raw data rates for 802.11 wireless networks range from 11 to 54
Mbits/sec. The actual throughput of the network, however, is lower due to
both the overhead associated with the interframe spaces, ACK, and other
protocol support transmissions, and to the actual implementation of the net-
work adapter. Although 802.11a and 802.11g have the same raw data rate,
the throughput is lower for 802.11g because its backwards compatibility with
802.11b requires that the interframe spaces be as long as they would be on the
802.11b network. Computed and measured throughputs are shown in Table 3
[5]. The experiments were conducted by continually sending more traffic on
the network until a further increase in the offered traffic resulted in no
additional throughput.

Table 3. Maximum throughputs for different 802.11 wireless Ethernet networks.
All data rates and throughputs are in Mbit/sec.

Network type             802.11a   802.11g   802.11b
Nominal data rate        54        54        11
Theoretical throughput   26.46     17.28     6.49
Measured throughput      23.2      13.6      3.6

Experiments conducted to measure the time delays on wireless networks
are summarized in Table 4 and Fig. 11 [5]. Data packets were sent from the
client to the server and back again, with varying amounts of cross-traffic on
the network. The send and receive times on both machines were time-stamped.
The packet left the client at time ta and arrived at the server at time tb ; then
left the server at time tc and arrived at the client at time td . The sum of the
pre- and postprocessing times and the transmission time on the network for
both messages can be computed as (assuming that the two nodes are identical)

2 ∗ Tdelay = 2 ∗ (Tpre + Twait + Ttx + Tpost) = td − ta − (tc − tb).

Note that this measurement does not require that the clocks on the client and
server be synchronized. Since the delays at the two nodes can be different,
it is this sum of the two delays that is plotted in Fig. 11 and tabulated in
Table 4.
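The timestamp arithmetic can be sketched directly; the numeric values below are illustrative. Note that each clock is only differenced against itself, which is why no synchronization is needed.

```python
def two_way_delay_ms(t_a, t_b, t_c, t_d):
    """2 * T_delay = (t_d - t_a) - (t_c - t_b): the client's round trip
    minus the server's turnaround. Valid with unsynchronized clocks,
    since t_a, t_d come from the client clock and t_b, t_c from the
    server clock, and each clock is only subtracted from itself."""
    return (t_d - t_a) - (t_c - t_b)

# Client clock: send at 0.0 ms, receive at 1.0 ms.
# Server clock (its offset cancels out): receive at 100.3, send at 100.5.
d2 = two_way_delay_ms(0.0, 100.3, 100.5, 1.0)   # ~0.8 ms for both directions
```

Halving this sum gives a single-direction estimate only if the two directions are symmetric, which is an additional assumption.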

Two different types of data packets were considered: User Datagram Protocol
(UDP) and object linking and embedding (OLE) for Process Control
(OPC). UDP is a commonly used connectionless protocol that runs on top
of Ethernet, often utilized for broadcasting. The UDP packets carry only a data
load of 50 bytes. OPC is an application-to-application communication protocol
primarily utilized in manufacturing to communicate data values. OPC
requires extra overhead to support this application layer; consequently, the
OPC packets carry the maximum packet load of 512 data bytes. For comparison
purposes, the frame times (including the overheads) are computed for the
different packets.

Table 4. Computed frame times and experimentally measured delays on wireless
networks; all times in ms.

Network type                     802.11a   802.11g   802.11b
Frame time (UDP), computed       0.011     0.011     0.055
Median delay (UDP), measured     0.346     0.452     1.733
Frame time (OPC), computed       0.080     0.080     0.391
Median delay (OPC), measured     2.335     2.425     3.692

6 Conclusions and Future Work

The features of three candidate control networks — Ethernet (CSMA/CD),
ControlNet (Token Bus), and DeviceNet (CAN) — were discussed in detail.
With respect to Ethernet, which is becoming more and more prevalent in
control network applications, we described and contrasted the three main
implementation types: hub-based, switched, and wireless. For all protocols
we first described the MAC mechanisms, which are responsible for satisfying
both the time-critical/real-time response requirement over the network and
the quality and reliability of communication between devices on the network.
We then focused on exploring timing parameters related to end-to-end de-
livery of information over the networks. These timing parameters, which will
ultimately influence control applications, are affected by the network data
rate, the periods of messages, the data or message size of the information,
and the communication protocol. For each protocol, we studied the key per-
formance parameters of the corresponding network when used in a control
situation, including the magnitude and characteristics of the expected and
measured time delays. Simulation results were presented for several different
scenarios. The timing analyses and comparisons of message time delay given
in this chapter should be useful for designers of NCSs. For example, the ba-
sic differentiation of the network approaches will help the designer to match
Fig. 11. Distributions of packet delays for different values of cross-traffic throughput
on an 802.11a network: (a) UDP delays, 3 Mbit/sec cross-traffic; (b) UDP delays,
22 Mbit/sec cross-traffic; (c) OPC delays, 3 Mbit/sec cross-traffic; (d) OPC delays,
22 Mbit/sec cross-traffic

basic requirement rankings against network approaches. Also, analyses such
as device delay experiments reveal to the designer the importance of device
delay and device delay variability in NCS design.
Control systems typically send small amounts of data periodically, but
require guaranteed transmission and bounded time delay for the messages.
The suitability of network protocols for use in control systems is greatly in-
fluenced by these two criteria. Although Ethernet (including hub-based and
wireless) has seen widespread use in many data transmission applications
and can support high data rates up to 1 Gbps, it may not be suitable as
the communication medium for some control systems when compared with
deterministic network systems. However, because of its high data rate, Eth-
ernet can be used for aperiodic/non-time-critical and large data size commu-
nication, such as communication between workstations or machine cells. For
machine-level communication with controllers, sensors, and actuators, deter-
ministic networks are generally more suitable for meeting the characteristics
and requirements of control systems. For control systems with short and/or
prioritized messages, CAN-based protocols such as DeviceNet demonstrate
better performance. The scheduled and unscheduled messaging capabilities in
ControlNet make it suitable for time-critical and non-time-critical messages.
ControlNet is also suitable for large data size message transmission.

Future NCS research efforts are expected to focus on controller design for
NCSs, which can differ significantly from the design of traditional central-
ized control systems. For example, controller design optimized to the delay
expected in an NCS is explored in [12], and balancing quality of service and
quality of performance (QoP) in control networks can be effected using tech-
niques such as deadbanding [15].
Another body of future NCS research will focus on the utilization of Eth-
ernet for control [16], with a special emphasis on wireless Ethernet. While
wireless Ethernet is beginning to proliferate in manufacturing diagnostics, its
acceptance as an NCS enabler has been very slow to occur due to issues of
reliability, performance, and security [13]. However, the enormous flexibility,
cost savings, and reliability benefits that could potentially be achieved with
wireless systems will continue to drive wireless NCS research, with a focus not
only on control system design, but also on higher performing, more reliable,
and more secure networks for control. It is easily conceivable that, within 10
years, wireless will be the preferred medium for NCSs.

Acknowledgement. Many of the results in this chapter are described in more
detail in [10] and [11]. The authors would like to thank Alexander Duschau-Wicke,
a student at the University of Kaiserslautern in Germany, who visited the
University of Michigan in 2004 and performed the wireless Ethernet experiments
described in Section 5.3.

References
1. D. Bertsekas and R. Gallager. Data Networks. Prentice-Hall, Englewood Cliffs,
NJ, second edition, 1992.
2. B. J. Casey. Implementing Ethernet in the industrial environment. In Proceed-
ings of IEEE Industry Applications Society Annual Meeting, volume 2, pages
1469–1477, Seattle, WA, October 1990.
3. ControlNet specifications, 1998.
4. DeviceNet specifications, 1997.
5. A. Duschau-Wicke. Wireless monitoring and integration of control networks
using OPC. Technical report, NSF Engineering Research Center for Recon-
figurable Manufacturing Systems, University of Michigan, 2004. Studienarbeit
report for Technische Universität Kaiserslautern.
6. J. Eidson and W. Cole. Ethernet rules closed-loop system. InTech, pages 39–42,
June 1998.
7. Y. Koren, Z. J. Pasek, A. G. Ulsoy, and U. Benchetrit. Real-time open control
architectures and system performance. CIRP Annals—Manufacturing Technol-
ogy, 45(1):377–380, 1996.
8. S. A. Koubias and G. D. Papadopoulos. Modern fieldbus communication archi-
tectures for real-time industrial applications. Computers in Industry, 26:243–
252, August 1995.
9. K. C. Lee and S. Lee. Performance evaluation of switched Ethernet for networked
control systems. In Proceedings of IEEE Conference of the Industrial Electronics
Society, volume 4, pages 3170–3175, November 2002.

10. F.-L. Lian, J. R. Moyne, and D. M. Tilbury. Performance evaluation of control
networks: Ethernet, ControlNet, and DeviceNet. IEEE Control Systems
Magazine, 21(1):66–83, February 2001.
11. F.-L. Lian, J. R. Moyne, and D. M. Tilbury. Network design consideration for
distributed control systems. IEEE Transactions on Control Systems Technology,
10(2):297–307, March 2002.
12. F.-L. Lian, J. R. Moyne, and D. M. Tilbury. Time-delay modeling and opti-
mal controller design for networked control systems. International Journal of
Control, 76(6):591–606, April 2003.
13. J. Moyne, J. Korsakas, and D. M. Tilbury. Reconfigurable factory testbed
(RFT): A distributed testbed for reconfigurable manufacturing systems. In
Proceedings of the Japan-USA Symposium on Flexible Automation, Denver, CO,
July 2004.
14. J. Moyne, N. Najafi, D. Judd, and A. Stock. Analysis of sensor/actuator bus
interoperability standard alternatives for semiconductor manufacturing. In Sen-
sors Expo Conference Proceedings, Cleveland, OH, September 1994.
15. P. G. Otanez, J. R. Moyne, and D. M. Tilbury. Using deadbands to reduce
communication in networked control systems. In Proceedings of the American
Control Conference, pages 3015–3020, Anchorage, AK, May 2002.
16. P. G. Otanez, J. T. Parrott, J. R. Moyne, and D. M. Tilbury. The implications of
Ethernet as a control network. In Proceedings of the Global Powertrain Congress,
Ann Arbor, MI, September 2002.
17. G. Paula. Building a better fieldbus. Mechanical Engineering, pages 90–92,
June 1997.
18. J. Pinto. Fieldbus—conflicting “standards” emerge, but interoperability
is still elusive. Design Engineering, UK, October 1999. Available at
[Link]
19. R. S. Raji. Smart networks for control. IEEE Spectrum, 31(6):49–55, June 1994.
20. K. K. Ramakrishnan and H. Yang. The Ethernet capture effect: Analysis and
solution. In Proceedings of the 19th Conference on Local Computer Networks,
pages 228–240, Minneapolis, MN, October 1994.
21. A. S. Tanenbaum. Computer Networks. Prentice-Hall, Upper Saddle River, NJ,
third edition, 1996.
22. K. Tindell, A. Burns, and A. J. Wellings. Calculating controller area network
(CAN) message response times. Control Engineering Practice, 3(8):1163–1169,
August 1995.
23. E. Vonnahme, S. Ruping, and U. Ruckert. Measurements in switched Ethernet
networks used for automation systems. In Proceedings of IEEE International
Workshop on Factory Communication Systems, pages 231–238, September 2000.
24. J. D. Wheelis. Process control communications: Token bus, CSMA/CD, or token
ring? ISA Transactions, 32(2):193–198, July 1993.
