Network Protocols For Networked Control Systems
1 Introduction
Control systems with networked communication, called networked control systems (NCSs), provide several advantages over point-to-point wired systems, such as improved reliability through reduced wiring, simpler system integration, easier troubleshooting and maintenance, and the possibility of distributed processing. There are two types of communication
networks. Data networks are characterized by large data packets, relatively
infrequent bursty transmission, and high data rates; they generally do not
have hard real-time constraints. Control networks, in contrast, must shuttle
countless small but frequent packets among a relatively large set of nodes to
meet the time-critical requirements. The key element that distinguishes con-
trol networks from data networks is the capability to support real-time or
time-critical applications [19].
The change of communication architecture from point-to-point to common-
bus, however, introduces different forms of time delay uncertainties between
sensors, actuators, and controllers. These time delays come from the time
sharing of the communication medium as well as the extra time required for
physical signal coding and communication processing. The characteristics of
time delays may be constant, bounded, or even random, depending on the
network protocols adopted and the chosen hardware. This type of time delay
could potentially degrade a system’s performance and possibly cause system
instability.
Thus, the disadvantages of an NCS include the limited bandwidth for com-
munication and the delays that occur when sending messages over a network.
In this chapter, we discuss the sources of delay in common communication
networks used for control systems, and show how they can be computed and
analyzed.
Several factors affect the availability and utilization of the network band-
width: the sampling rates at which the various devices send information over
652 F.-L. Lian, J. R. Moyne, and D. M. Tilbury
the network, the number of elements that require synchronous operation, the
method of synchronization between requesters and providers (such as polling),
the data or message size of the information, physical factors such as network
length, and the medium access control sublayer protocol that controls the
information transmission [7]. There are three main types of medium access
control used in control networks: random access with retransmission when
collisions occur (e.g., Ethernet and most wireless mechanisms), time-division
multiplexing (such as master-slave or token-passing), and random access with
prioritization for collision avoidance (e.g., Controller Area Network (CAN)).
Within each of these three categories, there are numerous network protocols
that have been defined and used. For each type of protocol, we study the
key parameters of the corresponding network when used in a control situ-
ation, including network utilization, magnitude of the expected time delay,
and characteristics of time delays. Simulation results are presented for several
different scenarios, and the advantages and disadvantages of each network
type are summarized. The focus is on one of the most common protocols in
each category; the analysis for other protocols in the same category can be
addressed in a similar fashion.
A survey of the types of control networks used in industry shows a wide
variety of networks in use; see Table 1. The networks are classified according
to type: random access (RA) with collision detection (CD) or collision avoid-
ance (CA), or time-division multiplexed (TDM) using token-passing (TP) or
master-slave (MS).
Table 1. Worldwide most popular fieldbuses [18]. Note that the totals are more
than 100% because many companies use more than one type of bus. Wireless was
not included in the original survey, but its usage is growing quickly.
Ethernet generally uses the carrier sense multiple access (CSMA) with CD
or CA mechanisms for resolving contention on the communication medium.
There are three common flavors of Ethernet: (1) hub-based Ethernet, which
is common in office environments and is the most widely implemented form
of Ethernet, (2) switched Ethernet, which is more common in manufacturing
and control environments, and (3) wireless Ethernet.
Note that Ethernet is not a complete protocol solution but only a MAC sublayer definition, whereas ControlNet and DeviceNet are complete protocol solutions.
Following popular usage, we use the term “Ethernet” to refer to Ethernet-based
complete network solutions. These include industrial Ethernet solutions such as
Modbus/TCP, PROFINET, and EtherNet/IP.
receive the same packet simultaneously, and message collisions are possible.
Collisions are dealt with utilizing the CSMA/CD protocol as specified in the
IEEE 802.3 network standard [1, 2, 21].
This protocol operates as follows: when a node wants to transmit, it lis-
tens to the network. If the network is busy, the node waits until the network is
idle; otherwise it transmits immediately. If two or more nodes listen to the idle
network and decide to transmit simultaneously, the messages of these trans-
mitting nodes collide and the messages are corrupted. While transmitting, a
node must also listen to detect a message collision. On detecting a collision
between two or more messages, a transmitting node stops transmitting and
waits a random length of time to retry its transmission. This random time
is determined by the standard binary exponential backoff (BEB) algorithm:
the retransmission time is randomly chosen between 0 and (2^i − 1) slot times,
where i denotes the ith collision event detected by the node and one slot time
is the minimum time needed for a round-trip transmission. However, after
10 collisions, the interval is fixed at a maximum of 1023
slots. After 16 collisions, the node stops attempting to transmit and reports
failure back to the node microprocessor. Further recovery may be attempted
in higher layers [21].
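The retry rule lends itself to a direct sketch. The following Python fragment picks one BEB retransmission delay; the function name and the 51.2-μs default slot time are illustrative choices of ours, not part of any standard API:

```python
import random

def beb_backoff_time_us(collision_count, slot_time_us=51.2):
    """Pick a retransmission delay per the binary exponential backoff
    (BEB) rule: after the i-th collision, wait a number of slot times
    drawn uniformly from [0, 2**i - 1], with the interval capped at
    1023 slots after 10 collisions."""
    if collision_count >= 16:
        # After 16 collisions the node gives up and reports failure.
        raise RuntimeError("excessive collisions: transmission aborted")
    exponent = min(collision_count, 10)      # cap at 2**10 - 1 = 1023 slots
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * slot_time_us
```

For example, after the third collision the node waits between 0 and 7 slot times; the randomness of this choice is what makes the Ethernet blocking time nondeterministic.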
The Ethernet frame format is shown in Fig. 1 [21]. The total overhead is
26 (=22+4) bytes. The data packet frame size is between 46 and 1500 bytes.
There is a nonzero minimum data size requirement because the standard states
that valid frames must be at least 64 bytes long, from destination address to
checksum (72 bytes including the preamble and start-of-frame delimiter). If the data
portion of a frame is less than 46 bytes, the pad field is used to fill out
the frame to the minimum size. There are two reasons for this minimum size
limitation. First, it makes it easier to distinguish valid frames from “garbage.”
When a transceiver detects a collision, it truncates the current frame, which
means that stray bits and pieces of frames frequently appear on the cable.
Second, it prevents a node from completing the transmission of a short frame
before the first bit has reached the far end of the cable, where it may collide
with another frame. For a 10-Mbps Ethernet with a maximum length of 2500
m and four repeaters, the minimum allowed frame time or slot time is 51.2
μs, which is the time required to transmit 64 bytes at 10 Mbps [21].
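These framing rules can be checked numerically. The sketch below (the function names are ours) computes the on-wire size and serialization time of a frame, using the 10-Mbps figures above:

```python
def ethernet_frame_bytes(data_bytes):
    """Total bytes on the wire for one IEEE 802.3 frame:
    7 preamble + 1 start-of-frame delimiter + 14 header (destination,
    source, length/type) + payload + pad + 4 CRC. The payload is padded
    so that the frame from destination address to checksum is at least
    64 bytes (72 bytes including preamble and delimiter)."""
    if not 0 <= data_bytes <= 1500:
        raise ValueError("payload must be 0..1500 bytes")
    pad = max(0, 46 - data_bytes)
    return 8 + 14 + data_bytes + pad + 4

def ethernet_frame_time_us(data_bytes, rate_mbps=10.0):
    """Time to serialize one frame at the given data rate."""
    return ethernet_frame_bytes(data_bytes) * 8 / rate_mbps
```

A 46-byte payload yields a 72-byte frame, or 57.6 μs at 10 Mbps; the 51.2-μs slot time corresponds to the 64 bytes from destination address to checksum.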
waiting to be sent on that port. Switches with cut-through first read the
MAC address and then forward the packet to the destination port accord-
ing to the MAC address of the destination and the forwarding table on the
switch. On the other hand, switches with store-and-forward examine the com-
plete packet first. Using the cyclic redundancy check (CRC) code, the switch
will first verify that the frame has been correctly transmitted before forward-
ing the packet to the destination port. If there is an error, the frame will be
discarded. Store-and-forward switches are slower, but will not forward any
corrupted packets.
Although there are no message collisions on a switched Ethernet network, congestion may
occur inside the switch when one port suddenly receives a large number of
packets from the other ports. Three main queueing principles are implemented
inside the switch in this case. They are first-in-first-out (FIFO) queue, pri-
ority queue, and per-flow queue. The FIFO queue is a traditional method
that is fair and simple. However, if the network traffic is heavy, the quality of
service cannot be guaranteed. In the priority queueing scheme, the switch
examines fields in each data frame to determine its importance; packets are
thereby classified into queues of different priority levels. Queues with high
priority are processed first, followed by queues with low priority, until the
buffer is empty. With per-flow queueing, queues are assigned different levels of priority (or weights). All queues
are then processed one by one according to priority; thus, the queues with
higher priority will generally have higher performance and could potentially
block queues with lower priority.
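The strict-priority discipline described above can be sketched as a toy model of one switch output port; the class name and the three-class default are illustrative, and the per-flow weighted variant is not shown:

```python
from collections import deque

class PrioritySwitchPort:
    """Toy model of a switch output port with strict-priority queues:
    class 0 is the highest priority; a lower class is served only when
    every higher-priority queue is empty, which is how low-priority
    traffic can be blocked under heavy load."""
    def __init__(self, num_classes=3):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, traffic_class):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        for q in self.queues:        # scan from highest priority down
            if q:
                return q.popleft()
        return None                  # port idle
```

Enqueuing a high-priority packet after a low-priority one still causes the high-priority packet to be forwarded first.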
Examples of timing analysis and performance evaluation of switched Eth-
ernet can be found in [9, 23].
Wireless Ethernet, based on the IEEE 802.11 standard, can replace wired
Ethernet in a transparent way since it implements the two lowest layers of
the International Standards Organization (ISO)/Open Systems Interconnec-
tion (OSI) model. Besides the physical layer, the biggest difference between
802.11 and 802.3 is in the medium access control. Unlike wired Ethernet nodes,
wireless stations cannot “hear” a collision. A collision avoidance mechanism
is used but cannot entirely prevent collisions. Thus, after a packet has been
successfully received by its destination node, the receiver sends a short ac-
knowledgment packet (ACK) back to the original sender. If the sender does
not receive an ACK packet, it assumes that the transmission was unsuccessful
and retransmits.
The collision avoidance mechanism in 802.11 works as follows. If a network
node wants to send while the network is busy, it sets its backoff counter to a
randomly chosen value. Once the network is idle, the node waits first for an
interframe space and then for this backoff time before attempting to send. If
another node accesses the network during that time, it must wait again for
another idle interval. In this way, the node with the lowest backoff time sends
first. Certain messages (e.g., ACK) may start transmitting after a shorter
interframe space and thus have a higher priority. Collisions may still occur
because of the random nature of the backoff time; it is possible for two nodes
to have the same backoff time.
Several refinements to the protocol also exist. Nodes may reserve the net-
work either by sending a request to send (RTS) message or by breaking a
large message into many smaller messages (fragmentation); each successive
message can be sent after the smallest interframe time. If there is a single
master node on the network, the master can poll all the nodes and effectively
create a TDM contention-free network.
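A single contention round of this backoff scheme can be sketched as follows; the function and the contention-window default of 15 slots are our illustrative choices:

```python
import random

def contend(num_nodes, cw=15, rng=random):
    """One 802.11-style contention round: each backlogged node draws a
    backoff counter uniformly from [0, cw]. The node(s) with the
    smallest counter transmit first; if more than one node drew that
    value, their transmissions collide."""
    backoffs = [rng.randint(0, cw) for _ in range(num_nodes)]
    winner = min(backoffs)
    senders = [i for i, b in enumerate(backoffs) if b == winner]
    collided = len(senders) > 1
    return senders, collided
```

With a single backlogged node there is never a collision; with several, a collision occurs exactly when two or more nodes draw the same smallest backoff, which is the residual collision probability mentioned above.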
ControlNet
(Figure: ControlNet frame format; field lengths in bytes: 2, 1, 1, 0–510, 2, 1.)
When the guardband time is reached, all nodes stop transmitting, and only
the node with the lowest MAC ID, called the “moderator,” can transmit a
maintenance message, called the “moderator frame,” which accomplishes the
synchronization of all timers inside each node and the publishing of critical
link parameters such as NUT, node time, S, U, etc. If the moderator frame is
not heard for two consecutive NUTs, the node with the lowest MAC ID will
begin transmitting the moderator frame in the guardband of the third NUT.
Moreover, if a moderator node notices that another node has a lower MAC
ID than its own, it immediately cancels its moderator role.
(Figure: ControlNet implicit token rotation timing.)
(Figure: DeviceNet/CAN frame format: Bus Idle, Arbitration Field, Control Field, Data Field, CRC Field, ACK, EOF, Intermission, Bus Idle.)
4 Timing Components
The important time delays that should be considered in an NCS analysis
are the sensor-to-controller and controller-to-actuator end-to-end delays. In
an NCS, message transmission delay can be broken into two parts: device
delay and network delay. The device delay includes the time delays at the
source and destination nodes. The time delay at the source node includes the
preprocessing time, Tpre , and the waiting time, Twait . The time delay at the
destination node is only the postprocessing time, Tpost . The network time
delay includes the total transmission time of a message and the propagation
delay of the network. The total time delay can be expressed by the following
equation:

Tdelay = Tpre + Twait + Ttx + Tpost ,   (1)

where the network time delay Ttx = Tframe + Tprop is the sum of the frame
transmission time and the propagation delay.
The key components of each time delay are shown in Fig. 5 and will be dis-
cussed in the following subsections.
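Under this decomposition, the total delay can be computed directly; the symbol names follow the text, and the function itself is only a sketch:

```python
def total_delay_us(t_pre, t_wait, t_frame, t_prop, t_post):
    """End-to-end message delay in an NCS: device delay at the source
    (preprocessing + waiting), network delay (frame transmission +
    propagation), and device delay at the destination (post-
    processing). All quantities in microseconds."""
    t_tx = t_frame + t_prop          # network portion of the delay
    return t_pre + t_wait + t_tx + t_post
```

For example, with 100 μs of pre- and postprocessing each, 50 μs of waiting, a 57.6-μs frame, and 1 μs of propagation, the end-to-end delay is 308.6 μs.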
The preprocessing time at the source node is the time needed to acquire
data from the external environment and encode it into the appropriate net-
work data format. There may be one processor performing both functions, or
multiple processors; we define the total elapsed time required as the pre- or
postprocessing time. This time depends on the device software and hardware
characteristics. In many cases, it may be assumed that the preprocessing time
Fig. 5. A timing diagram showing time spent sending a message from a source node
to a destination node
the smallest time that can be recorded is 1 μs. The uniform distribution of
processing time at Device 4 is due to the fact that it has an internal sampling
time which is mismatched with the request frequency. Hence, the processing
time recorded here is the sum of the actual processing time and the waiting
time inside the device. Device 4 also provides more complex functionality and
has a longer processing time than the others.
(Fig. 6: histograms of the number of messages versus processing time (μs) for four devices; Device 3: tmin = 399 μs, tmax = 948 μs; Device 4: tmin = 1550 μs, tmax = 11005 μs.)
A key point that can be taken from the data presented in Fig. 6 is that the
device processing time can be substantial in the overall calculation of Tdelay .
In fact, this delay often dominates over network delays. Thus, in designing
NCSs, device delay and delay variability should be considered as important
factors when choosing components.
cable length used. The propagation delay is not easily characterized because
the distance between the source and destination nodes is not constant among
different transmissions. For comparison, we will assume that the propagation
times of these three network types are the same, say, Tprop = 1 μs (100 m).
Note that Tprop in DeviceNet is generally less than one bit time because De-
viceNet is a bit-synchronized network. Hence, the maximum cable length is
used to guarantee the bit synchronization among nodes.
The frame time, Tf rame , depends on the size of the data, the overhead,
any padding, and the bit time. Let Ndata be the size of the data in terms of
bytes, Novhd be the number of bytes used as overhead, Npad be the number
of bytes used to pad the remaining part of the frame to meet the minimum
frame size requirement, and Nstuff be the number of bytes used in a stuffing
mechanism (on some protocols). The frame time can then be expressed by the
following equation:
Tframe = (Ndata + Novhd + Npad + Nstuff) × Tbit ,   (2)

where Tbit is the time needed to transmit one bit (the inverse of the network
data rate). The values Ndata, Novhd, Npad, and Nstuff can be explicitly described for the
Ethernet, ControlNet, and DeviceNet protocols; see [10].
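The frame-time relation can be evaluated directly; in the sketch below the function name is ours, and the example byte counts correspond to the Ethernet figures given earlier:

```python
def t_frame_us(n_data, n_ovhd, n_pad=0, n_stuff=0, rate_mbps=10.0):
    """Frame time T_frame = (N_data + N_ovhd + N_pad + N_stuff) * T_bit,
    with T_bit the time to transmit one bit. Byte counts in; data rate
    in Mbit/s; result in microseconds."""
    total_bits = 8 * (n_data + n_ovhd + n_pad + n_stuff)
    return total_bits / rate_mbps    # total_bits * T_bit, T_bit = 1/rate

# Example: an 8-byte Ethernet payload needs 38 pad bytes in addition to
# the 26 overhead bytes, i.e. (8 + 26 + 38) * 8 bits at 10 Mbps.
```

For DeviceNet, Nstuff depends on the actual bit pattern transmitted, so a worst-case value is normally used in delay bounds.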
A message may spend time waiting in the queue at the sender’s buffer and
could be blocked from transmitting by other messages on the network. De-
pending on the amount of data the source node must send and the traffic on
the network, the waiting time may be significant. The main factors affect-
ing waiting time are network protocol, message connection type, and network
traffic load. For example, consider the strobe message connection in Fig. 7. If
Slave 1 is sending a message, the other 8 devices must wait until the network
medium is free. In a CAN-based DeviceNet network, it can be expected that
Slave 9 will encounter the most waiting time because it has a lower priority
on this priority-based network. However, in any network, there will be a non-
trivial waiting time after a strobe, depending on the number of devices that
will respond to the strobe.
The blocking time, which is the time a message must wait once a node is
ready to send it, depends on the network protocol and is a major factor in the
determinism and performance of a control network. It includes waiting time
while other nodes are sending messages and the time needed to resend the
message if a collision occurs.
The bit-stuffing mechanism in DeviceNet is as follows: whenever five consecutive
bits of the same polarity have been transmitted, a bit of the opposite polarity
is inserted. Ethernet and ControlNet use Manchester biphase encoding and,
therefore, do not require bit stuffing.
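The stuffing overhead for a given bit pattern can be counted with a short routine; this is a sketch, assuming (as in CAN) that an inserted stuff bit itself participates in subsequent run counting:

```python
def count_stuff_bits(bits):
    """Count the stuff bits a CAN-style transmitter would insert:
    whenever five consecutive bits of equal polarity have been sent,
    one bit of opposite polarity is inserted, and that inserted bit
    starts a new run."""
    stuffed = 0
    run_bit, run_len = None, 0
    for b in bits:
        if b == run_bit:
            run_len += 1
        else:
            run_bit, run_len = b, 1
        if run_len == 5:
            stuffed += 1
            run_bit, run_len = 1 - b, 1   # inserted complement bit
    return stuffed
```

Ten consecutive identical bits, for instance, trigger two stuff bits, while alternating bits trigger none.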
We first consider the blocking time for Ethernet, which includes time taken by
collisions with other messages and the subsequent time waiting to be retrans-
mitted. The BEB algorithm described in Section 2.1 indicates a probabilistic
waiting time. An exact analysis of expected blocking time delay for Ether-
net is very difficult [10]. At a high level, the expected blocking time can be
described by the following equation:
E{Tblock} = Σ_{k=1}^{16} E{Tk} + Tresid ,   (3)
where Tresid denotes the residual time until the network is idle, and E{Tk }
is the expected time of the kth collision. E{Tk } depends on the number of
backlogged and unbacklogged nodes as well as the message arrival rate at each
node. For the 16th collision, the node discards this message and reports an
error message to the higher level processing units [21]. It can be seen that
Tblock is not deterministic and may be unbounded due to the discarding of
messages.
For ControlNet, the blocking time of a node waiting for the token can be
expressed as

Tblock = Tresid + Σ_{i∈Nnoqueue} Ttoken + Σ_{j∈Nqueue} [min(Ttx^(j,nj), Tnode) + Ttoken] + Tguard ,   (4)

where Tresid is the residual time needed by the current node to finish transmitting, Nnoqueue and Nqueue denote the sets of nodes without messages and
with messages in the queues, respectively, and Tguard is the time spent on
the guardband period, as defined earlier. For example, if node 10 is wait-
ing for the token, node 4 is holding the token and sending messages, and
nodes 6, 7, and 8 have messages in their queues, then Nnoqueue = {5, 9} and
Nqueue = {4, 6, 7, 8}. Let nj denote the number of messages queued in the
jth node and let Tnode be the maximum possible time (i.e., token holding
time) assigned to each node to fully utilize the network channel. For example,
in ControlNet Tnode = 827.2 μs, which is a function of the maximum data
size, overhead frame size, and other network parameters. Ttoken is the token
passing time, which depends on the time needed to transmit a token and the
propagation time from node i − 1 to node i. ControlNet uses an implicit to-
ken, and Ttoken is simply the sum of Tf rame with zero data size and Tprop . If
a new message is queued for sending at a node while that node is holding the
token, then Tblock = Ttx^(j,nj), where j is the node number. In the worst case,
if there are N master nodes on the bus and each one has multiple messages
to send, which means that each node uses the maximum token holding time,
then Tblock = Σ_{i∈Nnode\{j}} min(Ttx^(i,ni), Tnode), where the min function is used
because, even if it has more messages to send, a node cannot hold the token
longer than Tnode (i.e., Ttx^(j,nj) ≤ Tnode). ControlNet is a deterministic network
because the maximum time delay is bounded and can be characterized by (4).
If the periods of each node and message are known, we can explicitly describe
the sets Nnoqueue and Nqueue and nj . Hence, Tblock in (4) can be determined
explicitly.
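This worst-case bound can be computed directly. In the following sketch the token-passing and guardband terms, the function name, and the default values are our own simplifications; Tnode = 827.2 μs is the ControlNet value quoted above:

```python
def controlnet_worst_block_us(ttx_by_node, t_node=827.2, t_token=25.0,
                              t_guard=0.0):
    """Worst-case blocking seen by one node on a token bus: every other
    node holds the token for at most min(its queued transmission time,
    t_node); token-passing time and the guardband are added on top.
    ttx_by_node maps each other node to its queued transmission time
    Ttx^(i, n_i) in microseconds."""
    hold = sum(min(ttx, t_node) for ttx in ttx_by_node.values())
    passing = t_token * (len(ttx_by_node) + 1)  # token visits each node
    return hold + passing + t_guard
```

A node with 1000 μs of queued traffic, for instance, contributes only 827.2 μs to the bound, since it cannot hold the token longer than Tnode.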
For DeviceNet, the blocking time can be computed by the iteration

Tblock^(k) = Tresid + Σ_{∀j∈Nhp} ⌈(Tblock^(k−1) + Tbit)/Tperi^(j)⌉ Ttx^(j) ,   (5)

where Nhp ⊆ Nnode denotes the set of nodes with messages of higher priority,
Tperi^(j) is the message period of the jth node, Tbit is one bit time, and
Nnode is the set of nodes on the network. However, because of the
priority-arbitration mechanism, low priority message transmission may not
be deterministic or bounded under high loading.
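Equation (5) is a fixed-point iteration of the kind used in classical CAN response-time analysis and can be solved numerically; the names and the 2-μs bit-time default (corresponding to 500 Kbps) are ours:

```python
import math

def can_blocking_us(t_resid, higher_prio, t_bit=2.0, tol=1e-6,
                    max_iter=1000):
    """Solve T^(k) = t_resid + sum over higher-priority messages j of
    ceil((T^(k-1) + t_bit) / period_j) * ttx_j by fixed-point
    iteration. higher_prio is a list of (period_us, ttx_us) pairs.
    Returns the converged blocking time, or None if the iteration does
    not converge (overloaded network)."""
    t = 0.0
    for _ in range(max_iter):
        t_next = t_resid + sum(
            math.ceil((t + t_bit) / period) * ttx
            for period, ttx in higher_prio)
        if abs(t_next - t) < tol:
            return t_next
        t = t_next
    return None
```

Convergence is guaranteed only when the higher-priority load is below saturation, which mirrors the observation that low-priority transmission may be unbounded under high loading.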
Fig. 8 shows experimental data of the waiting time of nine identical devices
on a DeviceNet network. These devices have a very low variance of processing
time. We collected 200 pairs of messages (request and response). Each symbol
denotes the mean, and the distance between the upper and lower bars equals
two standard deviations; bars that would extend beyond the observed maximum
or minimum are clipped to that limit. It can be seen in
Fig. 8 that the average waiting time is proportional to the node number (i.e.,
priority). Also, the first few devices have a larger variance than the others,
because the variance of processing time occasionally allows a lower priority
device to access the idle network before a higher priority one.
(Fig. 8: waiting time (μs) versus node number for the nine DeviceNet devices.)
5 Network Comparisons
In this section, comparisons are drawn between the three types of control net-
works using the three networks that have been discussed in detail: Ethernet,
ControlNet (token bus), and DeviceNet (CAN). The parameters for these net-
works are shown in Table 2. After summarizing the theoretical and simulation
results for these three networks, we show some experimental results for time
delays and throughput in wireless Ethernet.
One method for comparing control networks is by the time taken to transmit
data and the efficiency of the data transmission.
As shown in Fig. 9(a), the transmission time for DeviceNet is longer than
the others because of the lower data rate (500 Kbps). Ethernet requires less
transmission time on larger data sizes (>20 bytes) compared with the others.
Although ControlNet uses less time to transmit the same amount of data, it
needs some time (NUT) to gain access to the network.
The data coding efficiency (see Fig. 9(b)) is defined by the ratio of the data
size and the message size (i.e., the total number of bytes used to transmit the
data). For small data sizes, DeviceNet is the best among these three types
and Ethernet is the worst (due to its large minimum message size). For large
data sizes, ControlNet and Ethernet are better than DeviceNet (DeviceNet
is only 58% efficient due to its small maximum message size, but ControlNet
and Ethernet are near 98% efficient). For control systems, the data size is
generally small. Therefore, the above analysis suggests that DeviceNet may
be preferable in spite of the slow data rate. Before making that decision,
however, the average and total time delay and the throughput of the network
must be investigated.
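The efficiency figures quoted above can be reproduced from the byte counts; this is a sketch, and the function and parameter names are ours:

```python
def coding_efficiency(n_data, n_ovhd, min_frame_data=0):
    """Data coding efficiency: data bytes divided by the total bytes
    used to transmit them, accounting for any padding required to reach
    a minimum frame size."""
    pad = max(0, min_frame_data - n_data)
    return n_data / (n_data + pad + n_ovhd)

# Ethernet: 26 overhead bytes, payload padded to at least 46 bytes
print(round(coding_efficiency(8, 26, min_frame_data=46), 3))     # prints 0.111
print(round(coding_efficiency(1500, 26, min_frame_data=46), 3))  # prints 0.983
```

The large minimum frame size is what makes Ethernet inefficient for the small data sizes typical of control traffic, while for large payloads its efficiency approaches 98%.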
In this section, we use a case study of an NCS to compare the three different
control networks. The system has 10 nodes, each with 8 bytes of data to send
every period. MATLAB is used to simulate the MAC sublayer protocols of
the three control networks. Network parameters such as the number of nodes,
the message periods, and message sizes can be specified in the simulation
model. In our study, these network parameters are constant. The simulation
program records the time delay history of each message and calculates network
performance statistics such as the average time delay seen by messages on the
network, the efficiency and utilization of the network, and the number of
messages that remain unsent at the end of the simulation run.
MATLAB is technical computing software developed by The MathWorks, Inc.
(a) Transmission time (b) Data coding efficiency
Fig. 9. A comparison of transmission time and data coding efficiency versus the
data size for three control networks
(a) Ethernet (b) ControlNet (c) DeviceNet
(Each panel plots time delay (μs) versus the message number of each node, period = 5000 μs, for the zero, random, and scheduled releasing policies; estimated max 2442 μs and min 222 μs are marked for DeviceNet.)
Fig. 10. Message time delay associated with three releasing policies (10-node case).
The estimated mean, maximum, and minimum values are computed from the net-
work analysis for the zero and scheduled releasing policies.
send messages (or experience network contention) at the same time, although
some contention still exists. The scheduled releasing policy makes the best use
of each individual network; the time delay of this releasing policy is only the
transmission time.
In Ethernet, shown in Fig. 10(a), the zero and random releasing policies
demonstrate its nondeterministic time delay, even though the traffic load is
not saturated. Fig. 10(b) shows that the message time delay of ControlNet is
bounded for all releasing policies; we can estimate the lower and upper bounds
based on the formulae derived in Section 4. Due to the asynchronicity between
the message period and the token rotation period, these time delays exhibit a
linear trend with respect to the message number. The simulation results for
DeviceNet, shown in Fig. 10(c), demonstrate that every node in DeviceNet has
a constant time delay which depends only on the node number. The estimated
mean time delay (1091 μs) for Ethernet in Fig. 10(a) is computed for the case
of the zero releasing policy from (3), and the variance is taken as twice the
standard deviation. The maximum and minimum time delays for ControlNet
and DeviceNet are computed from (4) and (5).
In addition to time delays, the difference between the theoretical data rate
and the practical throughput of a control network should be considered. For
example, raw data rates for 802.11 wireless networks range from 11 to 54
Mbits/sec. The actual throughput of the network, however, is lower due to
both the overhead associated with the interframe spaces, ACK, and other
protocol support transmissions, and to the actual implementation of the net-
work adapter. Although 802.11a and 802.11g have the same raw data rate,
the throughput is lower for 802.11g because its backwards compatibility with
802.11b requires that the interframe spaces be as long as they would be on the
802.11b network. Computed and measured throughputs are shown in Table 3
[5]. The experiments were conducted by continually increasing the traffic
offered to the network until a further increase resulted in no additional
throughput.
Note that this measurement does not require that the clocks on the client and
server be synchronized. Since the delays at the two nodes can be different,
it is this sum of the two delays that is plotted in Fig. 11 and tabulated in
Table 4.
Two different types of data packets were considered: User Datagram Pro-
tocol (UDP), and object linking and embedding (OLE) for Process Control
(OPC). UDP is a commonly used connectionless protocol that runs on top
of Ethernet, often utilized for broadcasting. UDP packets carry only a data
load of 50 bytes. OPC is an application-to-application communication pro-
tocol primarily utilized in manufacturing to communicate data values. OPC
requires extra overhead to support this application layer; consequently, the
OPC packets carry the maximum packet load of 512 data bytes. For compari-
son purposes, the frame times (including the overheads) are computed for the
different packets.
(a) UDP delays, 3 Mbit/sec cross-traffic (b) UDP delays, 22 Mbit/sec cross-traffic
(c) OPC delays, 3 Mbit/sec cross-traffic (d) OPC delays, 22 Mbit/sec cross-traffic
(Each panel plots the frequency of delay versus delay in ms.)
Fig. 11. Distributions of packet delays for different values of cross-traffic throughput
on a 802.11a network
Future NCS research efforts are expected to focus on controller design for
NCSs, which can differ significantly from the design of traditional central-
ized control systems. For example, controller design optimized to the delay
expected in an NCS is explored in [12], and balancing quality of service and
quality of performance (QoP) in control networks can be effected using tech-
niques such as deadbanding [15].
Another body of future NCS research will focus on the utilization of Eth-
ernet for control [16], with a special emphasis on wireless Ethernet. While
wireless Ethernet is beginning to proliferate in manufacturing diagnostics, its
acceptance as an NCS enabler has been very slow to occur due to issues of
reliability, performance, and security [13]. However, the enormous flexibility,
cost savings, and reliability benefits that could potentially be achieved with
wireless systems will continue to drive wireless NCS research, with a focus not
only on control system design, but also on higher performing, more reliable,
and more secure networks for control. It is easily conceivable that, within 10
years, wireless will be the preferred medium for NCSs.
References
1. D. Bertsekas and R. Gallager. Data Networks. Prentice-Hall, Englewood Cliffs,
NJ, second edition, 1992.
2. B. J. Casey. Implementing Ethernet in the industrial environment. In Proceed-
ings of IEEE Industry Applications Society Annual Meeting, volume 2, pages
1469–1477, Seattle, WA, October 1990.
3. ControlNet specifications, 1998.
4. DeviceNet specifications, 1997.
5. A. Duschau-Wicke. Wireless monitoring and integration of control networks
using OPC. Technical report, NSF Engineering Research Center for Recon-
figurable Manufacturing Systems, University of Michigan, 2004. Studienarbeit
report for Technische Universität Kaiserslautern.
6. J. Eidson and W. Cole. Ethernet rules closed-loop system. InTech, pages 39–42,
June 1998.
7. Y. Koren, Z. J. Pasek, A. G. Ulsoy, and U. Benchetrit. Real-time open control
architectures and system performance. CIRP Annals—Manufacturing Technol-
ogy, 45(1):377–380, 1996.
8. S. A. Koubias and G. D. Papadopoulos. Modern fieldbus communication archi-
tectures for real-time industrial applications. Computers in Industry, 26:243–
252, August 1995.
9. K. C. Lee and S. Lee. Performance evaluation of switched Ethernet for networked
control systems. In Proceedings of IEEE Conference of the Industrial Electronics
Society, volume 4, pages 3170–3175, November 2002.