UNIT 2
Clock-Driven Scheduling
5.1 NOTATIONS AND ASSUMPTIONS
The clock-driven approach to scheduling is applicable only when the system is by and large
deterministic. For this reason, we assume a restricted periodic task model. The following are
the restrictive assumptions:
• There are n periodic tasks in the system. As long as the system stays in an operation mode, n is fixed.
• The parameters of all periodic tasks are known a priori. In particular, variations in the interrelease times of jobs in any periodic task are negligibly small. In other words, for all practical purposes, each job in Ti is released pi units of time after the previous job in Ti.
• Each job Ji,k is ready for execution at its release time ri,k.
We refer to a periodic task Ti with phase φi, period pi, execution time ei, and relative deadline Di by the 4-tuple (φi, pi, ei, Di). For example, (1, 10, 3, 6) is a periodic task whose phase is 1, period is 10, execution time is 3, and relative deadline is 6.
Therefore the first job in this task is released and ready at time 1 and must be completed by
time 7; the second job is ready at 11 and must be completed by 17, and so on. Each of these
jobs executes for at most 3 units of time.
The utilization of this task is 0.3. By default, the phase of each task is 0, and its relative
deadline is equal to its period. We will omit the elements of the tuple that have their default
values. As examples, both (10, 3, 6) and (10, 3) have zero phase. Their relative deadlines are
6 and 10, respectively.
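The notation and its defaults are easy to mechanize. The following Python sketch (the class name, fields, and helper methods are ours, not from the text) encodes the 4-tuple, applies the default phase and relative deadline, and reproduces the numbers of the (1, 10, 3, 6) example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PeriodicTask:
    """A periodic task (phi_i, p_i, e_i, D_i); omitted elements take their
    default values: phase 0 and relative deadline equal to the period."""
    period: float
    execution_time: float
    relative_deadline: Optional[float] = None
    phase: float = 0.0

    def __post_init__(self):
        if self.relative_deadline is None:
            self.relative_deadline = self.period   # default D_i = p_i

    @property
    def utilization(self) -> float:
        return self.execution_time / self.period   # u_i = e_i / p_i

    def release_time(self, k: int) -> float:
        """Release time of the (k+1)st job, k = 0, 1, ..."""
        return self.phase + k * self.period

    def absolute_deadline(self, k: int) -> float:
        return self.release_time(k) + self.relative_deadline

# The task (1, 10, 3, 6): first job released at 1 and due by 7,
# second job released at 11 and due by 17; utilization 0.3.
t = PeriodicTask(period=10, execution_time=3, relative_deadline=6, phase=1)
assert (t.release_time(0), t.absolute_deadline(0)) == (1, 7)
assert (t.release_time(1), t.absolute_deadline(1)) == (11, 17)
assert t.utilization == 0.3
```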
Whenever the parameters of jobs with hard deadlines are known before the system begins to
execute, a straightforward way to ensure that they meet their deadlines is to construct a static
schedule of the jobs off-line.
This schedule specifies exactly when each job executes. During run time, the scheduler dispatches the jobs according to this schedule. Hence, as long as no job ever overruns, all deadlines are surely met.
As an example, we consider a system that contains four independent periodic tasks. They are
T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), and T4 = (20, 2). Their utilizations are 0.25, 0.36,
0.05, and 0.1, respectively, and the total utilization is 0.76. Some intervals, such as (3.8, 4),
(5, 6), and (10.8, 12), are not used by the periodic tasks. These intervals can be used to
execute aperiodic jobs.
Upon receiving a timer interrupt at tk, the scheduler sets the timer to expire at tk+1 and prepares the task T(tk) for execution. It then suspends itself, letting the task have the processor and execute. When the timer expires again, the scheduler repeats this operation.
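The following Python sketch simulates this dispatch loop; the schedule-table format, the execute stub, and the use of time.sleep to stand in for the timer interrupt are all our assumptions, not part of the text.

```python
import time

def execute(task):
    print("dispatching", task)   # stand-in for giving the task the processor

def table_driven_scheduler(entries, hyperperiod):
    """entries: list of (decision_time, task) pairs within one hyperperiod,
    sorted by time; task is None for a slot left to aperiodic jobs or idling.
    The table repeats every `hyperperiod` time units."""
    start = time.monotonic()
    k = 0
    while True:
        cycle, idx = divmod(k, len(entries))
        t_k, task = entries[idx]
        # Sleep until decision time t_k (simulating the timer interrupt).
        wake = start + cycle * hyperperiod + t_k
        time.sleep(max(0.0, wake - time.monotonic()))
        if task is not None:
            execute(task)        # the scheduler "suspends itself" here
        k += 1                   # on the next interrupt, repeat for t_{k+1}

# Example table (decision times and slots are illustrative only):
# table_driven_scheduler([(0, "T1"), (1, "T2"), (2.8, None)], hyperperiod=4)
```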
We call a periodic static schedule a cyclic schedule. Again, this approach to scheduling hard
real-time jobs is called the clock-driven or time-driven approach because each scheduling
decision is made at a specific time, independent of events.
Every frame has length f; f is the frame size. Because scheduling decisions are made only at the beginning of every frame, there is no preemption within each frame. The phase of each periodic task is a nonnegative integer multiple of the frame size.
To keep the length of the cyclic schedule as short as possible, the frame size f should be chosen so that it divides H, the length of the hyperperiod of the system. This condition is met when f divides the period pi of at least one task Ti, that is,

⌊pi/f⌋ − pi/f = 0 for at least one i.    (5.2)
The number of frames in each hyperperiod is then H/f. We let F denote this number and call a hyperperiod that begins at the beginning of the (kF + 1)st frame, for any k = 0, 1, . . . , a major cycle.
Figure 5–4 illustrates the suitable range of f for a task Ti = (pi, ei, Di). When f is in this range, there is at least one full frame between the release time and deadline of every job in the task. In this figure, t denotes the beginning of a frame (called the kth frame) in which a job in Ti is released, and t′ denotes the release time of this job.
We need to consider two cases: t′ > t and t′ = t. If t′ is later than t, as shown in this figure, we want the (k + 1)st frame to lie in the interval between the release time t′ and the deadline t′ + Di of this job. For this to be true, the (k + 1)st frame must end at or before t′ + Di, that is, t + 2f ≤ t′ + Di, or equivalently, 2f − (t′ − t) ≤ Di.
Because the difference t′ − t is at least equal to the greatest common divisor gcd(pi, f) of pi and f, this condition is met if the following inequality holds:

2f − gcd(pi, f) ≤ Di for every i = 1, 2, . . . , n.    (5.3)
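These conditions can be checked mechanically. Below is a Python sketch that enumerates candidate frame sizes for the four-task example above; it also enforces the requirement that every job can complete within one frame (f ≥ ei for every task), which is how we read the reference to Eq. (5.1) later in this section. The (p, e, D) triple format and the restriction to integer frame sizes are our assumptions.

```python
from math import gcd

def possible_frame_sizes(tasks):
    """tasks: list of (p_i, e_i, D_i) triples. Returns the integer frame
    sizes f satisfying:
    (5.1) f >= e_i for every task (a job can complete within one frame),
    (5.2) f divides p_i for at least one task (so f divides H), and
    (5.3) 2f - gcd(p_i, f) <= D_i for every task."""
    H = 1
    for p, _, _ in tasks:
        H = H * p // gcd(H, p)                  # hyperperiod = lcm of periods
    return [f for f in range(1, H + 1)
            if all(f >= e for _, e, _ in tasks)                    # (5.1)
            and any(p % f == 0 for p, _, _ in tasks)               # (5.2)
            and all(2 * f - gcd(p, f) <= D for p, _, D in tasks)]  # (5.3)

# T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 1), T4 = (20, 2), with D_i = p_i:
tasks = [(4, 1, 4), (5, 1.8, 5), (20, 1, 20), (20, 2, 20)]
print(possible_frame_sizes(tasks))              # -> [2] for this task set
```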
In addition to scheduling, the cyclic executive also checks for overruns at the beginning of each frame.
If the cyclic executive finds the periodic task server still executing the last job at the time of a clock
interrupt, a frame overrun occurs.
Some slice(s) scheduled in the previous frame have executed longer than the time allocated to them by the precomputed cyclic schedule. The cyclic executive takes an appropriate action to recover from this frame overrun.
The slack (time) available in the frame is equal to f − xk at the beginning of the frame, where xk denotes the total amount of time allocated to the slices of periodic tasks scheduled in frame k. When an aperiodic job executes ahead of slices of periodic tasks, it consumes the slack in the frame.
After y units of slack time are used by aperiodic jobs, the available slack is reduced to
f − xk − y.
The cyclic executive can let aperiodic jobs execute in frame k as long as there is slack, that is,
the available slack f − xk − y in the frame is larger than 0.
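The slack accounting just described can be sketched as a small simulation. The code below is our illustration, not the book's cyclic executive of Figure 5–7; the job queues are plain FIFOs of remaining execution times, and run_for is a stand-in for the dispatcher.

```python
def run_for(queue, budget):
    """Run the job at the head of `queue` for up to `budget` time units
    (simulated); return the time actually consumed."""
    t = min(queue[0], budget)
    queue[0] -= t
    if queue[0] == 0:
        queue.pop(0)
    return t

def slack_stealing_frame(f, slices, aperiodic):
    """One frame of a slack-stealing cyclic executive. `slices` holds the
    remaining times of the periodic job slices scheduled in this frame,
    `aperiodic` the remaining times of queued aperiodic jobs."""
    x_k = sum(slices)          # time allocated to periodic slices in frame k
    y = 0.0                    # slack already consumed by aperiodic jobs
    while slices or (aperiodic and f - x_k - y > 0):
        if aperiodic and f - x_k - y > 0:
            # Slack remains: let aperiodic work run ahead of the slices.
            y += run_for(aperiodic, f - x_k - y)
        else:
            run_for(slices, slices[0])    # execute the next periodic slice

# A frame of size 4 with 3 units of periodic slices and a 1.5-unit
# aperiodic job: 1 unit of slack is stolen, 0.5 unit is left unfinished.
slices, aperiodic = [2.0, 1.0], [1.5]
slack_stealing_frame(4.0, slices, aperiodic)
print(slices, aperiodic)                  # -> [] [0.5]
```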
Figure 5–8 gives an illustrative example. Figure 5–8(a) shows the first major cycle in the
cyclic schedule of the periodic tasks.
Figure 5–8(b) shows three aperiodic jobs A1, A2, and A3. Their release times are immediately before 4, 9.5, and 10.5, and their execution times are 1.5, 0.5, and 2, respectively.
Figure 5–8(c) shows when the aperiodic jobs execute if we use the cyclic executive shown in
Figure 5–7, which schedules aperiodic jobs after the slices of periodic tasks in each frame are
completed. The execution of A1 starts at time 7. It does not complete at time 8 when the
frame ends and is, therefore, preempted. It is resumed at time 10 after both slices in the next
frame complete. Consequently, its response time is 6.5.
A2 executes after A1 completes and has a response time equal to 1.5. Similarly, A3 follows A2
and is preempted once and completes at the end of the following frame. The response time of
A3 is 5.5. The average response time of these three jobs is 4.5.
Figure 5–8(d) shows what happens if the cyclic executive does slack stealing. At time 4, the
cyclic executive finds A1 in the aperiodic job queue, and there is 1 unit of slack. Hence it lets
A1 execute. At time 5, there is no more slack. It preempts A1 and lets the periodic task server
execute the job slices scheduled in the frame.
At the beginning of the next frame, the available slack is 2. It resumes A1, which completes at time 8.5. At that time the aperiodic job queue is empty, so the first slice in the current block is executed. A2, released immediately before 9.5, executes when that slice completes. When A2 completes at time 10, the cyclic executive finds the aperiodic job queue empty and lets the periodic task server execute the next job slice in the current block. At time 11, it finds A3 and lets the job execute during the last unit of time in the frame, as well as in the beginning of the next frame. The job completes by time 13. According to this schedule, the response times of the jobs are 4.5, 0.5, and 2.5, with an average of 2.5.
To express average response time in terms of these parameters, let us consider a system in
which there are na aperiodic tasks. The jobs in each aperiodic task have the same interarrival-
time and execution time distributions and the same response-time requirement. Suppose that
the average rate of arrival of aperiodic jobs in the ith aperiodic task is λi jobs per unit of time.
The sum λ of λi over all i = 1, 2, . . . , na is the total number of aperiodic job arrivals per unit of time. The mean and the mean-square values of the execution times of jobs in the ith aperiodic task are E[βi] and E[βi²], respectively.
Let ui denote the average utilization of the ith task; it is the average fraction of processor time required by all the jobs in the task. ui is equal to λi E[βi]. We call the sum UA of ui over all aperiodic tasks the total average utilization of aperiodic tasks; it is the average fraction of processor time required by all the aperiodic tasks in the system.
Let U be the total utilization of all the periodic tasks. 1−U is the fraction of time that is
available for the execution of aperiodic jobs. We call it the aperiodic (processor) bandwidth.
If the total average utilization of the aperiodic jobs is larger than or equal to the aperiodic
bandwidth of the system (i.e., UA ≥ 1 − U), the length of the aperiodic job queue and the
average response time will grow without bound. Hence we consider here only the case where
UA < 1 − U.
When the jobs in all aperiodic tasks are scheduled on the FIFO basis, we can estimate the average response time W (also known as waiting time) of any aperiodic job by the following expression:

W = Σ_{i=1}^{na} (λi/λ) E[βi]/(1 − U) + λE[β²] / (2(1 − U)² [1 − UA/(1 − U)])

where E[β²] = Σ_{i=1}^{na} (λi/λ) E[βi²] is the mean square execution time over all aperiodic jobs.
Figure 5–9 shows the behavior of the average queueing time. The average queueing time is
inversely proportional to the square of the aperiodic bandwidth. It remains small for a large
range of UA but increases rapidly and approaches infinity when UA approaches (1 − U).
We can improve the average response time of some aperiodic tasks at the expense of some
others by prioritizing the aperiodic tasks. The jobs in the queue for the ith aperiodic task are
executed only when the queues of all the higher priority aperiodic tasks are empty.
We let UH denote the sum Σ_{k=1}^{i−1} uk; it is the average total utilization of all aperiodic tasks with priority higher than the ith aperiodic task. The average response time Wi of a job in the ith aperiodic task is given by

Wi = E[βi]/(1 − U) + λE[β²] / (2(1 − U)² [1 − UH/(1 − U)] [1 − (UH + ui)/(1 − U)])
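The two expressions above (reconstructed here as M/G/1-style formulas for a server that devotes the bandwidth 1 − U to aperiodic work; their exact form is our assumption) translate directly into code:

```python
def fifo_response_time(lams, Eb, Eb2, U):
    """Average response time W of aperiodic jobs scheduled FIFO.
    lams[i], Eb[i], Eb2[i]: arrival rate, mean and mean-square execution
    time of the ith aperiodic task; U: total utilization of periodic tasks."""
    lam = sum(lams)
    UA = sum(l * b for l, b in zip(lams, Eb))   # total average utilization
    assert UA < 1 - U, "queue grows without bound when UA >= 1 - U"
    service = sum((l / lam) * b for l, b in zip(lams, Eb)) / (1 - U)
    W0 = sum(l * b2 for l, b2 in zip(lams, Eb2)) / 2   # lam * E[beta^2] / 2
    return service + W0 / ((1 - U) ** 2 * (1 - UA / (1 - U)))

def priority_response_time(i, lams, Eb, Eb2, U):
    """Average response time W_i of the ith aperiodic task, with tasks
    indexed from 0 in decreasing order of priority."""
    u = [l * b for l, b in zip(lams, Eb)]
    UH = sum(u[:i])                              # higher-priority load
    W0 = sum(l * b2 for l, b2 in zip(lams, Eb2)) / 2
    return Eb[i] / (1 - U) + W0 / ((1 - U) ** 2
                                   * (1 - UH / (1 - U))
                                   * (1 - (UH + u[i]) / (1 - U)))
```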
If according to the existing schedule, there is a sufficient amount of time in the frames before
its deadline to complete the newly released sporadic job without causing any job in the system
to complete too late, the scheduler accepts and schedules the job. Otherwise, the scheduler
rejects the new sporadic job.
Suppose that the deadline d of S is in frame l + 1 and l ≥ t. Clearly, the job must be scheduled in the lth or earlier frames. The job can complete in time only if the current (total) amount of slack time σc(t, l) in frames t, t + 1, . . . , l is equal to or greater than its execution time e. Therefore, the scheduler should reject S if e > σc(t, l).
As we will see shortly, the scheduler may let a new sporadic job execute ahead of some
previously accepted sporadic jobs. Therefore, the scheduler also checks whether accepting the
new job may cause some sporadic jobs in the system to complete late.
The scheduler accepts the new job S(d, e) only if e ≤ σc(t, l) and no sporadic job in the system is adversely affected.
Whenever all the slices of periodic tasks scheduled in each frame are completed, the cyclic
executive lets the jobs in the sporadic job queue execute in the order they appear.
Figure 5–10 gives a pseudocode description of the modified cyclic executive that integrates
the scheduling of sporadic jobs with that of aperiodic and periodic jobs.
Figure 5–11 gives an example. The frame size used here is 4. The shaded boxes show where
periodic tasks are scheduled.
At time 5, S2(29, 4) is released. Frames 3 through 7 end before its deadline. During the
acceptance test at 8, the scheduler finds that the total amount of slack in these frames is 5.5.
Hence, it accepts S2. The first part of S2 with execution time 2 executes in the current frame.
At time 11, S3(22, 1.5) is released. At time 12, the scheduler finds 2 units of slack time in
frames 4 and 5, where S3 can be scheduled. Moreover, there still is enough slack to complete
S2 even though S3 executes ahead of S2. Consequently, the scheduler accepts S3. This job
executes in frame 4.
Suppose that at time 14, S4(44, 5) is released. At time 16 when the acceptance test is done,
the scheduler finds only 4.5 units of time available in frames before the deadline of S4, after it
has accounted for the slack time that has already been committed to the remaining portions of
S2 and S3. Therefore, it rejects S4. When the remaining portion of S3 completes in the current
frame, S2 executes until the beginning of the next frame.
1. The scheduler first determines whether the current total amount of slack in the frames before
the deadline of job S is at least equal to the execution time e of S. If the answer is no, it
rejects S. If the answer is yes, the second step is carried out.
2. In the second step, the scheduler determines whether any sporadic job in the system will
complete late if it accepts S. If the acceptance of S will not cause any sporadic job in the
system to complete too late, it accepts S; otherwise, it rejects S.
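Written as code, the test looks roughly as follows. This is our sketch, not the pseudocode of Figure 5–10: sigma_c is assumed to be supplied (one way to compute it appears after the slack-table discussion below), and each accepted sporadic job Sk is assumed to carry the slack slack_k still available to it.

```python
def acceptance_test(d, e, t, l, sigma_c, sporadic_jobs):
    """Acceptance test for a sporadic job S(d, e) whose deadline d is in
    frame l + 1, performed at the beginning of frame t. sigma_c(t, l, d)
    returns the current total slack available to S; sporadic_jobs is a list
    of (d_k, slack_k) pairs for the jobs already accepted."""
    # Step 1: is there enough slack in frames t, ..., l to complete S?
    if e > sigma_c(t, l, d):
        return False                 # reject S
    # Step 2: S will execute ahead of accepted jobs with later deadlines
    # (EDF order); reject S if that leaves any of them short of slack.
    for d_k, slack_k in sporadic_jobs:
        if d_k > d and slack_k - e < 0:
            return False             # accepting S would make S_k late
    return True                      # accept S
```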
To do an acceptance test, the scheduler needs the current total amount of slack time σc(i, k) in frames i through k for every pair of frames i and k. We can save computation time during run time by precomputing the initial (total) amounts of slack σ(i, k), for i, k = 1, 2, . . . , F, and storing them in a slack table.
For any 0 < j < j′ and any i and k equal to 1, 2, . . . , F, the initial amount of slack time in the frames from frame i in major cycle j through frame k in major cycle j′ is given by

σ(i, F) + σ(1, k) + (j′ − j − 1) σ(1, F)
Suppose that at the beginning of the current frame t, there are ns sporadic jobs in the system. We call these jobs S1, S2, . . . , Sns. Let dk and ek denote the deadline and execution time of Sk, respectively, and ξk denote the execution time of the portion of Sk that has been completed at the beginning of the current frame.
Suppose that the deadline d of the job S(d, e) being tested is in frame l + 1. The current total amount of slack time σc(t, l) in frames t through l can be computed from the initial amount of slack time in these frames according to

σc(t, l) = σ(t, l) − Σ_{dk ≤ d} (ek − ξk)
The sum in this expression is over all sporadic jobs in the system that have deadlines equal to
or less than d. Because the slack time available in these frames must be used to execute the
remaining parts of these jobs first, only the amount leftover by them is available to S(d, e).
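With a precomputed slack table, this computation is a lookup plus a sum; the table layout below (a dict keyed by frame pairs) is our assumption:

```python
def sigma_c(t, l, d, slack_table, sporadic_jobs):
    """Current total slack sigma_c(t, l) available to a new job with
    deadline d. slack_table[(i, k)] holds the precomputed initial slack
    sigma(i, k); sporadic_jobs is a list of (d_k, e_k, xi_k) triples for
    the sporadic jobs already in the system."""
    committed = sum(e_k - xi_k                   # remaining work of S_k
                    for d_k, e_k, xi_k in sporadic_jobs
                    if d_k <= d)                 # deadline no later than d
    return slack_table[(t, l)] - committed
```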
The cyclic EDF algorithm is optimal in the following sense: As long as the given string of
sporadic jobs is schedulable by any algorithm in this class, the EDF algorithm can always find
a feasible schedule of the jobs.
However, the cyclic EDF algorithm is not optimal when compared with algorithms that
perform acceptance tests at arbitrary times. If we choose to use an interrupt-driven scheduler
which does an acceptance test upon the release of each sporadic job, we should be able to do
better.
At any scheduling decision time, without prior knowledge of when future jobs will be released and what their parameters will be, it is not always possible for the scheduler to make an optimal decision.
Therefore, it is not surprising that the cyclic EDF algorithm is not optimal in this sense when
some job in the given string of sporadic jobs must be rejected.
A transient hardware fault in the system may cause some job to execute longer than expected.
A software flaw that was undetected during debugging and testing can also cause this
problem.
There are many ways to handle a frame overrun. Which one is the most appropriate depends
on the application and the reason for the overrun.
A way to handle overruns is to simply abort the overrun job at the beginning of the next frame
and log the premature termination of the job. Such a fault can then be handled by some
recovery mechanism later when necessary. This way seems attractive for applications where
late results are no longer useful.
Another way to handle an overrun is to continue to execute the offending job. The start of the
next frame and the execution of jobs scheduled in the next frame are then delayed.
Letting a late job postpone the execution and completion of jobs scheduled after it can in turn
cause these jobs to be late. This way is appropriate only if the late result produced by the job
is nevertheless useful, and an occasional late completion of a periodic job is acceptable.
Some periodic tasks are deleted from the system because they will not execute in the new mode. When the mode change completes, the new set of periodic tasks is scheduled and executed.
A periodic task that will not execute in the new mode can be deleted, and its memory space and processor time freed, as soon as the current job in the task completes.
During mode change, the scheduler continues to use the old schedule table. Before the
periodic task server begins to execute a periodic job, however, it checks whether the
corresponding task is marked and returns immediately if the task is marked.
In this way, the schedule of the periodic tasks that execute in both modes remains unchanged during mode change, but the time allocated to the deleted task can be used to execute the mode-change job.
Once the new schedule table and code of the new tasks are in memory, the scheduler can
switch to use the new table.
This approach can be used only if the application can handle the rejection of the job.
Specifically, if the mode-change job is not schedulable and is therefore rejected, the
application can either postpone the mode change or take some alternate action.
If this mode change cannot be made in time, an acceptable alternative action is for the
bulldozer to stop completely. The time required by the controller to generate the command for
the stop action is usually much shorter, and the sporadic job to stop the bulldozer is more
likely to be acceptable.
On the other hand, the action to stop the bulldozer cannot be postponed. The scheduler must
admit and schedule the job to activate the brakes in time. In general, the mode changer in
Figure 5–12 cannot be used when a sporadic mode change cannot be rejected.
The only alternative is to schedule each sporadic mode-change job that cannot be rejected
periodically, with a period no greater than half the maximum allowed response time.
Once a feasible schedule is found and stored as a table, the static scheduler described in
Figure 5–2 can use the table in the same manner as schedule tables of periodic tasks.
By the bus being the bottleneck, we mean that if there is a feasible schedule of all the data
transfer activities on the bus, it is always possible to feasibly schedule the jobs that send and
receive the data on the respective CPUs.
To illustrate, Figure 5–13(b) shows a cyclic schedule of the data-transfer activities on the bus
and the schedules of CPUs and I/O device interfaces that produce and consume the data.
The shaded boxes on the time lines of CPUs S1 and S2 show when the CPUs execute in order to produce and consume the data that occupy the bus in the time intervals shown by shaded boxes on the time line of the bus.
Similarly, CPU D is the producer of the data transferred to an I/O device during the intervals shown as dotted boxes on the bus time line.
We can see that the schedules of the CPUs and I/O devices can be derived directly from the
schedule of the bus. Computing the schedule for the entire system is simplified to computing
the schedule of the system bus.
This example is based on the Boeing 777 Airplane Information Management System (AIMS).
The system uses a table-driven system bus protocol. The protocol controls the timing of all
data transfers. The intervals when the bus interface unit of each CPU must execute are
determined by the schedule of the system bus in a manner illustrated by this example.
The general problem of choosing a minor frame length for a given set of periodic tasks, segmenting the tasks if necessary, and scheduling the tasks so that they meet all their deadlines is NP-hard. Here, we first consider the special case where the periodic tasks contain no nonpreemptable section and then consider the case where tasks have nonpreemptable sections.
Before applying the INF algorithm on the given system of periodic tasks, we find all the possible frame sizes of the system: these frame sizes meet the constraints of Eqs. (5.2) and (5.3) but do not necessarily satisfy Eq. (5.1), the requirement that the frame be long enough for every job to complete within one frame (f ≥ ei for every task Ti).
The INF algorithm iteratively tries to find a feasible cyclic schedule of the system, trying one possible frame size at a time, in order of decreasing frame size starting from the largest. A feasible schedule thus found tells us how to decompose some tasks into subtasks if their decomposition is necessary.
If the algorithm fails to find a feasible schedule after all the possible frame sizes have been
tried, the given tasks do not have a feasible cyclic schedule that satisfies the frame size
constraints even when tasks can be decomposed into subtasks.
Network-Flow Graph:
The algorithm used during each iteration is based on the well-known network-flow
formulation of the preemptive scheduling problem. In the description of this formulation, it is more
convenient to ignore the tasks to which the jobs belong and name the jobs to be scheduled in a major
cycle of F frames J1, J2, . . . , JN .
The constraints on when the jobs can be scheduled are represented by the network-flow graph of the system. This graph contains a job vertex Ji for each of the N jobs, a frame vertex j for each of the F frames in a major cycle, a source vertex, and a sink vertex; the capacity of an edge is a nonnegative number associated with the edge. There is an edge with capacity ei from the source to each job vertex Ji, an edge with capacity f from Ji to frame vertex j whenever the job Ji can be scheduled in frame j (i.e., the frame begins no earlier than the release time of Ji and ends no later than its deadline), and an edge with capacity f from each frame vertex to the sink.
There are many algorithms for finding the maximum flows of network-flow graphs. The time complexity of straightforward ones is O((N + F)³). Its worst-case run-time complexity is pseudopolynomial in the length of the major cycle F, but on average, it can find a maximum flow in a much shorter time.
Specifically, the flow of an edge (Ji , j ) from a job vertex Ji to a frame vertex j gives the
amount of time in frame j allocated to job Ji .
The total amount of time allocated to a job Ji is represented by the total flow out of all the edges from the job vertex Ji. This amount is equal to the flow into Ji from the source, which is at most ei; when the maximum flow equals the total execution time of all the jobs, the time allocated to each Ji is exactly ei.
Since the flow of the only edge out of every frame vertex is at most f , the total amount of
time in every frame allocated to all jobs is at most f .
As an example, we again look at the tasks T1 = (4, 1), T2 = (5, 2, 7), and T3 = (20, 5). The possible
frame sizes are 4 and 2. The network-flow graph used in the first iteration is shown in Figure 5–15.
The frame size is 4. The maximum flow of this graph is 18, which is the total execution time of all
jobs in a hyperperiod. Hence the flows of edges from the job vertices to the frame vertices represent a
schedule of the tasks, in particular, the schedule shown in Figure 5–7.
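This iteration can be reproduced with an off-the-shelf max-flow routine. The sketch below uses the networkx library (our choice; the text does not prescribe an implementation) and eligible frames worked out by hand for this example with f = 4; it finds the maximum flow of 18 and returns the per-frame time allocations.

```python
import networkx as nx

def inf_iteration(jobs, F, f):
    """One iteration of INF for frame size f. jobs: list of
    (name, e, eligible_frames), where eligible_frames lists the frames that
    lie between the job's release time and deadline. Returns per-frame time
    allocations if the maximum flow equals the total execution time."""
    G = nx.DiGraph()
    for name, e, frames in jobs:
        G.add_edge("source", name, capacity=e)        # flow into J_i is e_i
        for j in frames:
            G.add_edge(name, ("frame", j), capacity=f)
    for j in range(1, F + 1):
        G.add_edge(("frame", j), "sink", capacity=f)  # at most f per frame
    value, flow = nx.maximum_flow(G, "source", "sink")
    if value < sum(e for _, e, _ in jobs):
        return None                      # no feasible schedule for this f
    return {name: {j: flow[name][("frame", j)]
                   for j in frames if flow[name][("frame", j)] > 0}
            for name, _, frames in jobs}

# T1 = (4, 1), T2 = (5, 2, 7), T3 = (20, 5); f = 4, F = 5. Eligible frames
# computed by hand from the release times and deadlines of the jobs:
jobs = ([("J1.%d" % k, 1, [k]) for k in range(1, 6)]          # T1's jobs
        + [("J2.1", 2, [1]), ("J2.2", 2, [3]),
           ("J2.3", 2, [4]), ("J2.4", 2, [5])]                # T2's jobs
        + [("J3.1", 5, [1, 2, 3, 4, 5])])                     # T3's job
print(inf_iteration(jobs, F=5, f=4))     # maximum flow 18 -> feasible
```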
The feasible schedule found by the INF algorithm may not meet some of the constraints that
are imposed on the cyclic schedule. Examples are precedence order of jobs and restrictions on
preemptions. The network-flow graph for each possible frame size is generated based on the assumption that the jobs are independent.
Hence, the jobs may be scheduled in the wrong order according to a feasible schedule found by the INF algorithm. We can always transform the schedule into one that meets the precedence constraints of the jobs by swapping the time allocations of jobs that are scheduled in the wrong order.
Advantages:
• The clock-driven approach to scheduling has many advantages. The most important one is its
conceptual simplicity.
• We can take into account complex dependencies, communication delays, and resource contentions among jobs in the choice and construction of the static schedule, making sure there will be no deadlocks and no unpredictable delays.
• When the workload is mostly periodic and the schedule is cyclic, timing constraints can be
checked and enforced at each frame boundary.
• Context switching and communication overhead can be kept low by choosing as large a frame
size as possible.
• Systems based on the clock-driven scheduling paradigm are typically time-triggered. In a
time-triggered system, interrupts in response to external events are queued and polled
periodically.
Disadvantages:
• The clock-driven approach also has many disadvantages.
• The most obvious one is that a system based on this approach is brittle: It is relatively difficult
to modify and maintain.
• The release times of all jobs must be fixed. In contrast, priority-driven algorithms do not
require fixed release times.
• In a clock-driven system, all combinations of periodic tasks that might execute at the same
time must be known a priori so a schedule for the combination can be precomputed.
• The pure clock-driven approach is not suitable for many systems that contain both hard and
soft real-time applications.