Liu Chapters 3-6 (merged slide decks)

A Reference Model for Real-Time Systems

(Chapter 3 of the Book by Liu)


• Needed for schedulability analysis
• Goal
– Abstract away from functional characteristics
– Focus on timing properties and resource requirements
Reference model components:
– Resource graph: available system resources
– Task graph: the application system
– Scheduling and resource management algorithms

Processors and Resources

• Processors: All active resources


– P1, …, Pm
• Resources: Passive resources, e.g., memory and locks,
needed in addition to the processor to make progress
– R1, …, Rs
• Example: Packet transmission
– Processor: Data link
– Resource: Valid sequence numbers
• Not always clear-cut
– I/O bus can be treated as a resource or as a processor

Temporal Parameters

• Ji ≡ Job ≡ unit of work
– ri ≡ Release time
– di ≡ Absolute deadline
– Di ≡ Relative deadline
– ei ≡ Max. execution time

[Figure: job Ji on the time axis, released at ri, executing for at most ei,
due by its absolute deadline di; the relative deadline Di is measured
from ri]

Temporal Parameters (contd.)

[Figure: the same job Ji, preempted into three chunks A, B, C with
execution times ei,A, ei,B, ei,C between its release time ri and absolute
deadline di]
– For a preemptible job, the execution time may be broken down into
more than one interval
• ei,A + ei,B + ei,C = ei

Temporal Parameters

• Ti ≡ Task ≡ set of jobs, {Ji1, Ji2, …}
– φi ≡ Phase ≡ ri1
– pi ≡ Period ≡ minimum inter-release time
– H ≡ Hyperperiod, H = lcm(p1, …, pn)
– ei ≡ Execution time
– ui ≡ Utilization = ei/pi
– Di ≡ (Relative) deadline of Ti, typically Di = pi

[Figure: a task with phase 2, period 10, and relative deadline 8; the
absolute deadline of its ith job is 10·i]

Temporal Parameters

• Example
– p1 = 3, p2 = 4, p3 = 10, then H = 60
– e1 = 1, e2 = 1, e3 = 3, then
• u1 = e1 / p1 = 0.33
• u2 = 0.25
• u3 = 0.3
– Total utilization
• {60/3 × 1 + 60/4 × 1 + 60/10 × 3} / 60 = 53/60 = 0.88

[Timeline: job releases of the three tasks over one hyperperiod, H = 60]
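The hyperperiod and utilization arithmetic above can be checked with a
short sketch (the helper names are illustrative, not from the text):

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    # H = lcm(p1, ..., pn)
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def total_utilization(tasks):
    # tasks: list of (period, execution_time) pairs; U = sum of ei/pi
    return sum(e / p for p, e in tasks)

tasks = [(3, 1), (4, 1), (10, 3)]
H = hyperperiod([p for p, _ in tasks])   # 60
U = total_utilization(tasks)             # 53/60, about 0.88
```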

Aperiodic and Sporadic Tasks

• Capture events whose release times are not known a priori


• Aperiodic tasks ≡ tasks that are not periodic and whose jobs have either
soft or no deadlines
• Sporadic tasks ≡ tasks that are not periodic and whose jobs have hard
relative deadlines
• Example
– Automobile control system: What type of tasks are these?
• Turn on the AC when the temperature rises above a specified limit
• Deploy the air-bag when there is a collision
• Pump the brake when the wheel starts to spin
• Start the windshield wipers when moisture is detected
– What would this task be?
• Open the engine valve at the correct time to inject fuel into the engine

Precedence Constraints and Precedence Graph

• Capture data and control dependencies


– For example, consider a radar system:

[Figure: a signal processor feeds track data to a tracker]

– There is a data dependence between the tracker and the signal processor
– An example of a control dependence is for an aircraft to first
accelerate to a specified speed and then adjust its wings to fly

Precedence Relation

• Ji < Jj ≡ Ji precedes Jj
– Partial order relation
• Precedence graph, G = (J, <)

[Figure: example precedence graphs over jobs labeled with their
(release, deadline] windows: a chain of periodic independent jobs, a chain
of periodic dependent jobs, and a small general graph]

Additional Inputs to the Scheduler

• Preemptivity
– Non-preemptable ≡ for example, a data frame in a token ring
• Tied to a particular resource
• Job may still be preemptable on other resources
– Cost of preemption  Context switch time
• Criticalness
– Associate weight with jobs to indicate criticalness w.r.t other jobs
• For example, flight controller > navigation > fuel minimizer
(navigation ≡ find the current position relative to the chosen course)
• The scheduler and resource access protocols then optimize the weighted
performance measures

Scheduler Performance

• Feasible schedule
– i, Ji starts at or after its release time and completes by its deadline
• Optimal algorithm
– Always produces a feasible schedule if such a schedule exists
• Performance measures
– Number of tardy jobs
• Tardiness/lateness = tci - di, where tci is the completion time of Ji

[Figure: job Ji on the time axis; lateness is measured from the absolute
deadline di to the completion time tci]

Scheduler Performance (contd.)

• Performance measures
• Maximum or average tardiness ≡ max(0, tci - di)
• Maximum or average absolute lateness ≡ tci - di
• Maximum or average response time ≡ tci - ri
• Makespan: when ∀i,j, ri = rj, makespan = max(tci - ri)
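These measures translate directly into code; a small sketch (the job
values are illustrative):

```python
def performance(jobs):
    # jobs: list of (release r, completion tc, deadline d) triples
    tardiness = [max(0, tc - d) for r, tc, d in jobs]
    lateness = [tc - d for r, tc, d in jobs]
    response = [tc - r for r, tc, d in jobs]
    return {
        "tardy_jobs": sum(1 for t in tardiness if t > 0),
        "max_tardiness": max(tardiness),
        "max_lateness": max(lateness),
        "max_response": max(response),
        # makespan assumes a common release time for all jobs
        "makespan": max(tc for r, tc, d in jobs) - min(r for r, tc, d in jobs),
    }

m = performance([(0, 7, 10), (0, 12, 10)])
# one tardy job; max tardiness 2; makespan 12
```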

Overview of Real-Time Scheduling
(Chapter 4 of Book by J. Liu)
• Two main kinds of scheduling components
– Scheduler ≡ generates a schedule at runtime
• Often more straightforward algorithms, but may not produce a feasible
schedule
– Feasibility analyzer ≡ checks if timing constraints are met

• “Feasible schedule” ≡ all timing constraints are met
– A task set T is schedulable using scheduling algorithm A if A
always produces a feasible schedule for T
– A scheduling algorithm is optimal if it always produces a feasible
schedule when one exists

Overview of Real-Time Scheduling (contd.)

• Two main classes of scheduling algorithms
– Static scheduling ≡ also called “clock driven” scheduling
– Dynamic scheduling ≡ also called “priority driven” scheduling
• Priority-driven algorithms are further divided into static (fixed)
priority and task-level dynamic priority
• Task-level dynamic priority is either job-level fixed-priority (e.g.,
EDF) or job-level dynamic priority (e.g., LST)
– “Dynamic priority” ≡ complex from an assurance point of view

Clock-Driven Scheduler

• Scheduling decisions are made at specific time instants,


which are typically chosen a priori

[Figure: a precomputed schedule of jobs A, B, C, D laid out in frames;
scheduling decisions occur only at the chosen instants t1, t2, t3, t4]

• Assumptions
– All the parameters of hard real-time jobs are fixed and known
– Schedule is computed off-line and stored for use at run-time
• “Table-driven” scheduler

Priority-Driven Scheduler

• Examples
– FIFO (First-In-First-Out), LIFO (Last-In-First-Out), SETF
(Shortest-Execution-Time-First), LETF (Longest-Execution-Time-
First)
• Priority-driven schedulers never leave the processor idle
when there is work to be done
– “Work conserving” schedulers
• Preemptive priority-driven scheduling
– Decisions are made when a job becomes ready, a processor
becomes idle, priorities of jobs change
• Choose a ready task with the highest priority
• Non-preemptive priority-driven scheduling
– Decisions are made only when the processor becomes idle

Priority-Driven Schedules

[Figure: eight jobs J1-J8 with execution times 3, 1, 2, 2, 2, 4, 4, 1 and
precedence constraints, scheduled preemptively on two processors in
priority order J1, J2, …, J8; P1 runs J1, J4, J7, J6 and P2 runs J2, J3,
J7, J5, J8]

Priority-Driven Schedules

[Figure: the same eight jobs scheduled non-preemptively in priority order
J1, J2, …, J8; P1 runs J1, J4, J5, J6 and P2 runs J2, J3, J7, J8]

Priority-Driven Schedules
Jobs J1-J8 with execution times 1, 2, 1, 1, 3, 2, 1, 3 and precedence
constraints

[Figure: three non-preemptive schedules on P1 and P2: SETF (P1: J1, J2,
J3, J6, J4; P2: J5, J8, J7), LETF (P1: J5, J1, J2, J6, J4; P2: J8, J3,
J7), and a second SETF run that chooses J8 instead of J5 at the beginning
(P2: J8, J5, J3, J4)]

Effective Release Times & Effective Deadlines

• Timing constraints are often inconsistent with precedence


constraints
– Effective timing constraints on single processor
• Effective release time, rieff = max(ri, {rjeff | Jj < Ji})
• Effective deadline, dieff = min(di, {djeff | Ji < Jj})

[Figure: a precedence graph of jobs J1-J7, each labeled with its given
(ri, di] and, below it, the computed effective (rieff, dieff]; e.g., J3's
(1,12] becomes (2,8]]

Effective Release Times & Effective Deadlines (contd.)
– We should also factor in the execution time
• Not necessary for one processor systems
• Theorem: A set of jobs J can be feasibly scheduled on a
processor iff it can be feasibly scheduled to meet all
effective release times and deadlines.
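The two recursive definitions amount to a forward pass over the precedence
graph for releases and a backward pass for deadlines. A sketch (the jobs
and edges below are hypothetical):

```python
from collections import defaultdict, deque

def effective_times(jobs, edges):
    # jobs: {name: (release, deadline)}; edges: (predecessor, successor)
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {j: 0 for j in jobs}
    for a, b in edges:
        succs[a].append(b); preds[b].append(a); indeg[b] += 1
    # topological order (Kahn's algorithm); assumes an acyclic graph
    order, q = [], deque(j for j in jobs if indeg[j] == 0)
    while q:
        j = q.popleft(); order.append(j)
        for k in succs[j]:
            indeg[k] -= 1
            if indeg[k] == 0:
                q.append(k)
    r_eff = {j: jobs[j][0] for j in jobs}
    d_eff = {j: jobs[j][1] for j in jobs}
    for j in order:              # forward pass: effective release times
        for p in preds[j]:
            r_eff[j] = max(r_eff[j], r_eff[p])
    for j in reversed(order):    # backward pass: effective deadlines
        for s in succs[j]:
            d_eff[j] = min(d_eff[j], d_eff[s])
    return r_eff, d_eff

jobs = {"J1": (2, 10), "J2": (0, 7), "J3": (1, 12)}
r_eff, d_eff = effective_times(jobs, [("J2", "J1"), ("J1", "J3")])
# r_eff["J3"] = max(1, r_eff["J1"]) = 2; d_eff["J2"] = min(7, 10) = 7
```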

Earliest-Deadline-First (EDF) Algorithm

• At any time, execute the available job with the earliest deadline
• Theorem: When preemption is allowed and jobs do not
contend for resources, the EDF algorithm can produce a
feasible schedule of a set J of jobs with arbitrary release
times and deadlines on a processor iff J has feasible
schedules.
• Proof:
– Show that any feasible schedule of J can be systematically
transformed into an EDF schedule
– EDF fails to produce a feasible schedule ⇒ no feasible schedule exists
(otherwise it could be transformed into an EDF schedule)
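A unit-step simulation makes the preemptive case concrete. The simulator
below is an illustrative sketch, not from the book; it is run on the job
set used later in the non-preemptive counterexample (J1: e=3, (0,10];
J2: e=6, (2,14]; J3: e=4, (4,12]), where preemptive EDF does succeed:

```python
def edf_schedule(jobs, horizon):
    # jobs: list of dicts with r (release), e (execution), d (deadline)
    # simulates preemptive EDF in unit steps; returns the job index run
    # at each step (None = idle) and whether all deadlines were met
    remaining = [j["e"] for j in jobs]
    timeline, feasible = [], True
    for t in range(horizon):
        ready = [i for i, j in enumerate(jobs)
                 if j["r"] <= t and remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: jobs[i]["d"])  # earliest deadline
            remaining[i] -= 1
            timeline.append(i)
            if remaining[i] == 0 and t + 1 > jobs[i]["d"]:
                feasible = False        # completed past its deadline
        else:
            timeline.append(None)
    if any(rem > 0 for rem in remaining):
        feasible = False                # some job never finished
    return timeline, feasible

jobs = [dict(r=0, e=3, d=10), dict(r=2, e=6, d=14), dict(r=4, e=4, d=12)]
timeline, ok = edf_schedule(jobs, 15)
# J3 preempts J2 at t = 4 and all three jobs meet their deadlines
```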

Earliest-Deadline-First (EDF) Algorithm

• Proof of theorem:
– Show that any feasible schedule of J can be systematically
transformed into an EDF schedule
• Consider the following feasible schedule:

[Figure: Ji runs in interval I1; after an idle period, Jk runs in interval
I2; rk < dk < di, so Jk has the earlier deadline but is scheduled later]
• Case 1: rk is after I1
– Jk cannot be scheduled in I1
– The two jobs are already scheduled on the EDF basis in these intervals

Earliest-Deadline-First (EDF) Algorithm

• Proof of theorem:
– Show that any feasible schedule of J can be systematically
transformed into an EDF schedule
• Consider the same feasible schedule:

[Figure: the portion of Jk equal in length to Ji's slot is swapped into
I1, and Ji moves into Jk's former slot in I2]
• Case 2: rk is before I1
– Adjust existing feasible schedule by swapping Ji with corresponding
portion of Jk
» New schedule is still a feasible schedule

Earliest-Deadline-First (EDF) Algorithm (contd.)

[Figure: the schedule after swapping, with Jk now scheduled ahead of Ji]
• But the new schedule may not yet be an EDF schedule: there may be an
idle interval while a job is ready ⇒ remove the idle interval by moving
the ready job forward

Earliest-Deadline-First (EDF) Algorithm (contd.)

[Figure: repeating the swap and idle-interval removal transforms the
original feasible schedule, step by step, into an EDF schedule]
Latest-Release-Time (LRT) Algorithm

• When the goal of scheduling is to meet deadlines, there is


no advantage to completing any job sooner than necessary
– Hence: May want to postpone the execution of hard real-time job
to, e.g., allow soft real-time jobs to complete earlier
– Use the Latest-Release-Time (LRT) algorithm
• Treat release times as deadlines and deadlines as release times
• Schedule the jobs backwards, starting from the latest deadline to the
current time
• Example: J1 with e = 3 and window (0,6], J2 with e = 2 and (5,8], J3
with e = 2 and (2,7]
– Scheduling backwards from time 8: J2 in [6,8), J3 in [4,6), J1 in [1,4)
– Under the same conditions as EDF, LRT is also optimal
– Because the processor idles in [0,1) while J1 is available, LRT is not
a priority-driven (work-conserving) algorithm

Least-Slack-Time-First (LST) Algorithm, or Minimum-Laxity-First (MLF)
Algorithm

• Slack or laxity of a job with deadline d at time t is
(d - t) - (remaining execution time)

• Example: J1 with e = 3 and window (0,6], J2 with e = 2 and (5,8], J3
with e = 2 and (2,7]
– At t = 2, slack of J1 = (6-2) - 1 = 3 and slack of J3 = (7-2) - 2 = 3;
the tie may be broken either way
– Under the same conditions as EDF, LST is also optimal

EDF/LST are not Always Optimal

J1 , 3 (0,10) J2 , 6 (2,14) J3 , 4 (4,12)

• Non-preemptable jobs
• EDF schedule: infeasible; J3 misses its deadline at t = 12

[Timeline: J1 in [0,3), J2 in [3,9), J3 in [9,13), past its deadline
of 12]

• Feasible schedule

[Timeline: J1 in [0,3), J3 in [4,8), J2 in [8,14); all deadlines are met]

EDF/LST are not Always Optimal (contd.)

• Multi-processor system: J1 with e = 1 and (0,1], J2 with e = 1 and
(0,2], J3 with e = 5 and (0,5]
• EDF schedule: infeasible; J3 misses its deadline at t = 5

[Timeline: P1 runs J1 in [0,1) and J3 in [1,6); P2 runs J2]

• Feasible schedule: give J3 its own processor from t = 0

[Timeline: P1 runs J1 in [0,1) and J2 in [1,2); P2 runs J3 in [0,5)]

Problem with Priority-Driven Schedulers

• Even though priority-driven schedulers are simpler than


clock-driven schedulers and can handle applications with
varying time and response constraints, they have not been
widely used in hard real-time safety-critical systems
– Timing behavior is anomalous making verification difficult

Problem with Priority-Driven Schedulers

• Anomalous timing behavior: execution time of J2 ∈ [2,6]
– Jobs: J1 with e = 5 and (0,10], J2 with e ∈ [2,6] and (0,10], J3 with
e = 8 and (4,15], J4 with e = 10 and (0,20]

[Plot: completion time of J4 versus e2; it does not vary monotonically
with e2]

• EDF for e2 = 6:

[Timeline: P1 runs J1 then J3; P2 runs J2 then J4; all deadlines are met]

Problem with Priority-Driven Schedulers (contd.)

• EDF for e2 = 2 (rule: a job cannot migrate to another processor):

[Timeline: P1 runs J1; P2 runs J2 in [0,2), J4 in [2,4), J3 in [4,12),
and J4's remaining 8 units in [12,20); J4 completes exactly at its
deadline]

Problem with Priority-Driven Schedulers (contd.)

• EDF for e2 = 3: J4 misses its deadline!

[Timeline: P1 runs J1; P2 runs J2 in [0,3), J4 in [3,4), J3 in [4,12),
and J4's remaining 9 units finish at t = 21 > 20]

Problem with Priority-Driven Schedulers (contd.)

• EDF for e2 = 4:

[Timeline: P1 runs J1 then J4; P2 runs J2 then J3; all deadlines are met]

Problem with Priority-Driven Schedulers (contd.)

• EDF for e2 = 5:

[Timeline: P1 runs J1 then J4; P2 runs J2 then J3; all deadlines are met.
Decreasing e2 from 6 to 3 makes J4 late, while e2 = 2, 4, or 5 meets all
deadlines: a scheduling anomaly]

Clock-Driven Scheduling
(Chapter 5 of the Book by J. Liu)
• Notations
– n periodic tasks T1 , …, Tn
• Ti : Each job is released pi units of time after the previous job in Ti
– Each task is given as (φi, pi, ei, Di)

[Figure: jobs Ji,1, Ji,2, … of Ti with release times ri,1, ri,2, … and
deadlines di,1, di,2, …; φi is the phase, pi the period, Di the relative
deadline, and ei the execution time]

Static Timer-Driven Scheduler

• Input
– Table of (tk, T(tk)) for 0 ≤ k ≤ N-1, where T(tk) is the job to be
scheduled at time tk and I denotes an idle slot

Time tk    Job to be scheduled
0          J1,1
1          J3,1
2          J2,1
3.8        I
4          J1,2
…          …
19.8       J1,1
Static Timer-Driven Scheduler

• Input
– Table of (tk , T(tk)), 0  k  N-1
– Scheduler pseudocode:

i = 0; k = 0;
setTimer(t0); sleep;
on timer interrupt:
    if (executing an aperiodic job) preempt the job;
    J = T(tk); i = i + 1; k = i mod N;
    setTimer(tk + floor(i/N) * H);   // next decision time; H = hyperperiod
    if (J == I) wakeup(aperiodic)
    else wakeup(J);
    sleep();

Static Timer-Driven Scheduler

• Example
– T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2)
(each task is given as (period, execution time); default phase = 0,
default relative deadline = period)

– Hyperperiod = ?

Static Timer-Driven Scheduler

• Example
– T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2)
– Hyperperiod H = 20
[Figure: the cyclic schedule over one hyperperiod H = 20: jobs
J1,1 … J1,5, J2,1 … J2,4, J3,1, and J4,1 are laid out in [0,20); leftover
idle time is used for aperiodic and sporadic jobs]

Cyclic Schedule

• A periodic static schedule is a “cyclic schedule”


– General structure of a cyclic schedule:

[Figure: the timeline is divided into major cycles of length H, each
split into frames of length f starting at t, t+f, t+2f, …; jobs within a
frame are invoked as procedure calls]

– Phase of every periodic task is a multiple of the frame size


• I.e., first job of every task is released at the beginning of some frame

Frame Constraints

• 1) f ≥ max(ei)
– f should be sufficiently large so that every job can complete within a
single frame

Frame Constraints (contd.)

• 2) f divides H
– Equivalently, ∃ Ti s.t. f divides pi
• 3) f must be sufficiently small so that between the release time and
deadline of every job there is at least one full frame

Frame Constraints (contd.)

• 3) in detail: let a job of Ti be released at t′ and let t be the start
of the frame containing t′
– If t′ > t, the first full frame after the release ends at t + 2f, so we
need t + 2f ≤ t′ + Di, i.e., 2f - (t′ - t) ≤ Di
– In [t, t+H), writing gcd(pi, f) = a with pi = b·a and f = c·a, releases
of Ti fall at multiples m·b·a and frame starts at multiples k·c·a, so
t′ - t = (m·b - k·c)·a; hence if t′ > t, the minimum of t′ - t is
a = gcd(pi, f)
– Substituting the minimum: 2f - gcd(pi, f) ≤ Di

– If t′ = t, we need only f ≤ Di; since gcd(pi, f) ≤ f, both cases are
captured by the single condition 2f - gcd(pi, f) ≤ Di

Frame Constraints

• Example:
– T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2), f = ?
• 1) f ≥ max(ei)
– f ≥ max(1, 1.8, 1, 2), i.e., f ≥ 2
• 2) f divides H, i.e., ∃ Ti s.t. f divides pi
– f divides 4, 5, or 20, i.e., f ∈ {1,2,4,5,10,20}
– Combining with condition (1), f ∈ {2,4,5,10,20}
• 3) 2f - gcd(pi, f) ≤ Di
– 2f - gcd(4,f) ≤ 4 ⇒ f ∈ {2,4}
– 2f - gcd(5,f) ≤ 5 ⇒ f ∈ {2,5}
– 2f - gcd(20,f) ≤ 20 ⇒ f ∈ {2,4,5,10,20}
• ⇒ f = 2
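The three constraints can be searched mechanically; a sketch that
reproduces this example (the function name is illustrative):

```python
from math import gcd

def valid_frame_sizes(tasks):
    # tasks: list of (period p, execution e, relative deadline D)
    # constraint 1: f >= max(e_i)
    # constraint 2: f divides some p_i (so f divides the hyperperiod)
    # constraint 3: 2f - gcd(p_i, f) <= D_i for every task
    emax = max(e for _, e, _ in tasks)
    candidates = sorted({f for p, _, _ in tasks
                         for f in range(1, p + 1) if p % f == 0})
    return [f for f in candidates
            if f >= emax
            and all(2 * f - gcd(p, f) <= D for p, _, D in tasks)]

f_ok = valid_frame_sizes([(4, 1, 4), (5, 1.8, 5), (20, 1, 20), (20, 2, 20)])
# only f = 2 satisfies all three constraints
```

The same helper reproduces the later examples as well, e.g. the task set
{(15,1,14), (20,2,26), (22,3)} yields f ∈ {3,4,5}.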

Frame Constraints

• Example:
– T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2), f = ?
– Frame size f = 2
– Note that no job crosses a frame boundary

[Figure: the cyclic schedule of the four tasks over H = 20 with frame
size f = 2; every job fits entirely within one frame]

Frame Constraints

• Example:
– T1 = (15,1,14), T2 = (20,2,26), T3 = (22,3), f = ?
• 1) f ≥ max(ei)
– f ≥ max(1,2,3), i.e., f ≥ 3
• 2) f divides H, i.e., ∃ Ti s.t. f divides pi
– f divides 15, 20, or 22, i.e., f ∈ {1,2,3,4,5,10,11,15,20,22}
– Combining with (1), f ∈ {3,4,5,10,11,15,20,22}
• 3) 2f - gcd(pi, f) ≤ Di
– 2f - gcd(15,f) ≤ 14 ⇒ f ∈ {3,4,5}
– 2f - gcd(20,f) ≤ 26 ⇒ f ∈ {3,4,5,10,11,15,20}
– 2f - gcd(22,f) ≤ 22 ⇒ f ∈ {3,4,5,10,11,22}
• ⇒ f ∈ {3,4,5}

Frame Constraints

• What to do if the frame constraints cannot be met?
– Example: T = {(4,1), (5,2,7), (20,5)} = {(p1,e1), (p2,e2,D2), (p3,e3)}
• 1) f ≥ max(ei)
– f ≥ max(1,2,5) = 5
• 2) f divides H, i.e., ∃ Ti s.t. f divides pi
– f divides 4, 5, or 20, i.e., f ∈ {1,2,4,5,10,20}
– Combining with (1), f ∈ {5,10,20}
• 3) 2f - gcd(pi, f) ≤ Di
– 2f - gcd(4,f) ≤ 4 is not satisfied by any f ∈ {5,10,20}:
» 2·5 - gcd(4,5) = 9
» 2·10 - gcd(4,10) = 18
» 2·20 - gcd(4,20) = 36
– Hence: no solution for the frame constraints
Frame Constraints

• What to do if the frame constraints cannot be met?
– Example: T = {(4,1), (5,2,7), (20,5)}: no solution for the frame
constraints
– Approach: slice jobs into smaller units, e.g., slice (20,5) into
(20,1), (20,3), and (20,1)
• 1) f ≥ max(ei)
– f ≥ max(1,2,1,3,1) = 3
• 2) f divides H, i.e., ∃ Ti s.t. f divides pi
– f divides 4, 5, or 20, i.e., f ∈ {1,2,4,5,10,20}
– Combining with (1), f ∈ {4,5,10,20}
• 3) 2f - gcd(pi, f) ≤ Di
– 2f - gcd(4,f) ≤ 4 is satisfied by f ∈ {4}
– 2f - gcd(5,f) ≤ 7 is satisfied by f ∈ {4,5}
– 2f - gcd(20,f) ≤ 20 is satisfied by f ∈ {4,5,10,20}
• ⇒ f = 4

Frame Constraints

• What to do if the frame constraints cannot be met?
– Example: T = {(4,1), (5,2,7), (20,5)}; slice (20,5) into (20,1),
(20,3), and (20,1); frame size f = 4

[Figure: the resulting cyclic schedule over H = 20; the slices J3a,1,
J3b,1, and J3c,1 are placed in different frames, and J3c,1 is released
only after J3b,1 has completed]

Cyclic Executive

• Three design decisions, which cannot be made independently:
– Frame size f
– Whether (and how) to slice jobs
– The schedule itself
• Cyclic executive: stored schedule L(k) for 0 ≤ k ≤ F-1, where F is the
number of frames in the hyperperiod

On timer interrupt (at each frame boundary):
    CurrentBlock = L(k); t = t + 1; k = t mod F;
    if (the last job did not complete, or a slice in CurrentBlock is not
        yet released) take corrective action;
    wake up the periodic task server; sleep until it completes;
    while (the aperiodic queue is not empty and idle time remains)
        wake up the first job in the queue;
        sleep until it completes;
        remove it from the queue;

Slack Stealing

• Improving the average response time of aperiodic jobs


– Fact: There is no benefit in completing a hard real-time job early
– Hence, schedule these to minimize the response time of aperiodic
jobs
• Makes the system more responsive, especially when the aperiodic
jobs are due to operator actions
– Execute aperiodic jobs ahead of periodic jobs whenever possible

“Slack Stealing”

Slack Stealing

• Let the total amount of time allocated to all the slices scheduled in
frame k be xk
– Slack at the beginning of frame k = f - xk

[Figure: original cyclic schedule vs. slack-stealing schedule over [0,22)
for three aperiodic jobs: A1 (r1 = 4, e1 = 1.5), A2 (r2 = 9.5, e2 = 0.5),
A3 (r3 = 10.5, e3 = 2.0); stealing slack lets each Ai complete earlier]

Implementing Slack Stealing

• Pre-compute the initial slack


• Use interval timer to keep track of the available slack
– Timer runs when an aperiodic job runs
– Problem
• Most operating systems do not offer interval timers of submillisecond
granularity and accuracy
• ⇒ Slack stealing is feasible only when temporal parameters are on the
order of hundreds of milliseconds or seconds

Scheduling Sporadic Jobs

• Characteristics of sporadic jobs


– Arrive at arbitrary times
– Worst-case execution time known when the job is released
– Have hard deadlines; a sporadic job is denoted S(d,e)
• It is not possible to schedule every sporadic job
– ⇒ The scheduler must check whether a newly released sporadic job can be
accepted
• Let the current frame be t and let the deadline d of S(d,e) fall in
frame l+1, so S must complete by the end of frame l; the slack available
to S is
– c(t,l) = σ(t,l) - Σ(ek - ξk) over accepted sporadic jobs Sk with
dk ≤ d, where σ(t,l) is the initial slack in frames t through l and ξk is
the amount of Sk already executed
• Accept S(d,e) only if
– e ≤ c(t,l), AND
– no sporadic job already in the system is made late
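The acceptance test can be sketched as follows; this is a simplified
illustration, where the `slack` function and the job bookkeeping are
hypothetical stand-ins for σ and ξ:

```python
def accept(slack, accepted, d, e):
    # slack(d): initial slack in the frames between now and deadline d
    # accepted: list of (deadline, remaining_execution) for admitted
    # sporadic jobs, served among themselves in EDF order
    used_before = sum(rem for dk, rem in accepted if dk <= d)
    if slack(d) - used_before < e:
        return False              # S itself cannot meet its deadline
    for dk, rem in accepted:      # S must not make a later job late
        if dk > d:
            used = e + sum(r for dj, r in accepted if dj < dk)
            if slack(dk) - used < rem:
                return False
    return True

# toy slack function: assume 25% of every frame is initially idle
s = lambda d: d * 0.25
ok1 = accept(s, [], 20, 4)           # enough slack: accepted
ok2 = accept(s, [(20, 4)], 30, 4)    # earlier job consumes the slack
```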

Scheduling Sporadic Jobs

• Example
– S1(17,4.5) is released at time 3
• Slack in frames 2 to 4: c(2,4) = 4 < 4.5 ⇒ reject S1
– S2(29,4) is released at time 5
• Slack in frames 3 to 7: c(3,7) = 5.5 ≥ 4 and no other sporadic job in
the system ⇒ accept S2
– S3(22,1.5) is released at time 11
• Slack in frames 4 to 5: c(4,5) = 2 ≥ 1.5 and S2 is not made late ⇒
accept S3
– S4(44,5.0) is released at time 14
• Slack in frames 5 to 11: c(5,11) = 7 - 2 - 0.5 = 4.5 < 5 (the initial
slack of 7 minus the remaining execution of S2 and S3) ⇒ reject S4

[Figure: the cyclic schedule with its frame boundaries, and the release
times and deadlines of S1-S4 marked]
Scheduling Sporadic Jobs

• Example where S3 is rejected because it would make S2 late
– S1(17,4.5) is released at time 3
• Slack in frames 2 to 4: c(2,4) = 4 < 4.5 ⇒ reject S1
– S2(29,4.5) is released at time 5
• Slack in frames 3 to 7: c(3,7) = 5.5 ≥ 4.5 and no other sporadic job ⇒
accept S2
– S3(22,1.5) is released at time 11
• Slack in frames 4 to 5: c(4,5) = 2 ≥ 1.5
• However, S3 cannot be accepted because accepting it would cause S2 to
miss its deadline

Scheduling Sporadic Jobs

• What to do if S(d,e) cannot be accepted?
– The application must deal with the rejection
• For example, consider a conveyor belt system in which a robot arm,
under sporadic control commands, removes defective parts
• If the removal of a defective part cannot be scheduled, then:
– Slow down the belt
– Stop the belt
– Alert an operator

• Use EDF algorithm to schedule accepted sporadic jobs

Practical Considerations

• Frame overruns
– E.g., due to hardware or software faults
– Abort the job?
• Useful for applications where late results are useless
– Application must be able to handle or recover from this
• Problem in some applications
– Aborting the job can cause inconsistent system state
– ⇒ Preempt the job as soon as it exits its critical section (if any)
– Schedule as an aperiodic job
– Another option
• Let the job complete
• Do this only if
– Result will still be useful
– It is ok for periodic jobs to be occasionally late

Practical Considerations

• Mode changes
– System is reconfigured
• Set of tasks is replaced by a new set of tasks
– Need to bring new scheduling table + code of the new tasks into
the memory
• Implement as a “mode change job”

– If the mode change has a soft deadline:
• Treat it as a high-priority aperiodic job
– If the mode change has a hard deadline:
• Treat it as a sporadic job
• This works if the application can postpone the mode change whenever the
“mode change” job is rejected
• Otherwise, schedule the job as a periodic task with
period ≤ (max. response time) / 2

Practical Considerations (contd.)

• Mode changes (contd.)
– For example, consider a car navigation system in “cruising mode”
• Change to “change lane” mode: can be deferred ⇒ aperiodic job
• Change to “stop” mode: cannot be deferred ⇒ must be permanently
scheduled as a periodic task

Practical Considerations

• General workloads and multiprocessor scheduling
– Resources include buses, I/O devices, and multiple CPUs
– More complex than uniprocessor scheduling
– However, sophisticated algorithms can be used since the computation is
done offline

Constructing Static Schedules

• Algorithm for constructing static schedules for independent preemptable
tasks
– Construct a network flow graph:
• A job vertex Ji for 1 ≤ i ≤ N
• A frame vertex j for 1 ≤ j ≤ F (F = number of frames in the major
cycle)
• Source and sink vertices
• An edge (Ji, j) with capacity f whenever Ji can be scheduled in frame j
• An edge (source, Ji) with capacity ei
• An edge (j, sink) with capacity f
– A maximum flow that saturates every (source, Ji) edge yields a
schedule; the flow on edge (Ji, j) is the time allocated to Ji in frame j
– If a job's flow is split across multiple frames, slice it into
corresponding subjobs

[Figure: example flow network in which Ji is allocated h units of time in
one frame and ei - h units in another, so Ji is sliced into two subjobs]

Constructing Static Schedules

• Example
– T1 = (4,1), T2 = (5,2,7), T3 = (20,5)
• Possible frame sizes: 2 or 4 (ignoring the f ≥ max(ei) constraint)
• Hyperperiod = 20 ⇒ 5 jobs of T1, 4 jobs of T2, and 1 job of T3
– Choosing f = 4 gives F = 5, i.e., there are 5 frames

[Figure: the resulting flow network; the maximum flow forces task T3 to
be split into 3 subtasks]
Constructing Static Schedules

• Non-independent tasks
– Precedence constraints
• “Ji precedes Jk”
• Make Ji’s release at or before Jk’s release
• And Ji’s deadline at or before Jk’s deadline
• If the slices of Ji and Jk are scheduled in the wrong order, then just
swap them
– Critical sections / nonpreemptable jobs
• NP-hard even on one processor

Chapter 6 (Book by Liu)
Priority-Driven Scheduling of Periodic Tasks
• Assumptions
– Tasks are independent
– No aperiodic or sporadic tasks
• Static assumption
– Tasks are statically allocated among processors, so a uniprocessor
algorithm can be used on each processor of a multiprocessor system

[Figure: a set of tasks statically partitioned among processors P1 … Pk]
Priority-Driven Scheduling of Periodic Tasks

• Why not have a common task queue for all the processors?
– In the worst case, the performance of dynamic multiprocessor scheduling
can be poor
– Example (m + 1 tasks on m processors)
• Ti = (1, 2ε) for 1 ≤ i ≤ m
• Tm+1 = (1 + ε, 1)
• Under EDF, Jm+1,1 has the lowest priority because it has the latest
deadline, so J1,1 … Jm,1 occupy all m processors in [0, 2ε) and Jm+1,1
then runs in [2ε, 1 + 2ε), missing its deadline of 1 + ε

[Timeline: each Pi runs Ji,1 for 2ε; Jm+1,1 then runs for 1 unit and
finishes at 1 + 2ε]
Priority-Driven Scheduling of Periodic Tasks (contd.)

• However, for the same example there is a static schedule, assuming
m·2ε < 1
– Ti = (1, 2ε) for 1 ≤ i ≤ m; Tm+1 = (1 + ε, 1)
– Dedicate one processor to Tm+1 and pack J1, J2, …, Jm back to back on
another

[Timeline: P1 runs J1,1 … Jm,1 within each period (total m·2ε < 1), while
P2 runs the jobs of Tm+1]
• Problem 1
– We do not know how to determine the worst-case and best-case
performance of dynamic systems
– Hence, most current hard real-time systems are static

Priority-Driven Scheduling of Periodic Tasks

• Fixed-priority v/s dynamic-priority algorithms


– Fixed-priority: All jobs in the task have the same priority
• Rate-monotonic: “The shorter the period, the higher the priority”
– Example: T1 = (4,1), T2 = (5,2), T3 = (20,5)
» T1 > T2 > T3

[Figure: the RM schedule of T1 = (4,1), T2 = (5,2), T3 = (20,5) over one
hyperperiod; whenever T1 (and then T2) is idle, the lower-priority tasks
run]
– T2 runs in background of T1
– T3 runs in background of T1 and T2
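Fixed-priority assignment under RM and DM reduces to a sort; a sketch
using the task sets from these slides (the function names are
illustrative):

```python
def rate_monotonic(tasks):
    # shorter period => higher priority; tasks: {name: (period, exec)}
    return sorted(tasks, key=lambda name: tasks[name][0])

def deadline_monotonic(tasks):
    # shorter relative deadline => higher priority
    # tasks: {name: (period, exec, relative_deadline)}
    return sorted(tasks, key=lambda name: tasks[name][2])

rm = rate_monotonic({"T1": (4, 1), "T2": (5, 2), "T3": (20, 5)})
dm = deadline_monotonic({"T1": (50, 25, 100), "T2": (62.5, 10, 20),
                         "T3": (125, 25, 50)})
# rm gives T1 > T2 > T3; dm gives T2 > T3 > T1
```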

Priority-Driven Scheduling of Periodic Tasks (contd.)

• Deadline-monotonic: “The shorter the relative deadline, the higher the
priority”
– Example: T1 = (50,50,25,100), T2 = (0,62.5,10,20), T3 = (0,125,25,50),
each given as (φ, p, e, D)
» T2 > T3 > T1
[Figure: the DM schedule over [0, 300]; each release of T2 runs first,
then T3, then T1]

Priority-Driven Scheduling of Periodic Tasks (contd.)

• The same example under rate-monotonic priorities: T1 > T2 > T3
– Under RM, T2 misses its deadline at 82.5
– ⇒ For arbitrary deadlines, DM outperforms RM
• RM ≡ rate-monotonic
• DM ≡ deadline-monotonic

Priority-Driven Scheduling of Periodic Tasks

• If the relative deadline of every task is proportional to its period,
i.e., Di = δ·pi for all i and some constant δ, then RM ≡ DM
• Algorithms that do not take the urgencies of jobs into account in
priority assignment usually perform poorly
– E.g., FIFO/LIFO

Schedulable utilization

• Schedulable utilization of a scheduling algorithm
– A scheduling algorithm can feasibly schedule any set of periodic tasks
on a processor if the total utilization of the tasks is at most the
schedulable utilization of the algorithm
• The higher the schedulable utilization, the better the algorithm
• Schedulable utilization ≤ 1.0
• Optimal dynamic-priority algorithms outperform fixed-priority
algorithms
• However, fixed-priority algorithms are more predictable under overload
– Consider T1 = (2, 0.8), T2 = (5, 3.5), U = 1.1

[Figure: under EDF the misses spread across both tasks (J1,5 and J2,2
also miss their deadlines), whereas under fixed priorities only the
low-priority task T2 is late]
Schedulable utilization

• Schedulable utilization of a scheduling algorithm


– For dynamic priority algorithms, there is no easy test, short of an
exhaustive one, that allows us to determine which tasks will miss
their deadlines
– In EDF, a late job can cause other jobs to be late
• Fixed-priority algorithms can be adapted to incorporate good overrun
management strategies
• Theorem 6.1:
– A system T of independent, preemptable tasks with relative
deadlines equal to their respective periods can be feasibly
scheduled on one processor using the EDF algorithm if and only if
its total utilization ≤ 1.

Schedulable Utilization of the EDF Algorithm

• If deadlines are less than periods then U ≤ 1 is no longer a


sufficient schedulability condition
– Example: T1 = (2, 1, 1.9), T2 = (2, 1, 1.9)
• U=½+½≤1
• Not schedulable even though U = 1
• For these systems, use densities instead of utilizations
– The density of task Tk is defined as
– k = ek / min(Dk , pk )
• Theorem 6.2 (Liu): A system T of independent
preemptable tasks can be feasibly scheduled on one
processor if its density ≤ 1.
• Schedulability test for the EDF algorithm:
– Σ_{k=1..n} ek / min(Dk, pk) ≤ 1

Schedulable Utilization of the EDF Algorithm

• Theorem 6.2 (Liu): A system T of independent


preemptable tasks can be feasibly scheduled on one
processor if its density ≤ 1.
• Schedulability test for the EDF algorithm:
– Σ_{k=1..n} ek / min(Dk, pk) ≤ 1
– This is not a necessary condition
• For example, T1 = (2, 0.6, 1) and T2 = (5, 2.3)
• Density = 0.6/1 + 2.3/5 = 0.6 + 0.46 = 1.06 > 1
• However, T1 and T2 are schedulable using EDF:

[Figure: EDF schedule of T1 and T2 on [0, 10]; every job meets its
deadline, with context-switch points at t = 0.6, 2.6, 3.3, 4.6, 6.6,
7.9, and 8.6.]
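The density test of Theorem 6.2 can be sketched the same way (the (p, e, D) triple convention is assumed, with D defaulting to p):

```python
def density_schedulable(tasks):
    """Theorem 6.2: sufficient -- but, as the example shows, not
    necessary -- EDF test using densities ek / min(Dk, pk)."""
    total = 0.0
    for t in tasks:
        p, e = t[0], t[1]
        d = t[2] if len(t) > 2 else p   # D = p when no deadline given
        total += e / min(d, p)
    return total <= 1.0

# T1 = (2, 0.6, 1), T2 = (5, 2.3): density 1.06 > 1, so the test is
# inconclusive, even though the task set is EDF-schedulable.
print(density_schedulable([(2, 0.6, 1), (5, 2.3)]))   # False
```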

Designing a System to Ensure Schedulability

• Example
– Digital robot controller
• Control law: e = 8 ms, every 10 ms
– U = 0.8 ⇒ feasible
• Add “Built-In Self-Test” task: emax = 50 ms, p = as short as possible
but ≤ 1000 ms
– U = 0.8 + 50/x = 1, x = 250 ms
• Add telemetry: e = 15 ms, p = large, but d = small
– Reduce self-test to once per 1000 ms
– ⇒ U = 0.8 + 50/1000 + 15/y = 1
– ⇒ y = 100 ms = min(d, p), i.e., keep p large and set d = 100 ms
– If not acceptable, then need to redesign the system
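The two budget equations in this example reduce to simple arithmetic; a quick sketch, assuming total utilization may be driven to exactly 1.0:

```python
U_control = 8 / 10                      # control law: 8 ms every 10 ms
x = 50 / (1 - U_control)                # self-test period: 0.8 + 50/x = 1
print(round(x, 6))                      # 250.0 ms

U_selftest = 50 / 1000                  # self-test slowed to once per second
y = 15 / (1 - U_control - U_selftest)   # telemetry: 0.8 + 0.05 + 15/y = 1
print(round(y, 6))                      # 100.0 ms = min(d, p)
```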

Optimality of RM and DM

• Assume that tasks are indexed in decreasing priority order,


i.e., Ti has higher priority than Tk if i < k
– πi ≡ Priority of Ti
– Ti ≡ Subset of tasks with priority equal to or higher than πi
• Fixed priority algorithms are not optimal
– For example: T1 = (2, 1), T2 = (5, 2.5)
• U = ½ + 2.5/5 = 1, schedulable
• For J1,1 to finish on time, we must have priority(T1) > priority(T2)
• For J2,1 to finish on time, we must have priority(T2) > priority(T1)
– Because no fixed-priority algorithm allows such a change, T1 and T2
are not schedulable using ANY fixed-priority algorithm
[Figure: a feasible dynamic-priority schedule on [0, 10]; J2,1
executes in three segments A, B, and C, interleaved with the jobs
of T1, and all deadlines are met.]

Optimality of RM and DM
• Simply Periodic Tasks
– Definition: A system of periodic tasks is simply periodic if for every pair
of tasks Ti and Tk in the system where pi < pk, pk is an integer multiple of
pi .
• Theorem 6.3
– A system of simply periodic, independent, preemptable tasks whose
relative deadlines are equal to or larger than their periods is schedulable
on one processor according to the RM algorithm iff its total utilization ≤ 1.
• Theorem 6.4
– A system T of independent, preemptable periodic tasks that are in phase
and have relative deadlines equal to or less than their respective periods
can be feasibly scheduled on one processor according to the DM
algorithm whenever it can be feasibly scheduled according to any fixed-
priority algorithm.
• Corollary:
– The RM algorithm is optimal among all fixed-priority algorithms
whenever the relative deadlines of all tasks are proportional to their
periods.
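The simply-periodic premise of Theorem 6.3 is easy to check mechanically. A sketch, assuming integer periods (the function name is mine):

```python
def simply_periodic(periods):
    """True iff for every pair pi < pk, pk is an integer multiple of
    pi -- the premise of Theorem 6.3."""
    ps = sorted(periods)
    return all(ps[j] % ps[i] == 0
               for i in range(len(ps)) for j in range(i + 1, len(ps)))

# Harmonic periods: RM is then schedulable iff U <= 1 (Theorem 6.3).
print(simply_periodic([2, 4, 8, 16]))   # True
print(simply_periodic([2, 5]))          # False
```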

Schedulability Test for Fixed-Priority Tasks with
Short Response Times
• Critical Instants
– A critical instant of a task Ti is a time instant which is such that
• (1) The job in Ti released at the instant has the maximum response
time of all jobs in Ti, if the response time of every job in Ti is equal to
or less than the relative deadline Di of Ti, and
• (2) The response time of the job released at the instant is greater than
Di if the response time of some jobs in Ti exceeds Di.
– The response time of a job in Ti released at a critical instant is the
maximum (possible) response time
• Denote this by Wi.
• Theorem 6.5
– In a fixed-priority system where every job completes before the
next job in the same task is released, a critical instant of any task Ti
occurs when one of its jobs Ji,c is released at the same time as a job
in every higher-priority task, i.e., ri,c = rk,lk for some lk for every
k = 1, …, i-1.

Schedulability Test for Fixed-Priority Tasks with


Short Response Times

• Example: T1 = (2, 0.6), T2 = (2.5, 0.2), T3 = (3, 1.2)


– (a) In phase: RM: T1 < T2 < T3
• T1 jobs: [0,0.6), [2,2.6), [4,4.6), [6,6.6), [8,8.6), [10,10.6), [12,12.6)
• T2 jobs: [0.6,0.8), [2.6,2.8), [5,5.2), [7.5,7.7), [10.6,10.8)
• T3 jobs: [0.8,2), [3,4) + [4.6,4.8), [6.6,7.5) + [7.7,8), [9,10) + [10.8,11)
• W2 = max(0.8-0,2.8-2.5,5.2-5,7.7-7.5,10.8-10) = 0.8 = W2,1
• W3 = max(2-0,4.8-3,8-6,11-9) = 2 = W3,1

Schedulability Test for Fixed-Priority Tasks with
Short Response Times

• Example: (b) Not in phase: T1 = (2, 0.6), T2 = (1, 2.5, 0.2),
T3 = (3, 1.2)
• T1 jobs: [0,0.6), [2,2.6), [4,4.6), [6,6.6), [8,8.6), [10,10.6), [12,12.6)
• T2 jobs: [1,1.2), [3.5,3.7), [6.6,6.8), [8.6,8.8), [11,11.2), [13.5,13.7)
• T3 jobs: [0.6,1) + [1.2,2), [3,3.5) + [3.7,4) + [4.6,5), [6.8,8), [9,10) +
[10.6,10.8), [12.6,13.5) + [13.7,14)
• W2 = max(1.2-1, 3.7-3.5, 6.8-6.0, 8.8-8.5, 11.2-11, 13.7-13.5) = 0.8
= response time of the job released at t = 6
• W3 = max(2.0-0.0, 5.0-3.0, 8.0-6.0, 10.8-9.0, 14.0-12.0) = 2.0
= response time of the job released at t = 6

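Theorem 6.5 justifies the standard time-demand (response-time) iteration: analyze the job released at a critical instant together with all higher-priority tasks released at the same time. A sketch (the function name and task encoding are mine):

```python
from math import ceil

def max_response_time(i, tasks):
    """Worst-case response time Wi of task i (0-indexed) in a
    fixed-priority system, assuming a critical-instant release and
    Wi <= pi.  tasks: (period, execution) pairs, highest priority
    first.  Iterates w = ei + sum(ceil(w/pk) * ek) to a fixed point."""
    e_i = tasks[i][1]
    w = e_i
    while True:
        demand = e_i + sum(ceil(w / p) * e for p, e in tasks[:i])
        if demand <= w:
            return w
        w = demand

# The in-phase example under RM: T1 = (2, 0.6), T2 = (2.5, 0.2), T3 = (3, 1.2)
tasks = [(2, 0.6), (2.5, 0.2), (3, 1.2)]
print(round(max_response_time(1, tasks), 9))   # 0.8 (= W2 in the example)
print(round(max_response_time(2, tasks), 9))   # 2.0 (= W3 in the example)
```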

Sufficient Schedulability Conditions for RM and


DM Algorithms

• Theorem 6.11: A system of n independent, preemptable


periodic tasks with relative deadlines equal to their
respective periods can be feasibly scheduled on a processor
according to the RM algorithm if its total utilization U is
less than or equal to URM = n(2^(1/n) - 1).
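The bound of Theorem 6.11 is easy to tabulate; a quick sketch:

```python
from math import log

def urm(n):
    """Liu & Layland bound of Theorem 6.11: n * (2**(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10):
    print(n, round(urm(n), 4))
# 1 -> 1.0, 2 -> 0.8284, 3 -> 0.7798, 10 -> 0.7177;
# as n grows, the bound decreases toward ln 2
print(round(log(2), 4))   # 0.6931
```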

Practical Factors

• Definitions
– A (ready) job Ji is blocked when it is prevented from executing by
a lower-priority job. The lower priority job executes while Ji
waits.
– A priority inversion occurs whenever a lower-priority job
executes while some higher-priority job waits
• Practical factors
– Nonpreemptability
– Self-suspension
– Context switches
– Limited priority levels
– Tick scheduling
– Varying priority in fixed-priority systems

Nonpreemptability

• Why consider nonpreemptability?


– Critical sections
• Preempting the job may not allow a higher-priority job to run earlier
– Cost factor, e.g., disk scheduling
– Packet transmission
• Schedulability analysis for a task should consider not only
all the higher-priority tasks but also the nonpreemptable
portions of lower-priority tasks
– Example: T1 = (ε, 4, 1), T2 = (ε, 5, 1.5), T3 = (9, 2), with ε > 0
small and T3 nonpreemptable

[Figure: the nonpreemptable job J3,1 occupies the processor first,
blocking J1,1 and J2,1; as a result, J2,1 misses its deadline at t = 5.]
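The blocking a job can suffer from nonpreemptable lower-priority execution is bounded by the longest such portion, since a job is blocked at most once. A sketch (this single-blocking bound is the standard one; the variable names are mine):

```python
def blocking_np(i, theta):
    """b_i: a job of task i (0-indexed, highest priority first) is
    blocked at most once, by the longest nonpreemptable portion
    theta_k of any lower-priority task."""
    return max(theta[i + 1:], default=0)

# The example above, with T3 = (9, 2) entirely nonpreemptable:
theta = [0, 0, 2]              # nonpreemptable portions of T1, T2, T3
print(blocking_np(0, theta))   # 2: T1 can be blocked for 2 time units
print(blocking_np(2, theta))   # 0: no lower-priority task exists
```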

Self-Suspension

• For example, due to invocation of an I/O operation or a remote
procedure call
[Figure: schedules of a higher-priority task Tk and a lower-priority
task Ti, with and without self-suspension of Tk]
– Task Ti is delayed by the self-suspension of the higher-priority
task Tk
– Any extra self-suspension of Tk, i.e., beyond ek, can be used to
run Ti

Context Switches

• Include the context switch time to start and complete a job


as part of the execution time of the job
– CS ≡ Context-switch time
– Change ei to
• ei + 2·CS, if no job self-suspends
• ei + 2(Ki + 1)·CS, if Ti self-suspends a max of Ki times
– For each self-suspension:
» 1 context-switch time to self-suspend
» + 1 context-switch time to resume execution
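The inflation rule above, sketched (the CS value is a made-up placeholder, not from the text):

```python
CS = 0.1   # hypothetical context-switch cost, in the same unit as e

def charged_exec_time(e, k_suspensions=0):
    """Charge context-switch overhead to the job: two switches per
    job, plus two more per self-suspension (suspend + resume)."""
    return e + 2 * (k_suspensions + 1) * CS

print(charged_exec_time(0.8))      # e + 2*CS
print(charged_exec_time(0.8, 2))   # e + 6*CS
```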

Limited Priority Levels

• IEEE 802.5 token ring ⇒ 8 priority levels
• Real-time OS ⇒ ≤ 256 priority levels
• ⇒ Ji,j may be delayed by an equal-priority job Jk,l
– Delay is largest when these jobs are released just before Ji,j
• Assuming FIFO for scheduling equal-priority jobs
– TE(i) ≡ Tasks other than Ti that have the same priority as Ti
– ⇒ Delay ≤ Σ_{Tk ∈ TE(i)} ek
– For Dk ≤ pk for all k, the time-demand function is
• wi(t) = ei + bi + Σ_{Tk ∈ TE(i)} ek + Σ_{Tk ∈ TH(i)} ⌈t/pk⌉·ek, for
0 < t ≤ min(Di, pi), where TH(i) ≡ Tasks having higher priority than Ti
– If some Dk > pk, the time-demand function of the jth job of Ti in a
level-πi busy interval is
• wi,j(t) = j·ei + bi + Σ_{Tk ∈ TE(i)} (⌊(j-1)·pi/pk⌋ + 1)·ek + Σ_{Tk ∈ TH(i)} ⌈t/pk⌉·ek
for (j-1)·pi < t ≤ wi,j(t)
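The first time-demand function above (the Dk ≤ pk case) translates directly; a sketch with my own parameter names:

```python
from math import ceil

def time_demand(t, e_i, b_i, equal_prio_execs, higher_prio_tasks):
    """w_i(t) for Dk <= pk: own execution + blocking + one job of
    every equal-priority task (FIFO) + ceil(t/pk) jobs of every
    higher-priority task (period, execution) pair."""
    return (e_i + b_i + sum(equal_prio_execs)
            + sum(ceil(t / p) * e for p, e in higher_prio_tasks))

# Ti is schedulable if w_i(t) <= t for some t in (0, min(Di, pi)].
print(round(time_demand(2.0, 1.2, 0, [], [(2, 0.6), (2.5, 0.2)]), 9))   # 2.0
```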

Limited Priority Levels

• How to map task priorities 1, 2, …, n to system priorities
Ω1, Ω2, …, Ωs?
– Ωk ∈ [1, n] for 1 ≤ k ≤ s
– Ωj < Ωk if j < k
– Tasks with priorities ≤ Ω1 are mapped to system level Ω1
– Tasks with priorities in (Ωk-1, Ωk] are mapped to system level Ωk
– Example (n = 30, s = 6):
Ω1 = 5, Ω2 = 13, Ω3 = 16, Ω4 = 17, Ω5 = 27, Ω6 = 30

Limited Priority Levels

• How to map task priorities 1, 2, …, n to system priorities
Ω1, Ω2, …, Ωs?
– Uniform mapping
• Q = ⌊n/s⌋, Ωk = kQ for 1 ≤ k ≤ s - 1, Ωs = n
• Example (n = 30, s = 6):
Ω1 = 5, Ω2 = 10, Ω3 = 15, Ω4 = 20, Ω5 = 25, Ω6 = 30

Limited Priority Levels

• How to map task priorities 1, 2, …, n to system priorities
Ω1, Ω2, …, Ωs?
– Uniform mapping
• Q = ⌊n/s⌋, Ωk = kQ for 1 ≤ k ≤ s - 1, Ωs = n
– Constant ratio mapping
• Keep (Ωi-1 + 1) / Ωi, 2 ≤ i ≤ s, as equal as possible
– ⇒ More system priority levels for higher-priority tasks
• Example (n = 30, s = 6): possible solution with ratios
0.66, 0.66, 0.7, 0.64, 0.6:
Ω1 = 1, Ω2 = 3, Ω3 = 6, Ω4 = 10, Ω5 = 17, Ω6 = 30

Limited Priority Levels

• How to map task priorities 1, 2, …, n to system priorities
Ω1, Ω2, …, Ωs?
– Uniform mapping
• Q = ⌊n/s⌋, Ωk = kQ for 1 ≤ k ≤ s - 1, Ωs = n
– Constant ratio mapping
• Keep (Ωi-1 + 1) / Ωi, 2 ≤ i ≤ s, as equal as possible
– ⇒ More system priority levels for higher-priority tasks
– Example: {1, 2, …, 10} → {Ω1, Ω2, Ω3}
• Uniform mapping: Ω1 = 3, Ω2 = 6, Ω3 = 10
• Constant ratio mapping: Ω1 = 1, Ω2 = 4, Ω3 = 10
– Ratios are equal to 0.5

• Constant ratio mapping is better than uniform mapping
– E.g., n = 100,000 task priorities can be supported by s = 256
system priority levels
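The uniform mapping can be sketched as follows (function names are mine; Q = ⌊n/s⌋ is assumed, which reproduces both slide examples):

```python
def uniform_mapping(n, s):
    """Cutoffs Omega_1..Omega_s of the uniform mapping:
    Omega_k = k * floor(n/s) for k < s, and Omega_s = n."""
    q = n // s
    return [k * q for k in range(1, s)] + [n]

def map_to_level(prio, omegas):
    """Map a task priority to the first system level whose cutoff
    covers it (priorities <= Omega_1 go to level 1, and so on)."""
    for level, cutoff in enumerate(omegas, start=1):
        if prio <= cutoff:
            return level

print(uniform_mapping(30, 6))   # [5, 10, 15, 20, 25, 30]
print(uniform_mapping(10, 3))   # [3, 6, 10]
print(map_to_level(13, uniform_mapping(30, 6)))   # 3
```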

Varying Priority in Fixed-Priority Systems

• For example, raise the priority of a job when it holds a


nonpreemptable resource
– n(i) ≡ # of segments in each job of task Ti
• Ti ⇒ Ti,1, Ti,2, …, Ti,n(i)
– ei,j, Di,j, πi,j fixed
– Canonical form: πi,j+1 < πi,j
– An interference block of subtask Ti,k is a chain Tl,x, Tl,x+1, …, Tl,y
for l ≠ i s.t.
• (1) priority(Tl,z) > priority(Ti,k) for x ≤ z ≤ y
• (2) Tl,x has no predecessor, or the priority of its predecessor <
priority(Ti,k)
• (3) Tl,y has no successor, or the priority of its successor <
priority(Ti,k)
