
VIET NAM NATIONAL UNIVERSITY, HO CHI MINH CITY

UNIVERSITY OF TECHNOLOGY
FACULTY OF COMPUTER SCIENCE AND ENGINEERING

OPERATING SYSTEM - CO2018

Assignment Report

Instructor(s): Bùi Xuân Giang, CSE HCMUT

Students: Phạm Duy Quý - 2312900 (Group L09)


Nguyễn Phạm Mạnh Dũng - 2310559 (Group L09)
Nguyễn Huỳnh Gia Đại - 2310624 (Group L09)
Nguyễn Võ Đức Sơn - 2312974 (Group L09)
Mai Trịnh Thiên Ân - 2310182 (Group L09)

Ho Chi Minh City, 05/2025



TABLE OF CONTENTS

1 Scheduling
  1.1 Question
  1.2 Functions implementation
    1.2.1 Requirements
    1.2.2 Implementation
  1.3 Running Tests
    1.3.1 sched
    1.3.2 sched_0
    1.3.3 sched_1

2 Memory
  2.1 Question
  2.2 Functions implementation
    2.2.1 libmem.c
    2.2.2 mm-vm.c
    2.2.3 mm.c
  2.3 Running Test

3 System Call
  3.1 Questions
  3.2 Functions implementation
  3.3 Running Test

4 Overall

5 References


LIST OF FIGURES

1 Gantt chart for sched input.
2 The Gantt chart for sched_0 input.
3 The Gantt chart for sched_1 input.
4 Visual Representation of Inter-Module Interaction


1 Scheduling
1.1 Question
Question: What are the advantages of using the scheduling algorithm described in this
assignment compared to other scheduling algorithms you have learned?
In this assignment, the selected scheduling algorithm is the Multilevel Queue (MLQ) used in the Linux kernel. This algorithm uses a separate queue for each priority level, and a process with a specific priority enters the queue with the respective priority. In comparison with the other algorithms described in the course, such as First-come-first-serve (FCFS), Shortest-job-first (SJF), Shortest-remaining-time-first (SRTF), Round-robin (RR), Priority scheduling (PS) and Multilevel Feedback Queue (MLFQ), MLQ has the advantages listed below:

• Partitioned queues with tailored scheduling:
MLQ provides prioritization based on process type, meaning that processes that are important and need to be dispatched as soon as possible, such as real-time processes, system processes, interactive processes, etc., are always dispatched first. In contrast, FCFS, SJF, SRTF and RR treat all processes identically, which cannot optimize the workload.

• Low scheduling overhead:
As the number of queues is fixed and each queue is served in Round-robin style, MLQ has minimal context-switching overhead. This is lighter than RR's frequent preemptions and simpler than the dynamic promotions and demotions of MLFQ.

• Simplicity over MLFQ:
MLQ does not require tracking each process's CPU bursts for promotions or demotions, which makes it much easier to implement and reason about, at the cost of being less adaptive.

• Workload isolation:
In FCFS or SRTF, a long job can hold up shorter ones. In MLQ, however, a lower-priority queue cannot block a higher-priority queue, so important processes such as I/O-bound or interactive processes are not delayed by less urgent ones.


1.2 Functions implementation


1.2.1 Requirements

The functions that need to be implemented in this part are listed below:

• enqueue() and dequeue() in queue.c: These functions are used to put a new PCB into the queue and to take the next 'in turn' PCB out of it for scheduling.
void enqueue(struct queue_t *q, struct pcb_t *proc) {
    /* TODO: put a new process to queue [q] */
}

struct pcb_t *dequeue(struct queue_t *q) {
    /* TODO: return a pcb whose priority is the highest
     * in the queue [q] and remember to remove it from q
     * */
    return NULL;
}

• get_proc() in sched.c: This function is used to get a process that is waiting in the ready queue for the CPU. If MLQ_SCHED is defined, mlq_ready_queue is used; otherwise, the ready_queue is used.

• put_proc() and add_proc() in sched.c: Both of them are used to put a selected process into running_list, meaning that it is running.
struct pcb_t *get_proc(void) {
    struct pcb_t *proc = NULL;
    /* TODO: get a process from [ready_queue].
     * Remember to use lock to protect the queue.
     * */
    return proc;
}

void put_proc(struct pcb_t *proc) {
    proc->ready_queue = &ready_queue;
    proc->running_list = &running_list;

    /* TODO: put running proc to running_list */

    pthread_mutex_lock(&queue_lock);
    enqueue(&run_queue, proc);
    pthread_mutex_unlock(&queue_lock);
}

void add_proc(struct pcb_t *proc) {
    proc->ready_queue = &ready_queue;
    proc->running_list = &running_list;

    /* TODO: put running proc to running_list */

    pthread_mutex_lock(&queue_lock);
    enqueue(&ready_queue, proc);
    pthread_mutex_unlock(&queue_lock);
}
#endif

1.2.2 Implementation

For the enqueue() and dequeue() functions, the implementation is shown below:


void enqueue(struct queue_t *q, struct pcb_t *proc)
{
    /* TODO: put a new process to queue [q] */
    // null condition
    if (q == NULL || proc == NULL)
        return;

    // if queue is full, do nothing
    if (q->size == MAX_QUEUE_SIZE)
    {
        return;
    }

    // as a priority queue, find the correct position to insert the process based on prio
    int i = q->size - 1;
    // for debugging: printf("Enqueuing process with prio = %d\n", proc->prio);
    while (i >= 0 && (q->proc[i]->prio > proc->prio))
    {
        q->proc[i + 1] = q->proc[i];
        i--;
    }

    // insert at the correct position
    q->proc[i + 1] = proc;
    q->size++;
}

struct pcb_t *dequeue(struct queue_t *q)
{
    /* TODO: return a pcb whose priority is the highest
     * in the queue [q] and remember to remove it from q
     * */
    // return NULL if the queue is empty or invalid
    if (q == NULL || q->size == 0)
        return NULL;

    // get the process with the highest priority
    struct pcb_t *maxPriorPcb = q->proc[0];
    // for debugging: printf("Dequeuing process with prio = %d\n", maxPriorPcb->prio);
    // remove the process from the queue by shifting the remaining entries left
    for (int i = 0; i < q->size - 1; i++)
    {
        q->proc[i] = q->proc[i + 1];
    }
    // set the last element to NULL
    q->proc[q->size - 1] = NULL;
    // decrease the size of the queue
    q->size--;
    return maxPriorPcb;
}

For the enqueue() function, since the queue must be ordered by the priority of the PCBs, the function is implemented so that a process with higher priority (smaller prio value) always gets a smaller index in the queue array than processes with lower priority. Furthermore, although struct pcb_t has two priority attributes, priority and prio, prio overrides priority while the program is running, so we use the prio attribute rather than priority for the comparison.
For the dequeue() function, because the process with the highest priority lies at index 0 (due to the enqueue() implementation), dequeue() returns that element and then removes it from the queue.
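As a quick illustration of this ordering, the sketch below uses a hypothetical helper demo_queue_order() with illustrative prio values, assuming queue.h declares queue_t, pcb_t, enqueue() and dequeue() as above:

#include "queue.h"   /* assumed header for queue_t, pcb_t, enqueue(), dequeue() */

void demo_queue_order(void)
{
    struct queue_t q = { .size = 0 };
    struct pcb_t a = { .prio = 2 }, b = { .prio = 0 }, c = { .prio = 1 };

    enqueue(&q, &a);                   /* q.proc: [a]        -> prio order 2       */
    enqueue(&q, &b);                   /* q.proc: [b, a]     -> prio order 0, 2    */
    enqueue(&q, &c);                   /* q.proc: [b, c, a]  -> prio order 0, 1, 2 */

    struct pcb_t *next = dequeue(&q);  /* returns b (prio 0); q.proc: [c, a]       */
    (void)next;
}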
#ifdef MLQ_SCHED
/*
 * Stateful design for routine calling
 * based on the priority and our MLQ policy
 * We implement stateful here using transition technique
 * State representation: prio = 0..MAX_PRIO, curr_slot = 0..(MAX_PRIO - prio)
 */
struct pcb_t *get_mlq_proc(void)
{
    struct pcb_t *proc = NULL;
    /* TODO: get a process from PRIORITY [ready_queue].
     * Remember to use lock to protect the queue.
     * */
    pthread_mutex_lock(&queue_lock);
    for (int i = 0; i < MAX_PRIO; i++)
    {
        if (!empty(&mlq_ready_queue[i]))
        {
            if (slot[i] <= 0)
            {
                slot[i] = MAX_PRIO - i;
                continue;
            }
            proc = dequeue(&mlq_ready_queue[i]);
            slot[i]--;
            break;
        }
    }
    pthread_mutex_unlock(&queue_lock);
    return proc;
}

void put_mlq_proc(struct pcb_t *proc)
{
    pthread_mutex_lock(&queue_lock);
    uint32_t prio = proc->prio;
    enqueue(&mlq_ready_queue[prio], proc);
    pthread_mutex_unlock(&queue_lock);
}

void add_mlq_proc(struct pcb_t *proc)
{
    pthread_mutex_lock(&queue_lock);
    uint32_t prio = proc->prio;
    enqueue(&mlq_ready_queue[prio], proc);
    pthread_mutex_unlock(&queue_lock);
}

struct pcb_t *get_proc(void)
{
    return get_mlq_proc();
}

void put_proc(struct pcb_t *proc)
{
    proc->ready_queue = &ready_queue;
    proc->mlq_ready_queue = mlq_ready_queue;
    proc->running_list = &running_list;

    /* TODO: put running proc to running_list */
    pthread_mutex_lock(&queue_lock);
    enqueue(&running_list, proc);
    pthread_mutex_unlock(&queue_lock);

    return put_mlq_proc(proc);
}

void add_proc(struct pcb_t *proc)
{
    proc->ready_queue = &ready_queue;
    proc->mlq_ready_queue = mlq_ready_queue;
    proc->running_list = &running_list;

    /* TODO: put running proc to running_list */

    pthread_mutex_lock(&queue_lock);
    enqueue(&running_list, proc);
    pthread_mutex_unlock(&queue_lock);

    return add_mlq_proc(proc);
}
#else
struct pcb_t *get_proc(void)
{
    struct pcb_t *proc = NULL;
    /* TODO: get a process from [ready_queue].
     * Remember to use lock to protect the queue.
     * */
    pthread_mutex_lock(&queue_lock);
    if (!empty(&ready_queue))
    {
        proc = dequeue(&ready_queue);
    }
    pthread_mutex_unlock(&queue_lock);
    return proc;
}

void put_proc(struct pcb_t *proc)
{
    proc->ready_queue = &ready_queue;
    proc->running_list = &running_list;

    /* TODO: put running proc to running_list */

    pthread_mutex_lock(&queue_lock);
    enqueue(&running_list, proc);  // Add the process to the running list
    enqueue(&run_queue, proc);
    pthread_mutex_unlock(&queue_lock);
}

void add_proc(struct pcb_t *proc)
{
    proc->ready_queue = &ready_queue;
    proc->running_list = &running_list;

    /* TODO: put running proc to running_list */

    pthread_mutex_lock(&queue_lock);
    enqueue(&running_list, proc);  // Add the process to the running list
    enqueue(&run_queue, proc);
    pthread_mutex_unlock(&queue_lock);
}
#endif


For the implementation of get_proc() and get_mlq_proc(): when MLQ scheduling is enabled, get_mlq_proc() is used. We traverse the queues in the mlq_ready_queue array from the highest-priority queue to the lowest-priority one. The first non-empty queue we encounter holds the currently highest-priority process, so we dequeue and return a process from it and decrement that queue's remaining slot count. Once a queue's slot budget is used up, it is refilled (slot[i] = MAX_PRIO - i) and the scan skips to the next queue. If MLQ scheduling is not enabled, get_proc() simply calls dequeue() on the single ready_queue (if it is not empty) to obtain the highest-priority process.
For the implementation of add_proc() and put_proc(), we add the process into the corresponding queue using the enqueue() function defined above.
All of these functions take the mutex lock while manipulating the queues to ensure processes are added and removed safely.
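To make the slot policy concrete: assuming MAX_PRIO is 140, as in the skeleton we worked with (adjust the numbers if your configuration differs), queue 0 may serve up to 140 processes consecutively before its budget is refilled, queue 1 up to 139, and the lowest-priority queue only 1. Higher-priority queues therefore receive proportionally more dispatch turns, while an exhausted budget forces the scan to move on to the next non-empty queue, so lower-priority queues still get an occasional turn.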

1.3 Running Tests


The source code is provided with three scheduling test inputs in the input/ directory.
When checking the input files, we realized that the scheduling test inputs have a different format from the other input files, so the read_config() function in os.c cannot read the process configuration properly and produces an "OUT OF MEMORY" error. To fix this, we adjusted read_config() and guarded the changes with a macro called SCHED_TEST so that the function can also read the scheduling input files. The line defining this macro is line 14 of os-cfg.h. The macro is defined only for testing the scheduling inputs; to test the other files, it should be undefined by commenting out that line in os-cfg.h.
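For reference, the toggle itself is just a one-line macro definition; a sketch of the relevant part of os-cfg.h (only the SCHED_TEST line is ours, the rest of the file is unchanged and not reproduced here):

/* os-cfg.h, line 14 */
#define SCHED_TEST   /* comment this line out before running the memory/syscall tests */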
The changed read_config() function with the added SCHED_TEST guards is shown below:
static void read_config(const char *path) {
    FILE *file;
    if ((file = fopen(path, "r")) == NULL) {
        printf("Cannot find configure file at %s\n", path);
        exit(1);
    }
    fscanf(file, "%d %d %d\n", &time_slot, &num_cpus, &num_processes);
    /* Allocate memory for process list */
    ld_processes.path = (char **)malloc(sizeof(char *) * num_processes);
    ld_processes.start_time = (unsigned long *)malloc(sizeof(unsigned long) * num_processes);
#ifdef MM_PAGING
    int sit;
#ifdef MM_FIXED_MEMSZ
    memramsz = 0x100000;
    memswpsz[0] = 0x1000000;
    for (sit = 1; sit < PAGING_MAX_MMSWP; sit++) memswpsz[sit] = 0;
#else
    /////////////////////// START //////////////////////
#ifdef SCHED_TEST
    long int savePos = ftell(file);
#endif
    //////////////////////// END ///////////////////////

    /* Read input config of memory size: MEMRAM and upto 4 MEMSWP (mem swap)
     * Format: (size=0 result non-used memswap, must have RAM and at least 1 SWAP)
     * MEM_RAM_SZ MEM_SWP0_SZ MEM_SWP1_SZ MEM_SWP2_SZ MEM_SWP3_SZ
     */
    fscanf(file, "%d\n", &memramsz);
    for (sit = 0; sit < PAGING_MAX_MMSWP; sit++) fscanf(file, "%d", &(memswpsz[sit]));
    fscanf(file, "\n"); /* Final character */
#endif
#endif
    /////////////////////// START //////////////////////
#ifdef SCHED_TEST
    fseek(file, savePos, SEEK_SET);
#endif
    //////////////////////// END ///////////////////////
#ifdef MLQ_SCHED
    ld_processes.prio = (unsigned long *)malloc(sizeof(unsigned long) * num_processes);
#endif
    int i;
    for (i = 0; i < num_processes; i++) {
        ld_processes.path[i] = (char *)malloc(sizeof(char) * 100);
        ld_processes.path[i][0] = '\0';
        strcat(ld_processes.path[i], "input/proc/");
        char proc[100];
#ifdef MLQ_SCHED
        fscanf(file, "%lu %s %lu\n", &ld_processes.start_time[i], proc, &ld_processes.prio[i]);
        /* printf("%lu %s %lu\n", ld_processes.start_time[i], proc, ld_processes.prio[i]); */
#else
        fscanf(file, "%lu %s\n", &ld_processes.start_time[i], proc);
#endif
        strcat(ld_processes.path[i], proc);
        /////////////////////// START //////////////////////
        printf("Process %d: %s\n", i, ld_processes.path[i]);
        //////////////////////// END ///////////////////////
    }
}

1.3.1 sched

The input of sched is shown below:


4 2 3
0 p1s 1
1 p2s 0
2 p3s 0
The running test output is shown below:
$./os sched
Time slot 0
ld_routine
Loaded a process at input/proc/p1s, PID: 1 PRIO: 1
Time slot 1
CPU 1: Dispatched process 1
Loaded a process at input/proc/p2s, PID: 2 PRIO: 0
Time slot 2
CPU 0: Dispatched process 2
Loaded a process at input/proc/p3s, PID: 3 PRIO: 0
Time slot 3
Time slot 4
Time slot 5
CPU 1: Put process 1 to run queue


CPU 1: Dispatched process 3


Time slot 6
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
Time slot 7
Time slot 8
Time slot 9
CPU 1: Put process 3 to run queue
CPU 1: Dispatched process 3
Time slot 10
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
Time slot 11
Time slot 12
Time slot 13
CPU 1: Put process 3 to run queue
CPU 1: Dispatched process 3
Time slot 14
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 1
Time slot 15
Time slot 16
CPU 1: Processed 3 has finished
CPU 1 stopped
Time slot 17
Time slot 18
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 19
Time slot 20
CPU 0: Processed 1 has finished
CPU 0 stopped

The Gantt chart of the running test is shown below:


Figure 1: Gantt chart for sched input.

Explanation:

• Time slot 0: p1s is loaded.

• Time slot 1: p1s is the only ready process, so it is dispatched on CPU 1; p2s is loaded.

• Time slot 2: p2s (priority 0) is dispatched on the free CPU 0; p3s is loaded.

• Time slot 5: the time slice of p1s on CPU 1 expires; since p3s has higher priority than p1s, p1s goes back to the ready queue and CPU 1 dispatches p3s.

• Time slot 6 to 13: no new process arrives, so CPU 0 keeps running p2s and CPU 1 keeps running p3s, the two highest-priority processes.

• Time slot 14: p2s finishes, so CPU 0 dispatches p1s.

• Time slots 16 and 20: p3s and then p1s finish.

1.3.2 sched_0

The input of sched_0 is shown below:


2 1 2
0 s0 4
4 s1 0
The running test output is shown below:
$./os sched_0
Time slot 0
ld_routine
Loaded a process at input/proc/s0, PID: 1 PRIO: 4
Time slot 1
CPU 0: Dispatched process 1
Time slot 2
Time slot 3
CPU 0: Put process 1 to run queue


CPU 0: Dispatched process 1


Time slot 4
Loaded a process at input/proc/s1, PID: 2 PRIO: 0
Time slot 5
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 6
Time slot 7
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
Time slot 8
Time slot 9
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
Time slot 10
Time slot 11
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
Time slot 12
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 1
Time slot 13
Time slot 14
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 15
Time slot 16
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 17
Time slot 18
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 19
Time slot 20
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 21


Time slot 22
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 23
CPU 0: Processed 1 has finished
CPU 0 stopped

The Gantt chart of the running test is shown below:

Figure 2: The Gantt chart for sched_0 input.

Explanation:

• Time slot 0: s0 is loaded.

• Time slot 1: Only s0 is in the MLQ, so CPU 0 dispatches s0.

• Time slot 4: s1 is loaded.

• Time slot 5: As s1 has higher priority than s0, CPU 0 puts s0 back into the ready queue, then dispatches s1.

• Time slot 5 to 11: There is no new process, so CPU 0 keeps dispatching s1.

• Time slot 12: s1 finishes, and CPU 0 dispatches s0.

• Time slot 13 to 23: There is no new process, so CPU 0 keeps dispatching s0 until it finishes.

1.3.3 sched_1

The input of sched_1 is shown below:


2 1 4
0 s0 4
4 s1 0
6 s2 0
7 s3 0
The running test output is shown below:


$ ./os sched_1
Time slot 0
ld_routine
Loaded a process at input/proc/s0, PID: 1 PRIO: 4
Time slot 1
CPU 0: Dispatched process 1
Time slot 2
Time slot 3
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 4
Loaded a process at input/proc/s1, PID: 2 PRIO: 0
Time slot 5
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 6
Loaded a process at input/proc/s2, PID: 3 PRIO: 0
Time slot 7
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Loaded a process at input/proc/s3, PID: 4 PRIO: 0
Time slot 8
Time slot 9
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 2
Time slot 10
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 4
Time slot 11
Time slot 12
Time slot 13
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 3
Time slot 14
Time slot 15
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 2
Time slot 16


Time slot 17
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 4
Time slot 18
Time slot 19
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 3
Time slot 20
Time slot 21
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 2
Time slot 22
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 4
Time slot 23
Time slot 24
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 3
Time slot 25
Time slot 26
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 27
Time slot 28
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 3
Time slot 29
Time slot 30
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 31
Time slot 32
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 3
Time slot 33
Time slot 34
CPU 0: Processed 3 has finished
CPU 0: Dispatched process 4


Time slot 35
CPU 0: Processed 4 has finished
CPU 0: Dispatched process 1
Time slot 36
Time slot 37
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 38
Time slot 39
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 40
Time slot 41
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 42
Time slot 43
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 44
Time slot 45
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 46
CPU 0: Processed 1 has finished
CPU 0 stopped

The Gantt chart of the running test is shown below:

Figure 3: The Gantt chart for sched_1 input.

Explanation:
• Time slot 0 to 4: s0 is loaded and is the only process, so CPU 0 dispatches s0.

• Time slot 4: s1 is loaded.

• Time slot 5: s1 has higher priority than s0, so CPU 0 puts s0 back and dispatches s1.

• Time slot 6 and 7: s2 and then s3 are loaded; when the time slice expires at slot 7, CPU 0 dispatches s2.

• Time slot 9 and 10: as the time slices run out, CPU 0 dispatches s1 and then s3.

• Time slot 11 to 21: s1, s2 and s3 have the same priority, so they are dispatched alternately by CPU 0 every two time slots (the time slice is 2).

• Time slot 22: s1 finishes; s2 and s3 continue to alternate on CPU 0.

• Time slot 34: s2 finishes, and CPU 0 dispatches s3.

• Time slot 35: s3 finishes, and CPU 0 dispatches s0.

• Time slot 35 to 46: CPU 0 keeps dispatching s0, the only remaining process, until it finishes.


2 Memory
The memory management system in the OSSIM Sierra project implements a paging
mechanism combined with segmentation. This mechanism enables the management of
virtual address space for each process, maps virtual addresses to physical addresses, and
supports page swapping when RAM is insufficient.
The memory management model is divided into the following modules:

• mm.c: Basic functions for the paging mechanism.

• mm-vm.c: Management of virtual address space and memory regions.

• mm-memphy.c: Management of physical memory (RAM and swap).

• libmem.c: A library providing memory management functions at the application layer.

2.1 Question
Question 1: In this simple OS, we implement a design of multiple memory segments or
memory areas in source code declaration. What is the advantage of the proposed design
of multiple segments?
Answer:
The design of multiple memory segments in this simple OS offers the following
advantages:

• Isolation and Security:
Each segment (code, stack, heap) is managed separately, preventing errors or unauthorized access between segments. For example, the code segment is read-only and executable, while the heap supports dynamic allocation.

• Flexible Memory Management:
Dividing memory into contiguous regions (vm_area_struct) allows efficient virtual-to-physical mapping, maximizes physical space utilization, and supports mechanisms like paging and swapping.

• Multitasking Support:
Processes have independent virtual memory spaces, enabling concurrent execution without conflicts.

• Access Efficiency:
Managing segments separately optimizes memory access (e.g., caching code segments to speed up execution).

• Error Prevention:
Isolating stack, heap, and code helps mitigate software errors like buffer overflows or accidental data overwrites.

• Hardware Compatibility:
This design aligns with CPU paging mechanisms, leveraging the MMU to efficiently translate virtual-to-physical addresses.

Question 2: What will happen if we divide the address to more than 2 levels in the
paging memory management system?
Answer:
Dividing addresses into more than 2 levels in a paging memory management system
leads to the following consequences:

• Optimized Page Table Size:
Each level manages a smaller portion of the address, reducing table size and saving physical memory.

• Increased Access Latency:
Each level requires additional memory accesses for translation, but the TLB (Translation Lookaside Buffer) can mitigate this.

• Support for Larger Address Spaces:
Suitable for systems with large physical memory or complex virtualization needs.

• Management Complexity:
Handling page faults, swapping, or frame allocation becomes more complex due to multi-layered mapping.

• Multi-Level Hierarchy:
For example, in a 3-level system (PGD, PMD, PTE), the virtual address is split into multiple parts, optimizing page table management (referencing Table 1 in the document for CPU and page size configurations).

• Performance Trade-Off:
More levels reduce table size but increase memory accesses. The TLB mitigates this by caching frequent translations.

• Impact on Page Faults and Swap:
Handling page faults becomes more complex due to traversing multiple tables, but swap mechanisms still operate at the frame level (as described in Section 2.2.3).


For this project's 22-bit address space, multi-level paging might be overkill, but for larger address spaces (32-bit or 64-bit) it becomes essential for efficient memory management.
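To make this concrete with the configuration used in this report (a 22-bit virtual address and 256-byte pages, i.e. an 8-bit offset): a single-level page table needs one entry per page, that is 2^14 = 16,384 entries per process. Splitting the remaining 14 bits into, for example, a 7-bit first-level index and a 7-bit second-level index shrinks the always-present table to 128 entries, and the 128-entry second-level tables are only allocated for the parts of the address space actually in use, at the price of one extra memory access per translation (which the TLB largely hides).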
Question 3: What are the advantages and disadvantages of segmentation with paging?
Answer:
Combining segmentation and paging offers the following advantages and disadvantages:
Advantages:

1. Flexible Logical Memory Management:
- Segmentation divides memory into logical regions (code, stack, heap), enhancing data protection and access control.
- Paging optimizes physical memory usage, avoiding external fragmentation.

2. Multitasking and Resource Sharing:
- Segments can share memory pages across processes (e.g., shared libraries).

3. Performance Optimization with TLB:
- The TLB (Translation Lookaside Buffer) caches frequent address translations, reducing access latency.

Disadvantages:

1. Increased Complexity:
- Requires maintaining both segment tables and page tables, leading to higher management overhead and memory consumption.

2. Higher Hardware Cost:
- Requires hardware support (MMU) for multi-level address translation, increasing latency without a TLB.

3. Dynamic Allocation Challenges:
- Combining segmentation and paging complicates heap/stack expansion, risking internal fragmentation.

2.2 Functions implementation


2.2.1 libmem.c

int __alloc(struct pcb_t *caller, int vmaid, int rgid, int size, int *alloc_addr) {
    /* Allocate at the toproof */
    struct vm_rg_struct rgnode;

    /* Protect the shared memory structures; the unlock calls below assume
     * the lock is taken here */
    pthread_mutex_lock(&mmvm_lock);

    // TODO: commit the vmaid (NOTE: old module, ignore it)

    if (get_free_vmrg_area(caller, vmaid, size, &rgnode) == 0)
    {
        caller->mm->symrgtbl[rgid].rg_start = rgnode.rg_start;
        caller->mm->symrgtbl[rgid].rg_end = rgnode.rg_end;

        *alloc_addr = rgnode.rg_start;

        pthread_mutex_unlock(&mmvm_lock);
        return 0;
    }

    struct vm_area_struct *cur_vma = get_vma_by_num(caller->mm, vmaid);

    /* Calculate increase size aligned to page size */
    int inc_sz = PAGING_PAGE_ALIGNSZ(size);

    /* TODO Record old sbrk before increasing */
    int old_sbrk = cur_vma->sbrk;

    /* TODO INCREASE THE LIMIT using systemcall sys_memmap with SYSMEM_INC_OP */
    struct sc_regs regs;
    regs.a1 = SYSMEM_INC_OP;
    regs.a2 = vmaid;
    regs.a3 = inc_sz;

    /* Invoke syscall 17 (sys_memmap) */
    int ret = syscall(caller, 17, &regs);

    if (ret < 0) {
        pthread_mutex_unlock(&mmvm_lock);
        return -1;
    }

    /* TODO Commit the allocation address at the old_sbrk */
    *alloc_addr = old_sbrk;

    /* Update symbol table for this region */
    caller->mm->symrgtbl[rgid].rg_start = old_sbrk;
    caller->mm->symrgtbl[rgid].rg_end = old_sbrk + size;

    pthread_mutex_unlock(&mmvm_lock);
    return 0;
}

The __alloc() function allocates a memory region for a requesting process. It performs the following steps:
- First, we use pthread_mutex_lock(&mmvm_lock) to ensure thread-safe access to the shared memory structures.
- Then the function tries to find a suitable free memory region to place the request in by calling get_free_vmrg_area(); there are two separate cases.

• First, if a free memory region is successfully found:

+ The details of this region are stored in the variable rgnode.
+ We then update the symbol table caller->mm->symrgtbl with the start and end of the allocated region and store the starting address in alloc_addr.
+ Mapping of logical addresses to physical addresses is handled through the vm_map_ram() function; on success, pthread_mutex_unlock() releases the lock for other processes, otherwise the mutex is unlocked and -1 is returned as an error.
+ Finally, we log the allocation information, which includes the process ID, region ID, starting address and size.

• Second, if there is no suitable region left in the current virtual memory area:

+ We have to enlarge the virtual memory area by at least the amount the asking process demands. We align the requested size to the page size, record the old sbrk, and finally use a system call (syscall 17 with SYSMEM_INC_OP) to increase the current virtual memory size.
+ After that, we do the same as in the previous case: update the symbol table, print the allocation details, and unlock the mutex. A short worked example of this case follows.
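For example, with the 256-byte page size used by the tests and the request alloc 300 0 from Section 2.3: size = 300 is rounded up by PAGING_PAGE_ALIGNSZ to inc_sz = 512 (two pages), syscall 17 extends the area by 512 bytes, and the symbol table records region 0 as [old_sbrk, old_sbrk + 300); the remaining 212 bytes up to the new sbrk stay mapped but unused for now.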

int __free(struct pcb_t *caller, int vmaid, int rgid) {
    if (rgid < 0 || rgid > PAGING_MAX_SYMTBL_SZ) return -1;

    /* TODO Manage the collect freed region to freerg_list */
    struct vm_rg_struct *rgnode = malloc(sizeof(struct vm_rg_struct));
    if (rgnode == NULL) return -1;

    /* Copy information from symbol region table */
    rgnode->rg_start = caller->mm->symrgtbl[rgid].rg_start;
    rgnode->rg_end = caller->mm->symrgtbl[rgid].rg_end;
    rgnode->rg_next = NULL;

    /* Reset symbol table entry */
    caller->mm->symrgtbl[rgid].rg_start = 0;
    caller->mm->symrgtbl[rgid].rg_end = 0;

    /* Enlist the obsoleted memory region */
    enlist_vm_freerg_list(caller->mm, rgnode);

    return 0;
}

The __free() function is responsible for deallocating a virtual memory region that was allocated earlier.
- The function begins by checking whether the given region id is valid, then takes pthread_mutex_lock() to prevent conflicts between threads.
- After that, we retrieve the memory region with the corresponding id and check whether it has already been deallocated or was never allocated.
- We loop over the virtual page numbers covered by the region; for each page whose PTE has the PRESENT bit set, we call MEMPHY_put_freefp() to give the physical frame back and then clear the PRESENT bit.
- We add the deallocated region to the free_region_list using the enlist_vm_freerg_list() function.
- Finally, we log the information.
int liballoc(struct pcb_t *proc, uint32_t size, uint32_t reg_index)
{
    /* TODO Implement allocation on vm area 0 */
    int addr;

    /* By default using vmaid = 0 */
    return __alloc(proc, 0, reg_index, size, &addr);
}

int libfree(struct pcb_t *proc, uint32_t reg_index)
{
    /* TODO Implement free region */

    /* By default using vmaid = 0 */
    return __free(proc, 0, reg_index);
}

The liballoc() and libfree() functions simply call __alloc() and __free() with the default parameter vmaid = 0.
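For context, these wrappers are what the simulated CPU ends up calling when it executes the memory instructions of a process file. Below is a hedged sketch of how two of the p7s instructions from Section 2.3 are assumed to map onto these calls, where proc is the PCB of the running process; the dispatch code in the CPU module is not reproduced in this report, and the read and write instructions go through the corresponding libread()/libwrite() wrappers of libmem.c.

/* "alloc 300 0" : allocate 300 bytes for the process as region 0 */
liballoc(proc, 300, 0);

/* "free 0" : release region 0 back to the free-region list */
libfree(proc, 0);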
int pg_getpage(struct mm_struct *mm, int pgn, int *fpn, struct pcb_t *caller) {
    uint32_t pte = mm->pgd[pgn];

    if (!PAGING_PAGE_PRESENT(pte)) { /* Page is not online, make it actively living */
        int vicpgn;
        int swpfpn;
        int vicfpn;
        uint32_t vicpte;

        int tgtfpn = PAGING_PTE_SWP(pte); // The target frame storing our variable

        // TODO playing with paging theory
        find_victim_page(mm, &vicpgn);

        /* Get victim's page table entry */
        vicpte = mm->pgd[vicpgn];
        vicfpn = PAGING_FPN(vicpte);

        /* Get free frame in MEMSWP */
        MEMPHY_get_freefp(caller->active_mswp, &swpfpn);

        /* TODO Copy victim frame to swap: SWP(vicfpn <--> swpfpn) */
        struct sc_regs regs;
        regs.a1 = SYSMEM_SWP_OP;
        regs.a2 = vicfpn;
        regs.a3 = swpfpn;
        syscall(caller, 17, &regs);

        /* TODO Copy target frame from swap to mem: SWP(tgtfpn <--> vicfpn) */
        regs.a1 = SYSMEM_SWP_OP;
        regs.a2 = vicfpn;
        regs.a3 = tgtfpn;
        syscall(caller, 17, &regs);

        /* Update page table entry for victim - set as swapped */
        pte_set_swap(&mm->pgd[vicpgn], 0, swpfpn);

        /* Update page table entry for target page - mark as present */
        pte_set_fpn(&mm->pgd[pgn], vicfpn);
        PAGING_PTE_SET_PRESENT(mm->pgd[pgn]);

        /* Add to FIFO page queue for future replacement */
        enlist_pgn_node(&mm->fifo_pgn, pgn);
    }

    *fpn = PAGING_FPN(mm->pgd[pgn]);

    return 0;
}

The mission of the pg_getpage() function is to retrieve a virtual memory page and make sure it is currently located in physical memory; if it is not, it is swapped in from the swap device.
- First we read the page table entry of the target page and check whether the page is present in memory. If the page is present in neither RAM nor swap, the function returns -1 as an error.
- If the target page currently resides in the swap region, swapping is performed. We obtain the swap frame of the target page using PAGING_PTE_SWP(pte), then find a victim page that is currently in RAM to swap out, freeing its frame.
- The swapping step includes two parts: swap out (move the victim page to the SWAP region) and swap in (bring the target page into the MEM region); both actions go through a system call (syscall 17 with SYSMEM_SWP_OP).
- The PTEs of the target page and of the victim page are updated to record their new locations.
- The new page is added to the FIFO list of in-memory pages, and the physical frame number is returned through *fpn = PAGING_FPN(mm->pgd[pgn]);.
int pg_getval(struct mm_struct *mm, int addr, BYTE *data, struct pcb_t *caller) {
    int pgn = PAGING_PGN(addr);
    int off = PAGING_OFFST(addr);
    int fpn;

    /* Get the page to MEMRAM, swap from MEMSWAP if needed */
    if (pg_getpage(mm, pgn, &fpn, caller) != 0) return -1; /* invalid page access */

    /* Calculate physical address */
    int phyaddr = (fpn << (PAGING_ADDR_OFFST_HIBIT + 1)) | off;

    /* MEMPHY READ using SYSCALL 17 sys_memmap with SYSMEM_IO_READ */
    struct sc_regs regs;
    regs.a1 = SYSMEM_IO_READ;
    regs.a2 = phyaddr;
    syscall(caller, 17, &regs);

    *data = (BYTE)regs.a3; // Data is updated via pointer
    return 0;
}

The pg_getval() function is used to read the value at a given address (page plus offset).
- First, we calculate the page number and the offset with the corresponding macros PAGING_PGN() and PAGING_OFFST().
- We use pg_getpage() to make sure the page is in MEMRAM, swapping it in from MEMSWAP if necessary.
- We calculate the physical address and then issue syscall 17 with the SYSMEM_IO_READ operation.
- Finally, the value is returned to the caller through the *data pointer (a worked address example follows).
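A quick sanity check of the address formula: with the 256-byte page size used by the tests, PAGING_ADDR_OFFST_HIBIT is 7, so phyaddr = fpn * 256 + off. For the write in Section 2.3, the accessed virtual page maps to frame 1 and the offset is 40, giving 1 * 256 + 40 = 296 = 0x128, which matches the BYTE 00000128 line of the physical memory dump (assuming the dump prints addresses in hexadecimal).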
int pg_setval(struct mm_struct *mm, int addr, BYTE value, struct pcb_t *caller) {
    int pgn = PAGING_PGN(addr);
    int off = PAGING_OFFST(addr);
    int fpn;
    /* Get the page to MEMRAM, swap from MEMSWAP if needed */
    if (pg_getpage(mm, pgn, &fpn, caller) != 0) return -1; /* invalid page access */

    /* Calculate physical address */
    int phyaddr = (fpn << (PAGING_ADDR_OFFST_HIBIT + 1)) | off;

    /* MEMPHY WRITE using SYSCALL 17 sys_memmap with SYSMEM_IO_WRITE */
    struct sc_regs regs;
    regs.a1 = SYSMEM_IO_WRITE;
    regs.a2 = phyaddr;
    regs.a3 = value;

    syscall(caller, 17, &regs);

    return 0;
}

The pg_setval() function writes a value at the specified address. It performs the same steps as pg_getval(); the only difference is that the system call uses the SYSMEM_IO_WRITE operation instead of SYSMEM_IO_READ (and passes the value in regs.a3).
int find_victim_page(struct mm_struct *mm, int *retpgn) {
    struct pgn_t *pg = mm->fifo_pgn;

    /* TODO: Implement the theoretical mechanism to find the victim page */

    // FIFO: take the first node, the second node becomes the new head
    if (pg != NULL) {
        *retpgn = pg->pgn;
        mm->fifo_pgn = pg->pg_next;
    }
    // If no page in queue, use a simple approach - page 0
    else *retpgn = 0;

    free(pg);
    return 0;
}

The find_victim_page() function is used only by pg_getpage() to find a victim page to swap out when space is needed for another page. There are many ways to choose a victim page; our assignment uses the FIFO algorithm. The FIFO list is implemented as a linked list, so we simply take the head of the list and make the next node the new head.
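For example (illustrative page numbers only): if mm->fifo_pgn currently holds 5 -> 9 -> 2, a call to find_victim_page() returns 5 through *retpgn and the list becomes 9 -> 2; the next call would then pick 9.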
int get_free_vmrg_area(struct pcb_t *caller, int vmaid, int size, struct vm_rg_struct *newrg) {
    if (caller == NULL || newrg == NULL) return -1;
    // debug: printf("The value of vmaid is: %d\n", vmaid);
    struct vm_area_struct *cur_vma = get_vma_by_num(caller->mm, vmaid);
    if (cur_vma == NULL) return -1;
    struct vm_rg_struct *rgit = cur_vma->vm_freerg_list;
    if (rgit == NULL) return -1;

    /* Probe uninitialized newrg */
    // reset this memory region for use
    newrg->rg_end = -1;
    newrg->rg_start = -1;

    /* TODO Traverse on list of free vm region to find a fit space */
    for (; rgit != NULL;) {

        // found fit region
        if (rgit->rg_end - rgit->rg_start >= size) {
            newrg->rg_start = rgit->rg_start;
            newrg->rg_end = rgit->rg_start + size;

            // Update the used region
            // if not used all free space
            if (rgit->rg_end - newrg->rg_end > 0) {
                rgit->rg_start = newrg->rg_end;
            }
            // if used all free space, remove this rg from free list
            else {
                cur_vma->vm_freerg_list = rgit->rg_next;
                free(rgit);
            }

            return 0;
        }

        rgit = rgit->rg_next;
    }

    return -1;
}

The get_free_vmrg_area() function returns a free virtual memory region that is ready to be allocated.
- Based on the vmaid, we use the get_vma_by_num() function to retrieve the virtual memory area.
- The function then takes the head of the free-region linked list and traverses it to find a free region whose size is at least equal to the demanded size of newrg; two cases need to be considered.
- If the requested size is smaller than the free region, we shrink the free region so that it now starts where newrg ends.
- If the requested size is exactly equal to the size of the free region, we simply remove that node from the free list.
- Traversal continues until a suitable region is found; if none exists, the function returns -1 (a small example follows).
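As a small illustration with hypothetical numbers: if the free list of the area contains a single region [0, 100) and a request of size 40 arrives, the function returns newrg = [0, 40) and the free-list entry shrinks to [40, 100); a later request of exactly 60 bytes would then consume the node completely and unlink it from the list.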

2.2.2 mm-vm.c

int inc_vma_limit(struct pcb_t *caller, int vmaid, int inc_sz)
{
    struct vm_rg_struct *newrg = malloc(sizeof(struct vm_rg_struct));
    int inc_amt = PAGING_PAGE_ALIGNSZ(inc_sz);
    int incnumpage = inc_amt / PAGING_PAGESZ;
    struct vm_rg_struct *area = get_vm_area_node_at_brk(caller, vmaid, inc_sz, inc_amt);
    struct vm_area_struct *cur_vma = get_vma_by_num(caller->mm, vmaid);

    int old_end = cur_vma->vm_end;

    /* Validate overlap of obtained region */
    if (validate_overlap_vm_area(caller, vmaid, area->rg_start, area->rg_end) < 0)
        return -1; /* Overlap and failed allocation */

    /* TODO: Obtain the new vm area based on vmaid */
    // cur_vma->vm_end ...
    // inc_limit_ret ...

    /* Update the VM area end limit */
    cur_vma->vm_end = area->rg_end;
    cur_vma->sbrk = area->rg_end; /* Update the break point */

    int inc_limit_ret = vm_map_ram(caller, area->rg_start, area->rg_end, old_end, incnumpage, newrg);
    if (inc_limit_ret < 0) return -1; /* Failed to map the memory to MEMRAM */

    return 0;
}


This function is called by syscall 17 (SYSMEM_INC_OP) to extend the limit of a VM area and allocate more space for a process (a worked numeric example follows this list). Specifically:

1. Calculate expansion size: Align inc_sz to the page size and compute the number of pages required.

2. Determine new memory area: Call get_vm_area_node_at_brk to obtain the new memory region starting at the current sbrk.

3. Check for overlap: Use validate_overlap_vm_area to make sure the new region doesn't overlap existing VMAs.

4. Update VM area limit: Set vm_end and sbrk for the VMA.

5. Perform mapping: Call vm_map_ram to map the new pages into RAM.

6. Clean up temporary regions: Free newrg and area to prevent memory leaks.
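Using the same numbers as before (256-byte pages, inc_sz = 300 from the alloc 300 0 request): inc_amt = PAGING_PAGE_ALIGNSZ(300) = 512 and incnumpage = 512 / 256 = 2, so vm_end and sbrk both advance by 512 and vm_map_ram() is asked to map exactly two new frames starting at the old end of the area.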

struct vm_rg_struct *get_vm_area_node_at_brk(struct pcb_t *caller, int vmaid, int size, int alignedsz)
{
    /* TODO retrieve current vma to obtain newrg (currently commented out due to
     * a redundant compiler warning) */

    /* TODO: update the newrg boundary
    // newrg->rg_start = ...
    // newrg->rg_end = ...
    */

    struct vm_rg_struct *newrg;
    struct vm_area_struct *cur_vma = get_vma_by_num(caller->mm, vmaid);

    newrg = malloc(sizeof(struct vm_rg_struct));

    /* Update the newrg boundary */
    newrg->rg_start = cur_vma->sbrk;
    newrg->rg_end = newrg->rg_start + alignedsz;
    newrg->rg_next = NULL; /* Initialize next pointer to NULL */

    return newrg;
}


This is a helper function to create a node representing the new memory region, starting
from the current sbrk of the VM area. It does not check for overlap or perform mapping.
The function:

1. Retrieves the corresponding VMA using get_vma_by_num.

2. Allocates a new vm_rg_struct for the desired region.

3. Sets rg_start to sbrk, and rg_end to sbrk + alignedsz.

int validate_overlap_vm_area(struct pcb_t *caller, int vmaid, int vmastart, int vmaend) {
    /* TODO validate the planned memory area is not overlapped */
    struct vm_area_struct *vma = caller->mm->mmap;

    /* Check if the planned memory area overlaps with existing areas */
    while (vma != NULL) {
        /* Skip the vmaid we're trying to extend */
        if (vma->vm_id != vmaid) {
            /* Check if there's an overlap using the defined macro */
            if (OVERLAP(vmastart, vmaend, vma->vm_start, vma->vm_end)) {
                return -1; /* Overlap detected */
            }
        }
        vma = vma->vm_next;
    }

    return 0;
}

This function checks whether the new memory area overlaps with any other VMAs. It
ensures that the new area won’t corrupt virtual memory space. The procedure:

1. Iterate through all VMAs (mmap) of the process.

2. Skip the VMA with the same vm_id as vmaid (it’s being extended).

3. If overlap is found (OVERLAP), print a warning and return -1.

4. Return 0 if no overlap is detected.

2.2.3 mm.c


int vm_map_ram(struct pcb_t *caller, int astart, int aend, int mapstart, int incpgnum, struct vm_rg_struct *ret_rg)
{
    struct framephy_struct *frm_lst = NULL;
    int ret_alloc;

    /* @bksysnet: author provides a feasible solution of getting frames
     * FATAL logic in here, wrong behaviour if we have not enough pages,
     * i.e. we request 1000 frames meanwhile our RAM has a size of 3 frames.
     * Don't try to perform that case in this simple work, it will result
     * in an endless procedure of swap-off to get frames and we have not provided
     * a duplicate control mechanism, keep it simple
     */
    ret_alloc = alloc_pages_range(caller, incpgnum, &frm_lst);

    if (ret_alloc < 0 && ret_alloc != -3000)
        return -1;

    /* Out of memory */
    if (ret_alloc == -3000)
    {
#ifdef MMDBG
        printf("OOM: vm_map_ram out of memory\n");
#endif
        return -1;
    }

    /* it leaves the case of memory being enough but half in RAM, half in swap;
     * do the swapping so that everything ends up in RAM */
    vmap_page_range(caller, mapstart, incpgnum, frm_lst, ret_rg);

    return 0;
}

This is the main function that maps the newly allocated virtual memory area into physical RAM. It is called by inc_vma_limit and plays the most important role in the memory expansion procedure, ensuring correct mapping from virtual to physical addresses. Specifically:

1. Allocate physical frames: Call alloc_pages_range to request incpgnum page frames from physical memory.

2. Check for errors: If allocation fails (e.g., OOM), return an error.

3. Map frames to page table: Call vmap_page_range to map each frame to its corresponding virtual address in the process's page table.

4. Free temporary frame list: Release the temporary list of frame metadata to avoid memory leaks.

int alloc_pages_range(struct pcb_t *caller, int req_pgnum, struct framephy_struct **frm_lst) {
    int pgit;
    int fpn;
    struct framephy_struct *newfp_str = NULL;

    // TODO: allocate the page
    // caller->...
    // frm_lst->...

    /*
    for (pgit = 0; pgit < req_pgnum; pgit++) {
        // TODO: allocate the page

        if (MEMPHY_get_freefp(caller->mram, &fpn) == 0) {
            newfp_str->fpn = fpn;
        }
        else {
            // TODO: ERROR CODE of obtaining some but not enough frames
        }
    }
    */

    *frm_lst = NULL;
    struct framephy_struct *tail = NULL;

    for (pgit = 0; pgit < req_pgnum; pgit++) {
        if (MEMPHY_get_freefp(caller->mram, &fpn) == 0) {
            // Create new frame
            newfp_str = malloc(sizeof(struct framephy_struct));
            newfp_str->fpn = fpn;
            newfp_str->fp_next = NULL;

            // Add to list
            if (*frm_lst == NULL) {
                *frm_lst = newfp_str;
                tail = newfp_str;
            }
            else {
                tail->fp_next = newfp_str;
                tail = newfp_str;
            }
        }
        // Not enough frames available
        else {
            // Clean up already allocated frames
            struct framephy_struct *temp = *frm_lst;
            while (temp) {
                struct framephy_struct *next = temp->fp_next;
                MEMPHY_put_freefp(caller->mram, temp->fpn);
                free(temp);
                temp = next;
            }
            return -3000; // OOM
        }
    }

    return 0;
}

This helper function allocates a list of physical page frames for the requested number of pages req_pgnum. It interacts directly with physical memory. The procedure:

1. Use MEMPHY_get_freefp to get a free frame.

2. If successful, append it to the list *frm_lst.

3. If frames run out mid-way: free all previously allocated frames and return error code -3000 (OOM).

4. The list is built in request order by appending at a tail pointer, so the mapping order is preserved.


int vmap_page_range(struct pcb_t *caller,           // process call
                    int addr,                        // start address, aligned to pagesz
                    int pgnum,                       // num of mapping pages
                    struct framephy_struct *frames,  // list of the mapped frames
                    struct vm_rg_struct *ret_rg)     // return mapped region, the real mapped fp
{                                                    // no guarantee all given pages are mapped
    /* TODO: update the rg_end and rg_start of ret_rg
    // ret_rg->rg_end = ...
    // ret_rg->rg_start = ...
    // ret_rg->vmaid = ...
    */

    /* TODO map range of frames to address space
     * [addr to addr + pgnum * PAGING_PAGESZ]
     * in page table caller->mm->pgd[]
     */

    int pgit = 0;
    int pgn = PAGING_PGN(addr);

    ret_rg->rg_start = addr;
    ret_rg->rg_end = addr + pgnum * PAGING_PAGESZ;

    struct framephy_struct *fpit = frames;
    for (pgit = 0; pgit < pgnum; pgit++) {
        if (fpit == NULL) break;

        int current_pgn = pgn + pgit;

        // caller->mm->pgd[current_pgn] = fpit->fpn;

        uint32_t *pte = &caller->mm->pgd[current_pgn];
        pte_set_fpn(pte, fpit->fpn);

        fpit = fpit->fp_next;

        // Tracking for later page replacement activities (if needed)
        // Enqueue newly used page
        enlist_pgn_node(&caller->mm->fifo_pgn, current_pgn);
    }

    return 0;
}

This function maps each physical frame into the process's page table (pgd), starting from virtual address addr. It connects virtual pages to physical frames after they have been allocated. Details:

1. Update ret_rg with the actual mapped region.

2. Iterate through the list of frames, and for each frame:

3. Compute the corresponding virtual page number (pgn).

4. Call pte_set_fpn to write the physical frame number into the page table entry.

5. Add the page number to the FIFO list for future page replacement.

2.3 Running Test


For simplicity, we run a custom paging test. The input of os_custom_paging2 is shown below:
2 1 1
1048576 16777216 0 0 0
1 p7s 10
Input of p7s is shown below:
1 8
alloc 300 0
write 90 0 40
read 0 40 100
free 0
write 90 0 40
alloc 300 0
write 90 0 40


alloc 400 2
The running test output is shown below:
Process 0: input/proc/p7s
Time slot 0
ld_routine
Time slot 1
Loaded a process at input/proc/p7s, PID: 1 PRIO: 10
Time slot 2
CPU 0: Dispatched process 1
===== PHYSICAL MEMORY AFTER ALLOCATION =====
PID=1 - Region=0 - Address=00000000 - Size=300 byte
print_pgtbl: 0 - 512
00000000: 80000001
00000004: 80000000
Time slot 3
write region=0 offset=40 value=90 PID=1
print_pgtbl: 0 - 512
00000000: 80000001
00000004: 80000000
===== PHYSICAL MEMORY DUMP =====
BYTE 00000128: 90
===== PHYSICAL MEMORY END-DUMP =====
================================================================
Time slot 4
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
read region=0 offset=40 value=90 PID=1
print_pgtbl: 0 - 512
00000000: 80000001
00000004: 80000000
===== PHYSICAL MEMORY DUMP =====
BYTE 00000128: 90
===== PHYSICAL MEMORY END-DUMP =====
================================================================
Time slot 5
===== PHYSICAL MEMORY AFTER DEALLOCATION =====
PID=1 - Region=0
print_pgtbl: 0 - 512


00000000: 00000001
00000004: 00000000
Time slot 6
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
=========invalid page access=========
========fail to write========
write region=0 offset=40 value=90 PID=1
print_pgtbl: 0 - 512
00000000: 00000001
00000004: 00000000
===== PHYSICAL MEMORY DUMP =====
BYTE 00000128: 90
===== PHYSICAL MEMORY END-DUMP =====
================================================================
Time slot 7
===== PHYSICAL MEMORY AFTER ALLOCATION =====
PID=1 - Region=0 - Address=00000000 - Size=300 byte
print_pgtbl: 0 - 512
00000000: 80000001
00000004: 80000000
Time slot 8
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
write region=0 offset=40 value=90 PID=1
print_pgtbl: 0 - 512
00000000: 80000001
00000004: 80000000
===== PHYSICAL MEMORY DUMP =====
BYTE 00000128: 90
===== PHYSICAL MEMORY END-DUMP =====
================================================================
Time slot 9
===== PHYSICAL MEMORY AFTER ALLOCATION =====
PID=1 - Region=2 - Address=00000200 - Size=400 byte
print_pgtbl: 0 - 1024
00000000: 80000001
00000004: 80000000


00000008: 80000003
00000012: 80000002
Time slot 10
CPU 0: Processed 1 has finished
CPU 0 stopped

================================================================

Explanation: This test is intended to show how alloc, free, write, and read work and
interact with each other. The process we use is the custom process p7s.
For the first task: alloc 300 0.

• We allocate region 0 for this process with size 300 bytes, which results in these page
  table entries (page size = 256 bytes):
  – 00000000: 80000001
  – 00000004: 80000000

• Since 300 bytes require 2 pages (256 bytes each), we need these 2 page table entries.

• The most significant bit (MSB) being 1 (8xxxxxxx) indicates these are valid pages.

For the second task: write 90 0 40

• Write value 90 to region 0 at offset 40.

• The write lands at physical address 0x128 (296 decimal): the PTE 80000001 maps virtual page 0
  to frame 1, so the address is the frame base 0x100 (1 * 256) plus the offset 40 (0x28); a
  worked calculation follows this list.

• The memory dump confirms value 90 is stored at BYTE 00000128.
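
The same calculation can be written out explicitly. The snippet below is illustrative only: it
assumes a page size of 256 bytes and the PTE layout sketched earlier (frame number in the
low-order bits), which reproduces the BYTE 00000128 address seen in the dump.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
  const int PAGESZ = 256;                 /* assumed page size                    */
  uint32_t  pte    = 0x80000001;          /* page-table entry for virtual page 0  */
  int       offset = 40;                  /* offset of the write inside region 0  */

  int fpn  = (int)(pte & 0x00001FFF);     /* frame number -> 1                    */
  int phys = fpn * PAGESZ + offset;       /* 1 * 256 + 40 = 296                   */

  printf("physical address = 0x%X (%d)\n", phys, phys);   /* 0x128 (296) */
  return 0;
}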

For the third task: read 0 40 100

• Read from region 0 at offset 40; the value read is placed in the destination given by the
  third operand (100).

• The operation returns value 90, which is correct since that’s what we previously
wrote.

• The page table remains unchanged since reading doesn’t modify memory allocation.

For the fourth task: free 0

• Deallocate region 0. The page table entries change to:
  – 00000000: 00000001
  – 00000004: 00000000

• The MSB changes from 1 to 0, marking these pages as invalid/deallocated.

• Note that the actual memory content is not cleared.

For the fifth task: write 90 0 40

• Attempt to write to the freed region 0 at offset 40.

• The system detects "invalid page access" and the write operation fails.

• The memory dump still shows 90 at address 0x128, but the operation is blocked because the
  page is now marked invalid.

For the sixth task: alloc 300 0

• Reallocate region 0 with size 300 bytes.

• Page table entries are restored to:
  – 00000000: 80000001
  – 00000004: 80000000

• The MSB returns to 1, marking these pages as valid again.

For the seventh task: write 90 0 40

• Write value 90 to region 0 at offset 40 again.

• This operation succeeds because the region is now valid.

• The memory dump shows value 90 at BYTE 00000128, as expected.

For the final task: alloc 400 2

• Allocate region 2 with size 400 bytes. This requires 2 pages (400 > 256). The page
table is extended with new entries:
00000008: 80000003
00000012: 80000002

• These entries have the MSB set to 1, indicating valid pages.

This test demonstrates the fundamental mechanisms of virtual memory management:
- Pages are marked valid/invalid using the MSB of their page table entries.
- Memory protection prevents access to deallocated regions.
- Physical memory content persists after deallocation but becomes inaccessible.
- Multiple regions can be allocated independently for the same process.


3 System Call
In this project, we implemented a custom system call named __sys_killall to support
process management operations within our simulated operating system kernel. The main
purpose of this syscall is to allow a running process to request the termination of all
other processes that match a specified name. This is useful for testing, debugging, or
resource cleanup in a multi-process environment.

3.1 Questions:
Question 1: What is the mechanism to pass a complex argument to a system call using
the limited registers?
Answer:
The mechanism to pass complex arguments to a system call using limited registers
includes:

• Passing Pointers via Registers:

  - Instead of transferring the entire data, user space passes a memory address in a register;
    the kernel then reads the data directly from that address.

• Copying Data from User to Kernel Space:

  - The kernel copies data from the user-space address into kernel memory to prevent conflicts
    or mid-execution modifications.
  - This ensures data integrity and safety (refer to SYSMEM_IO_READ/WRITE in the memmap
    system call).

• Using Predefined Data Structures:

  - User and kernel space share standardized data structures (e.g., struct vm_rg_struct), so
    complex information can be passed indirectly through a single register.

• Combining Multiple Registers:

  - Large arguments are split across multiple registers; for example, memmap uses REG_ARG2 and
    REG_ARG3 to pass complex parameters. A small sketch of this idea follows.
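
As a concrete illustration of the last point, the snippet below shows how a 64-bit argument can
be split across two 32-bit register fields on the user side and reassembled by the kernel-side
handler. struct sc_regs_demo and its a2/a3 fields are hypothetical stand-ins mirroring the
REG_ARG2/REG_ARG3 usage mentioned above; the exact register layout of the simulator is not
reproduced here.

#include <stdint.h>
#include <stdio.h>

struct sc_regs_demo { uint32_t a1, a2, a3; };

/* user side: pack a 64-bit value into two 32-bit registers */
static void pack_u64(struct sc_regs_demo *r, uint64_t v)
{
  r->a2 = (uint32_t)(v >> 32);   /* high half */
  r->a3 = (uint32_t)v;           /* low half  */
}

/* kernel side: rebuild the original 64-bit value */
static uint64_t unpack_u64(const struct sc_regs_demo *r)
{
  return ((uint64_t)r->a2 << 32) | (uint64_t)r->a3;
}

int main(void)
{
  struct sc_regs_demo regs = {0, 0, 0};
  pack_u64(&regs, 0x1122334455667788ULL);
  printf("%llx\n", (unsigned long long)unpack_u64(&regs));  /* 1122334455667788 */
  return 0;
}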

Question 2: What happens if the syscall job implementation takes too long execution
time?
Answer:
If a system call implementation takes too long to execute, the system may encounter
the following issues:
• Starvation: In the MLQ scheduler (Section 2.1), kernel code typically runs in non-preemptive
  mode, meaning it cannot be interrupted to prioritize other processes. This blocks CPU time for
  other processes, causing resource starvation.

• Deadlock or Blocking: If the system call holds a lock (Section 2.4) for too long, other
  processes requiring shared resources will be blocked indefinitely. For example, killall
  failing to release a lock could freeze the system.

• Reduced Overall Performance: Prolonged system call execution increases system latency,
  degrading multitasking capabilities and real-time responsiveness.

• Increased Risk of Crashes: Incomplete system calls may leave the kernel in an inconsistent
  state (e.g., incomplete page table updates), leading to crashes or critical errors.

• Impact on Memory Management: Long-running system calls involving paging (Section 2.2.3) may
  trigger excessive page faults or inefficient swapping, slowing down memory mapping.

The inter-module interactions among memory storing the process name, OS process control,
and scheduling queue management:
We give an overview of the inter-module interactions of the killall system call (__sys_killall),
since it is the syscall with the most interaction.
Key Modules and Their Interactions:

1. Memory Storing Process Name: This module is responsible for maintaining process names in
   memory regions.
   - Memory Region Access: The killall syscall uses a region ID parameter to access the string
     containing the program name to match.
   - Memory Translation: It uses the virtual memory system to translate the region ID to actual
     memory addresses.
   - Data Retrieval: It performs memory read operations to fetch the target process name string
     from the specified memory region.

2. OS Process Control: This module handles process lifecycle management.
   - Process Identification: It searches the PCB list to find the target processes (those whose
     name matches).
   - Process Termination: It implements the mechanism to properly terminate the identified
     processes.
   - PCB Management: It updates process control blocks when processes are terminated.

3. Scheduling Queue Management: This module maintains the queues of processes waiting for CPU
   time.
   - Queue Manipulation: It removes terminated processes from the ready queues.
   - Priority Management: It keeps the remaining processes in the ready queues properly ordered.
   - CPU Assignment: It updates CPU assignments when processes are terminated.

Figure 4: Visual Representation of Inter-Module Interaction

3.2 Functions implementation


This syscall is invoked by a running process through its syscall number. The process
provides a memory region identifier (memrg) that points to a string containing the
target process name. The __sys_killall function performs the following actions:

1. Reads the process name from the user space memory.

2. Iterates over the global process list (_proc_list[]).

3. Compares each process’s name with the given string.

4. Marks all matching processes to be terminated.


int __sys_killall(struct pcb_t *caller, struct sc_regs *regs)
{
  char proc_name[100];
  uint32_t data;

  // hardcode for demo only
  uint32_t memrg = regs->a1;

  /* Get the name of the target proc from the caller's memory region */
  int i = 0;
  while (i < sizeof(proc_name) - 1) {
    if (libread(caller, memrg, i, &data) != 0 || (char)data == '\0')
      break;
    proc_name[i] = (char)data;
    i++;
  }
  proc_name[i] = '\0';
  printf("The procname retrieved from memregionid %d is \"%s\"\n", memrg, proc_name);

  int killed = 0;
  for (int i = 0; i < MAX_PROC; ++i) {
    if (_proc_list[i] != NULL && strcmp(_proc_list[i]->path, proc_name) == 0) {
      printf("Killing process PID %d with name \"%s\"\n",
             _proc_list[i]->pid, _proc_list[i]->path);
      _proc_list[i]->pc = -1;
      killed++;
    }
  }

  if (killed == 0) {
    printf("No process matched the name \"%s\"\n", proc_name);
  } else {
    printf("Total %d processes killed.\n", killed);
  }

  return 0;
}

This function begins by reading the process name from user memory, calling libread() in a loop
to retrieve bytes from a memory region owned by the calling process (caller). The characters are
stored in the proc_name buffer until a null terminator ('\0') is encountered or the buffer is
full; this buffer is then used to identify matching processes in the global _proc_list.
Next, the function traverses the process list and matches every process whose path equals the
given proc_name. For each match, a log line is printed with the process ID (PID) and name.
Finally, to terminate the matched processes, the function sets the pc (program counter) of each
one to -1. This value is a special signal interpreted by the scheduler or lifecycle manager,
which terminates the process in the next scheduling cycle. A small self-contained sketch of this
matching-and-marking step is shown below.
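
To make the matching-and-marking step concrete, the following is a small self-contained sketch
of that part of the logic. struct pcb_demo, proc_list, and killall_demo are simplified stand-ins
for struct pcb_t, the global _proc_list[], and __sys_killall; the real syscall additionally
reads proc_name out of the caller's memory region via libread() before this loop runs.

#include <stdio.h>
#include <string.h>

#define MAX_PROC_DEMO 4

struct pcb_demo { int pid; long pc; const char *path; };

static struct pcb_demo p1 = {5, 100, "worker"};
static struct pcb_demo p2 = {8, 200, "worker"};
static struct pcb_demo p3 = {9, 300, "p7s"};
static struct pcb_demo *proc_list[MAX_PROC_DEMO] = {&p1, &p2, &p3, NULL};

/* mark every process whose name matches proc_name for termination */
static int killall_demo(const char *proc_name)
{
  int killed = 0;
  for (int i = 0; i < MAX_PROC_DEMO; i++) {
    if (proc_list[i] != NULL && strcmp(proc_list[i]->path, proc_name) == 0) {
      printf("Killing process PID %d with name \"%s\"\n",
             proc_list[i]->pid, proc_list[i]->path);
      proc_list[i]->pc = -1;   /* pc == -1 signals the scheduler to terminate it */
      killed++;
    }
  }
  return killed;
}

int main(void)
{
  printf("Total %d processes killed.\n", killall_demo("worker"));
  return 0;
}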

3.3 Running Test


• We created multiple processes with the same name (e.g., "worker").

• A separate process invoked __sys_killall to terminate them.

• Output was printed using printf to show which processes were terminated.

Example output:
The procname retrieved from memregionid 3 is "worker"
Killing process PID 5 with name "worker"
Killing process PID 8 with name "worker"
Total 2 processes killed.
Overall, this implementation meets the functional requirements of the assignment and
demonstrates the concept of extending the kernel with custom system calls for process
management.

4 Overall
Question: What happens if the synchronization is not handled in your Simple OS?
Illustrate the problem of your simple OS (assignment outputs) by example if you have
any.
Answer:
Without proper synchronization in the Simple OS:

• Race conditions: Multiple processors might simultaneously modify shared data structures such
  as the ready queues, leading to corrupted data.

• Memory corruption: Several processes could allocate or free the same memory region
  concurrently, causing overlapping allocations or double-free errors.

• Lost updates: One CPU might overwrite another CPU's changes to shared data structures.

- Example scenario: If two CPUs simultaneously dequeue from the same priority queue,
they might both get the same PCB. This would lead to the same process running on two
different CPUs, causing unpredictable behavior as both CPUs modify the process state
and memory. Similarly, without synchronization around page tables, one CPU might
mark a page as swapped out while another is trying to access it, leading to memory
access errors.
- Another example would be in the memory management system: without synchronization around the
frame allocation lists, two processes on different CPUs might be handed the same physical frame,
with one process overwriting the other's data. A minimal sketch of lock-protected dequeuing from
a shared ready queue is shown below.
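
The following is a minimal sketch of one way to prevent the double-dequeue scenario: guard the
shared ready queue with a mutex so that only one CPU can take a PCB at a time. struct
queue_demo, dequeue_demo, cpu_worker, and queue_lock are illustrative names, not the exact
identifiers used in the assignment's queue and scheduler code.

#include <pthread.h>
#include <stdio.h>

#define QSZ 8

struct queue_demo { int pcb_ids[QSZ]; int size; };

static struct queue_demo ready_queue = { {1, 2, 3, 0, 0, 0, 0, 0}, 3 };
static pthread_mutex_t   queue_lock  = PTHREAD_MUTEX_INITIALIZER;

/* returns a PCB id, or -1 if the queue is empty */
static int dequeue_demo(struct queue_demo *q)
{
  int id = -1;
  pthread_mutex_lock(&queue_lock);       /* critical section begins */
  if (q->size > 0) {
    id = q->pcb_ids[0];
    for (int i = 1; i < q->size; i++)
      q->pcb_ids[i - 1] = q->pcb_ids[i];
    q->size--;
  }
  pthread_mutex_unlock(&queue_lock);     /* critical section ends   */
  return id;
}

static void *cpu_worker(void *arg)
{
  (void)arg;
  int id;
  while ((id = dequeue_demo(&ready_queue)) != -1)
    printf("dispatched PCB %d\n", id);   /* each PCB id appears exactly once */
  return NULL;
}

int main(void)
{
  pthread_t cpu0, cpu1;
  pthread_create(&cpu0, NULL, cpu_worker, NULL);
  pthread_create(&cpu1, NULL, cpu_worker, NULL);
  pthread_join(cpu0, NULL);
  pthread_join(cpu1, NULL);
  return 0;
}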

