University of frères Mentouri - Constantine 1.
MODULE "COMPUTER ARCHITECTURE AND APPLICATIONS"
CHAPTER I: INTRODUCTION TO COMPUTER SCIENCE
1. OBJECTIVES
The main objectives of the course are as follows:
Acquiring fundamental concepts in computer science.
Understanding the evolution of computer science and computers.
Identifying information coding systems.
Performing conversions between different numbering systems: decimal, octal, hexadecimal, and binary.
Comprehending the operating principle of a computer.
Familiarizing oneself with the hardware and software components of a computer.
Identifying programming languages and application software.
2. DEFINITION OF COMPUTER SCIENCE (in French, Informatique)
DEFINITION 1: It is the science of processing information through automatic means. The French term "informatique" was coined in 1962 by P. Dreyfus. It resulted from the combination of two other words: "information" and "automatique."
DEFINITION 2: "Science of the rational processing, especially by automatic machines, of information considered as the medium for knowledge and communication in technical, economic, and social domains" (definition from the French Academy).
DEFINITION 3: Computer science or computing science (in French, informatique) is the study of the theoretical foundations of information and computation, as well as their implementation and application using computers.
2.1 DEFINITION OF INFORMATION
SENSE 1: The term "data" (in French, "donnée") is often preferred over the term "information."
Although these two words can be synonymous, they are often used as if they refer to two distinct concepts.
Data: The form of information, the code representing it.
Dr.Nassima Mezhoud _ Chapter 1: Introduction to Computer Science Page 1
Information: Meaning, intelligence, knowledge conveyed by documentation about
something.
SENSE 2 : In computer science and telecommunications, information is a knowledge element
(voice, data, image) that can be stored, processed, or transmitted using a standardized encoding
method and a medium. It exists in various forms:
Auditory Form: sound, music
Visual Form: image, text
Audiovisual Form: games, movies, drawings
2.2 INFORMATION SCIENCE
An interdisciplinary science that studies the encoding and measurement of information,
as well as its methods of transmission and storage.
3. EVOLUTION OF COMPUTER SCIENCE AND COMPUTERS
Human beings have always sought to improve their way of calculating for two reasons:
It is slow.
It makes mistakes!
3.1 HISTORY OF COMPUTER SCIENCE:
3.1.1 BEFORE 1945
Initially, humans relied on their fingers and used stones or sticks for counting. The history of computer science begins with the invention of machines (before the emergence of computers), which initially represented different lines of thought.
1. CALCULATING MACHINES
i. Abaci (counting boards) are the oldest calculating machines. The idea of creating the
first calculating instrument (the abacus) originated in China.
Figure 1: From the Early Abacuses to Calculating Machines
ii. Pascal's Pascaline: in 1642, the philosopher and mathematician Blaise Pascal constructed the first calculating machine (only capable of addition and subtraction) for his father's calculations.
Figure 2: BLAISE PASCAL; Figure 3: The PASCALINE
iii. LEIBNIZ'S "STEP RECKONER" (1694) improves Pascal's machine to include the four basic operations (+, -, *, /) and square roots.
Figure 3: LEIBNIZ; Figure 4: Leibniz's machine
2. AUTOMATA
Automata, astronomical clocks, and military machines dating back to the 12th century.
Figure 5 : Automata
3. PROGRAMMABLE MACHINES
– 1725: B. BOUCHON introduces continuous perforated paper to control weaving machines.
– 1728: J. B. FALCON improves the system by replacing paper with interconnected cardboard rectangles.
– 1749: J. de VAUCANSON enhances productivity with a fully automatic weaving machine, replacing cardboard with a cylinder pierced with holes resting on a carriage with rolling rollers.
– 1752-1800: J. JACQUARD develops a loom that uses punched cards to control needle movements, based on the work of B. Bouchon, the punched cards of J. B. Falcon, and the cylinder of J. de Vaucanson.
– 1833: Englishman BABBAGE invents the first programmable analytical machine.
Figure 6: Loom; Figure 7: J. JACQUARD; Figure 8: BABBAGE
He defines the key concepts upon which computer machines are based:
An input-output device (punched cards).
A control unit (control barrel) managing the transfer of numbers and their arrangement
for processing.
A storage unit (store) for storing intermediate or final results.
A mill responsible for executing operations on numbers.
A printing device.
3.1.2 AFTER 1945:
A. THE BIRTH OF THE COMPUTER
In 1945, in the United States, the ENIAC (Electronic Numerical Integrator and Computer) was born, the first true computer in history. It differed from all previous machines for two reasons:
- Firstly, it was an electronic machine. There were no more mechanical gears; information was carried by electrons, electrically charged particles that move very quickly.
- Secondly, it was a programmable machine. This means that instructions could be recorded and executed without human intervention. This computer was very massive: it weighed 30 tons and occupied an area of approximately 150 square meters. To operate it, more than 17,000 vacuum tubes were required. It consumed about 137 kW of electrical power and was barely capable of performing a few simple arithmetic operations.
Figure 9 : The first computer : ENIAC
B. COMPUTER GENERATIONS
It is generally accepted that the era of computing, spanning a few decades, can be divided
into several generations primarily marked by technological advancements.
1. FIRST GENERATION 1945 - 1954:
- Scientific and military computing.
- Focus on solving repetitive calculation problems.
- Development of programming languages, with both successes and failures, to address
the previous problems.
- Heavy technology (vacuum tubes and ferrite cores) leading to space and power consumption challenges.
Only very large nations possessed computing tools.
2. SECOND GENERATION 1955-1965:
- Emergence of business computing.
- New technology based on printed circuit boards with transistors, making computers
smaller and cheaper.
- FORTRAN language dominates, with COBOL emerging as a competitor for business-oriented programming.
Wealthy nations and large enterprises gain access to computing tools.
3. THIRD GENERATION 1966-1973:
- New technology based on transistors and integrated circuits, such as the RCA SPECTRA 70 series.
- Computers become smaller, more energy-efficient, and faster. The emergence of minicomputers (e.g., Digital Equipment) further accelerates this trend, enabling the integration of computers into control and manufacturing processes. Computers find applications in laboratories and factories, not limited solely to calculations, and become increasingly powerful.
SMEs (small and medium-sized enterprises) and SMIs (small and medium-sized industries) in various countries can acquire computer hardware.
Figure 10: Transistors on a printed circuit board Figure 11: Integrated circuit
4. FOURTH GENERATION FROM 1974 ONWARD
- Birth of microcomputing: electronic circuit manufacturing technology has advanced so much that it is possible to incorporate all the fundamental functions of a computer on a "chip."
- The creation of microprocessors enables the emergence of microcomputing (the Micral microcomputer by R2E was invented by a Frenchman, FRANÇOIS GERNELLE, in 1973).
- In 1977, several companies, recently founded by enthusiastic electronics experts, distribute machines called microcomputers, whose low cost makes them accessible to everyone.
An individual can now purchase their microcomputer in a supermarket.
Figure 12: Intel 4004 (4th-generation computer); Figure 13: Micral N by R2E; Figure 14: Microprocessor
Throughout the 1980s, the process gained momentum, and microcomputing swept the world. The market became significant, and a software company (Microsoft) emerged as a dominant player with its products. Office applications (spreadsheets and word processing) began to be used everywhere.
The 1990s witnessed a new phenomenon, the networking of individual machines. The
PC ("personal computer") became integrated into the global web woven by telecommunications
companies.
By 1999, there were approximately 250 to 300 million computers worldwide, primarily PCs (Personal Computers). The market share held by "mainframe" computers steadily decreased, although some "super-computers" were still being developed (e.g., certain IBM machines and Crays).
The distribution of machines was roughly as follows:
200-300 million Intel-based machines (with 90% using WINDOWS). Most of these
were built by small companies in Asia. A significant number of these microcomputers were for
domestic use (over 200 million) and were also used as gaming platforms.
30-40 million Apple Macintosh computers.
4-6 million high-level workstations typically running the UNIX operating system.
4. CODING SYSTEMS FOR INFORMATION
4.1 INFORMATION CODING
The information processed by computers comes in various forms: numbers, text, images, sounds, videos, etc. They are always represented as sequences of 0s and 1s that the computer can understand, meaning in the form of bits. The term "bit" stands for "binary digit," which can take the value 0 or 1. It is the smallest unit of information manipulable by a computer. With n bits, you can represent 2^n values, meaning all integers from 0 to 2^n - 1.
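As an illustrative sketch (Python is used here only for demonstration, and is not part of the original course material), the rule "n bits represent 2^n values, from 0 to 2^n - 1" can be checked directly:

```python
# With n bits we can represent 2**n distinct values: the integers 0 .. 2**n - 1.
for n in (1, 4, 8):
    count = 2 ** n
    print(f"{n} bits -> {count} values, from 0 to {count - 1}")
```

For example, 8 bits (one byte) give 256 values, from 0 to 255.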
4.1.1 CODING UNIT
The components of a computer system respond internally to "on" or "off" signals. These
two stable states are represented by the symbols "0" and "1" or "L" (Low) and "H" (High). The
numbering system suitable for representing such signals is base 2, which is known as binary
coding. The coding unit of information is the bit.
4.1.2 TRANSFER UNIT
For data exchange, elementary pieces of information (bits) are manipulated in groups,
forming binary words. The transfer unit used for data exchange is the 8-bit word (byte).
Note:
A group of 4 bits is called a quartet.
A group of 8 bits is called an octet (byte in English).
A group of 16 bits is called a word.
A group of 32 bits is called a longword (double word in English).
4.1.3 MEASUREMENT UNITS
A single bit or byte is not sufficient to express all available file sizes, so measurement
units have been established. Here are the standardized units:
Measurement Units (Powers of 10):
- Kilobyte (KB): 10^3 bytes = 1000 bytes
- Megabyte (MB): 10^6 bytes = 1000 KB
- Gigabyte (GB): 10^9 bytes = 1000 MB
- Terabyte (TB): 10^12 bytes = 1000 GB
Measurement Units (Powers of 2):
- Kilobyte (KB): 2^10 bytes = 1024 bytes
- Megabyte (MB): 2^20 bytes = 1024 KB
- Gigabyte (GB): 2^30 bytes = 1024 MB
- Terabyte (TB): 2^40 bytes = 1024 GB
Table 01: Measurement Units
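The gap between the decimal (powers of 10) and binary (powers of 2) conventions can be computed directly; the Python snippet below is an illustrative sketch, not part of the original table:

```python
# Decimal (powers of 10) vs binary (powers of 2) measurement units.
KB_dec = 10 ** 3   # 1 kilobyte, decimal convention: 1000 bytes
KB_bin = 2 ** 10   # 1 kilobyte, binary convention: 1024 bytes
GB_dec = 10 ** 9
GB_bin = 2 ** 30

print(KB_bin - KB_dec)   # difference of 24 bytes per "kilo"
print(GB_bin - GB_dec)   # the gap grows with each larger prefix
```

Note that at the gigabyte level the two conventions already differ by more than 7%, which is why disk manufacturers (decimal) and operating systems (binary) often report different sizes for the same drive.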
4.2 NUMBER SYSTEMS
4.2.1 DECIMAL SYSTEM (base 10)
The numbers we typically use are in the base 10 system (decimal system). It's the most
common base, and we have ten different digits from 0 to 9 to write all numbers.
4.2.2 BINARY SYSTEM (base 2)
In the base 2 system (binary system), all numbers are formed with only two symbols: {0 and
1}. This is the system that computers operate on.
4.2.3 OCTAL SYSTEM (base 8)
The base 8 numbering system uses eight digits {0,1,2,3,4,5,6,7}. It allows representation of
numbers with 8 different symbols.
4.2.4 HEXADECIMAL SYSTEM (base 16)
In this base, you need 16 different symbols to represent a digit:
{0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F}. Each letter has a decimal equivalent.
Figure 15: Representation of Hexadecimal Digits
INTEREST: Easy conversion to binary since 16 = 2^4.
Note:
In general, any base N is composed of N digits from 0 to N-1.
We consider the polynomial form of an integer written in base b:
n_b = (s_k s_{k-1} ... s_1 s_0)_b = s_k×b^k + s_{k-1}×b^{k-1} + ... + s_1×b^1 + s_0×b^0
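The polynomial form above can be evaluated mechanically; as an illustrative sketch (not part of the original notes), the Python function below applies it using Horner's scheme, taking the digits most-significant first:

```python
# Evaluate s_k*b^k + ... + s_1*b^1 + s_0*b^0 for digits given
# most-significant first, e.g. (4321)_8 -> [4, 3, 2, 1].
def from_base(digits, b):
    value = 0
    for s in digits:
        value = value * b + s   # Horner's scheme: shift by one base position, add digit
    return value

print(from_base([1, 0, 0, 1, 1, 0, 1], 2))  # (1001101)2 = 77
print(from_base([4, 3, 2, 1], 8))           # (4321)8 = 2257
```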
4.3 CONVERSIONS AND BASE CHANGES
4.3.1 TRANSCODING (OR BASE CONVERSION)
Transcoding (or base conversion) is the operation that allows you to change the representation of a number expressed in one base to the representation of the same number expressed in another base.
4.3.2 CONVERTING A DECIMAL NUMBER TO BASE B
To convert a number from base 10 to base b, you can use two methods: the Subtraction Method or the Division Method. In this course, we focus on the second method (successive divisions), which will be explained with examples.
1. DECIMAL TO BINARY:
This involves a series of Euclidean divisions by 2. The result is obtained by concatenating the remainders, read from the last one to the first. The diagram below explains the method:
77 in base 2 is written as 1001101.
(77)10= (1001101)2.
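The division procedure above can be sketched in a few lines of Python (an illustrative sketch, not part of the original notes):

```python
# Successive Euclidean divisions by 2; the remainders, read from the
# last one to the first, are the binary digits.
def dec_to_bin(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder of division by 2
        bits.append(str(r))
    return "".join(reversed(bits))

print(dec_to_bin(77))  # 1001101
```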
2. DECIMAL TO OCTAL
The same method as for binary.
Example: (564)10 = (1064)8
3. DECIMAL TO HEXADECIMAL
It involves successive integer division by the base, which is 16 in this case.
Example: N = (2623)10 = (A3F)16
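The same successive-division method works for any base; as an illustrative sketch (not part of the original notes), the Python function below generalizes it to bases up to 16, using letters A-F for digit values above 9:

```python
# Successive Euclidean divisions by b; remainders become digits,
# read from last to first. Works for any base from 2 to 16.
DIGITS = "0123456789ABCDEF"

def dec_to_base(n, b):
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, b)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(dec_to_base(564, 8))    # 1064
print(dec_to_base(2623, 16))  # A3F
```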
4.3.3 CONVERTING A NUMBER FROM BASE B TO DECIMAL
1. PLACE VALUE AND WEIGHT OF A DIGIT
The place value of a digit is the position of the digit in a number (starting from 0).
The weight of a digit is the result of raising the BASE to the power of the digit's place value:
Weight = Base^(Place Value)
Example 1: What is the weight of the digit 7 in the number (57839)10?
Answer: Since the digit 7 is at position 3, its weight is 10^3.
Example 2: What is the weight of the digit C in the number (5FC3A)16?
Answer: Since the digit "C" is at position 2, its weight is 16^2.
2. BINARY TO DECIMAL
Let's take a number: 11001. It has 5 place values, and each place value corresponds to a power of 2. To convert it to decimal, you multiply the digit at the first place value (starting from the right) by 2^0, the second by 2^1, and so on. This gives you:
Example: (11001)2 = (?)10
Rank (Place Value): 4 3 2 1 0
Weight: 2^4 2^3 2^2 2^1 2^0
Digit of the number: 1 1 0 0 1
11001 = 1×2^4 + 1×2^3 + 0×2^2 + 0×2^1 + 1×2^0
11001 = 1×16 + 1×8 + 0 + 0 + 1
(11001)2 = (25)10
3. OCTAL TO DECIMAL
Example: (4321)8 = (?)10
Rank (Place Value): 3 2 1 0
Weight: 8^3 8^2 8^1 8^0
Digit of the number: 4 3 2 1
4321 = 4×8^3 + 3×8^2 + 2×8^1 + 1×8^0
4321 = 4×512 + 3×64 + 2×8 + 1
So: (4321)8 = (2257)10
4. HEXADECIMAL TO DECIMAL
Example: (4F2C)16 = (?)10
Rank (Place Value): 3 2 1 0
Weight: 16^3 16^2 16^1 16^0
Digit of the number: 4 F 2 C
4F2C = 4×16^3 + F×16^2 + 2×16^1 + C×16^0
4F2C = 4×16^3 + 15×16^2 + 2×16^1 + 12×16^0
4F2C = 4×4096 + 15×256 + 2×16 + 12×1
(4F2C)16 = (20268)10
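All three base-to-decimal examples above can be verified with Python's built-in int(), which parses a string in a given base by applying exactly this place-value rule (an illustrative check, not part of the original notes):

```python
# int(string, base) applies the weight rule: each digit times base^position.
print(int("11001", 2))  # 25
print(int("4321", 8))   # 2257
print(int("4F2C", 16))  # 20268
```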
4.3.4 CONVERSION FROM OCTAL TO BINARY AND VICE VERSA
To represent the 8 symbols of the octal base, you need three bits:
(0)8 = (000)2
(1)8 = (001)2
(6)8 = (110)2
(7)8 = (111)2 …
1. To convert from octal to binary: replace each octal digit with its corresponding three
bits.
2. To convert from binary to octal: go through the binary number from right to left,
grouping the binary digits into sets of 3 (adding zeros as needed). Then, replace each
set of 3 with the octal digit.
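The binary-to-octal grouping rule can be sketched in Python (an illustrative sketch, not part of the original notes; the helper name is ours):

```python
# Binary -> octal: left-pad the bit string to a multiple of 3,
# then replace each group of 3 bits by one octal digit.
def bin_to_oct(bits):
    padded = bits.zfill(-(-len(bits) // 3) * 3)   # ceiling division for padding
    groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(g, 2)) for g in groups)

print(bin_to_oct("1001101"))  # 001 001 101 -> 115
```

Check: (115)8 = 1×64 + 1×8 + 5 = 77 = (1001101)2.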
4.3.5 CONVERSION FROM HEXADECIMAL TO BINARY AND VICE VERSA
To represent the 16 symbols of the hexadecimal base, you need four bits.
(0)16 = (0000)2 ; (1)16 = (0001)2 ; (2)16 = (0010)2 ; (3)16 = (0011)2 ; (4)16 = (0100)2 ;
(5)16 = (0101)2 ; (6)16 = (0110)2 ; (7)16 = (0111)2 ; (8)16 = (1000)2 ; (9)16 = (1001)2 ;
(A)16 = (1010)2 ; (B)16 = (1011)2 ; (C)16 = (1100)2 ; (D)16 = (1101)2 ; (E)16 = (1110)2 ;
(F)16 = (1111)2 .
1. To convert from hexadecimal to binary: replace each hexadecimal digit with its corresponding four bits.
2. To convert from binary to hexadecimal: go through the binary number from right to
left, grouping the binary digits into sets of 4 (adding zeros as needed). Then, replace
each set of 4 with the hexadecimal digit.
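The hexadecimal-to-binary substitution can likewise be sketched in Python (an illustrative sketch, not part of the original notes; the helper name is ours):

```python
# Hexadecimal -> binary: each hex digit expands to its 4-bit pattern.
def hex_to_bin(h):
    return "".join(format(int(d, 16), "04b") for d in h)

print(hex_to_bin("4F2C"))  # 0100 1111 0010 1100 -> 0100111100101100
```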
4.4 BINARY CODED DECIMAL (BCD)
BCD stands for Binary Coded Decimal. This coding is used for displaying decimal values, where each digit is represented in binary using 4 bits (units, tens, hundreds, etc.). BCD coding doesn't allow for arithmetic calculations; it is solely intended for data entry and display of decimal data.
Figure 16: Binary Coded Decimal (BCD)
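To contrast BCD with pure positional binary, here is an illustrative Python sketch (not part of the original notes; the helper name is ours):

```python
# BCD: each decimal digit is coded separately on 4 bits,
# unlike pure binary, which codes the whole number positionally.
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(259))  # 0010 0101 1001  (digits 2, 5, 9 coded separately)
print(format(259, "b"))  # 100000011   (the same number in pure binary)
```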
4.5 ASCII ENCODING
Binary code is used to represent numbers that computer systems can manipulate. However, computers also need to work with alphanumeric characters to store and transmit text. To encode these characters, each one is associated with a binary code, known as ASCII (American Standard Code for Information Interchange). ASCII is a standardized American code for information exchange, created under the lead of Bob BEMER in 1961. It is the most well-known, oldest, and most widely compatible character encoding standard in computing. ASCII uses a 7-bit code (values 0 to 127) to define:
Universal printable characters: lowercase and uppercase letters, digits, symbols, etc.
Non-printable control codes: line feed indicator, end of text, peripheral control codes,
etc.
Examples: The character 'A' for instance, has a code of 65. The character 'f' is represented by
102, and the question mark '?' is represented by 63.
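These example codes can be checked with Python's built-in ord() and chr() functions, which map characters to their codes and back (an illustrative check, not part of the original notes):

```python
# ord() gives the code of a character; chr() is the inverse mapping.
print(ord("A"))  # 65
print(ord("f"))  # 102
print(ord("?"))  # 63
print(chr(65))   # A
```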
(The complete table for this ASCII code is provided in the appendix.)
5. PRINCIPLES OF COMPUTER OPERATION
5.1 VON NEUMANN ARCHITECTURE
The architecture, known as the Von Neumann architecture, breaks the computer down into four distinct parts (the ALU and the control unit forming the processor, plus the memory and the input/output devices):
1. The processor, which consists of an Arithmetic Logic Unit (ALU), whose role is to perform basic operations, and a control unit responsible for sequencing operations.
2. Memory, which contains both data and the program executed by the control unit. Memory is divided into volatile memory, or RAM (Random Access Memory), which holds programs and data in the process of execution, and non-volatile memory, or ROM (Read Only Memory), which stores the computer's basic programs and data (BIOS: Basic Input Output System).
3. Input and output devices that allow communication with the external world.
Figure 17: Von Neumann Architecture
The two main components of a computer are the central memory and the processor. The central memory is used to store information (programs and data), while the processor executes the instructions that make up the programs step by step. For each instruction, the processor schematically performs the following operations:
1. Read the instruction to be executed from central memory (MC).
2. Perform the corresponding processing.
3. Move on to the next instruction.
The processor is divided into two parts (as shown in the figure above), the control unit (CU)
and the arithmetic and logic unit (ALU) respectively:
- The control unit is responsible for reading from memory and decoding instructions.
- The processing unit, also known as the Arithmetic and Logic Unit (ALU), executes instructions that manipulate data.
MC: This is the internal central memory of the microcomputer. It is used to store programs and
data.
CU: This is the control unit. It coordinates the work between different components.
ALU: This is the arithmetic and logic unit. It is used to perform arithmetic and logical calcula-
tions.
Program: A program is a sequence of elementary instructions that will be executed in order by
the processor.
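The fetch-process-advance cycle described above can be illustrated with a toy simulation in Python (an illustrative sketch, not part of the original notes; the mini instruction set LOAD/ADD/HALT is hypothetical):

```python
# Toy fetch-decode-execute loop in the spirit of the Von Neumann cycle.
# "memory" holds the program; "acc" stands in for an ALU register;
# "pc" is the program counter (address of the next instruction).
memory = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", None)]
acc = 0
pc = 0
while True:
    op, arg = memory[pc]   # 1. read the instruction from memory
    pc += 1                # 3. advance to the next instruction
    if op == "LOAD":       # 2. perform the corresponding processing
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break
print(acc)  # 10
```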
6. COMPUTER HARDWARE
The computer (in French, "ordinateur," a term coined by J. PERRET in 1955) is a machine for processing data (or information). It consists of a central processing unit (CPU), input devices such as a keyboard and mouse, output devices such as a monitor and printer, internal cards such as the video card, and storage devices.
Figure 18: Computer Hardware
NOTE: A peripheral is any electronic hardware that can be connected to a computer through
one of its input-output interfaces, usually via a connector.
6.1 COMPONENTS OF A COMPUTER
Two types of components are distinguished: internal components and external components.
6.1.1 EXTERNAL COMPONENTS OR PERIPHERALS
These represent all the elements that are connected outside the central unit, for example: monitor, keyboard, mouse, printer, scanner, modem, data projector, etc. The following categories of peripherals are usually distinguished:
A. DISPLAY PERIPHERALS: These are output devices that provide a visual representation to the user, such as a monitor (screen). Two types of screens are usually distinguished: cathode ray tube (CRT) screens, which are large and heavy, and flat-panel screens, characterized by a very shallow depth and light weight. The screen serves as the interface between the user and the computer and is characterized by the following parameters:
DISPLAY MODE: This is the dimension of the displayed image, expressed in the number of pixels (points per line * number of lines). This is referred to as the graphic display mode, with the most common ones being VGA (640*480 in 256 colors) and SVGA (1024*768 in 16 million colors).
SCREEN SIZE: Monitors have typical dimensions of 14, 15, 17, and 20 inches (one inch = 2.54 cm), measured diagonally across the screen.
B. STORAGE PERIPHERALS: These are input-output devices capable of permanently storing information (hard drive, CD-ROM drive, DVD-ROM drive, etc.).
C. ACQUISITION PERIPHERALS: They allow the computer to acquire specific data, such as video data (video capture), digitized images (scanner), or sound (microphone).
D. INPUT PERIPHERALS: These are devices capable only of sending information to the computer, for example:
MOUSE: It is a pointing device used to move the cursor on the screen to interact with the machine. Types of mice include mechanical mice, optical mice, and wireless mice (infrared, Bluetooth).
KEYBOARD: This is the interactive input device and one of the interfaces between the user and the machine. It consists of three parts:
- The alphanumeric section.
- The numeric keypad, which provides numbers and direction keys.
- Function keys, whose importance varies, with the role of each key specified for each program.
E. PRINTER: It is a device used to produce a printed output (on paper) of computer data. There are several printer technologies, with the most common ones being:
Dot matrix printer (also known as a pin printer).
Inkjet printer and bubble jet printer.
Laser printer.
F. MODEM: This is a device used to transfer information between multiple computers via a wired transmission medium (e.g., telephone lines).
6.1.2. INTERNAL COMPONENTS
These represent the elements that exist inside the central processing unit (CPU), namely
the motherboard, memory, cards (sound, graphics, network, etc.), and drives (floppy disks,
CD/DVD drives, memory cards, etc.).
A. MOTHERBOARD: Also known as the mainboard, the motherboard is the primary constituent element of the computer. It serves as the base for bringing together all the essential elements of the computer.
B. MICROPROCESSOR: This is the brain of the computer. It manipulates digital information, which means information coded in binary form, and executes instructions stored in memory.
C. MEMORY: This is an electronic component capable of storing data permanently or temporarily. Two main categories of memory are distinguished:
i. VOLATILE MEMORY: Also known as random access memory (RAM), these memories temporarily store data during program execution. Their content is lost once the electrical power is interrupted. Example: RAM (Random Access Memory).
ii. MASS STORAGE (Secondary or External Memory): This type of memory allows for permanent data storage even when the memory is no longer electrically powered (e.g., when the computer is turned off). Mass memory includes:
- Magnetic storage devices (e.g., hard drives).
- Optical storage devices (e.g., CD-ROM or DVD-ROM).
- Read-only memories (e.g., ROM, which stands for Read-Only Memory and is used for computer booting).
D. EXPANSION CARDS: These are components directly connected to the motherboard and located inside the central unit, allowing the computer to acquire new input-output functionalities and be customized according to specific needs. Major expansion cards include the graphics card, network card, TV card, and more.
7. SYSTEMS
7.1 DEFINITION OF A COMPUTER SYSTEM:
A computer system consists of two distinct parts (branches):
HARDWARE, which includes both the central processing unit and all other connected devices. This domain deals with the manufacturing, maintenance, and development of computer hardware.
SOFTWARE: This is the logical part that serves to operate a computer. This domain represents the set of programs that facilitate communication between the user and the microcomputer.
7.2 DEFINITION OF SOFTWARE:
Software is a collection of applications composed of a set of programs related to information processing. It is the set of computer programs required for the operation and use of a computer or computer system. Two types of software are distinguished: system software and application software.
7.3 SYSTEM SOFTWARE:
These are programs that interpret user commands and transmit them to the machine. These programs are called operating systems. Several operating systems have been developed, including:
-MS-DOS (Microsoft Disk Operating System)
-Windows (98, 2000, XP, Vista, 7, 8)
-Unix (Linux)
-MAC OS
7.3.1 THE MAIN ROLES OF OPERATING SYSTEMS:
Operating systems play an essential role as coordinators between hardware, users, and application programs. They allow for the coherent and optimal use of all computer resources. An operating system resolves issues related to computer operation by ensuring:
Efficient, reliable, and cost-effective management of physical computer resources, especially critical resources like the processor and memory.
Ordering and controlling the allocation of processors, memory, icons and windows, devices, and networks among the programs that use them.
Assisting user programs and protecting users in shared usage scenarios.
Offering users a simpler and more pleasant abstraction than the hardware: a virtual machine that is easier to operate than the actual hardware.
7.3.2 OPERATING SYSTEM CLASSES
An operating system (OS) is at the core of a computer, coordinating essential tasks for the
proper functioning of the hardware. The quality of resource management (processor, memory,
peripherals) and the user-friendliness of computer use depend on the operating system. These
systems can be classified as follows:
Single-tasking (DOS): At any given time, only one program is executed. Another program will not start, except under exceptional conditions, until the first one is finished.
Multi-tasking (Windows, Unix, Linux, VMS): Multiple processes (i.e., running programs) can execute simultaneously (on multiprocessor systems) or in quasi-parallelism (on time-sharing systems).
Single-session (Windows 98, 2000): At most one user can use the machine at a time. Networked systems allow multiple users, but each of them uses the machine exclusively (multi-user, single-session).
Multi-session (Windows XP, Unix, Linux, VMS): Multiple users can work simultaneously on the same machine.
Figure 19: Layered Structure of an OS
7.4 PROGRAMMING LANGUAGES
7.4.1 DEFINITION
A programming language serves as an intermediary between humans and machines. It allows
you to write operations that the computer should perform in a language that is close to machine
code but understandable by humans. Since programming languages are meant for computers,
they must adhere to strict syntax rules.
A programming language is implemented by an automatic translator: either a compiler or an interpreter. A compiler is a program that translates source code written in a particular programming language into binary code that can be directly executed by a computer; an interpreter, by contrast, translates and executes the source code instruction by instruction.
Figure 20: Program compilation (IF (a>b) THEN → [Compiler] → 00110110110110)
7.4.2 CATEGORIES OF PROGRAMMING LANGUAGES:
There are two main categories:
A. Symbolic machine languages (or assembly languages) allow a human to write basic instructions in symbolic form (action of the instruction and memory addresses).
B. High-level languages: These languages allow you to bypass machine instructions and program in a language closer to algorithms. In this category, two major families are usually distinguished:
Procedure-oriented languages (functional or procedural languages): A functional (or procedural) language is one in which the program is constructed from functions that return a new state as output and can take the output of other functions as input. This type of language allows complex problems to be divided into simpler sub-problems.
Object-oriented languages: This is a way of programming based on the idea that things can have commonalities, similarities in themselves, or in the way they behave. The idea of object-oriented programming is to group such elements to simplify their use. A grouping is called a class, and the entities it groups are called objects.
7.4.3 ELEMENTS OF PROGRAMMING LANGUAGES
A programming language is constructed from a formal grammar, which includes symbols and syntax rules, along with semantic rules. These elements can be more or less complex depending on the language's capabilities.
A. Syntax rules: Defined by a formal grammar, they govern the various ways in which language elements can be combined to create programs.
B. Vocabulary: Vocabulary represents the set of instructions constructed based on symbols.
C. Semantics: Semantic rules define the meaning of each sentence that can be constructed
in the language, particularly what the effects of the sentence will be during program execution.
D. Alphabet: The alphabet of programming languages is based on common standards like
ASCII.
7.4.4 CLASSIFICATION OF LANGUAGES
Programming languages can be classified based on their use, as many languages are
specialized for a particular application or domain.
Languages for dynamic web pages
Theoretical programming languages
Specialized languages
Synchronous languages
Educational languages
Languages for digital electronics
Audio programming languages, etc.
7.5 APPLICATION SOFTWARE:
Application software is a set of programs designed to perform specific tasks and meet the
needs of human users. For example:
Word processing software (Word, AmiPro, Word Perfect)
Spreadsheet software (Excel, OpenCalc, Quattro Pro, etc.)
Games software, Programming software (C, Pascal, Java, Visual Basic, etc.)
Database software (Access, Oracle, etc.)
Graphic software (Paint, AutoCAD, Photoshop, etc.)
Presentation software (PowerPoint, etc.)
Internet software (Internet Explorer, MSN Messenger, etc.)
7.6 THE APPLICATION DOMAINS OF COMPUTING
The use of computing technology spans across all areas of life, including:
The management domain (banks, stock markets, insurance, businesses, etc.)
The industrial domain (robots, production process control, etc.)
The scientific and engineering domain (simulation of physical phenomena, etc.)
Telephony, communications, and media domains (transmission of text, voice, sound,
images, videos, etc.), telecommunications, and networks
The education domain (computer-assisted teaching, computer-assisted experimentation, etc.)
Internet, as well as scientific disciplines, medical fields, social sciences, arts, etc.
ANNEX
[ASCII code table] Codes not listed here are not handled by Windows.