Digital Computer Structure
381-1-0103
Lecture 2
Microprocessor Architecture
Shlomo Greenberg
Web site: http://moodle2.bgu.ac.il
Course Focus: Computer Structure
[Block diagram of a typical computer system: a Central Processing Unit (CPU) connected over the bus to Clock, Power, Watchdog, I2C and SPI communication, a Memory Controller with Cache, Virtual Memory and Secondary Memory, an Interrupt Controller, a DMA Controller, an I/O Controller (Keyboard, Mouse) and Boot/Configuration.]
Summary of Lecture #1
A basic computer is a CPU connected via different buses to memory and input/output devices.
A bus is the entity that allows the transfer of data between the CPU, memory and I/O. Various kinds of buses exist:
Unidirectional vs. Bidirectional
Separated vs. Multiplexed
Internal vs. External
Synchronous vs. Asynchronous
Different arbitration schemes control which device uses the bus at a given bus cycle.
Syllabus (2/9)
Introduction to digital computer structure
Bus
Intel 8086 Microprocessor – Background and
Interface
Microprocessor Architecture (CPU)
CPU Introduction
CPU Registers
Instruction Cycle and Data Flow
CPU Pipeline
Intel 8086 Microprocessor
ARM Microprocessor
IO
Memory
…
CPU - Introduction
Technological and Economic Forces
Moore’s law predicts a 60% annual
increase in the number of transistors
that can be put on a chip.
Remark: The data points given in this figure are memory sizes, in bits.
From: Tanenbaum, Structured Computer Organization, Fifth Edition, (c) 2006 Pearson Education, Inc. All rights reserved. 0-13-148521-0
Intel Computer Family
Moore’s law for (Intel) CPU chips:
Typical Microcomputer System
CPU – Principal components
Arithmetic Logic Unit (ALU) -
performs arithmetic and logic operations
Processor Registers - supply operands
to the ALU and store the results of ALU
operations
Control Unit - orchestrates the fetching
(from memory) and execution of
instructions by directing the coordinated
operations of the ALU, registers and
other components
CPU – Principal Components
CPU Structure
CPU must:
Fetch instructions
Interpret instructions (decode)
Fetch data
Process data (execute)
Write data
CPU Role – Running a Program
Reset
Determines the address where the 1st
command resides
E.g.: In Intel architecture it’s 0xFF_FFFC
At RESET, the program counter (PC) gets
the address of the 1st command
PC is “copied” to the Address bus
Fetch opcode
Every program cycle, the CPU gets the next
command through the data bus from the
program memory into one of the registers
Running a program – cont.
Decode opcode
Validate that the command is legal
Execute opcode
Without an operand
E.g.: NOP, HALT, INC acc
With one operand, or even 2 operands (Fetch and Execute)
E.g.: JUMP 0x8000
Requires fetching the operands before execution
Execution can be with or without a write of data
Stored-Program Computer
The entire program is stored in program memory, and everything is controlled via the PC
Every period: PC <= PC + 1
Fetch (opcode) => Decode => Fetch (operand) => Fetch (operand) [optional] => Execute => RD/WR
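As a rough illustration of this cycle, below is a minimal C sketch of a stored-program machine; the opcodes, program contents and the single accumulator are invented for illustration and do not follow any real ISA.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical opcodes for the sketch */
enum { OP_NOP, OP_INC, OP_JUMP, OP_HALT };

int main(void) {
    /* Toy program memory: INC, INC, JUMP 5, (operand), NOP, INC, HALT */
    uint16_t mem[8] = { OP_INC, OP_INC, OP_JUMP, 5, OP_NOP, OP_INC, OP_HALT };
    uint16_t pc = 0, acc = 0;            /* program counter, accumulator  */

    for (;;) {
        uint16_t opcode = mem[pc++];     /* fetch opcode, then PC <= PC+1 */
        switch (opcode) {                /* decode and execute            */
        case OP_NOP:                        break;
        case OP_INC:  acc++;                break;
        case OP_JUMP: pc = mem[pc];         break;   /* fetch operand, load it into PC */
        case OP_HALT: printf("acc = %u\n", (unsigned)acc); return 0;
        }
    }
}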
CPU Operation
Commands are executed sequentially unless jumping to another address in memory.
The register which holds the address of the next command to be executed is called the PC (Program Counter).
Command execution is done in two phases: fetch and execute.
Program execution in a standard microprocessor:
Branches
A branch (jump) is a change in the order of command execution
Branches can be of many sorts. One of the options is the dependent (conditional) jump.
Dependent jumps are decided based on the flags register. This register is updated after each command execution.
Examples of flags: CF (carry), PF (parity), ZF (zero), SF (sign), OF (overflow)
See next slides for registers
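To make the flag mechanism concrete, here is a small C sketch of how a subtraction/compare could set ZF, SF and CF and how a dependent jump would then test them; the 8-bit width and the exact flag rules are simplifications, and real architectures differ in detail.

#include <stdio.h>
#include <stdint.h>

struct flags { int zf, sf, cf; };

/* Subtract b from a and update the flags, roughly as a CMP would */
static uint8_t sub_and_set_flags(uint8_t a, uint8_t b, struct flags *f) {
    uint8_t r = (uint8_t)(a - b);
    f->zf = (r == 0);            /* zero flag: result is zero             */
    f->sf = (r & 0x80) != 0;     /* sign flag: MSB of the result          */
    f->cf = (a < b);             /* carry flag: borrow out of the subtract */
    return r;
}

int main(void) {
    struct flags f;
    sub_and_set_flags(5, 5, &f);      /* like CMP 5, 5 */
    if (f.zf) puts("ZF set: a JZ/JE branch would be taken");
    sub_and_set_flags(3, 7, &f);      /* like CMP 3, 7 */
    if (f.cf) puts("CF set: a JB (below) branch would be taken");
    return 0;
}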
Example: 8086 CPU Structure
[Block diagram of the 8086 CPU]
Example: 8086 CPU Structure
The 8086 CPU contains two independent units which
together control the functioning of the microprocessor
The Bus Interface Unit (BIU)
Internal unit which interfaces with the world outside of the
CPU. Its job is to:
Fetch instructions from memory and place them in a pre-fetch
queue
Pass data between the execution unit and the outside world
The Execution Unit (EU)
The EU contains the hardware that executes the
instructions of the program. The instructions are taken
from the pre-fetch queue, decoded and then executed. If
data is required by an instruction, the EU sends the BIU the
address of the data or I/O device and lets the BIU perform
the communication with the outside world.
Parallel Operation in the 8086 CPU
The EU and BIU are separate units which operate
independently of one another.
The instruction pre-fetch queue creates a pipeline between
the two units. The pipeline is filled with instructions that
the BIU fetches whenever it is not performing a read or
write for a currently executing instruction.
Thus, when the EU is finished executing a given instruction,
the next instruction is usually read for immediate execution
without the delay caused by instruction fetching.
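A rough single-threaded C sketch of the idea: the pre-fetch queue modeled as a small FIFO that the BIU fills when the bus is free and the EU drains. The 6-byte depth matches the 8086 queue; the opcode values and the rest of the model are invented for illustration.

#include <stdio.h>
#include <stdint.h>

#define QDEPTH 6                 /* 8086 pre-fetch queue holds 6 bytes */

static uint8_t q[QDEPTH];
static int head = 0, tail = 0, count = 0;

static int biu_prefetch(uint8_t byte) {   /* BIU: push a byte when not full */
    if (count == QDEPTH) return 0;
    q[tail] = byte; tail = (tail + 1) % QDEPTH; count++;
    return 1;
}

static int eu_next(uint8_t *byte) {       /* EU: pop the next opcode byte */
    if (count == 0) return 0;             /* queue empty, the EU must wait */
    *byte = q[head]; head = (head + 1) % QDEPTH; count--;
    return 1;
}

int main(void) {
    for (uint8_t b = 0x40; b < 0x48; b++)  /* BIU fetches ahead until full */
        if (!biu_prefetch(b)) break;
    uint8_t op;
    while (eu_next(&op))                   /* EU consumes in program order */
        printf("EU executes opcode 0x%02X\n", (unsigned)op);
    return 0;
}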
ALU
A digital circuit within the processor that performs integer
arithmetic and bitwise logic operations. The inputs to the
ALU are the data words to be operated on (called
operands), status information from previous operations,
and a code from the control unit indicating which operation
to perform.
Depending on the instruction being executed, the operands
may come from internal CPU registers or external memory,
or they may be constants generated by the ALU itself.
When all input signals have settled and propagated
through the ALU circuitry, the result of the performed
operation appears at the ALU's outputs. The result consists
of both a data word, which may be stored in a register or
memory, and status information that is typically stored in a
special, internal CPU register reserved for this purpose.
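A C sketch of the ALU viewed as a combinational function: operands and an operation code from the control unit go in, a result word plus status bits come out. The operation codes and the particular status bits chosen here are illustrative, not those of a specific processor.

#include <stdio.h>
#include <stdint.h>

enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR };

struct alu_out { uint16_t result; int zero, carry, sign; };

static struct alu_out alu(enum alu_op op, uint16_t a, uint16_t b) {
    uint32_t wide = 0;                       /* keep the carry/borrow bit */
    switch (op) {
    case ALU_ADD: wide = (uint32_t)a + b; break;
    case ALU_SUB: wide = (uint32_t)a - b; break;
    case ALU_AND: wide = a & b;           break;
    case ALU_OR:  wide = a | b;           break;
    }
    struct alu_out out;
    out.result = (uint16_t)wide;
    out.zero   = (out.result == 0);          /* status bits for the flags register */
    out.carry  = (wide >> 16) & 1;           /* carry/borrow out of bit 15 */
    out.sign   = (out.result >> 15) & 1;     /* MSB of the result */
    return out;
}

int main(void) {
    struct alu_out o = alu(ALU_ADD, 0xFFFF, 1);
    printf("result=0x%04X zero=%d carry=%d sign=%d\n",
           (unsigned)o.result, o.zero, o.carry, o.sign);
    return 0;
}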
ALU - Scheme
CPU Registers
Registers
CPU must have some working space
(temporary storage) – These are called
registers
Number and function vary between
processor designs
One of the major design decisions
Top level of memory hierarchy
Two kinds:
User-visible
Non user-visible
User Visible Registers
General Purpose
Data
Address
Condition Codes
General Purpose Registers
May be true general purpose
May be restricted
Orthogonal: If any general-purpose
register can contain the operand for any
opcode
May be used for data or addressing
Data
Accumulator
Addressing
Segment
General Purpose Registers –
Cont.
Make them general purpose
Increase flexibility and programmer options
Increase instruction size & complexity
Make them specialized
Smaller (faster) instructions
Less flexibility
How Many General Purpose
Registers?
Usually, between 8 - 32
Fewer = more memory references
More does not reduce memory
references and takes up processor real
estate
Depends also on RISC vs. CISC decision
How big?
Large enough to hold full address
Large enough to hold full word
Often possible to combine two data
registers
C programming (e.g. wider types that may span two registers):
long int a;
long long int a;
Condition Code Registers
Sets of individual bits
e.g. result of last operation was zero
Can be read (implicitly) by programs
e.g. Jump if zero
Cannot (usually) be set by programs
Sometimes called the flags register
Program Status Word (PSW)
A set of bits
Includes Condition Codes
Sign of last result
Zero
Carry
Equal
Overflow
Interrupt enable/disable
Supervisor
Supervisor Mode
Intel ring zero
Kernel mode
Allows privileged instructions to execute
Used by operating system
Not available to user programs
Non-user-visible registers
Control & Status Registers
Examples
Program Counter
Instruction Decoding Register
Memory Address Register
Memory Buffer Register
Other Registers
May have registers pointing to:
Process control blocks
Interrupt Vectors
CPU design and operating system design
are closely linked, can tailor register
organization to the OS
Often the first few hundred or thousand
words of memory allocated for control
purposes
Example Register
Organizations
Remark: Instruction Pointer (IP) = Program Counter (PC)
Example Microprocessors –
Register Organization -
MC68000
8 x 32-bit general purpose data registers
8 x 32-bit address registers
Some used as stack pointers, OS
32-bit program counter
16-bit status register
Nice clean architecture, no messy
segmentation
Example Microprocessors –
Register Organization – Intel
8086
Other extreme from MC68000, lots of
specific registers
2x 16-bit: flags, Instruction Pointer
General Registers, 16 bits
AX – Accumulator, favored in calculations
BX – Base, normally holds an address of a
variable or function
CX – Count, normally used for loops
DX – Data, normally used for multiply/divide
Example Microprocessors –
Register Organization -
Intel 8086 – Cont.
Segment, 16 bits
SS – Stack, base segment of stack in memory
CS – Code, base location of code
DS – Data, base location of variable data
ES – Extra, additional location for memory data
Index, 16 bits
BP – Base Pointer, offset from SS for locating
subroutines
SP – Stack Pointer, offset from SS for top of
stack
SI – Source Index, used for copying data/strings
DI – Destination Index, used for copy
data/strings
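These 16-bit segment registers are combined with a 16-bit offset to form the 8086's 20-bit physical address: physical = segment * 16 + offset. Below is a short C sketch of the computation; the segment and offset values used are made up for illustration.

#include <stdio.h>
#include <stdint.h>

/* 8086 address formation: shift the segment left by 4 bits and add the offset */
static uint32_t phys_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;   /* segment * 16 + offset */
}

int main(void) {
    /* e.g. CS = 0x2000, IP = 0x0123 gives physical address 0x20123 */
    printf("0x%05X\n", (unsigned)phys_addr(0x2000, 0x0123));
    /* e.g. SS = 0x3000, SP = 0xFFFE gives physical address 0x3FFFE */
    printf("0x%05X\n", (unsigned)phys_addr(0x3000, 0xFFFE));
    return 0;
}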
Control Unit (CU)
Control Unit
The control unit of the CPU contains
circuitry that uses electrical signals to
direct the entire computer system to
carry out stored program instructions
The control unit does not execute
program instructions; rather, it directs
other parts of the system to do so. The
control unit communicates with both the
ALU and memory
Instruction Cycle and Data Flow
Instruction cycle
The instruction cycle depends on the number of operands the command has
The addressing method affects the number of operands required
Addressing Methods
The address on the address bus selects a memory cell or an I/O device. The way the address is coded in the program is called the addressing method.
Main addressing methods:
Immediate – The operand is part of the opcode.
E.g. MOV AX, 15h
Direct – The operand is at the address given in the opcode.
E.g. MOV AX, buffer (where buffer is an address in memory)
Indirect – The CPU uses a register as an address
Instruction Cycle - Indirect
Cycle
In the Fetch Portion, there are three ways to handle
addressing in the instruction:
Immediate Addressing – Operand is directly present in the
instruction, e.g. ADD 5 = “Add 5 to Acc”
Direct Addressing – The operand contains an address with
the data, e.g. ADD 100h = “Add (Contents of Mem Location
100)”
Cons: Need to fit entire address in the instruction, may limit
address space
Indirect Addressing – The operand contains an address, and
that address contains the address of the data, e.g. Add (100h)
= “The data at memory location 100 is an address. Go to the
address stored there and get that data and add it to the Acc.”
Cons: Requires more memory accesses
Pros: Can store a full address at memory location 100
Can also do Indirect Addressing with registers
Indirect Addressing can be thought of as an additional
instruction subcycle
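A small C sketch contrasting the three modes against a toy memory array; the mode tags, addresses and data values are invented for illustration, and real instruction encodings differ per architecture.

#include <stdio.h>
#include <stdint.h>

enum mode { IMMEDIATE, DIRECT, INDIRECT };

static uint16_t mem[256];                     /* toy data memory */

static uint16_t fetch_operand(enum mode m, uint16_t field) {
    switch (m) {
    case IMMEDIATE: return field;             /* operand is in the instruction */
    case DIRECT:    return mem[field];        /* one extra memory access       */
    case INDIRECT:  return mem[mem[field]];   /* two extra memory accesses     */
    }
    return 0;
}

int main(void) {
    mem[0x100] = 42;       /* data stored at address 0x100          */
    mem[0x80]  = 0x100;    /* pointer stored at 0x80, points to 0x100 */
    printf("immediate: %u\n", (unsigned)fetch_operand(IMMEDIATE, 5));     /* 5  */
    printf("direct:    %u\n", (unsigned)fetch_operand(DIRECT,    0x100)); /* 42 */
    printf("indirect:  %u\n", (unsigned)fetch_operand(INDIRECT,  0x80));  /* 42 */
    return 0;
}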
Instruction Cycle with
Indirect
Instruction Cycle State
Diagram
Data Flow (Instruction
Fetch)
Depends on CPU design
In general:
Fetch
PC contains address of next instruction
Address moved to MAR
Address placed on address bus
Control unit requests memory read
Result placed on data bus, copied to MBR,
then to IR
Meanwhile PC incremented by 1
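A C sketch of these fetch steps at the register-transfer level; the 16-bit word size and the toy memory contents are assumptions made for illustration.

#include <stdio.h>
#include <stdint.h>

static uint16_t memory[16] = { 0x1234, 0x5678 };   /* toy program memory */
static uint16_t pc, mar, mbr, ir;                  /* CPU registers      */

static void fetch(void) {
    mar = pc;             /* 1. address of the next instruction moved to MAR */
    mbr = memory[mar];    /* 2. control unit requests a memory read; the
                                result arrives over the data bus in the MBR  */
    ir  = mbr;            /* 3. instruction copied from MBR to IR            */
    pc  = pc + 1;         /* 4. meanwhile the PC is incremented              */
}

int main(void) {
    fetch();
    printf("IR = 0x%04X, PC = %u\n", (unsigned)ir, (unsigned)pc);  /* 0x1234, 1 */
    fetch();
    printf("IR = 0x%04X, PC = %u\n", (unsigned)ir, (unsigned)pc);  /* 0x5678, 2 */
    return 0;
}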
Data Flow (Instruction Fetch
Diagram)
Data Flow (Data Fetch)
IR is examined
If instruction uses immediate
addressing
Rightmost N bits of MBR available for
processing
If instruction uses direct addressing
Send rightmost N bits of MBR to MAR
Control unit requests memory read
Result (operand at that address) moved
to MBR
Data Flow (Data Fetch) –
Cont.
If instruction calls for indirect
addressing, indirect cycle is
performed
Rightmost N bits of MBR transferred to
MAR
Control unit requests memory read
Result (address of operand) moved to
MBR
MBR moved to MAR
Control unit requests memory read
Result (operand at the address) moved
to MBR
Data Flow (Indirect
Diagram)
Data Flow (Execute)
May take many forms
Depends on instruction being executed
May include
Memory read/write
Input/Output
Register transfers
ALU operations
Data Flow (Interrupt)
Simple
Predictable
Repeat the following for all registers that
need saving:
Contents of register copied to MBR
Special memory location (e.g. stack pointer)
loaded to MAR
MBR written to memory
Increment stack pointer
PC loaded with address of interrupt
handling routine
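A C sketch of this interrupt entry sequence, using a hypothetical handler address, register set and stack location. The push direction (incrementing SP) follows the steps above, although many real CPUs grow the stack downward.

#include <stdio.h>
#include <stdint.h>

#define HANDLER_ADDR 0x0040u     /* hypothetical interrupt routine address */

static uint16_t memory[256];
static uint16_t pc = 0x0123, acc = 0xBEEF, sp = 0x00F0, mbr;

static void take_interrupt(void) {
    /* save the PC and one working register through the MBR onto the stack */
    mbr = pc;   memory[sp] = mbr; sp++;   /* MBR written to memory[SP], SP incremented */
    mbr = acc;  memory[sp] = mbr; sp++;
    pc = HANDLER_ADDR;                    /* PC loaded with the handler address */
}

int main(void) {
    take_interrupt();
    printf("PC = 0x%04X, saved PC = 0x%04X, saved ACC = 0x%04X\n",
           (unsigned)pc, (unsigned)memory[0x00F0], (unsigned)memory[0x00F1]);
    return 0;
}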
Data Flow (Interrupt
Diagram)
CPU Pipeline
Prefetch
Simple version of Pipelining – treating
the instruction cycle like an assembly
line
Fetch accesses main memory
Execution usually does not access main
memory
Can fetch next instruction during
execution of current instruction
Called instruction prefetch
Improved Performance
But not doubled:
Fetch usually shorter than execution
Prefetch more than one instruction?
Any jump or branch means that prefetched
instructions are not the required
instructions
Add more stages to improve
performance
But more stages can also hurt
performance…
Instruction Cycle State Diagram
Assuming operands are required and a write is required
Instruction Cycle State Diagram
[Diagram variants: operand required / no operands, write required / no write]
Pipelining
Consider the following decomposition for
processing the instructions
Fetch instruction – Read into a buffer
Decode instruction – Determine opcode,
operands
Calculate operands – Indirect, Register indirect,
etc.
Fetch operands – Fetch operands from memory
Execute instructions - Execute
Write result – Store result if applicable
Overlap these operations
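With ideal overlap, n instructions on a k-stage pipeline finish in k + (n - 1) stage times instead of n * k. A short C sketch of the arithmetic, using the six stages of the decomposition above and the nine instructions mentioned on the next slide:

#include <stdio.h>

/* Ideal pipeline: no hazards, all stages of equal duration */
static unsigned pipelined_cycles(unsigned k, unsigned n)  { return k + (n - 1); }
static unsigned sequential_cycles(unsigned k, unsigned n) { return k * n; }

int main(void) {
    unsigned k = 6, n = 9;   /* six stages, nine instructions */
    printf("sequential: %u stage times, pipelined: %u stage times\n",
           sequential_cycles(k, n), pipelined_cycles(k, n));
    /* 54 vs 14: the nine pipelined instructions finish in roughly the
       time two instructions would take sequentially. */
    return 0;
}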
Timing of Pipeline
Pipeline
In the previous slide, 9 instructions were
completed in the time it would take to
sequentially complete two instructions!
Assumptions for simplicity
Stages are of equal duration
Things that can mess up the pipeline
Structural Hazards – Can all stages be
executed in parallel?
What stages might conflict? E.g. access memory
Data Hazards – One instruction might depend
on the result of a previous instruction
E.g. INC R1 followed by ADD R2,R1
Control Hazards - Conditional branches break
the pipeline
Stuff we fetched in advance is useless if we take the branch
Branch in a Pipeline
[Timing diagram: the branch is fetched and executed; the instructions fetched behind it must be discarded when the branch is taken.]
Dealing with Branches
1. Multiple Streams
2. Pre-fetch Branch Target
3. Loop buffer
4. Branch prediction
5. Delayed branching
1. Multiple Streams
Have two pipelines
Pre-fetch each branch into a separate
pipeline
I.e. “play it safe” – whatever the branch
decision is, pipeline is ready…
Use appropriate pipeline
Leads to bus & register contention
Still a penalty since it takes some cycles to
figure out the branch target and start
fetching instructions from there
Multiple branches lead to further pipelines
being needed
Would need more than two pipelines …
More expensive circuitry
2. Pre-fetch Branch Target’s
instructions
The branch target’s instructions are pre-
fetched in addition to the instructions
following the branch
Pre-fetch here means getting these
instructions and storing them in the cache
Keep target’s instructions until branch is
executed
Used by IBM 360/91
3. Loop Buffer
Uses very fast memory for buffering the
entire (?) loop
See also in cache lecture
Maintained by the fetch stage of the
pipeline
Remembers the last N instructions
Assumes the loop is smaller than N instructions
Check buffer before fetching from memory
Very good for small loops or jumps
Used by CRAY-1
4. Branch Prediction
Predict never taken
Assume that jump will not happen
Always fetch next instruction
68020 & VAX 11/780
VAX will not pre-fetch after branch if a page
fault would result (O/S with CPU design)
Predict always taken
Assume that jump will happen
Always fetch target instruction
Studies indicate branches are taken around
60% of the time in most programs
Branch Prediction – Cont.
Predict by Opcode
Some types of branch instructions are more
likely to result in a jump than others (e.g. LOOP
vs. JUMP)
Can get up to 75% success
Taken/Not taken switch – 1 bit branch
predictor
Based on previous history
If a branch was taken last time, predict it will be taken
again
If a branch was not taken last time, predict it will not
be taken again
Good for loops
Could use a single bit to indicate the history of the branch
Branch Prediction State Diagram – Example with 1 bit of history
[State diagram: state 0 = Predict Not Taken, state 1 = Predict Taken; a Taken outcome moves to state 1, a Not Taken outcome moves to state 0. A 1-bit predictor is wrong twice for a branch that goes in an unusual direction once (e.g. a loop exit): once on the unusual outcome and once on the following execution.]
Branch Prediction State Diagram – Example with 2 bits of history
[State diagram: four states 00, 01, 10 and 11, with 00 as the start state; Taken outcomes move toward 11 (predict taken), Not Taken outcomes move toward 00 (predict not taken). Only wrong once for branches that go in an unusual direction once (e.g. a loop exit).]
Branch Prediction
State not stored in memory, but in a
special high-speed history table
Branch Instruction Address   Target Address   State
0xFF0103                     0xFF1104         0b11
…
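A C sketch of a 2-bit history scheme kept in a small table indexed by the branch address; the table size, indexing and outcome sequence are invented for illustration, and only the branch address is taken from the example row above.

#include <stdio.h>
#include <stdint.h>

#define TABLE_SIZE 256
static uint8_t counter[TABLE_SIZE];          /* 2-bit saturating counters, start at 00 */

static int predict_taken(uint32_t branch_addr) {
    return counter[branch_addr % TABLE_SIZE] >= 2;   /* states 10 and 11 predict taken */
}

static void update(uint32_t branch_addr, int taken) {
    uint8_t *c = &counter[branch_addr % TABLE_SIZE];
    if (taken  && *c < 3) (*c)++;            /* move toward "strongly taken"     */
    if (!taken && *c > 0) (*c)--;            /* move toward "strongly not taken" */
}

int main(void) {
    uint32_t addr = 0xFF0103;                /* branch address from the table above */
    int outcomes[] = { 1, 1, 1, 0, 1, 1 };   /* a loop that exits once              */
    int wrong = 0;
    for (int i = 0; i < 6; i++) {
        if (predict_taken(addr) != outcomes[i]) wrong++;
        update(addr, outcomes[i]);
    }
    printf("mispredictions: %d\n", wrong);   /* prints 3 for this sequence */
    return 0;
}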
5. Delayed Branching
Used with RISC machines
Requires some clever rearrangement of instructions
Burden on programmers but can increase
performance
Most RISC machines: Don’t flush the pipeline in
case of a branch
Called the Delayed Branch
This means if we take a branch, we’ll still continue to
execute whatever is currently in the pipeline, at a
minimum the next instruction
Benefit: Simplifies the hardware quite a bit
But we need to make sure it is safe to execute the
remaining instructions in the pipeline
Simple solution to get same behavior as a flushed
pipeline: Insert NOP – No Operation – instructions
after a branch
Called the Delay Slot
RISC Pipeline
[Timing diagram: the branch is fetched and executed, but instruction 4 (and maybe 5-7) continues, unlike the previous example.]
Normal vs. Delayed Branch
Address Normal Delayed
100 LOAD X,A LOAD X,A
101 ADD 1,A ADD 1,A
102 JUMP 105 JUMP 106
103 ADD A,B NOP
104 SUB C,B ADD A,B
105 STORE A,Z SUB C,B
106 STORE A,Z
One delay slot - Next instruction is always in the pipeline.
“Normal” path contains an implicit “NOP” instruction as the
pipeline gets flushed. The delayed branch requires an explicit NOP
instruction placed in the code!
Optimized Delayed Branch
But we can optimize this code by rearrangement!
Notice we always Add 1 to A so we can use this
instruction to fill the delay slot
Address Normal Delayed Optimized
100 LOAD X,A LOAD X,A LOAD X,A
101 ADD 1,A ADD 1,A JUMP 105
102 JUMP 105 JUMP 106 ADD 1,A
103 ADD A,B NOP ADD A,B
104 SUB C,B ADD A,B SUB C,B
105 STORE A,Z SUB C,B STORE A,Z
106 STORE A,Z
Use of Delayed Branch
I = Instruction Fetch
E = Instruction Execute
D = Memory Access
Note that in both cases there are no pipeline delays
Can sometimes be hard to optimize and fill the delay slot
Other Pipelining Overhead
Each stage of the pipeline has overhead in
moving data from buffer to buffer, from one
stage to another. This can lengthen the
total time it takes to execute a single
instruction!
The amount of control logic required to
handle memory and register
dependencies and to optimize the use of
the pipeline increases enormously with
the number of stages. This can lead to a
case where the logic between stages is
more complex than the actual stages
being controlled.
Pipelining on the Intel 486
5-stage pipeline
Fetch
Instructions can have variable length and can
make this stage out of sync with other stages.
This stage actually fetches about 5 instructions
with a 16-byte load
Decode1
Decode opcode and addressing modes – can be
determined from the first 3 bytes
Decode2
Expand opcode into control signals and more
complex addressing modes
Execute
Write Back
Store value back to memory or to register file
486 Pipelining Examples
[Pipeline diagram: MOV R1, M; MOV R1, R2; MOV M, R1 each pass through Fetch, D1, D2, Ex, WB, one cycle apart.]
[Pipeline diagram: MOV R2, M followed by MOV R1, (R2).]
Need R2 written back to use as the address for the second instruction in stage D2.
Bypass circuitry allows us to read this value in the same stage.
486 Pipelining Examples
[Pipeline diagram: CMP R1, Imm passes through Fetch, D1, D2, Ex, WB; JCC Target follows one cycle behind; the fetch of the target begins during the Ex stage of the JCC.]
Target address known after D2 phase
Runs a speculative Fetch on the target during EX
hoping we will execute it (predict taken)
Also fetches next consecutive instruction if
branch not taken
Pentium II/IV Pipelining
Pentium II
12 pipeline stages
Dynamic execution incorporates the
concepts of out of order and speculative
execution
Two-level, adaptive-training, branch
prediction mechanism
Pentium IV
20 stage pipeline
Combines different branch prediction
mechanisms to keep the pipeline full
Intel 8086 Micro-Processor
CISC Architecture
The 8086 Microprocessor
The 8086, announced in 1978, was the
first 16-bit microprocessor introduced by
Intel Corporation
8086 is internally a 16-bit MPU.
Externally the 8086 has a 16-bit data
bus and has the ability to address up to
1MB of memory via its 20-bit address
bus.
Micro-Processor Block Diagram
[Block diagram of the 8086: the bus interface with the instruction queue, command control system and segment registers drives the internal 20-bit bus and the external address, data and control buses; the execution side holds the general-purpose registers and the ALU on the internal bus.]
Register legend:
AX - accumulator
BX - base register
CX - counting register
DX - data register
SP - stack pointer
BP - base pointer
SI - source index
DI - destination index
CS - code segment
DS - data segment
SS - stack segment
ES - extra segment
IP - instruction pointer
8086 CPU model
Fetch and Execute
The BIU outputs the contents of the IP onto
the address bus.
IP incremented by 1.
Instruction passed to the queue.
If the queue is empty, the instruction is executed immediately.
The next instruction is fetched while the last one executes -
the queue may fill before execution finishes.
8086 Programming Model
(Registers)
The 8086 has general-purpose registers and
pointers with which to perform its instructions.
Moreover, it has status & control flags and segment
registers (memory address pointers).
The general-purpose registers are used by the
programmer for holding intermediate values and
addressing memory. The registers are 16 bits wide.
Four of these registers (AX, BX, CX, DX) can be
used as 8b registers (AL, AH, BL, BH, CL, CH, DL,
DH) for byte specific operations.
A number of the registers can be used as pointers.
When used as a pointer, a register holds a value
(address) that can be considered as pointing to a
specific location.
Pointer Registers
Instruction Pointer (IP) – The address of
the next instruction to be fetched from
memory.
Stack Pointer (SP) – The address of the
last entry placed on the stack. This
address is the top of the stack.
Data pointers – The address of one
particular location in memory.
8086 Programming Model
Status and Control Flags
8086 has a 16-bit flags register.
Nine of these flags are active and indicate the
current state of the processor:
Carry flag (CF)
Parity flag (PF)
Auxiliary carry flag (AF)
Zero flag (ZF)
Sign flag (SF)
Trap flag (TF)
Interrupt flag (IF)
Direction flag (DF)
Overflow flag (OF)
Addressing Modes
Immediate Addressing Mode
Direct Addressing Mode
Register Addressing Mode
Indirect Memory Addressing Mode
Register Relative
Based Indexed
Relative Based Indexed
TI MSP430 Micro-
Controller
RISC Architecture
The MSP Family
• Ultra-low power; mixed signal processors
• Widely used in battery operated
applications
• Uses Von Neumann architecture to
connect CPU, peripherals and buses
• AVR is a commonly used debugger
The MSP family (cont.)
• 1 to 60 kB flash
• 256B to 2kB RAM
• With or without Hardware multipliers,
UART and ADC
• SMD package with 20 to 100 pins
• MSP430 family has 4 kB flash, 256B
RAM, 2 timers and an SO-20 package
Memory Organization
Architecture: Basic
Elements
• 16 bit RISC processor
• Programmable 10/12 bit ADC
• 12 bit Dual DAC for accurate analog
voltage representation
• Supply voltage supervisor for detection
of Gray level
• Programmable timers, Main and Auxiliary
crystal circuits
Architecture
CPU features
• Reduced Instruction Set Computer (RISC)
Architecture
• Compact instruction set of 27 instructions
• 7 orthogonal addressing modes
• Memory to Memory data transfer
• Separate 16 bit Address and Data buses
• 2 constant number generators to optimize
code
Instruction Set
• 27 “CORE” instruction and 24 “EMULATED”
instructions
• No code or performance penalties for
Emulated instructions
• Instructions can be for word or byte operands
(.W / .B)
• Classified into 3 groups:
Single Operand Instructions: RRA, RRC, PUSH, CALL
Dual Operand Instructions: MOV, ADD, SUB
Jumps: JEQ, JZ, JMP
Clock sub-system
Basic Clock module includes:
LFXT1 – LF/HF crystal circuit that uses either a
32,768 Hz crystal (LF) or standard resonators
in the 450 kHz - 8 MHz range
XT2 – optional HF oscillator that can be used
with standard crystals or external resonators
in the 450 kHz - 8 MHz range
DCO – Digitally Controlled Oscillator.
Software programmable, RC characteristics
Clock Sub-system (cont.)
• 3 clocks for the balance of power
consumption and performance
ACLK: uses the LFXT1 (32,768 Hz) clock divided
by 1, 2, 4 or 8 for individual peripherals
MCLK: uses LFXT1, XT2 or DCO sources, as
software programmed. Used by the CPU
and system
SMCLK: uses LFXT1, XT2 or DCO sources, as
software programmed, for the peripherals
Flash Memory Organization
• Bit, Byte or Word addressable
memory
• Information memory divided into
segments of 128 bytes
• System memory has 2 or more 512
byte segments.
• Segment is further divided into 64
bytes blocks
• Can have segment erase and mass
erase
Supply Voltage Supervisor
(SVS)
• Used to monitor the AVcc level
• 14 selectable ranges
• Software accessible
• Generates a POR interrupt
ADC
• Selectable 10 or 12 bit precision
• Uses Dual Slope ADC technique
• Monolithic 10/12 bit conversion with no
missing codes
• Higher than 200 ksps conversion rates
• Sample and Hold
• 8 individually configurable channels
• Initialization by software or timer A.
DAC
• 12 bit DAC with selectable 8/12 bit
precision
• Straight or 2’s complement binary
• Self calibration option for offset
• Programmable settling time for power
consumption
Typical Applications
• Filters – FIR, wave filtering
• Benchmarking circuits
• Data Encryption and Decryption
• Flash monitor
• Low power sensing applications
• Random Number generation
CPU - Summary
The Central Processing Unit has a very
basic data flow, which includes fetch,
decode, fetch data, execute and write
data.
To allow the CPU to operate properly, it
has a few groups of registers (general purpose,
pointers, flags and control, segment
registers).
Pipelining allows executing multiple
commands in a shorter time than the
usual sequential processing
End of Lecture 2
References
DCS Lecture Notes of Shlomo Greenberg,
Yaara Ben-Or, 2013
TI MSP430 Microcontrollers, Aditya Pathak,
2007
Computer Organization and Architecture,
William Stallings, 2002
DCS - 8086/8080 CPU Architecture, Shlomo
Greenberg, 2014
https://en.wikipedia.org/wiki/ - many items
Structured Computer Organization,
Tanenbaum, Fifth Edition, 2006