
MBIST ASSIGNMENT 1

Memory in Digital Systems


Memory is a critical component in any digital or computing system, serving as the medium
for storing data and instructions. It enables a processor or microcontroller to access data
quickly, either temporarily or permanently, depending on the type of memory. In embedded
systems and computer architecture, memory plays a central role in determining the overall
performance, functionality, and flexibility of the system.
Memory can broadly be categorized into volatile (e.g., RAM) and non-volatile (e.g., ROM,
Flash) types. Volatile memory loses its contents when power is turned off, while non-volatile
memory retains data even when the power supply is removed. Each type of memory has its
specific use in storing operational data, firmware, boot instructions, or user applications.
1. Address Lines
Address lines are used to identify the location of data within the memory. When the
processor wants to access a particular memory cell, it sends the corresponding address via
these lines. The number of address lines determines the maximum number of unique
memory locations that can be accessed. For example, 10 address lines can access 2¹⁰ = 1024
memory locations.
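The address-line arithmetic above generalizes directly; a minimal sketch (the function name is illustrative, not part of any standard API):

```python
def addressable_locations(address_lines):
    """Each additional address line doubles the address space,
    so n lines can select 2**n unique locations."""
    return 2 ** address_lines

# 10 address lines -> 1024 locations, matching the example above
```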
2. Data Lines
Data lines are responsible for carrying data to and from the memory. When reading data, the
contents of the addressed location are placed on the data lines. When writing, the data lines
carry the data that is to be stored in the memory.
3. Control Lines
Control lines determine whether the memory should perform a read or write operation. The
most common control lines include:
 Read Enable (RE) / Output Enable (OE): Allows data to be read from memory.
 Write Enable (WE): Enables writing data to a memory location.
These lines are essential for proper timing and coordination of memory operations.
4. Memory Cells
Memory cells are the smallest units of memory where a single bit (0 or 1) is stored. These
cells are typically arranged in rows and columns, and each cell can be accessed via a
combination of row and column addresses. A collection of these cells forms a memory block
or array.
5. Address Decoder
The address decoder is a logic circuit that activates a specific memory cell or block based on
the input address. It takes the binary address from the address lines and selects the
corresponding cell for read or write operations. This decoder is crucial for ensuring that only
one memory cell is accessed at a time.
6. Timing and Control Logic
This component synchronizes the memory operation with the processor or controller clock.
It ensures correct sequencing of read and write cycles, activation of control signals, and
adherence to timing constraints.

Memory Block Organization

1. Address Bus: Connected to the address lines, this bus carries the address information
from the CPU to the memory block. It allows the system to specify which memory
location is being accessed.
2. Data Bus: Connected to the data lines, this bus is bi-directional and allows the
transfer of data to and from the memory.
3. Control Signals: These include Read Enable and Write Enable. The CPU activates
these signals to instruct the memory whether to perform a read or write operation.
4. Address Decoder: Converts the binary address into a unique line that selects one
memory cell or block. Only one line is activated at a time.
5. Memory Array (Memory Cells): The core of the memory block where binary data is
stored. Each cell stores a single bit, and a group of cells forms a word.
6. Read/Write Logic: This logic handles the actual process of fetching or storing data
based on the control signal and decoder output.

Functional Model of Memory


The functional model of memory represents how memory performs its key functions:
storing data, retrieving data, and managing access. It is mainly focused on what memory
does rather than how it is physically constructed.

In a functional sense, memory can be seen as a large array of storage locations (cells),
each with a unique address. The system (usually a processor or controller) interacts with
memory by performing read and write operations. Control signals are used to specify the
type of operation, and the address determines where the data will be written or read from.

Functional Operations
1. Write Operation
In a write operation, the CPU provides:
 An address specifying where the data should be stored.
 The data to be written.
 A write enable (WE) signal to tell memory to perform the write.
The memory stores the incoming data at the location specified by the address.
2. Read Operation
In a read operation, the CPU provides:
 An address from where data is to be fetched.
 A read enable (RE) signal to tell memory to perform a read.
The memory responds by placing the data from the specified location onto the data bus.
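The read and write operations above can be captured in a small functional model (an illustrative Python sketch; the class and signal names are assumptions, not a real memory interface):

```python
class Memory:
    """Functional model: an addressable array of words, with
    read/write gated by enable signals."""
    def __init__(self, size):
        self.cells = [0] * size          # all locations cleared

    def write(self, address, data, we=True):
        if we:                           # write enable (WE) asserted
            self.cells[address] = data

    def read(self, address, re=True):
        if re:                           # read enable (RE) asserted
            return self.cells[address]
        return None                      # bus not driven

mem = Memory(1024)
mem.write(0x3F, 0xAB)      # store 0xAB at address 0x3F
value = mem.read(0x3F)     # -> 0xAB
```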
The functional model of memory describes how data is stored, accessed, and transferred
within a digital system. In this model, the process begins when an address is sent from the
CPU or controller to the memory. This address first passes through the address latch, which
temporarily holds the address to ensure stability and synchronization for decoding. The
address is then forwarded to two decoders: the row decoder and the column decoder. The
row decoder identifies the specific row (word line) in the memory array that needs to be
accessed, while the column decoder determines the exact column (bit line), thereby
selecting a specific memory cell within the memory cell array. This memory array is the core
storage unit that holds data in the form of bits arranged in rows and columns.
When performing a write operation, the data input from the CPU is temporarily stored in a
data register. The control signals, such as Write Enable and Chip Enable, activate the write
driver, which pushes the data into the selected memory cell at the intersection of the chosen
row and column. During a read operation, the selected memory cell outputs a small
electrical signal representing the stored bit. This weak signal is fed into the sense amplifiers,
which amplify it to a readable digital level. The amplified data is then loaded into the data
register and sent to the CPU via the data output line. Throughout the process, control signals
determine whether a read or write operation is to be performed and ensure that memory
access only occurs when the chip is enabled. This functional flow ensures reliable storage
and retrieval of data in a controlled and synchronized manner.
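The row/column decoding step described above can be sketched as an address split (the array dimensions here are assumed purely for illustration):

```python
ROW_BITS, COL_BITS = 4, 4            # assumed 16 x 16 cell array

def decode(address):
    """Upper bits drive the row decoder (word line); lower bits
    drive the column decoder (bit line)."""
    row = address >> COL_BITS                 # select word line
    col = address & ((1 << COL_BITS) - 1)     # select bit line
    return row, col

# address 0b0011_0101 -> row 3, column 5
```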

16M × 32 Memory: Address and Data Lines

"16M" stands for 16 Mega locations → 16 × 2²⁰ = 2²⁴ = 16,777,216 unique memory addresses.
"× 32" means each memory location stores 32 bits of data.
So, this memory has:
 2²⁴ memory locations, and
 Each location is 32 bits wide (4 bytes).

Address Lines
To access each of the 2²⁴ = 16M locations uniquely, we need enough address
lines to generate all those combinations.

The number of address lines required = log₂(16M) = log₂(2²⁴) = 24.

So, 24 address lines are needed to access 16M locations.

Each address line can be in one of two states (0 or 1), and with 24 address lines, the system can generate 2²⁴ unique addresses.
Since each location stores 32 bits, we need: 32 data lines
So, 32 data lines are required to transfer 32-bit data in and out of the memory.
These lines are bidirectional in most systems:
 During a write operation, data travels from CPU to memory.

 During a read operation, data flows from memory to CPU.

Address Lines = 24 → To select any of the 16M locations.


Data Lines = 32 → To read/write 32 bits of data per access.
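The 16M × 32 sizing can be verified with a couple of lines (a sketch of the arithmetic only):

```python
import math

locations = 16 * 2**20                       # 16M locations = 2**24
word_width = 32                              # bits per location

address_lines = int(math.log2(locations))    # 24
data_lines = word_width                      # 32
capacity_bits = locations * word_width       # 2**29 = 512 Mbit total
```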

SCRAMBLING

Scrambling in VLSI (Very Large Scale Integration) refers to a data encoding technique used to
randomize digital bit patterns before transmission or storage. It is primarily implemented in
high-speed communication systems and memory interfaces to avoid the occurrence of long
sequences of identical bits, such as continuous 0s or 1s. Such patterns can cause
synchronization problems at the receiver end, create signal integrity issues, increase
electromagnetic interference (EMI), and introduce DC bias. Scrambling transforms
predictable data into a more random-like format without increasing bandwidth, while still
allowing the original data to be recovered at the receiving end using a complementary
descrambler.

The core idea behind scrambling is to apply a mathematical operation, typically using XOR
logic and a Linear Feedback Shift Register (LFSR), to modify the original data stream in a
controlled yet reversible manner. An LFSR generates a pseudo-random bit sequence based
on a feedback polynomial, and each bit of the input data is XORed with the corresponding
bit from the LFSR sequence. This operation changes the data pattern in a way that appears
random but follows a deterministic structure, making it possible for the same LFSR
configuration at the receiver to reverse the process. When the scrambled data is received, it
is again XORed with the same LFSR-generated sequence to retrieve the original data.
1. Define the Original Data:
Let’s assume we have the following original data sequence that needs to be scrambled:
 Original Data = 00000000
2. Define the LFSR Polynomial and Initial State:
We choose a simple LFSR polynomial to generate the pseudo-random sequence. Let's use an
LFSR of length 4 with the polynomial x⁴ + x + 1.
This polynomial generates a 4-bit shift register with the following feedback
connection:
 The feedback bit is taken from the XOR of the 4th and 1st bits.

Let’s initialize the LFSR to a specific state (e.g., 1001).


3. Generate Scrambled Data Using XOR:
Now, we start scrambling the original data bit by bit. We apply the XOR of each
data bit with the corresponding LFSR bit.
Step 1: First Bit (Data: 0, LFSR: 1)
 Data bit = 0, LFSR bit = 1

 Apply XOR:

0 XOR 1 = 1
 The first scrambled bit is 1.

Now, shift the LFSR register:


 LFSR state changes from 1001 to 0010.

Step 2: Second Bit (Data: 0, LFSR: 0)


 Data bit = 0, LFSR bit = 0

 Apply XOR:

0 XOR 0 = 0
 The second scrambled bit is 0.

Now, shift the LFSR register:


 LFSR state changes from 0010 to 0100.

Step 3: Third Bit (Data: 0, LFSR: 0)


 Data bit = 0, LFSR bit = 0
 Apply XOR:
0 XOR 0 = 0
 The third scrambled bit is 0.

Now, shift the LFSR register:


 LFSR state changes from 0100 to 1000.

Step 4: Fourth Bit (Data: 0, LFSR: 1)

 Data bit = 0, LFSR bit = 1

 Apply XOR:

0 XOR 1 = 1
 The fourth scrambled bit is 1.

Now, shift the LFSR register:

 The feedback bit is 1 XOR 0 = 1, so the LFSR state changes from 1000 to 0001.

Step 5: Fifth Bit (Data: 0, LFSR: 0)

 Data bit = 0, LFSR bit = 0

 Apply XOR:

0 XOR 0 = 0
 The fifth scrambled bit is 0.

Now, shift the LFSR register:

 LFSR state changes from 0001 to 0011.

Step 6: Sixth Bit (Data: 0, LFSR: 0)

 Data bit = 0, LFSR bit = 0

 Apply XOR:

0 XOR 0 = 0
 The sixth scrambled bit is 0.

Now, shift the LFSR register:

 LFSR state changes from 0011 to 0111.

Step 7: Seventh Bit (Data: 0, LFSR: 0)

 Data bit = 0, LFSR bit = 0

 Apply XOR:

0 XOR 0 = 0
 The seventh scrambled bit is 0.

Now, shift the LFSR register:

 LFSR state changes from 0111 to 1111.

Step 8: Eighth Bit (Data: 0, LFSR: 1)

 Data bit = 0, LFSR bit = 1

 Apply XOR:

0 XOR 1 = 1
 The eighth scrambled bit is 1.

Now, shift the LFSR register:

 LFSR state changes from 1111 to 1110.

4. Scrambled Data:
The final scrambled data sequence after applying XOR to all bits of the original
data is:
 Scrambled Data = 10010001

5. Descrambling Process:
To retrieve the original data, the same LFSR sequence (with the same initial
state and polynomial) is used to XOR the scrambled data.
Let’s apply the same steps in reverse:
 Scrambled Data = 10010001

 XOR with LFSR bits as before, starting with the same LFSR state: 1001

After applying XOR with the LFSR bits step-by-step, we will retrieve:
 Original Data = 00000000
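The whole worked example can be reproduced in a few lines (a sketch assuming the left-shifting Fibonacci LFSR described above: output from the 4th bit, feedback = 4th bit XOR 1st bit, seed 1001; descrambling is the identical XOR with the same keystream):

```python
def lfsr_keystream(state, nbits):
    """4-bit Fibonacci LFSR for x^4 + x + 1: output the 4th (MSB)
    bit, feed back MSB XOR LSB, shift left each step."""
    bits = []
    for _ in range(nbits):
        out = (state >> 3) & 1               # 4th bit is the output
        fb = out ^ (state & 1)               # feedback = bit 4 XOR bit 1
        bits.append(out)
        state = ((state << 1) & 0xF) | fb    # shift left, insert feedback
    return bits

def scramble(data_bits, seed=0b1001):
    """XOR data with the keystream; applying it twice with the
    same seed recovers the original data."""
    ks = lfsr_keystream(seed, len(data_bits))
    return [d ^ k for d, k in zip(data_bits, ks)]

data = [0] * 8
scrambled = scramble(data)        # [1, 0, 0, 1, 0, 0, 0, 1]
recovered = scramble(scrambled)   # descramble: back to all zeros
```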

Design of an 8K Memory Using Eight 1K Modules:


The given diagram illustrates a memory system that consists of eight separate memory
modules, each capable of storing 1024 memory locations. The address lines A0 through A12
are used to access the full address space. Specifically, the lower 10 address lines (A0 to A9)
are connected to all the memory modules and are used to access individual memory
locations within each module, allowing access to 1024 locations per module. The upper
address lines A10, A11, and A12 are fed into a 3-to-8 decoder, which is responsible for
selecting one out of the eight modules at any given time. The decoder works by interpreting
the 3-bit binary value formed by A10 to A12 and enabling only the corresponding memory
module. For example, if A12-A10 is 000, Module 0 is selected; if it is 001, Module 1 is
selected, and so on, up to 111 for Module 7. This setup allows for a total addressable range
from address 0 to 8191, effectively expanding the memory by breaking it into manageable
blocks while using a decoder for module selection.
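The decoder-based module selection can be mirrored in software (a sketch; `select` is a hypothetical helper, not part of the design itself):

```python
def select(address):
    """Split a 13-bit address: A12..A10 drive the 3-to-8 decoder
    (module select), A9..A0 index a location inside the module."""
    assert 0 <= address < 8192                # 8K address space
    module = address >> 10                    # A12..A10 -> module 0..7
    offset = address & 0x3FF                  # A9..A0 -> location 0..1023
    return module, offset

# address 1024 -> module 1, offset 0; address 8191 -> module 7, offset 1023
```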

SRAM cell and read operations

The 6T SRAM cell consists of six transistors—four forming two cross-coupled inverters (M3-
M4 and M5-M6) to store the logic state, and two access transistors (M1 and M2) that
interface the cell with the bit lines (BL and BLB). During a read operation, both bit lines BL
and BLB are first precharged to a high voltage level (VDD) and then the word
line (WL) is enabled, turning ON access transistors M1 and M2. This connects the internal
nodes (Q and Q̅) to the respective bit lines. Depending on the stored value, one of the
internal nodes is at logic '1' and the other at logic '0'. The high node maintains the voltage
on its bit line, while the low node slightly pulls down the voltage on the opposite bit line. For
example, if Q = 1 and Q̅ = 0, then BL remains high while BLB begins to discharge through M2
and the low node Q̅. This creates a small voltage difference between BL and BLB. A
differential sense amplifier connected to these lines detects this difference and amplifies it
to produce the final logic output. Importantly, the internal data is not disturbed due to the
strong feedback of the cross-coupled inverters, ensuring a non-destructive read.
Precharge the Bit Lines:
 Both BL (Bit Line) and BLB (Bit Line Bar) are precharged to VDD (logic
high) using precharge circuitry.
 This ensures a known and balanced starting point.
Enable the Word Line (WL):
 The Word Line (WL) is asserted high (logic 1), which turns ON access transistors M1
and M2.
 These transistors connect the internal storage nodes (Q and Q̅) to the bit lines (BL
and BLB).
Sensing the Data:
 Assume the stored data is Q = 1 and Q̅ = 0.
 Since Q is connected to BL, and Q̅ is connected to BLB:
o BL remains high (since Q = 1).
o BLB discharges slightly (since Q̅ = 0, it pulls BLB down through M2).
 A small voltage difference is created between BL and BLB.
Sense Amplifier Detects the Difference:
 A sense amplifier connected to BL and BLB detects this small difference and amplifies
it.
 The sense amp interprets which side is higher and outputs the corresponding logic
value.
Key Points During Read:
 The internal state (Q/Q̅) is not disturbed, because the cell is designed to be strong
enough to hold its value during read.
 The access transistors (M1 and M2) are designed carefully to balance speed and
disturbance immunity.
 The bit lines are always precharged before a read to enable differential sensing.
Address Bits in a 1G × 1 DRAM
A 1G × 1 DRAM stores 1 gigabit of data, where each memory cell holds 1 bit.
This type of DRAM is typically organized in a 2D array of rows and columns for efficient
addressing.
Total Memory Size:
 1Gbit = 2³⁰ bits
 Organized as:
o Rows = 1M = 2²⁰
o Columns = 1K = 2¹⁰
Address Bits Calculation
To access each bit in this DRAM, we need to uniquely identify a specific row and column.
The number of address bits is the sum of the bits required to address all rows and all
columns.
Row Address Bits:
 Since there are 2²⁰ rows, you need 20 bits to address all rows.
Column Address Bits:
 Since there are 2¹⁰ columns, you need 10 bits to address all columns.
Total Address Bits:
 Row Address Bits: 20
 Column Address Bits: 10
 Total = 20 + 10 = 30 bits
So, to access any bit in this 1G × 1 DRAM, a 30-bit address is required.
Address Multiplexing in DRAM (Optional but common in practice):
DRAM typically uses address multiplexing to reduce the number of pins:
 The 30-bit address is split:
o First, the row address (20 bits) is sent.
o Then, the column address (10 bits) is sent.
 Because both halves share the same pins, the pin count is set by the wider
half: 20 address pins instead of 30.
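The address-bit arithmetic can be checked quickly (a sketch; note that with multiplexing the shared pins must carry the wider, row half of the address):

```python
import math

rows, cols = 2**20, 2**10              # 1M rows x 1K columns = 1 Gbit
row_bits = int(math.log2(rows))        # 20
col_bits = int(math.log2(cols))        # 10
total_bits = row_bits + col_bits       # 30-bit full address

# Row and column addresses are sent in two cycles over the same pins,
# so the pin count equals the larger of the two halves.
address_pins = max(row_bits, col_bits)  # 20
```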
