CSS Notes (Cryptography and System Security)
Security Mechanism
Data Integrity ensures that the data received is exactly the same as the data that
was originally sent, without any alteration or corruption. This is achieved using
cryptographic hash functions or checksums, which generate a fixed hash value based
on the original data. If any change occurs in the data, the hash value will also change,
signaling a loss of integrity.
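A minimal sketch of this check using Python's hashlib (the messages here are made up for illustration): the sender computes a hash over the data, and the receiver recomputes it and compares.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hash of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer 500 to account 1234"
sent_hash = digest(original)                        # computed and sent by the sender

received = b"transfer 900 to account 1234"          # altered in transit
if digest(received) != sent_hash:
    print("Integrity check failed: the data was modified")
```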
Digital Signature is a technique that provides authentication, data integrity, and non-
repudiation. It involves generating a hash of the message and encrypting that hash
using the sender’s private key. The receiver can decrypt the hash using the sender’s
public key to verify that the message has not been modified and that it came from the
claimed sender.
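As a hedged illustration (the notes do not name any particular library), the third-party `cryptography` package can show the same sign-and-verify flow; here the library hashes the message and applies RSA-PSS padding internally rather than "encrypting the hash" by hand.

```python
# pip install cryptography   (third-party package, assumed for this sketch)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"pay vendor 1000"

# Sender: sign the message with the private key (hashing is done internally).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Receiver: verify with the sender's public key; raises InvalidSignature if the
# message or the signature has been tampered with.
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```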
Routing Control is a mechanism that ensures data takes secure and predetermined
paths through the network. It involves applying routing policies to avoid insecure or
untrusted routes, thereby protecting data in transit. This mechanism is often used in
networks requiring a high level of security, such as VPNs or military communication
systems.
Notarization is a mechanism where a trusted third party (called a notary) certifies the
existence or authenticity of a message or document at a specific point in time. This
provides proof that a transaction occurred and prevents either party from denying it
later, thereby ensuring non-repudiation.
The term "steganography" Is derived from the Greek word “steganos” which means
“hidden or covered” and “graph” means “to write.” It has been in use for centuries. For
example, in ancient Greece, people carved messages onto wood and covered them
with wax to hide it. Similarly, Romans used different types of invisible inks which could
be revealed when exposed to heat or light.
Step 1: The first step in steganography is selecting a cover medium which is the file or
message that will carry the hidden data. Common cover media include:
Step 3: The secret message is then hidden using one of several techniques:
Least Significant Bit (LSB): The least significant bit of a byte is changed to hide the
secret message. This method is often used in image and audio files.
Frequency Domain: Instead of modifying the raw data (like pixels or audio samples), the
secret message can be embedded in the frequency components of an image or audio
file.
Bit Planes: In this method, data is hidden in the higher-order bit planes of an image.
This can be more secure because it uses bits that are less likely to be noticed.
Step 4: The modified data is then embedded into the cover medium. The resulting file
which now contains both the cover data and the hidden message is referred to as the
stego-object which can be safely transmitted or stored without raising suspicion.
Step 5: The receiver of the stego-object needs to know the method used for embedding
the secret message. In some cases, a secret key is required to extract the data if
encryption is used in combination with steganography.
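As a concrete illustration of the LSB technique from Step 3, here is a toy Python sketch that hides a short byte string in the least significant bits of a list of pixel values; a real tool would read and write an actual image file, which is omitted here.

```python
def embed_lsb(pixels, secret: bytes):
    """Hide each bit of `secret` in the least significant bit of one pixel value."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit      # clear the LSB, then set it to the secret bit
    return stego

def extract_lsb(pixels, length: int) -> bytes:
    """Read back `length` bytes from the least significant bits of the pixel values."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for value in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (value & 1)
        out.append(byte)
    return bytes(out)

cover = list(range(100, 200))                 # stand-in for pixel intensities
stego = embed_lsb(cover, b"hi")
print(extract_lsb(stego, 2))                  # b'hi'
```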
Introduction: DES (Data Encryption Standard) is a symmetric-key block cipher that encrypts 64-bit blocks of plaintext using a 56-bit key over 16 rounds.
2. Splitting: After an initial permutation, the 64-bit block is divided into two 32-bit halves, a left half (L) and a right half (R). In each round, the right half passes through the following steps:
1. Expansion (E):
o The 32-bit right half is expanded to 48 bits using an expansion table.
2. Key Mixing:
o The 48-bit result is XORed with the 48-bit round key.
3. Substitution (S-boxes):
o The 48-bit result is divided into eight 6-bit blocks.
o Each block is passed through a substitution box (S-box) to produce a 4-bit
output.
o Total output becomes 32 bits.
4. Permutation (P-box):
o The 32-bit output is permuted to rearrange the bits.
5. XOR and Swap:
o The result is XORed with the left half.
o Then, the halves are swapped.
This process is repeated for 16 rounds, with a different round key each time.
After the 16 rounds, the left and right halves are combined and passed through
the inverse of the initial permutation, giving the ciphertext.
The original 56-bit key is used to generate 16 different 48-bit round keys.
The key is shifted and compressed in each round using permutation tables.
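A simplified sketch of this Feistel structure is given below. The expand, sbox_layer, and permute helpers are toy stand-ins, not the fixed E, S-box, and P tables from the DES standard; the aim is only to show how the round function, the XOR-and-swap step, and the reversed key order for decryption fit together.

```python
# Toy stand-ins for the fixed DES tables (NOT the real E, S-box, or P tables).
def expand(r32: int) -> int:
    return (r32 << 16) | (r32 & 0xFFFF)                 # 32 -> 48 bits

def sbox_layer(x48: int) -> int:
    return (x48 ^ (x48 >> 16)) & 0xFFFFFFFF             # 48 -> 32 bits

def permute(x32: int) -> int:
    return ((x32 << 3) | (x32 >> 29)) & 0xFFFFFFFF      # rearrange the bits

def f(right: int, round_key: int) -> int:
    """Round function: expansion -> key mixing -> substitution -> permutation."""
    return permute(sbox_layer(expand(right) ^ round_key))

def feistel_encrypt(left: int, right: int, round_keys):
    for k in round_keys:                                # 16 rounds in real DES
        left, right = right, left ^ f(right, k)         # XOR with left half, then swap
    return right, left                                  # halves recombined (final swap)

def feistel_decrypt(left: int, right: int, round_keys):
    # Same structure with the round keys applied in reverse order.
    return feistel_encrypt(left, right, list(reversed(round_keys)))

round_keys = [0x0F0F0F0F0F0F + i for i in range(16)]    # pretend 48-bit round keys
ct = feistel_encrypt(0x01234567, 0x89ABCDEF, round_keys)
print(tuple(hex(h) for h in feistel_decrypt(*ct, round_keys)))   # original halves recovered
```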
Security Issues:
The 56-bit key size is vulnerable to brute-force attacks.
2. Expansion Permutation:
In each round of DES, the 32-bit right half of the data block undergoes an operation
called expansion permutation. This step increases the size of the block from 32 bits to
48 bits by duplicating and rearranging certain bits according to a fixed table. The main
reason for this expansion is to match the size of the 48-bit round key, so that both can
be XORed. It also helps introduce more diffusion in the cipher, which improves security
by mixing the input bits more effectively before substitution.
3. Role of S-box:
After expansion and XOR with the round key, the 48-bit result is divided into eight
blocks of 6 bits each. These blocks are passed through S-boxes (Substitution boxes).
Each S-box takes a 6-bit input and gives a 4-bit output based on a predefined
substitution table. The purpose of S-boxes is to introduce non-linearity and confusion
into the encryption process. This makes it difficult for attackers to find relationships
between the plaintext, ciphertext, and key, enhancing the strength of the encryption.
4. Possible Attacks on DES:
Although DES was secure when introduced, it is now considered vulnerable due to its
short key size. The most well-known attack is the brute-force attack, where all 2⁵⁶
possible keys are tried to decrypt the message. With today’s computational power and dedicated hardware, this can be done in a matter of days or even hours. Other attacks include differential cryptanalysis
and linear cryptanalysis, which exploit statistical patterns in the encryption rounds. Due
to these weaknesses, DES has been replaced in modern systems by more secure
algorithms like Triple DES (3DES) and Advanced Encryption Standard (AES).
Double DES:
Double DES is an encryption technique that applies two instances of DES to the same plaintext, using a different key for each instance. Both keys are required at the time of decryption. The 64-bit plaintext goes into the first DES instance, which converts it into a 64-bit middle text using the first key; this middle text then goes into the second DES instance, which produces the 64-bit ciphertext using the second key.
2.1 Double DES uses two applications of the DES algorithm with two different keys, K1
and K2.
Encryption process:
C1 = E_K1(Plaintext)
Ciphertext = E_K2(C1)
Ciphertext = E_K2(E_K1(Plaintext))
Decryption process:
C1 = D_K2(Ciphertext)
Plaintext = D_K1(C1)
So, Double DES decryption is:
Plaintext = D_K1(D_K2(Ciphertext))
2.3 Weakness:
Although 2DES uses 112 bits of key length (56 + 56), it is still vulnerable to a meet-in-
the-middle attack, which reduces the effective security significantly. Hence, Double DES
is not widely used.
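The meet-in-the-middle idea can be demonstrated with a toy 8-bit cipher (toy_encrypt below is invented purely for illustration; real DES keys are 56 bits, so the table would be far larger, but the principle is identical): encrypt a known plaintext under every possible K1, decrypt the matching ciphertext under every possible K2, and look for values that meet in the middle. The work is roughly 2 × 2⁸ operations instead of 2¹⁶.

```python
def toy_encrypt(block: int, key: int) -> int:
    return ((block ^ key) + 77) % 256        # made-up reversible 8-bit "cipher"

def toy_decrypt(block: int, key: int) -> int:
    return ((block - 77) % 256) ^ key

k1, k2 = 173, 41                             # secret keys (unknown to the attacker)
plaintext = 99
ciphertext = toy_encrypt(toy_encrypt(plaintext, k1), k2)   # "double" encryption

# Table of E_K1(P) for every possible K1, built from the known plaintext/ciphertext pair.
forward = {toy_encrypt(plaintext, a): a for a in range(256)}

candidates = []
for b in range(256):
    middle = toy_decrypt(ciphertext, b)      # D_K2(C)
    if middle in forward:                    # the two searches "meet in the middle"
        candidates.append((forward[middle], b))

print((k1, k2) in candidates)                # True: the real key pair is recovered
```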
Triple DES:
Triple DES is an encryption technique that applies three instances of DES to the same plaintext. It supports three keying options: in the first, all three keys are different; in the second, two keys are the same and one is different; and in the third, all three keys are the same.
Triple DES is also vulnerable to a meet-in-the-middle attack, which reduces its effective security level to about 2^112 even though it uses a 168-bit key. A block collision attack is also possible because of the short 64-bit block size when the same key is used to encrypt a large amount of data; the Sweet32 attack exploits exactly this weakness.
3.1 Triple DES is designed to overcome the weaknesses of both DES and Double DES.
It applies the DES algorithm three times using either two keys or three keys.
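A conceptual sketch of the encrypt-decrypt-encrypt (EDE) order and the keying options is shown below; des_encrypt and des_decrypt are simple placeholders standing in for a real single-DES block operation, not the actual algorithm.

```python
def des_encrypt(block: int, key: int) -> int:
    return ((block ^ key) + 77) % 256        # placeholder, NOT real DES

def des_decrypt(block: int, key: int) -> int:
    return ((block - 77) % 256) ^ key        # placeholder, NOT real DES

def tdes_encrypt(block, k1, k2, k3):
    """EDE: encrypt with K1, decrypt with K2, encrypt with K3."""
    return des_encrypt(des_decrypt(des_encrypt(block, k1), k2), k3)

def tdes_decrypt(block, k1, k2, k3):
    return des_decrypt(des_encrypt(des_decrypt(block, k3), k2), k1)

p = 99
# Keying option 1 (three different keys); option 2 would simply set k1 == k3.
print(tdes_decrypt(tdes_encrypt(p, 10, 20, 30), 10, 20, 30))    # 99
# Keying option 3 (all keys equal) collapses EDE to single DES, giving backward compatibility.
print(tdes_encrypt(p, 10, 10, 10) == des_encrypt(p, 10))        # True
```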
3.5 Security:
4. Conclusion:
Double DES is an improvement over DES but still vulnerable to certain attacks.
Triple DES offers significantly better security and has been used in banking and
financial systems for many years.
Despite its security, 3DES is now considered outdated due to its slower speed,
and it is being replaced by AES (Advanced Encryption Standard) in modern
applications.
Definition:
Public Key Infrastructure (PKI) is a framework of policies, roles, hardware, software, and
procedures needed to create, manage, distribute, use, store, and revoke digital
certificates.
It enables secure communication and data exchange over untrusted networks like the
Internet using public-key cryptography.
2. Registration Authority (RA):
2.2. Authenticates user identities before passing certificate requests to the CA.
2.3. It is authorised by the CA to provide digital certificates to the users on a case-by-case basis.
Digital Certificates:
3.2. Contain fields like subject, issuer, public key, validity period, and digital signature.
4. Key Pair:
4.1. Public Key: Shared openly, used for encryption or verifying digital signatures.
4.2. Private Key: Kept secret, used for decryption or creating digital signatures.
5. Certificate Revocation List (CRL):
5.1. A list maintained by the CA of certificates that are no longer valid before their expiry
date.
6. Relying Parties:
6.1. Entities (e.g., browsers, mail clients) that use digital certificates to perform secure
operations.
🛡️ Functions of PKI:
Integrity – Ensures data hasn’t been tampered with using digital signatures.
4.1. They use the certificate to verify the website’s identity (authentication).
4.2. They securely exchange encryption keys to establish a private communication
channel.
🧑 Definition:
MD5 (Message Digest 5) is a cryptographic hash function. It takes an input message of any length and produces a 128-bit (16-byte) fixed-size hash value.
The input message is padded so that its length is congruent to 448 mod 512 (i.e., 64 bits short of a multiple of 512).
Finally, append the original message length (in bits) as a 64-bit value.
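A small sketch of this padding rule (assuming MD5's little-endian encoding of the 64-bit length):

```python
import struct

def md5_pad(message: bytes) -> bytes:
    bit_len = len(message) * 8
    padded = message + b"\x80"                      # a single 1 bit followed by zeros
    while (len(padded) * 8) % 512 != 448:           # pad until length ≡ 448 mod 512 bits
        padded += b"\x00"
    return padded + struct.pack("<Q", bit_len)      # append original length as 64 bits

print(len(md5_pad(b"abc")) * 8)                     # 512: "abc" pads out to one full block
```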
✅ Initialize Buffers: Four 32-bit registers (A, B, C, D) are initialized to the following constants:
A = 0x67452301
B = 0xefcdab89
C = 0x98badcfe
D = 0x10325476
The padded message is divided into 512-bit blocks (64 bytes each).
Each block is further divided into 16 words of 32 bits.
For each 512-bit block, perform 4 rounds of transformation using logical functions and
modular additions.
1. Round 1 (F):
F(B, C, D) = (B AND C) OR ((NOT B) AND D)
2. Round 2 (G):
G(B, C, D) = (B AND D) OR (C AND (NOT D))
3. Round 3 (H):
H(B, C, D) = B XOR C XOR D
4. Round 4 (I):
I(B, C, D) = C XOR (B OR (NOT D))
In each operation, one buffer is updated as B + ((A + f(B, C, D) + M[i] + K[i]) <<< s), where f is the round function, M[i] is a 32-bit message word, K[i] is a fixed constant, and <<< s denotes a left rotation; the buffers then rotate roles for the next operation.
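These four round functions map directly onto Python's bitwise operators; the sketch below reuses the buffer constants listed earlier (results are masked to 32 bits).

```python
M32 = 0xFFFFFFFF                      # keep every result within 32 bits

def F(b, c, d):                       # Round 1
    return ((b & c) | (~b & d)) & M32

def G(b, c, d):                       # Round 2
    return ((b & d) | (c & ~d)) & M32

def H(b, c, d):                       # Round 3
    return (b ^ c ^ d) & M32

def I(b, c, d):                       # Round 4
    return (c ^ (b | ~d)) & M32

A, B, C, D = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
print(hex(F(B, C, D)), hex(I(B, C, D)))
```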
🔹 Example:
For input "abc", the MD5 hash is:
900150983cd24fb0d6963f7d28e17f72
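This digest can be reproduced with Python's standard hashlib module:

```python
import hashlib

print(hashlib.md5(b"abc").hexdigest())   # 900150983cd24fb0d6963f7d28e17f72
```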
| Feature | MD5 | SHA-256 |
| --- | --- | --- |
| Collision Resistance | Poor | Good |
Deterministic: A hash function must consistently produce the same output for the same
input.
Fixed Output Size: The output of a hash function should have a fixed size, regardless of
the size of the input.
Fast Computation/Efficiency: The hash function should be able to process input quickly.
Second Pre-image Resistance: Given an input M1, it should be hard to find another input M2 (where M1 ≠ M2) such that hash(M1) = hash(M2).
Avalanche Effect: A small change in the input should produce a significantly different
hash value.
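The avalanche effect is easy to observe with hashlib: changing a single character of the input yields a completely different digest.

```python
import hashlib

print(hashlib.sha256(b"hello world").hexdigest())
print(hashlib.sha256(b"hello worlD").hexdigest())   # one character changed, digest differs entirely
```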
When multiple programs are running in memory, the operating system must ensure that
no program interferes with another. This includes preventing access to another
program’s data or to critical parts of the operating system itself. To accomplish this,
operating systems use various techniques collectively known as memory and address
protection mechanisms. Let’s understand each of them in detail.
🔰 Fence
The fence method is the most basic form of memory protection, primarily used in older
single-user systems. In this approach, a fence address is set by the operating system
which defines the boundary between the OS’s memory and the user’s memory. For
example, if the OS resides in memory locations 0 to 999 and user programs can use
memory from address 1000 onwards, a hardware register (fence) is set to 1000. Any
attempt by the user program to access memory below this fence is trapped by the
system. This ensures that the user program cannot overwrite the operating system.
However, this is only practical in systems with one user or one program running at a
time, since it provides only a single fixed boundary.
🔁 Relocation (Base and Bound Registers)
The base and bound register mechanism is a dynamic memory protection scheme used
during program execution. The base register holds the starting physical address of the
memory allocated to a process. The bound register holds the length or size of the
allocated memory block. Every time the process tries to access memory, the CPU
checks whether the requested address is within the range defined by the base and
bound registers. If the requested memory address is less than the base or greater than
base + bound, the system raises an exception or trap. This way, a process is strictly
limited to its own memory region, and cannot read or write into another process’s
memory or the OS’s memory. This approach is simple, efficient, and widely used in
multi-user operating systems.
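The check performed on every memory access can be written in a few lines (the base and bound values below are hypothetical):

```python
def access_allowed(address: int, base: int, bound: int) -> bool:
    """Allow the access only if base <= address < base + bound."""
    return base <= address < base + bound

base, bound = 4000, 1200                       # this process owns addresses 4000..5199
print(access_allowed(4500, base, bound))       # True : inside the allocated region
print(access_allowed(5300, base, bound))       # False: outside the region, raises a trap
```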
📦 Segmentation
Segmentation is a more advanced form of memory protection which not only divides
memory for protection purposes but also supports logical division of a program. A
program naturally consists of parts such as code, data, stack, and heap. With
segmentation, each of these parts is stored in a separate segment, and each segment
has a segment number and a length. When a program accesses memory, it provides a
segment number and an offset (distance from the start of that segment). A segment
table maintained by the OS keeps track of each segment’s base address and limit. The
CPU checks whether the offset is valid (i.e., less than the segment limit). If not, it
generates a trap. Each segment can also have different permissions — for instance, the
code segment might be read-only, while the data segment is read-write. This method
provides fine-grained protection and supports features like dynamic memory allocation
and sharing.
📄 Paging
Paging is a method of memory management that divides both logical and physical
memory into fixed-size blocks. Logical memory is divided into pages, and physical
memory is divided into frames. All pages are of the same size as the frames. When a
process runs, its pages are loaded into available frames in the main memory. A page
table keeps track of which page is stored in which frame. Unlike segmentation, paging
does not divide the program based on its logical parts but purely based on size. The
CPU generates a logical address (page number and offset), and the page table maps it
to a physical address (frame number and offset). Paging also supports protection bits
like read-only, read-write, and execute, which are checked every time a memory access
is made. If a program tries to access a page it doesn’t own, or perform an unauthorized
operation, a trap occurs. Paging makes memory usage more efficient and avoids
external fragmentation.
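A sketch of the page-table lookup described above, using a made-up page table and a common 4 KB page size:

```python
PAGE_SIZE = 4096                          # bytes per page and per frame
page_table = {0: 7, 1: 3, 2: 12}          # hypothetical page -> frame mapping for one process

def translate(logical_address: int) -> int:
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("trap: page not owned by this process")
    return page_table[page] * PAGE_SIZE + offset

print(translate(5000))                    # page 1, offset 904 -> frame 3 -> address 13192
```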
📁 File Protection (Detailed Explanation)
Just as memory needs protection, files also require protection to prevent unauthorized
access, modification, or deletion. Operating systems use various mechanisms to ensure
data privacy, integrity, and control over how files are accessed.
The most common method of file protection is the use of Access Control Lists (ACLs).
Every file or directory is associated with a list that defines which users or groups can
perform what operations on that file. The typical operations are read, write, execute, and delete.
For example, the owner of a file may have full read, write, and execute permissions,
while other users may only be allowed to read it.
👥 User Classes
Modern operating systems, especially Unix-based ones, classify users into three categories for file access: the owner (user), the group, and all other users.
Permissions can be set separately for each category. For example, a file may allow the
owner to read and write, allow the group to read only, and deny access to others
entirely. This is represented by permission strings like rw-r--r--.
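Python's standard stat module can render these permission bits as exactly this kind of string:

```python
import stat

mode = stat.S_IFREG | 0o644               # a regular file with permissions rw-r--r--
print(stat.filemode(mode))                # '-rw-r--r--': owner read/write, group and others read-only
print(oct(mode & 0o777))                  # '0o644', the octal form used with chmod
```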
🔒 Password Protection
These attributes help control how the system or users interact with the file and provide
an extra layer of control.
🔑 Encryption
Encryption is a method of converting the file content into an unreadable format using a
key. Only users with the right decryption key can access the actual content. This is
especially important in systems dealing with sensitive or private data (like government
files, banking, and health records). Encryption ensures that even if someone gets
access to the file physically, they can’t understand its contents.
Modern operating systems often keep logs of file access. This means the system
records who opened a file, when it was accessed, and what changes were made. This
feature helps detect suspicious or unauthorized activity and is especially useful in
enterprise environments or servers.
🔐 File Protection
Definition: Specifies who can access a file and what operations they can
perform.
Components:
o User ID / Group ID: Identity-based control.
o Access Rights: Read, Write, Execute, Append, Delete, etc.
Techniques:
o Access Control List (ACL): Lists users and their permissions for each
file.
o Access Control Matrix: A table showing access rights of each user over
files.
o Role-Based Access Control (RBAC): Access based on user roles (e.g.,
admin, guest).
📝 Example: Only Admin can write to [Link], others can only read.
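A toy ACL check along these lines (the file name and roles are hypothetical, chosen only for illustration):

```python
# Hypothetical ACL: each file maps user roles to the operations they may perform.
acl = {
    "system.log": {"admin": {"read", "write"}, "guest": {"read"}},
}

def is_allowed(role: str, filename: str, operation: str) -> bool:
    return operation in acl.get(filename, {}).get(role, set())

print(is_allowed("admin", "system.log", "write"))   # True
print(is_allowed("guest", "system.log", "write"))   # False: guests may only read
```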
2. User Authentication
Definition: Ensures that only legitimate users can access the file system.
Methods:
o Passwords: User must enter correct credentials to access files.
o Two-Factor Authentication (2FA): Adds another layer like OTP or
biometric.
o Biometrics: Fingerprint or face recognition to grant file access.
📝 Example: A user must log in using their fingerprint to access confidential files.
3. Encryption
4. File Locking
📝 Example: While editing a document, the system locks it so no one else can edit it at
the same time.
5. Backup and Recovery
📝 Example: Google Drive and OneDrive provide automatic backup and restore
options.
6. Other Mechanisms
a. File Permissions:
b. Antivirus Protection: Protects files from malicious programs that may corrupt or steal data.
c. Audit Logs:
✅ Summary Table
| Mechanism | Purpose | Example |
| --- | --- | --- |
| Backup & Recovery | Data recovery after loss/failure | Cloud backups, Time Machine |
Access Control
Access control mechanisms determine which users are allowed to access specific data
and what operations they are permitted to perform, such as reading, writing, or deleting
data. This is typically implemented using Access Control Lists (ACLs) or Role-Based
Access Control (RBAC), where permissions are granted based on user roles rather than
individual identities.
User Authentication
User authentication ensures that only legitimate users can access the database system.
It involves verifying user identity using various techniques such as passwords, biometric
verification (like fingerprints or facial recognition), or multi-factor authentication (MFA),
which combines two or more independent credentials.
Data Integrity
Data integrity guarantees that the data remains accurate, consistent, and unaltered
during both storage and transmission. Integrity is enforced through database
constraints, checksums, and transactional mechanisms like ACID (Atomicity,
Consistency, Isolation, Durability) properties, which ensure that database operations are
processed reliably.
Data Confidentiality/Encryption
Data confidentiality is maintained by encrypting sensitive data at rest and in transit so that it cannot be read by anyone who lacks the decryption key.
Backup and Recovery
A robust backup and recovery strategy is essential for protecting data against accidental
deletion, hardware failures, or cyber-attacks. This includes performing full, incremental,
or differential backups and ensuring the ability to restore the database to its last known
good state.
Inference Control
Inference control prevents users from deducing sensitive information indirectly, for example by combining the results of several individually permitted or aggregate queries.
Multilevel Database Security Models
The Bell-LaPadula Model and the Biba Model are formal security models used in multilevel database security to enforce different aspects of data protection. Both are fundamental models in multilevel security (MLS) databases, where data and users are classified into different security levels (e.g., Top Secret, Secret, Unclassified).
Session Management
Session management is the process by which a web application tracks and maintains a
user’s interactions after they log in or begin a session. When a user accesses a website
and logs in, the server needs a way to remember who that user is across multiple
requests, because HTTP is a stateless protocol—it doesn’t keep track of previous
interactions by itself.
To solve this, the server generates a unique identifier known as a session ID. This ID is
stored on the server side along with the user’s session data, such as login status or
preferences. The session ID is then sent to the user’s browser, typically stored in a
cookie. Every time the user makes a new request, the browser automatically sends the
session ID back to the server, allowing the server to identify the session and continue
where it left off.
Proper session management involves more than just assigning IDs. It includes setting
time limits for inactivity (session timeout), ensuring the session ID is difficult to guess,
and securely storing the session ID to prevent unauthorized access. Developers often
use secure flags on cookies—like HttpOnly and Secure—to protect the session ID from
being stolen via attacks such as cross-site scripting (XSS).
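A minimal sketch of these ideas: an unguessable session ID generated with Python's secrets module, plus an illustrative Set-Cookie header carrying the HttpOnly and Secure flags (the header string and session data are made up for the example).

```python
import secrets

# Server side: create an unguessable session ID and store the user's state under it.
session_id = secrets.token_urlsafe(32)                    # ~256 bits of randomness
sessions = {session_id: {"user": "alice", "logged_in": True}}

# Illustrative cookie sent to the browser: HttpOnly blocks JavaScript access (limits XSS
# theft) and Secure restricts the cookie to HTTPS connections.
set_cookie = f"Set-Cookie: SESSIONID={session_id}; HttpOnly; Secure; Max-Age=1800"
print(set_cookie)

# On a later request the browser returns the cookie and the server looks the session up.
print(sessions.get(session_id, {}).get("user"))           # 'alice'
```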
In essence, session management ensures that a user’s experience on a web
application is seamless, continuous, and secure, while also providing mechanisms to
end or expire the session safely when necessary.