Computer and Network Security - BEC714b
UNIT –III
Auditing: Definitions, Anatomy of an Auditing System, Designing an Auditing System, A
Posteriori Design, Auditing Mechanisms, Examples, Audit Browsing (Text 3: Chapter 14)
Intrusion Detection: Principles, Basic Intrusion Detection, Models, Architecture,
Organization of Intrusion Detection Systems, Intrusion Response (Text 3: Chapter 15)
Roll No 19:
Auditing
Auditing emerged to track access to sensitive data and monitor system interactions, aiming to detect
unauthorized use and security breaches. Anderson pioneered audit trails to monitor threats, relying on
existing logging mechanisms enhanced with additional data without altering their fundamental design.
o Logging: The act of recording system events and usage statistics.
o Auditing: The analysis of those logs to assess system behaviour and detect anomalies.
• Functionality:
o Logs help reconstruct system states to identify how or when security was compromised.
o Even partial logging enables elimination of potential causes and supports focused
investigation.
• Additional Uses:
o Evaluating protection mechanisms by analysing usage patterns.
o Supporting intrusion detection systems through behavioural analysis.
o Tracking privileged actions and deterring attacks via accountability.
• Challenges:
o Choosing what to log vs. what to audit.
o Effective auditing demands understanding of security policies and attacker behaviour,
such as commands used, object alterations, and access patterns.
Roll No 20:
Anatomy of an Auditing System
An auditing system consists of three components: the logger, the analyzer, and the notifier. These components
collect data, analyze it, and report the results.
Logger
A logger is responsible for recording information about system activity and performance. Depending on
configuration parameters, it may collect data in human-readable text or binary format, and might
also transmit this data directly to an analyser module for further processing. When binary logs are used,
systems typically provide viewing tools that allow users to convert and examine the raw data using
common text-processing utilities.
Examples:
• RACF (IBM MVS/VM Security):
Logs critical security interactions, such as failed access attempts and privilege changes. It
supports detailed user information tracking (e.g., access dates, groups, permissions) using
commands like LISTUSER. RACF can also record attempts to modify its own behavior, providing
accountability for administrative changes.
• Windows NT Logging:
Maintains three separate logs:
o System event log: Records predefined OS-level events like crashes or failures.
o Application event log: Contains entries added by individual applications.
o Security event log: Tracks security-relevant activities such as logins and resource access.
Only administrators can access it.
Each record includes an event header with metadata (e.g., timestamp, user ID, event ID, source) and a
description. For binary records, Windows NT uses an event viewer tool to translate logs into readable
format.
Example Log Entry (Security Event):
Date: 2/12/2000
Time: 13:03
EventID: 592
Type: Success
Source: Security
User: WINDSOR\Administrator
Computer: WINDSOR
Description: A new process has been created:
Image File Name: \Program Files\Internet Explorer\[Link]
System administrators can define actions to take when logs reach capacity—like disabling logging,
overwriting old entries, or shutting the system down.
Analyzer
An analyzer processes collected logs to extract meaningful insights or detect anomalies. Its conclusions
might adjust future logging behaviors, reveal system issues, or trigger further action.
Examples:
• Swatch Log Filtering: A system administrator may use swatch patterns to filter specific events. For example, excluding local and
internal rlogin or telnet usage:
/rlogin/&!/localhost/&!/*.[Link]/
/telnet/&!/localhost/&!/*.[Link]/
• Database Query Monitoring: A mechanism tracks user queries and their responses. If excessive data overlap occurs, it flags
potential information leakage.
• Intrusion Detection Systems (IDS): Analyze logs for suspicious behavior or known attack patterns. IDS modules serve as specialized
analyzers that flag violations of the system's security model.
Notifier
A notifier receives the analyzer's findings and communicates results to stakeholders, often prompting
corrective or protective actions.
Examples:
• Swatch Notification Setup:
Configuration to alert staff via mail:
/rlogin/&!/localhost/&!/*.[Link]/mail staff
/telnet/&!/localhost/&!/*.[Link]/mail staff
• Database Privacy Control: Automatically blocks query responses if they reveal too much combined data.
• Login Failure Response: After three consecutive failed login attempts, the system disables the account and alerts the administrator.
This shows logging, analysis, and notification working together as a cohesive auditing system.
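To make that interplay concrete, the following is a minimal Python sketch (not from the text) of the three components cooperating on the failed-login example; the event format and the alert message are illustrative assumptions.
from collections import defaultdict

failed = defaultdict(int)                      # analyzer state: failures per user

def logger(events):
    # Logger: record each raw event (here simply passed on; a real logger appends to a file).
    for event in events:
        yield event

def analyzer(events):
    # Analyzer: flag any user who fails to log in three times in a row.
    for event in events:
        user, outcome = event["user"], event["outcome"]
        if outcome == "failure":
            failed[user] += 1
            if failed[user] == 3:
                yield {"user": user, "reason": "3 consecutive failed logins"}
        else:
            failed[user] = 0                   # a successful login resets the count

def notifier(alerts):
    # Notifier: disable the account and tell the administrator.
    for alert in alerts:
        print(f"ALERT: disable account {alert['user']} ({alert['reason']})")

events = [{"user": "bishop", "outcome": "failure"}] * 3
notifier(analyzer(logger(events)))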
Viva Questions on Network & Information Security
1. What is the core difference between logging and auditing in a computer system?
2. Name the three key components of an auditing system and describe their basic roles.
3. In Windows NT, which log is used to track login and resource access activities?
4. How does a notifier respond to repeated failed login attempts in an auditing framework?
Theory Question:
1. Explain the anatomy of an auditing system with emphasis on the interaction between the logger,
analyzer, and notifier using examples.
2. Discuss the challenges faced in auditing, specifically the decision-making process involved in choosing
what to log and what to audit.
3. Describe how auditing contributes to intrusion detection and prevention, citing mechanisms like Swatch
and IDS.
Roll No 21:
Designing an Auditing System
An auditing system is a vital part of any computer security architecture, as it ensures not only the ability
to log events but also to analyse them for violations of established policies. The foundation of auditing
lies in a unified logging subsystem that records security-relevant activities across the system. By
examining this data, the auditing mechanism can determine whether unauthorized actions have
occurred or if the system has entered an insecure state.
Implementation Considerations
Logging Requirements
• Auditing does not guarantee security but detects violations of policy constraints.
• The initial system state is crucial; logging must capture starting conditions along with runtime actions.
Operation Semantics
• Real-world operations may abstract broader actions:
o “Write” may include: append, change permissions, or update system clock.
• Covert channels can subtly transmit information and must be modeled for effective auditing.
Naming & Object Representation
• Systems must track actions on an entity across all possible representations.
• Example:
o A UNIX file may be accessed via file system (regular path) or raw disk (inode level).
o Both paths should be logged, although practical performance limits often prevent block-level
logging.
Designing an auditing system requires translating abstract security models into practical logging
mechanisms. This begins with understanding which actions must be logged and under what conditions.
It is not sufficient to assume the system is in a secure initial state; instead, auditing must include
capturing baseline information at startup, because the absence of violations in logged activity does not
guarantee the system is secure. If the system begins in an insecure state, no amount of proper logging
will rectify it.
Further complexity arises when defining operations such as “write.” In implementation, a write could
represent actions ranging from appending to a file or creating directories, to changing system
parameters like the clock. Each of these actions may impact security differently and must be interpreted
accordingly. Covert channels—mechanisms by which information is passed indirectly—also pose
challenges. These must be considered in the design of the auditing system, even though they are difficult
to monitor directly.
The naming of objects adds another layer of complexity. Objects can have multiple aliases or
representations. For instance, in UNIX, a file may be accessed either through its traditional path or via
its raw disk blocks. Without logging every access method, the auditing process remains incomplete.
However, due to performance constraints, systems rarely log at the disk block level, resulting in blind
spots within traditional audit logs.
Syntactic Issues
Contextual Logging Challenges
• Effective auditing demands clear, unambiguous log entries.
• Ambiguity arises when:
o Context of entries is missing
o Naming conventions are misleading
o Example:
If /etc/passwd appears in an FTP log for an anonymous user, it may refer to a sandboxed version,
not the system file — a potential misinterpretation.
Grammar-Based Logging
• Flack & Atallah advocate defining a log grammar using BNF (Backus-Naur Form) to structure log entries
clearly.
• Benefits:
o Enables parsers to reliably extract and analyse log data.
o Standardizes audit tool development.
Example Grammar
entry : date host prog [ bad ] user [ "from" host ] "to" user "on" tty
date : daytime
host : string
prog : string ":"
bad : "FAILED"
user : string
tty : "/dev/" string
• This grammar ensures consistent formatting across tools and entries.
• Ambiguities are detected as parse errors, improving the analyst’s ability to resolve issues precisely.
One of the most critical challenges in designing an effective auditing system is ensuring clarity in log
entries. Logs must not only record data but do so in a way that the context and meaning of each entry
are unambiguous and useful for analysis. Inconsistent or poorly structured logs make it difficult for
auditors to reconstruct actions and verify whether security policies have been upheld.
Context can sometimes be misleading. Consider a UNIX log entry that reports the retrieval of the file
/etc/passwd during an anonymous FTP session. This may appear alarming, but in reality, the file accessed
could be a safe replica located in an anonymous FTP directory. Without precise contextual tagging, such
ambiguities may lead to false alarms or missed violations.
To solve this, researchers like Flack and Atallah propose a grammar-based approach to log design, using
formalisms like Backus-Naur Form (BNF). Defining log formats in such syntactic structures ensures
consistency across systems and enables tools to parse logs programmatically. For instance, logs
documenting failed privilege escalation attempts can be described with a grammar that specifies exactly
how fields like date, user, host, and device are structured. This makes the logs easier to interpret and
allows security analysts to build robust automation tools for detection and analysis.
Unfortunately, most systems today do not use formal grammars in log design, which leads to frequent
misinterpretations and analysis challenges. Even mature logging frameworks like BSM (Basic Security
Module) suffer from ambiguities. In BSM, optional fields can be misread if not handled carefully, leading
to parse errors and complicating auditing efforts. By enforcing stricter syntactic rules, auditors can
minimize misinterpretations and enhance the fidelity of their findings.
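As an illustration (not part of the text), the short Python sketch below parses entries that follow the example grammar with a regular expression; the sample log line and the exact field formats are assumptions.
import re

# Regular expression mirroring the BNF grammar above; field formats are assumed.
ENTRY = re.compile(
    r"^(?P<date>\w{3} +\d+ [\d:]+) "        # date : daytime
    r"(?P<host>\S+) "                        # host : string
    r"(?P<prog>\S+): "                       # prog : string ":"
    r"(?:(?P<bad>FAILED) )?"                 # bad  : optional "FAILED"
    r"(?P<user>\S+) "                        # user : string
    r"(?:from (?P<srchost>\S+) )?"           # optional "from" host
    r"to (?P<target>\S+) "                   # "to" user
    r"on (?P<tty>/dev/\S+)$"                 # tty  : "/dev/" string
)

line = "Sep 16 11:30:42 nob su: FAILED bishop to root on /dev/ttyp7"
m = ENTRY.match(line)
print(m.groupdict() if m else "parse error")  # ambiguity surfaces as a parse failure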
Log Sanitization
When a site considers certain log content confidential, it must sanitize logs before making them publicly
accessible. A log L is considered sanitized with respect to a policy P and a set of users U when all information in
C(U) (content forbidden to those users) is removed from the log.
There are two ways confidentiality policies impact logging:
1. Preventing information from leaving the site to guard against external analysis, such as identifying
proprietary file names or sensitive IP addresses.
2. Preventing information from leaving the system, protecting users from internal surveillance by system
administrators.
Laws and privacy regulations often restrict when system administrators can monitor users, typically only under
suspicion of misconduct. In these cases, the site must enforce protections so that investigators cannot see benign
user behaviour.
There are two sanitization models:
• Post-write sanitization: Data is written to the log and then filtered before external review — it protects
corporate confidentiality but not user privacy.
• Pre-write sanitization: Data is sanitized before it is logged; this prevents even administrators from accessing
sensitive values. Cryptographic techniques may be used to protect data while allowing potential future
reconstruction.
This leads to two types of sanitizations:
• Anonymizing sanitizer: Removes information irreversibly; even the log’s creator cannot reconstruct it.
• Pseudonymizing sanitizer: Removes information but keeps a mapping that allows the originator to re-
identify data if necessary.
Effective sanitization must preserve analytical value. Otherwise, important patterns, such as those indicating
an attack, may be lost.
For example, if the Humongous Corporation replaces IPs randomly in a log showing sequential probes of email
ports, it might erase evidence of a port scan. Instead, it could use sequential pseudonyms that reflect the pattern
without revealing actual IPs.
Researchers Biskup and Flegel emphasize that if anonymization is desired, the best approach may be to not collect
the data at all. But pseudonymity requires that sensitive data be logged and then hidden using:
• Pseudonyms, stored in a mapping table accessible only to trusted parties.
• Cryptographic techniques, such as secret sharing schemes, where the decryption key is split among
multiple stakeholders.
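A minimal Python sketch (an illustration, not a specific tool) of a pseudonymizing sanitizer that replaces IP addresses with sequential pseudonyms while retaining a private mapping, so scan patterns stay visible:
import re

IP = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
mapping = {}                                  # kept only by the log's originator

def pseudonymize(line):
    # Replace each IP address with a stable, sequential pseudonym.
    def sub(match):
        ip = match.group(0)
        if ip not in mapping:
            mapping[ip] = f"host-{len(mapping) + 1}"
        return mapping[ip]
    return IP.sub(sub, line)

print(pseudonymize("SYN to 203.0.113.5:25"))  # -> "SYN to host-1:25"
print(pseudonymize("SYN to 203.0.113.6:25"))  # -> "SYN to host-2:25"; the probe sequence is preserved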
Application and System Logging
Application logs contain entries made by programs and express events using high-level abstractions. For instance:
• su: bishop to root on /dev/ttyp0
• smtp: delivery failed; could not connect to [Link]
These logs report issues or outcomes but often lack detail about underlying system actions such as syscalls, their
arguments, or the sequence of operations leading to the event.
In contrast, system logs document kernel-level activity, including:
• System call invocations (CALL)
• Return values (RET)
• File name lookups (NAMI)
• Resource access patterns and I/O operations
Example system log for the above su event:
3876 ktrace CALL execve(0xbfbff0c0,0xbfbff5cc,0xbfbff5d8)
3876 ktrace NAMI "/usr/bin/su"
3876 su RET execve 0
...
This kind of log can generate thousands of lines, tracking every low-level step.
The focus of each log type differs:
• Use application logs to audit logical failures (e.g., password issues, delivery errors).
• Use system logs to trace root causes (e.g., why a config file could not be opened).
System logs offer completeness, recording exact filenames, access types, and failure reasons. But they can become
large and unwieldy, requiring log rotation or selective logging to manage space.
Application logs offer abstraction, interpreting low-level events for better readability. For example:
• appx: cannot open config file [Link] for reading: no such file
An auditor often needs to correlate system and application logs to understand full context. Determining which
system actions lead to application-level failures is essential for discerning whether issues reflect security
breaches or benign errors.
Understanding both log layers provides a comprehensive view of activity, enabling accurate attack detection and
diagnosis.
Viva Questions on Network & Information Security
1. What is the difference between post-write and pre-write sanitization in audit logs, and how does
each impact user privacy and confidentiality?
2. Explain how covert channels pose a challenge to the effectiveness of an auditing system. Can they be
reliably detected through logs?
3. Why is it important to record the initial system state before beginning the audit process? How does
this affect the reliability of auditing outcomes?
4. How does a grammar-based approach like BNF improve the accuracy and consistency of log
analysis? Can you give an example of a log entry format structured using BNF?
Theory Question:
1. Discuss the implications of ambiguous log entries on security audits. How can naming conventions
and context mislead an analyst, and what solutions have been proposed to resolve such ambiguities?
2. Describe the differences between application-level logging and system-level logging. What are the
strengths and limitations of each, and in what scenarios would using both be necessary?
3. Define pseudonymizing and anonymizing sanitizers. How do they differ in terms of reconstructing
original data, and what techniques support each form of sanitization?
Roll No 22:
A Posteriori Design
In practice, most computer systems are developed without comprehensive security considerations from
the outset. As a result, security breaches often occur in environments where auditing was not built into
the initial architecture. This leads to the concept of a posteriori design, where the auditing system is
integrated into an already functioning system. The aim here is twofold: to detect violations of a stated
security policy, and to identify actions associated with known attack behaviours even if such actions
don’t directly breach the formal policy.
The difference between these objectives is subtle yet significant. The first focuses on policy enforcement,
ensuring all activity aligns with established security constraints. The second emphasizes threat
detection, targeting behaviours that are known to precede or constitute attacks.
Auditing to Detect Violations of a Known Policy
This approach mirrors the design principles of traditional, policy-driven auditing systems. However,
since the system in question wasn't built with auditing in mind, analysts must retroactively determine
what information is available and how it can be used to check policy compliance. The system must be
evaluated to discover which configurations, actions, and settings affect the policy's validity. To
implement this, two methodologies are available: state-based auditing and transition-based auditing.
State-Based Auditing
In a state-based audit, the system’s current state is inspected to assess whether it conforms to security
policy. This method relies on the assumption that the system can provide a coherent snapshot of its
various components—a task complicated by the concurrent nature of modern computing environments.
Definition: State-based auditing logs and analyzes system states to determine whether a
violation has occurred. However, challenges arise when the system is active or distributed. For instance,
the Chandy-Lamport algorithm provides consistent state capture in distributed settings, but gathering a
coherent snapshot in non-distributed environments requires the system to be quiescent, often an
impractical demand.
Example: File system scanners that compare current file states to a known baseline database represent
state-based tools. Yet, unless the file system is idle, such tools might collect fragmented state data,
leading to flawed conclusions. These tools may fall victim to time-of-check-to-time-of-use (TOCTOU)
issues, where assumptions based on earlier checks become invalid due to changes during analysis.
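A minimal sketch of such a state-based scanner (an assumption, not a particular product) that snapshots file hashes and later compares the current state against that baseline:
import hashlib, os

def digest(path):
    # Hash a file's current contents.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    # Record a baseline while the system is believed to be in a secure state.
    return {p: digest(p) for p in paths if os.path.exists(p)}

def scan(baseline):
    # Later, compare the current state against the baseline (TOCTOU caveats still apply).
    for path, old in baseline.items():
        new = digest(path) if os.path.exists(path) else None
        if new != old:
            print(f"STATE VIOLATION: {path} changed since baseline")

baseline = snapshot(["/etc/passwd", "/etc/hosts"])
scan(baseline)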
Transition-Based Auditing
In transition-based auditing, the focus shifts to monitoring operations that might move the system into
a noncompliant state. Rather than evaluating static conditions, it assesses how commands or actions, if
executed, might violate the security policy.
Definition: Transition-based auditing logs transitions (i.e., actions) and audits them by
evaluating the consequences they may have on system integrity.
However, this approach has limitations: if the system is already in a nonsecure state, analysing
transitions alone may not uncover existing policy violations.
Example 1: In UNIX systems, tcp_wrappers monitor incoming TCP connections. If a connection
originates from an address listed in the hosts.deny file, the connection is blocked. This demonstrates
transition-based auditing—the system evaluates the action (a connection request) before executing it,
without considering whether the system’s current state is already compromised.
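A sketch in the spirit of tcp_wrappers (the deny list contents are illustrative) showing how a transition, here an incoming connection, is audited before it is permitted:
deny_list = {"192.0.2.13", "198.51.100.7"}        # illustrative blocked addresses

def allow_connection(source_ip):
    # Evaluate the transition before executing it; log and refuse denied sources.
    if source_ip in deny_list:
        print(f"DENIED connection from {source_ip} (logged)")
        return False
    return True

allow_connection("192.0.2.13")                     # blocked regardless of the current state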
Example 2: America Online’s instant messaging system restricts users to a single active login session.
If a user attempts to log in from a second machine while already signed in elsewhere, the system checks
both the action (login attempt) and the current state (active session status). This hybrid method uses
both state-based and transition-based auditing, offering a more accurate protection mechanism.
Auditing to Detect Known Violations of a Policy
In many computing environments, an explicit security policy may not be formally documented.
However, certain behaviours are universally acknowledged as nonsecure. These include activities such
as flooding a network to the point of making it unusable, or unauthorized access to a system. Even in the
absence of clearly stated rules, these actions violate the implicit expectations of secure system
behaviour.
Under such conditions, security analysts must proactively identify sequences of operations or state
characteristics that suggest a breach. Rather than waiting for policy violations to be defined, they focus
on patterns and behaviours that are symptomatic of attacks.
Example: The Land Attack
An illustrative case of auditing to detect known violations comes from the analysis of the Land attack, as
described by Daniels and Spafford. This attack exploits a vulnerability in the behaviour of the TCP three-
way handshake, which is the standard procedure for initiating a TCP connection.
The handshake involves three steps:
1. The source sends a SYN packet with sequence number s.
2. The destination replies with a SYN/ACK packet containing sequence number t and
acknowledgment number s + 1.
3. The source responds with an ACK packet with acknowledgment number t + 1.
This handshake sequence is typically used to initiate communication between distinct processes.
However, ambiguity arises in the TCP specification when the source and destination IP addresses and
port numbers are identical. In this unusual case, the system responds to its own packets, triggering a
flawed loop.
If the host follows one part of the specification, it sends a RESET (RST) packet, immediately terminating
the connection and preventing the attack.
Alternatively, if the host follows another interpretation, it sends an empty packet with sequence number
t + 1 and acknowledgment number s + 1. Since the packet loops back into the host, it re-triggers itself,
initiating a recursive cycle of invalid acknowledgments. If the system disables interrupts during this
phase, it may hang entirely. If not, it continues to function but extremely slowly, making it susceptible
to a denial of service (DoS).
Auditing Requirements
To detect this kind of attack, auditing systems must be equipped to recognize the characteristics of the
initial Land packet. Specifically, the following condition must be monitored in the log data:
source address = destination address AND source port number = destination port number
This condition flags any packet that appears to originate from and target the same location, a strong
indicator of this specific attack. A well-designed auditing system must log these critical fields to enable
accurate and timely detection. By identifying these patterns and correlating them with known threats,
auditors can safeguard systems against implicit security violations, even when formal policies are
lacking or incomplete.
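A small sketch (the packet fields are assumptions about what the log records) of the check an auditing system would apply to flag the initial Land packet:
def is_land_packet(pkt):
    # The Land signature: source and destination address and port are identical.
    return (pkt["src_addr"] == pkt["dst_addr"]
            and pkt["src_port"] == pkt["dst_port"])

pkt = {"src_addr": "203.0.113.5", "dst_addr": "203.0.113.5",
       "src_port": 139, "dst_port": 139}
if is_land_packet(pkt):
    print("ALERT: possible Land attack packet")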
Viva Questions on Network & Information Security
1. What is the key difference between state-based and transition-based auditing? Can you explain with
examples where each would be more effective?
2. In the context of a posteriori design, how can auditors detect violations of security policy when no
formal policy is documented?
3. Explain the Land attack in relation to TCP. How does the structure of the TCP handshake contribute
to the vulnerability exploited by this attack?
4. Why might a hybrid auditing approach, combining state-based and transition-based methods, offer
better security coverage than using one alone?
Theory Question:
1. Discuss the challenges of implementing a posteriori auditing systems in environments not originally
designed with security in mind. Include references to both state-based and transition-based
methodologies.
2. Describe how logging mechanisms can be designed to detect known violations such as the Land
attack. What fields must be captured in audit logs to identify these anomalies effectively?
Roll No 23:
Auditing Mechanisms
Different systems adopt varied strategies for logging events. Typically, most systems log all activities by
default, offering administrators the option to disable specific types of logging. While this default
behavior ensures comprehensive tracking, it also leads to excessively large and often bloated logs. This
section explores how different systems handle auditing—from securely designed environments to
retrofitted modules in legacy or less secure contexts.
Secure Systems
Systems built with security as a foundational objective incorporate auditing mechanisms deeply into
their architecture. These mechanisms are often configurable via system-level interfaces, allowing
administrators to monitor specific subjects, objects, or events. The goal is to ensure that logs are focused
on relevant security activities rather than capturing unrelated data.
Example: VAX VMM System
The VAX VMM was developed to meet the stringent requirements of the A1 classification under the
TCSEC standard. This mandates proactive detection of security violations, responsive mitigation, and
audit capabilities focused on subjects and objects. Because it was intended for production use, the auditing
mechanisms had to achieve high reliability with minimal performance impact.
• The system’s architecture uses a layered kernel where each layer conducts its own logging and
auditing of controlled objects.
These distributed logs are then collected by a unified audit subsystem, which evaluates the
event’s severity and relevance using an audit table.
• Mandatory logging is triggered either by a programmer-defined flag or by detection of policy
violations such as repeated failed logins or covert channel usage.
Log Management Features:
• When the system log reaches 75% capacity, it triggers an archiving process.
• If archiving fails, the system halts entirely to uphold the philosophy that the kernel must not
operate without auditing.
• Audit reduction is supported using filters based on time, security level, and severity.
Example: Compartmented Mode Workstation (CMW)
CMW features an advanced auditing interface connecting user, process, and kernel levels.
• User-level control: The chaud command allows toggling specific event logging.
• Process-level hooks: System calls like audit_on, audit_off, audit_write, and audit_suspend
enable dynamic audit control, even allowing processes to generate high-level audit entries.
• Some processes, such as window managers, opt out of low-level auditing to record semantically
meaningful high-level events.
• At the kernel level, audit decisions depend on log capacity. To preserve functionality, the kernel
may halt, discard entries, or disable specific audit triggers.
A tool called redux helps analysts convert log data into readable formats, offering filtering based on
users, events, and object security labels.
Nonsecure Systems
Systems that were not originally designed for security often have limited auditing capabilities aimed
primarily at system accounting rather than policy enforcement. These systems may lack detailed
information necessary for detecting sophisticated security violations and require external modules for
enhancement.
Example: Basic Security Module (BSM) on SunOS
BSM serves as a retrofit for auditing, offering improved security capabilities on nonsecure systems.
• Logs are composed of records, each built from tokens. These tokens encapsulate a variety of
fields such as user and process identity, group memberships, file paths, IPC data, and network
information.
• Records can refer to kernel-level events (e.g. system calls) or application-level events (e.g. failed
logins).
• Events are categorized into audit classes, enabling flexible pre-audit and post-audit filtering. Log
reduction tools like auditreduce let analysts focus on specific event categories.
Example Log Record:
header,35,AUE_EXIT,Wed Sep 18 [Link] 1991, + 570000 msec,
process,bishop,root,root,daemon,1234,
return,Error 0,5
trailer,35
• Binary format helps conserve space.
• praudit converts these binary logs into human-readable output.
• System managers retain full control over what is logged and audited, allowing BSM to be tailored
to different environments and policies.
Viva Questions on Network & Information Security
1. How does the architecture of the VAX VMM system support reliable and secure auditing without
impacting system performance?
2. What distinguishes auditing in the Compartmented Mode Workstation (CMW) from standard
systems in terms of granularity and control?
3. Can you explain the role of audit reduction in secure systems? Why is it necessary, and what criteria
are typically used?
4. How do retrofit systems like BSM on SunOS manage security logging differently from systems
designed with security in mind? What flexibility do they offer administrators?
Theory Question:
1. Compare and contrast the auditing mechanisms in secure systems such as VAX VMM and CMW with
nonsecure systems like SunOS using BSM. Focus on architecture, log control, and effectiveness.
2. Discuss the challenges associated with default logging behavior in systems and explain how audit
reduction techniques help mitigate these issues. Use examples from both secure and nonsecure
systems.
Roll No 24:
Examples: Auditing File Systems
Audit Analysis of the NFS Version 2 Protocol
NFS Version 2 (Network File System) allows remote file access over a network, facilitating distributed
computing environments. From a security auditing perspective, however, NFSv2 presents several
challenges and limitations due to its original design priorities favoring performance and openness
over confidentiality and accountability.
Key Auditing Observations:
• Stateless Design: NFSv2 operates statelessly, meaning each file operation is independent. This
makes it difficult to track user sessions or correlate actions over time.
• Weak Authentication: Early versions rely on host-based authentication, where the server trusts
the client IP address—a model vulnerable to spoofing and impersonation attacks.
• Limited Logging Granularity: Most NFS servers log file access at the RPC (Remote Procedure
Call) level but without specific user actions or security context.
• Auditing Limitations: Because NFS does not inherently associate file requests with
authenticated user identities, audit trails lack accountability. Auditing must rely on external
mechanisms like client-side logs or network monitoring.
While NFSv2 enables file sharing across systems, its security and audit capabilities are minimal by
modern standards, requiring enhancement through layered systems or protocol extensions.
The Logging and Auditing File System (LAFS)
LAFS (Logging and Auditing File System) is a specialized file system explicitly built with security and
forensic accountability at its core. Unlike traditional systems where logging is an add-on feature, LAFS
treats auditing as a primary design goal.
Core Features of LAFS:
• Structured Logging: Every file operation—read, write, delete, rename—is logged with
timestamp, user ID, file path, and event type.
• Real-Time Auditing Hooks: Auditing is embedded in kernel-level operations, providing tamper-
resistant, high-fidelity logs.
• Granular Control: Administrators can define policies for logging depth, retention duration, and
sensitivity thresholds.
• Anomaly Detection Support: Through comprehensive logs, LAFS facilitates detection of unusual
access patterns, privilege escalations, or policy violations.
• Cryptographic Protection: Logs can be secured with hashing and encryption to preserve
integrity and prevent unauthorized tampering.
LAFS represents an evolution from passive audit logging to proactive surveillance and compliance
support within the file system.
Comparison
Feature | NFS Version 2 | Logging and Auditing File System (LAFS)
Audit Integration | Minimal; external logging required | Native; deeply integrated with kernel operations
User Accountability | Weak due to stateless design | Strong via per-user and per-event recording
Security Awareness | Limited by legacy architecture | High; cryptographic safeguards and policy control
Log Granularity | RPC-level only | Full file-level event logging
Tamper Resistance | Vulnerable without third-party tools | Built-in protections via integrity checks
Forensic Value | Low without augmentation | High; supports intrusion detection and compliance
Viva Questions on Network & Information Security
1. What are the limitations of stateless design in NFSv2 with respect to audit logging and user
accountability?
2. How does LAFS provide stronger forensic value compared to NFSv2? Explain with reference to
logging depth and cryptographic protection.
3. Why is host-based authentication in NFSv2 considered insecure, and how does it affect audit
reliability?
4. Can you explain how anomaly detection is supported in LAFS? What role does structured logging
play in identifying unusual file activities?
Theory Question:
1. Compare NFS Version 2 and the Logging and Auditing File System (LAFS) in terms of audit
integration, log granularity, and tamper resistance. Use a tabular format to highlight differences.
2. Discuss how the design priorities of NFSv2 have impacted its ability to serve as a secure and
accountable audit environment. What enhancements are needed to improve its auditing
capabilities?
3. Describe how LAFS enables real-time auditing at the kernel level. What are the benefits of this
approach in high-security environments?
Roll No 25:
Audit Browsing.
In addition to automated audit mechanisms, security analysts often directly examine log files to detect
anomalies, patterns of misuse, or signs of intrusion that automated tools may miss. While audit
mechanisms are essential for systematic log analysis, they can lack sophistication or overlook subtle
clues. Moreover, most systems do not unify logs across components, resulting in multiple disjointed files
organized solely by timestamp and source process.
To address these challenges, audit browsing tools aim to visualize, correlate, and contextualize log
entries so that analysts can discover meaningful associations and trace events. These tools must help
auditors connect log entries to related entities and actions, offering both fine-grained control and
broader system visibility.
Browsing Techniques
Six core techniques for browsing audit logs effectively:
1. Text Display: Displays raw logs in text format, either fixed or customizable. Analysts can search by
attributes like time or user, but this technique lacks correlation between entries.
2. Hypertext Display: Converts logs into hyperlinked documents where entries are interlinked based
on relationships. Useful for tracing chains of actions but limited in showing overarching system-
wide behaviour.
3. Relational Database Browsing: Stores log data in a relational database and supports complex
queries. Associations between entries can be discovered dynamically; however, preprocessing is
required and output is often textual.
4. Replay: Reconstructs and plays back events in chronological order across multiple logs. This
highlights temporal dependencies and interactions that span processes and subsystems.
5. Graphing: Visualizes entities (processes, files) as nodes and their interactions as edges. Enables
recognition of hierarchies and interconnections but may struggle with scale or clutter unless
abstracted.
6. Slicing: Extracts the minimal sequence of events affecting a specific entity (like a file), helping focus
attention on relevant actions. It’s highly localized and best used alongside other techniques.
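As a simple illustration of slicing (the log entries shown are invented), the sketch below extracts only the events that touch a chosen object:
# Slice a log down to the events that affect one object (here, /etc/passwd).
log = [
    {"subject": "pid 200 (sshd)", "action": "exec",  "object": "/bin/sh"},
    {"subject": "pid 314 (sh)",   "action": "write", "object": "/etc/passwd"},
    {"subject": "pid 512 (lpd)",  "action": "read",  "object": "/etc/printcap"},
]

def slice_for(obj, entries):
    return [e for e in entries if e["object"] == obj]

for e in slice_for("/etc/passwd", log):
    print(e["subject"], e["action"], e["object"])   # only the relevant actions remain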
Example: Visual Audit Browser
Designing effective audit browsing tools is both scientific and artistic. The science guides what
information to extract and represent, while the art lies in intuitive, expressive visualization. A successful
interface should empower analysts to quickly grasp anomalies, follow investigative leads, and
reconstruct security incidents with clarity and precision.
The Visual Audit Browser toolkit embodies several of these approaches, specifically designed to
analyse BSM (Basic Security Module) logs:
• Frame Visualizer creates static graphs representing audit trails.
• Movie Maker dynamically constructs sequential graphs over time, illustrating attack
progression or behaviour changes.
• Hypertext Generator builds webpages for each user, modified file, and summary index, enabling
focused log navigation.
• Focused Audit Browser integrates slicing and graphing, allowing analysts to explore how
specific nodes (like files or processes) were affected and what entities interacted with them.
Through iterative focus and context expansion, an analyst can trace how a file was modified, which
process initiated the change, and ultimately pinpoint the intrusion path, whether via unauthorized login,
daemon exploitation, or masquerading. The toolkit even supports timeline movies, providing
compelling visual narratives useful for reporting to stakeholders or law enforcement.
Viva Questions on Network & Information Security
1. What is the difference between text display and hypertext display in audit browsing, and in what
scenarios would each be useful?
2. How does the slicing technique help focus on relevant audit events, and why is it often combined
with other browsing methods?
3. Can you explain how replay-based audit browsing aids in detecting multi-step intrusions across
distributed logs?
4. What makes graphing an effective approach for audit log visualization, and what challenges can
arise when scaling graph-based displays?
Theory Question:
1. Discuss the architectural components and functionality of the Visual Audit Browser toolkit. How
does each module enhance the audit browsing experience and assist in forensic investigations?
2. Evaluate the limitations of traditional audit mechanisms and explain how audit browsing tools help
overcome these shortcomings using contextual and visual correlation techniques.
Roll No 27:
Intrusion Detection
Intrusion Detection: Principles and Fundamentals
A secure computer system typically demonstrates certain behavioural patterns that, when disrupted,
may indicate an attack. According to Denning’s hypothesis, systems under attack fail to uphold at least
one of these foundational characteristics:
1. Predictable User and Process Behaviour: Users generally operate within familiar boundaries. For
example, a user who only performs word processing shouldn't suddenly initiate system-level
tasks.
2. Absence of Malicious Command Sequences: Legitimate users do not issue commands designed to
subvert the security policy. Known attack patterns can be detected; unknown ones remain
elusive.
3. Conformance to Specification: Processes should behave strictly within their defined
specifications. Any deviation may signal compromise or malicious modification.
Practical Examples
• An attacker installing a backdoor may need elevated privileges. If a nonprivileged user suddenly
attempts system-level changes, this violates expected behavior (characteristic 1). The toolset or
methods used likely aim to bypass policy (characteristic 2), and the result may be a system
process behaving unpredictably (characteristic 3).
• Cliff Stoll’s famed detection story began with a minor accounting discrepancy, leading to the
discovery of an espionage ring, illustrating how even subtle anomalies can signal deeper threats.
Basic Intrusion Detection and Attack Tools
With the rise of automated attack scripts, sophisticated breaches may now be executed by
unsophisticated users. These tools simplify exploitation and mask their footprints.
Definition: Attack Tool
An attack tool is a pre-configured automated script or suite intended to violate a system’s security
policies.
Example: Rootkit
Rootkits are notorious attack tools tailored for UNIX systems. They:
• Sniff network traffic, especially passwords.
• Replace system utilities (ps, netstat, ls, du) with versions that hide malicious activity.
• Use control files to determine what files, processes, or connections to conceal.
• Accept “magic passwords” in login utilities to grant unauthorized access.
• Alter checksums so modified files resemble the originals.
• Include installation tools like fixer, and cleanup utilities like zapper.
Despite their obfuscation, rootkits cannot hide all traces. Independent or customized tools that bypass
modified utilities can reveal discrepancies. For example, mismatched disk usage statistics or
inconsistent process lists could signal a compromise; this falls under anomaly detection (characteristic
1).
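A rough sketch of such a discrepancy check (an illustration; in practice a trusted, offline copy of the utility would be used): compare the files reported by ls with a direct directory read.
import os, subprocess

def names_hidden_by_ls(directory):
    # Files visible through a direct read but missing from ls output are suspicious.
    ls_out = subprocess.run(["ls", "-a", directory],
                            capture_output=True, text=True).stdout.split()
    direct = set(os.listdir(directory)) | {".", ".."}
    return direct - set(ls_out)

hidden = names_hidden_by_ls("/tmp")
if hidden:
    print("Possible rootkit: ls does not report", hidden)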
Goals of Intrusion Detection Systems (IDS)
Intrusion Detection Systems are designed to safeguard computer systems by identifying abnormal
behaviour or known exploit patterns. Their primary goals include:
1. Broad Detection Capability
IDS should detect internal and external intrusions, covering both known attack signatures and
previously unknown behaviours.
2. Timely Detection
Real-time analysis isn’t always necessary, but delay must be minimized. Actionable detection
should occur shortly after compromise, avoiding analysis paralysis or stale alerts.
3. Clear Presentation
Alerts and analysis results must be easily interpretable. Although a simple green/red status
light is ideal, systems often present nuanced summaries requiring informed judgment.
4. Accuracy
o False Positives: Incorrectly flagging benign actions as attacks.
o False Negatives: Failing to detect actual intrusions.
Both are undesirable, but false negatives are more dangerous. IDS should aim for
minimal errors while maintaining sensitivity.
Viva Questions on Network & Information Security
1. What are the three core behavioral characteristics that, according to Denning’s hypothesis, help
distinguish a secure system from one under attack? Explain with an example.
2. How can a rootkit conceal its presence from standard system utilities, and what auditing strategy
can expose these tactics?
3. Why are false negatives in intrusion detection considered more dangerous than false positives? Give
an example scenario where a false negative could have severe consequences.
4. How does predictable user and process behavior support intrusion detection mechanisms, and what
could be a red flag in such patterns?
Theory Question:
1. Discuss the role of automated attack tools in modern cyber intrusions. How do tools like rootkits
exploit system vulnerabilities, and what makes them difficult to detect?
2. Explain the primary goals of an Intrusion Detection System (IDS). Why is it important to balance
detection breadth with clarity and accuracy in real-world deployments?
Roll No 29:
Models
Models of Intrusion Detection
Intrusion detection systems (IDS) rely on different modelling approaches to detect suspicious or
unauthorized behaviour within computer systems. These models serve as analytical frameworks to
identify deviations, known threats, or violations of functional boundaries. Each model stems from a
distinct philosophical approach to understanding how attacks manifest—whether by abnormality,
pattern matching, or rule violations.
Anomaly Modelling
Anomaly modelling focuses on identifying behaviours that diverge significantly from established normal
patterns. This technique stems from the idea that attacks often involve an abnormal use of legitimate
commands or access attempts inconsistent with historical data.
Characteristics:
• Models are built using statistical analysis, machine learning, or heuristics based on past
behaviour.
• Events such as unusual login times, resource usage spikes, or new command sequences may
trigger alerts.
• Adaptive systems can learn and update normal profiles over time.
Strengths:
• Can detect previously unknown attacks (zero-day threats).
• Effective against insider threats or novel exploits.
Limitations:
• Susceptible to false positives, especially when legitimate behaviour changes (e.g., a user
switching roles or tasks).
• Requires significant training data and baseline profiling.
Misuse Modelling
Misuse modelling detects intrusions by comparing system activity against a database of known attack
signatures. This approach is rule-based and matches sequences of operations to predefined patterns
recognized as malicious.
Characteristics:
• Each known attack is translated into a signature, which can include sequences of commands,
system calls, or access attempts.
• IDS tools scan for these signatures in real-time or during log analysis.
Strengths:
• Offers high accuracy in detecting well-documented attacks.
• Easy to configure and maintain with regular signature updates.
Limitations:
• Ineffective against unknown or obfuscated attacks.
• Maintenance overhead: needs constant updates to remain relevant.
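A compact sketch of misuse detection (the signature content is invented) that matches an ordered sequence of commands against a known attack pattern:
signatures = {
    "rootkit-install": ["wget", "tar", "make install", "zapper"],
}

def matches(signature, commands):
    # True if the signature's steps appear, in order, somewhere in the command stream.
    it = iter(commands)
    return all(any(step in cmd for cmd in it) for step in signature)

stream = ["ls", "wget http://example.com/rk.tgz", "tar xzf rk.tgz",
          "make install", "./zapper wtmp"]
for name, sig in signatures.items():
    if matches(sig, stream):
        print(f"ALERT: known attack signature '{name}' matched")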
Specification Modelling
Specification modelling defines a set of rules or behavioural specifications that trusted programs or
processes must follow. Any deviation from this predefined model—such as executing unauthorized
system calls—is treated as a potential intrusion.
Characteristics:
• Specifications are manually defined or generated from software documentation and security
policies.
• Commonly applied to privileged programs, daemons, or system utilities where expected
behaviour is well understood.
Strengths:
• Provides precise control and low false-positive rates.
• Useful in critical systems where behavior is tightly regulated (e.g., kernel modules).
Limitations:
• Requires extensive domain knowledge and careful modeling.
• May not scale easily across diverse or evolving applications.
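A minimal sketch of a specification check (the specification itself is an invented example): any system call outside a program's declared behaviour is flagged.
# Allowed system calls per trusted program (illustrative specification).
SPEC = {"lpd": {"open", "read", "write", "close", "socket", "bind"}}

def check(program, observed_calls):
    for call in observed_calls:
        if call not in SPEC.get(program, set()):
            print(f"VIOLATION: {program} issued unexpected call '{call}'")

check("lpd", ["open", "read", "execve"])   # execve falls outside the spec -> flagged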
Comparison
Model | Detects Unknown Attacks | False Positives | Complexity | Ideal Use Case
Anomaly Modelling | Yes | High | High | User behaviour monitoring, adaptive IDS
Misuse Modelling | No | Low | Medium | Signature-based IDS, antivirus
Specification Modelling | Partial | Low | High | Trusted program surveillance
Viva Questions on Network & Information Security
1. What are the key differences between anomaly modeling and misuse modeling in intrusion detection
systems? How do their detection capabilities compare?
2. In specification modeling, why is it important to have a well-defined behavioral specification, and
how does this impact false-positive rates?
3. How can anomaly modeling detect zero-day attacks, and what challenges does it face in
differentiating between malicious and legitimate user behavior?
4. Why is misuse modeling considered less effective against obfuscated or novel attacks, and how does
its dependency on signature databases influence its adaptability?
Theory Question:
1. Compare and contrast anomaly, misuse, and specification modeling approaches in terms of
detection scope, complexity, accuracy, and ideal deployment scenarios. Support your answer with
practical considerations.
2. Discuss the limitations of anomaly modeling in intrusion detection systems and explain how
machine learning can both improve and complicate its implementation.
Roll No 30:
Architecture
Intrusion Detection System Architecture
An Intrusion Detection System (IDS) functions as an automated auditing mechanism, comprising three
principal components that mirror the design of traditional audit systems:
• Agent: Collects data from various sources like logs, processes, or network traffic.
• Director: Analyzes the data received from agents to identify signs of intrusion, and may instruct agents
to adjust their data collection methods.
• Notifier: Acts on the director’s findings, deciding whether to raise alerts or initiate a response.
This division ensures efficient monitoring, analysis, and responsive action in detecting security policy
violations.
Agent
The agent’s role is to gather and preprocess relevant data before sending it to the director. It typically
discards irrelevant information to optimize the director’s processing load. Agents can be configured to
adapt based on instructions from the director, allowing dynamic surveillance tuning during suspected
attack scenarios.
If the goal is to monitor suspicious login activity, the agent might extract only failed login attempts from a
security log and forward those to the director.
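A minimal sketch of that idea (the log format and forwarding call are assumptions): the agent discards irrelevant records and passes only failed logins to the director.
def agent(log_lines, send_to_director):
    # Preprocess: keep only the records the director cares about.
    for line in log_lines:
        if "FAILED LOGIN" in line:
            send_to_director(line)

raw_log = ["10:01 LOGIN alice ok",
           "10:02 FAILED LOGIN bob",
           "10:03 FAILED LOGIN bob"]
agent(raw_log, send_to_director=lambda rec: print("to director:", rec))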
Host-Based Information Gathering
Host-based agents utilize system-level and application-level logs, including security logs (like BSM or
Windows NT logs), to identify activity patterns. Some agents reside inside the kernel, enabling direct
access to structured event data and eliminating the need for format conversion—though portability across
heterogeneous systems becomes restricted.
Policy Checkers as Agents
Certain agents, called policy checkers, evaluate system states to detect non-compliance. Though effective,
they tend to be complex, which violates the principle of keeping system modules simple. Hence, their
output is typically stored in logs for standard agents to process.
Network-Based Information Gathering
Network-based agents monitor network traffic, providing visibility into denial-of-service attacks, port
scanning, or unauthorized content transfers. These agents can:
• Use network sniffing for traffic inspection.
• Deploy strategically depending on the network topology (broadcast vs. point-to-point).
• Focus on critical network ingress and egress points.
However, analysing encrypted traffic (e.g., via HTTPS) is impossible without decryption keys, limiting
these agents’ scope. Also, if ingress systems already log traffic comprehensively, network monitoring
overlaps with host-level data collection.
Combining Sources
Effective intrusion detection demands a multi-layered view of system activity. Consider a UNIX system
where both application logs and kernel-level system call traces are available: the application log summarizes
the outcome of an action (for example, a failed login), while the system call trace reveals the granular
operations tied to password file access and user authentication.
This layering highlights abstraction differences—application-level logs provide clear summaries, while
system-level logs show mechanics. Depending on what the director is analysing (high-level behaviour or
low-level anomalies), the agent must either provide appropriate abstraction or translate between layers.
Director
The Director is the analytical core of an IDS. It processes data collected by agents, filters out noise, and
evaluates patterns to identify potential attacks or suspicious behaviours. The director's responsibilities
are both strategic and operational, and its effectiveness directly impacts the reliability of the IDS.
Key Functions:
• Data Reduction: Incoming logs are pruned to eliminate redundancy and irrelevant entries,
streamlining analysis.
• Correlation of Logs: Events from multiple sources are correlated to reveal anomalies that may not
be visible in isolation.
• Isolation for Security: Directors often run on separate systems to:
o Improve performance
o Prevent tampering
o Protect analysis rules and profiles from attackers
Notifier
The Notifier is the decision point for communication and response. Once an intrusion or anomaly is
detected by the director, the notifier determines how and whom to alert—and in some cases, it can
initiate defensive actions.
Key Responsibilities:
• Alert Generation: Sends notifications to system security officers via:
o Email
o System logs
o Messaging systems
• Graphical Interfaces: Visual tools help interpret alerts effectively.
Together, the Director and Notifier transform raw data into actionable intelligence and defence, enabling
organizations to maintain robust security postures against evolving threats.
Viva Questions on Network & Information Security
1. What is the role of an agent in an IDS architecture, and how do host-based and network-based
agents differ in terms of data collection?
2. Why is it important for the director component to run on a separate system, and what implications
does this have for intrusion detection accuracy and security?
3. How does the combination of application-level and system-level logs enhance the effectiveness of
intrusion detection? Can you give a practical example?
4. Explain how a notifier contributes to both communication and defensive response in an IDS. What
are some actions it might take following an intrusion alert?
Theory Question:
1. Describe the layered architecture of an Intrusion Detection System. Discuss how each component—
agent, director, and notifier interacts to detect and respond to security incidents.
2. Compare and contrast host-based agents and network-based agents in terms of their strengths,
limitations, and applicability. Include considerations related to encrypted traffic and system
heterogeneity.
Roll No 31:
Organization of Intrusion Detection Systems
Modern Intrusion Detection Systems (IDS) are architected to balance scalability, responsiveness, and
visibility across complex computing environments. Depending on their focus and design philosophy, IDS
can monitor network traffic, host activity, or even autonomously adapt through distributed intelligence.
This section explores three notable organizational models:
Monitoring Network Traffic for Intrusions: NSM
NSM (Network Security Monitor) is a pioneering IDS architecture focused on analyzing real-time
network traffic to detect suspicious activity. It operates at the perimeter of a system or enterprise
network, offering visibility into communication flows between hosts and remote entities.
Key Features:
• Passive Sniffing: NSM listens to packets using network taps or mirror ports without interfering
in traffic flow.
• Protocol Decoding: Captures packet-level data and decodes headers to extract useful metadata
(e.g., source IP, destination port).
• Signature Matching: Applies predefined rules to detect known patterns of network-based
attacks (e.g., port scans, malformed packets).
• Centralized Analysis: All data is sent to a dedicated monitoring host for inspection.
Advantages:
• Ideal for detecting external threats, such as denial-of-service or worm propagation.
• Covers a broad network landscape without needing access to individual hosts.
Limitations:
• May miss host-based attacks, such as privilege escalation or insider misuse.
• Ineffective against encrypted payloads unless additional decryption layers are integrated.
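A crude sketch of the kind of heuristic an NSM-style monitor might apply (the threshold and addresses are invented): flag a source that touches many distinct ports on one destination.
from collections import defaultdict

ports_seen = defaultdict(set)

def observe(src, dst, dst_port, threshold=20):
    # Count distinct destination ports per (source, destination) pair.
    ports_seen[(src, dst)].add(dst_port)
    if len(ports_seen[(src, dst)]) == threshold:
        print(f"ALERT: {src} has probed {threshold} ports on {dst} (possible scan)")

for port in range(1, 30):
    observe("198.51.100.9", "203.0.113.4", port)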
Combining Host and Network Monitoring: DIDS
DIDS (Distributed Intrusion Detection System) expands IDS architecture by integrating both host-based
and network-based agents under a unified framework. It combines visibility into system internals
(processes, logs, file access) with external network traffic analysis.
Architecture:
• Host Monitors: Reside on individual systems, analysing system calls, login attempts, and audit
logs.
• LAN Monitors: Capture traffic across subnet boundaries or key network segments.
• Central Director: Correlates input from hosts and networks, identifying complex, multi-vector
attacks.
Benefits:
• Can detect coordinated attacks, spanning multiple layers (e.g., lateral movement after a phishing
entry point).
• Provides greater context by merging different data streams.
• Improves accuracy and reduces false positives through event correlation.
Challenges:
• Requires careful synchronization of logs and timestamps.
• More complex to manage and maintain compared to standalone IDS.
Autonomous Agents: AAFID
AAFID (Autonomous Agents for Intrusion Detection) introduces a decentralized, modular IDS
architecture using intelligent agents that operate independently yet communicate with each other.
Structure:
• Agents: Lightweight processes deployed across hosts, each monitoring specific behaviors (e.g.,
authentication, file integrity).
• Transceivers: Serve as interfaces between agents and higher-level components.
• Analyzers: Aggregate data from transceivers and agents, performing global assessments.
Characteristics:
• Agents are specialized and autonomous, allowing flexible deployment and scalability.
• Fault-tolerant architecture—if one agent fails, others can continue operation.
• Encourages distributed collaboration and data sharing for holistic threat analysis.
Advantages:
• Well-suited for heterogeneous and large-scale environments.
• Enables adaptive detection by deploying new agents as threats evolve.
Comparison
IDS Model | Focus | Strengths | Limitations
NSM | Network traffic | Strong external threat visibility | Limited host context
DIDS | Host + Network | Comprehensive threat detection | Complex integration and coordination
AAFID | Autonomous agents | Scalable, resilient, adaptable | More complex orchestration
Viva Questions on Network & Information Security
1. What distinguishes NSM’s approach to intrusion detection from DIDS and AAFID in terms of data
source and visibility?
2. How does the DIDS architecture enhance detection accuracy compared to a purely network-based
IDS? Can you mention its components and how they coordinate?
3. Why is AAFID considered fault-tolerant and scalable? Explain how autonomous agents contribute to
its resilience.
4. What challenges might arise when trying to synchronize logs between host monitors and LAN
monitors in a distributed IDS like DIDS?
Theory Question:
1. Compare NSM, DIDS, and AAFID across focus, strengths, and limitations. Use a structured format to
explain which model would be best suited for a large, heterogeneous enterprise system.
2. Discuss the architectural trade-offs between centralized analysis (NSM) and decentralized analysis
(AAFID). How do these designs impact scalability, performance, and security coverage?
Roll No 32:
Intrusion Response
Handling cyber intrusions involves more than immediate technical fixes—it requires a structured and
layered approach designed to prevent, contain, eradicate, and learn from incidents. This framework
ensures that organizations not only stop malicious activity but also strengthen their defenses for the
future.
Incident Prevention
This phase focuses on establishing proactive defenses against unauthorized access and breaches.
Activities include:
• Regular system updates and vulnerability patching
• Role-based access control (RBAC)
• Intrusion detection systems (IDS) and audit logging
• Cyber hygiene awareness and user training
Digraph: Prevention Framework
[User Awareness] → [Policy Enforcement]
↓ ↓
[System Hardening] → [Access Controls]
↓
[Monitoring Tools]
This shows that layered prevention connects both human and technical components.
Intrusion Handling
When prevention fails, handling the incident effectively is critical. This process unfolds in three
coordinated phases:
Containment Phase
The goal is to stop the attack from spreading or causing further damage.
Common actions:
• Disconnect affected systems from the network
• Disable compromised accounts
• Isolate traffic at firewalls
• Preserve volatile forensic data (e.g., memory dumps)
Digraph: Containment Flow
[Detection] → [Assess Scope] → [Isolate Assets]
↓
[Preserve Evidence]
This highlights the dual priority of stopping the threat and retaining audit trails.
Eradication Phase
Here, the focus shifts to removing the root cause and cleaning the system.
Steps involved:
• Remove malware or attacker traces
• Patch exploited vulnerabilities
• Reset passwords and permissions
• Run integrity verification scans
Digraph: Eradication Chain
[Root Cause Analysis] → [System Cleanup] → [Validation Scan]
Every action in this phase ensures full restoration of trust in the system environment.
Follow-Up Phase
This stage ensures that lessons are learned and used to reinforce security posture.
Key activities:
• Document the attack timeline and recovery steps
• Perform a post-mortem and revise security policies
• Update incident response protocols
• Communicate findings with stakeholders or authorities
Digraph: Response Cycle
[Containment] → [Eradication] → [Follow-Up]
↑ ↓
[Detection] ← [Prevention Enhancement]
This cyclical model emphasizes continual refinement and resilience building.
Viva Questions on Network & Information Security
1. What is the significance of preserving forensic evidence during the containment phase, and how
does it influence the subsequent investigation?
2. Can you explain the role of user awareness in incident prevention and how it interacts with system-
level controls in a layered defense strategy?
3. Why is the eradication phase essential even after containment is achieved? What risks remain if
eradication steps are skipped or incomplete?
4. How does the follow-up phase contribute to strengthening an organization's long-term security
posture after an intrusion?
Theory Question:
1. Discuss the structured stages involved in handling a cyber intrusion. Highlight the purpose and key
activities of containment, eradication, and follow-up phases using suitable examples.
2. Explain the importance of integrating both human and technical controls in the incident prevention
phase. Illustrate how system hardening and user training complement each other in the prevention
framework.