Module 2
Malicious Logic
Introduction
Odysseus, of Trojan War fame, found that the most effective way to breach a hitherto impregnable fortress
was to have the people inside bring him in without knowing they were doing so. The same approach works
for computer systems.
Definition: Malicious logic is a set of instructions that cause a site’s security policy to be violated.
Trojan Horses
Definition: A Trojan horse is a program with an overt (documented or known) effect and a covert
(undocumented or unexpected) effect.
Trojan horses can make copies of themselves. One of the earliest Trojan horses was a version of the
game animal. When this game was played, it created an extra copy of itself. These copies spread, taking
up much room. The program was modified to delete one copy of the earlier version and create two
copies of the modified program. Because it spread even more rapidly than the earlier version, the
modified version of animal soon completely supplanted the earlier version. After a preset date, each copy of
the later version deleted itself after it was played.
Definition: A propagating Trojan horse (also called a replicating Trojan horse) is a Trojan horse that
creates a copy of itself.
Computer Viruses
This type of Trojan horse propagates itself only through specific programs (for example, a compiler or a
login program). When the Trojan horse can propagate freely and insert a copy of itself
into another file, it becomes a computer virus.
Definition: A computer virus is a program that inserts itself into one or more files and then performs
some (possibly null) action.
Several types of computer viruses have been identified.
1. Boot Sector Infectors: The boot sector is the part of a disk used to bootstrap the system or mount
a disk. Code in that sector is executed when the system “sees” the disk for the first time.
When the system boots, or the disk is mounted, any virus in that sector is executed. (The actual
boot code is moved to another place, possibly another sector.)
Definition: A boot sector infector is a virus that inserts itself into the boot sector of a disk.
2. Executable Infectors
Definition: An executable infector is a virus that infects executable programs.
On PCs, executable infectors are called COM or EXE viruses because they infect programs with those
extensions. Infection can occur in either of two ways: the virus can prepend itself to the executable or
append itself.
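As a rough sketch of the prepend mechanics, the following Python fragment rewrites a file so that new content comes first; the marker string and file handling are invented for illustration, and nothing here replicates. A real prepending infector would place its own code at the front so that it runs before the original program.

# Benign sketch of how a prepending infector alters a file: the "virus body"
# here is an inert marker, and this fragment does not copy itself anywhere.
MARKER = b"#-- prepended block --\n"

def prepend(path: str) -> None:
    with open(path, "rb") as f:
        original = f.read()
    if original.startswith(MARKER):   # already modified: do nothing
        return
    with open(path, "wb") as f:
        f.write(MARKER + original)    # the new content now comes first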
3. Multipartite Viruses
Definition: A multipartite virus is one that can infect either boot sectors or applications.
Such a virus typically has two parts, one for each type. When it infects an executable, it acts as an
executable infector; when it infects a boot sector, it works as a boot sector infector.
4. TSR Viruses
Definition: A terminate and stay resident (TSR) virus is one that stays active (resident) in memory after
the application (or bootstrapping, or disk mounting) has terminated.
TSR viruses can be boot sector infectors or executable infectors. Both the Brain and Jerusalem viruses
are TSR viruses.
Viruses that are not TSR execute only when the host application is executed (or the disk containing the
infected boot sector is mounted). An example is the Encroacher virus, which appends itself to the ends
of executables.
5. Stealth Viruses
Definition: Stealth viruses are viruses that conceal the infection of files.
These viruses intercept calls to the operating system that access files. If the call is to obtain file
attributes, the original attributes of the file are returned. If the call is to read the file, the file is
disinfected as its data is returned. But if the call is to execute the file, the infected file is executed.
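A conceptual Python sketch of this interception, using made-up names (CLEAN_COPIES and intercepted_open are hypothetical, not a real operating system interface): the intercepted call answers differently depending on why the file is being accessed.

CLEAN_COPIES = {"/bin/app": b"original program bytes"}   # the pre-infection image

def infected_bytes(path: str) -> bytes:
    return b"VIRUS-CODE " + CLEAN_COPIES[path]           # what is really stored

def intercepted_open(path: str, purpose: str) -> bytes:
    # attribute queries would likewise be answered from the clean image
    if purpose == "read":
        return CLEAN_COPIES[path]      # a scanner sees the disinfected file
    if purpose == "execute":
        return infected_bytes(path)    # the loader receives the infected file
    raise ValueError(purpose)

assert b"VIRUS" not in intercepted_open("/bin/app", "read")
assert intercepted_open("/bin/app", "execute").startswith(b"VIRUS")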
6. Encrypted Viruses
Computer virus detectors often look for known sequences of code to identify computer viruses. To
conceal these sequences, some viruses encipher most of the virus code, leaving only a small decryption
routine and a random cryptographic key in the clear.
Definition: An encrypted virus is one that enciphers all of the virus code except for a small decryption
routine.
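A minimal Python sketch of this structure, assuming an inert placeholder for the virus body: the one-line XOR routine plays the role of the small in-the-clear decryption routine, and a fresh random key is chosen for each copy, so the stored (enciphered) body differs from copy to copy.

import os

def xor(data: bytes, key: int) -> bytes:
    # the small "decryption routine" left in the clear
    return bytes(b ^ key for b in data)

body = b"(virus instructions would go here)"
key = os.urandom(1)[0]        # random cryptographic key, also in the clear
stored = xor(body, key)       # what a scanner sees; differs for each key
running = xor(stored, key)    # deciphered in memory just before execution
assert running == body

Because only the xor routine and the key are visible, a detector looking for known sequences in the body will not find them; the remaining fixed pattern of the deciphering routine itself is exactly the weakness that polymorphic viruses, described next, were designed to remove.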
7. Polymorphic Viruses
Definition: A polymorphic virus is a virus that changes its form each time it inserts itself into
another program.
Consider an encrypted virus. The body of the virus varies depending on the key chosen, so
detecting known sequences of instructions will not detect the virus. However, the decryption
algorithm can be detected. Polymorphic viruses were designed to prevent this. They change the
instructions in the virus to something equivalent but different. In particular, the deciphering code is
the segment of the virus that is changed. In some sense, they are successors to the encrypting
viruses and are often used in conjunction with them.
Consider polymorphism at the instruction level. All of the instructions
add 0 to operand
or 0 with operand
no operation
subtract 0 from operand
have exactly the same effect, but they are represented as different bit patterns on most architectures. A
polymorphic virus would insert these instructions into the deciphering segment of code.
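A small Python sketch of this mutation step, with a hypothetical decryptor written as assembly-like strings (the instruction names and template are illustrative): each placeholder is replaced by a randomly chosen equivalent, so every generated copy has a different bit pattern but identical behavior.

import random

# Instruction sequences that leave the operand unchanged; any one can
# replace another without altering what the decryptor does.
EQUIVALENT_NOOPS = ["nop", "add eax, 0", "sub eax, 0", "or eax, 0"]

def mutate_decryptor(template: list[str]) -> list[str]:
    # swap each placeholder for a randomly chosen equivalent instruction
    return [random.choice(EQUIVALENT_NOOPS) if line == "nop" else line
            for line in template]

template = ["mov ecx, length", "nop", "top:", "xor byte [esi], key",
            "nop", "inc esi", "loop top"]
print(mutate_decryptor(template))   # different bit pattern on each run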
8. Macro Viruses
Definition: A macro virus is a virus composed of a sequence of instructions that is interpreted,
rather than executed directly.
A macro virus can infect either executables or data files (the latter leads to the name data virus). If it
infects executable files, it must arrange to be interpreted at some point. Macro viruses are not
bound by machine architecture. They use specific programs, and so, for example, a macro virus
targeted at a Microsoft Word program will work on any system running Microsoft Word. The effects
may differ.
Computer Worms
A computer virus infects other programs. A variant of the virus is a program that spreads from
computer to computer, spawning copies of itself on each one.
Definition: A computer worm is a program that copies itself from one computer to another.
Other Forms of Malicious Logic
1. Rabbits and Bacteria
Some malicious logic multiplies so rapidly that resources become exhausted, creating a denial
of service attack. A bacterium is not required to use all resources on the system: if it absorbs all
of a specific class of resource, such as file descriptors or process table entry slots, it may not affect
currently running processes, but it will prevent new processes from being created.
Definition: A bacterium or a rabbit is a program that absorbs all of some class of resource.
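The following deliberately capped Python sketch shows the idea for one resource class, file descriptors: it absorbs descriptors without touching CPU or memory, so running processes are unaffected while new opens would eventually fail. A real bacterium would loop without the cap; the limit of 50 keeps the demonstration harmless.

import tempfile

held = []
for _ in range(50):                          # a bacterium would never stop
    held.append(tempfile.TemporaryFile())    # each call consumes a descriptor
print(f"holding {len(held)} descriptors; if the per-process limit "
      "were reached, new opens would fail")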
2. Logic Bombs
Some malicious logic triggers on an external event, such as a user logging in.
Definition: A logic bomb is a program that performs an action that violates the security policy
when some external event occurs.
Disaffected employees who plant Trojan horses in systems often use logic bombs. The triggering
events are related to the employees’ troubles, such as deleting the payroll roster when that
employee’s name is deleted.
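A hypothetical Python sketch of the trigger structure (the date, function names, and payload are invented, and the "deletion" only prints): the program behaves normally until the external event occurs, at which point the policy-violating action fires.

import datetime

TRIGGER_DATE = datetime.date(2030, 1, 1)        # the external event: a date

def destroy_payroll_roster() -> None:
    print("payroll roster deleted (simulated policy violation)")

def payroll_update() -> None:
    if datetime.date.today() >= TRIGGER_DATE:   # condition checked on every run
        destroy_payroll_roster()
    print("normal payroll processing")

payroll_update()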
Defenses
Defending against malicious logic takes advantage of several different characteristics of malicious logic
to detect, or to block, its execution. The defenses inhibit the suspect behavior. The mechanisms are
imprecise: they may allow malicious logic that does not exhibit the given characteristic to proceed, and
they may prevent programs that are not malicious but do exhibit the given characteristic from
proceeding.
1. Malicious Logic Acting as Both Data and Instructions
Some malicious logic acts as both data and instructions. A computer virus inserts code into another
program; while this code is being written into the file, the object being written (the set of virus
instructions) is data. The virus then executes itself, and the instructions it executes are the same as
what it has just written; at that point, the object is treated as an executable set of instructions.
Protection mechanisms based on this property treat all programs as type “data” until some certifying
authority changes the type to “executable” (instructions).
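A minimal sketch of this “data until certified” scheme, with an invented in-memory type table standing in for the protection mechanism: every new object starts typed as data, only a certifying authority may retype it, and the loader refuses anything still typed as data.

FILE_TYPES = {}   # path -> "data" or "executable" (hypothetical type table)

def create(path: str) -> None:
    FILE_TYPES[path] = "data"            # all new objects begin as data

def certify(path: str, authority_ok: bool) -> None:
    if authority_ok:                     # only the certifying authority retypes
        FILE_TYPES[path] = "executable"

def execute(path: str) -> None:
    if FILE_TYPES.get(path) != "executable":
        raise PermissionError(f"{path} is typed 'data'; refusing to execute")
    print(f"running {path}")

create("/tmp/newprog")
try:
    execute("/tmp/newprog")              # refused: a virus's freshly written
except PermissionError as e:             # code would still be typed "data"
    print(e)
certify("/tmp/newprog", authority_ok=True)
execute("/tmp/newprog")                  # allowed after certification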
2. Malicious Logic Assuming the Identity of a User
Because a user (unknowingly) executes malicious logic, that code can access and affect objects
within the user’s protection domain. So, limiting the objects accessible to a given process run by
the user is an obvious protection technique.
2a. Information Flow Metrics
Definition: Define the flow distance metric fd(x) for some information x as follows. Initially, all
information has fd(x) = 0. Whenever x is shared, fd(x) increases by 1. Whenever x is used as input to
a computation, the flow distance of the output is the maximum of the flow distances of the inputs.
Information is accessible only while its flow distance is less than some particular value.
The metric is associated with information and not objects. Rather than tagging specific information
in files, systems implementing this policy would most likely tag objects, treating the composition of
different information as having the maximum flow distance of its components. This will inhibit
sharing.
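A worked Python sketch of the metric, assuming a threshold of 2 for illustration: values carry a tag that starts at 0, increases by 1 on each share, and propagates through computation as the maximum over the inputs.

from dataclasses import dataclass

FD_LIMIT = 2   # assumed policy threshold: accessible only while fd(x) < 2

@dataclass
class Tagged:
    value: int
    fd: int = 0                              # initially fd(x) = 0

def share(x: Tagged) -> Tagged:
    return Tagged(x.value, x.fd + 1)         # sharing increases fd by 1

def compute(*inputs: Tagged) -> Tagged:
    result = sum(i.value for i in inputs)    # any computation over the inputs
    return Tagged(result, max(i.fd for i in inputs))   # max of input distances

def access(x: Tagged) -> int:
    if x.fd >= FD_LIMIT:
        raise PermissionError("flow distance limit reached")
    return x.value

a = Tagged(10)          # fd = 0
b = share(a)            # fd = 1
c = compute(a, b)       # fd = max(0, 1) = 1
print(access(c))        # allowed: 1 < 2
d = share(c)            # fd = 2
try:
    access(d)
except PermissionError:
    print("fd(d) = 2: no longer accessible")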
2b. Reducing the Rights
The user can reduce her associated protection domain when running a suspect program.
2c. Sandboxing
Sandboxes and virtual machines implicitly restrict process rights. A common implementation of this
approach is to restrict the program by modifying it. Usually, special instructions inserted into the
object code cause traps whenever an instruction violates the security policy. If the executable
dynamically loads libraries, special libraries with the desired restrictions replace the standard
libraries.
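The sketch below shows the flavor of this restriction on a POSIX system using Python’s resource limits; it is a stand-in for the inserted trap instructions and replacement libraries described above, not an implementation of them. The child process gets one second of CPU and may not create file content, so a malicious payload inside it cannot absorb those resources.

import resource, subprocess, sys

def restrict():
    # applied in the child just before exec; POSIX-only
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))     # CPU budget: 1 second
    resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))   # forbid file writes

subprocess.run([sys.executable, "-c", "print('running restricted')"],
               preexec_fn=restrict)

A production sandbox would go further, filtering system calls and file system visibility, but the principle is the same: shrink the protection domain before the suspect code runs.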
3. Malicious Logic Crossing Protection Domain Boundaries by Sharing
Inhibiting users in different protection domains from sharing programs or data will inhibit malicious
logic from spreading among those domains. This takes advantage of the separation implicit in integrity
policies.
4. Malicious Logic Altering Files
Mechanisms using manipulation detection codes (or MDCs) apply some function to a file to obtain
a set of bits called the signature block and then protect that block. If, after recomputing the
signature block, the result differs from the stored signature block, the file has changed, possibly as
a result of malicious logic altering the file. This mechanism relies on selection of good cryptographic
checksums.
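A short Python sketch of such a mechanism, here using an HMAC over SHA-256 as the cryptographic checksum; the key and file names are illustrative, and keeping the key secret stands in for “protecting the signature block.”

import hashlib, hmac

KEY = b"stored-out-of-band"   # protecting this key protects the block

def signature_block(path: str) -> bytes:
    with open(path, "rb") as f:
        return hmac.new(KEY, f.read(), hashlib.sha256).digest()

def unchanged(path: str, stored: bytes) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(signature_block(path), stored)

with open("demo.bin", "wb") as f:
    f.write(b"contents to protect")
stored = signature_block("demo.bin")     # computed while the file is trusted
print(unchanged("demo.bin", stored))     # True until the file is altered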
5. The Notion of Trust
The effectiveness of any security mechanism depends on the security of the underlying base on
which the mechanism is implemented and the correctness of the implementation. If the trust in
the base or in the implementation is misplaced, the mechanism will not be secure. Thus, “secure,”
like “trust,” is a relative notion, and the design of any mechanism for enhancing computer security
must attempt to balance the cost of the mechanism against the level of security desired and the
degree of trust in the base that the site accepts as reasonable. Research dealing with malicious
logic assumes that the interface, software, and/or hardware used to implement the proposed
scheme will perform exactly as desired, meaning that the trust is in the underlying computing base,
the implementation, and (if done) the verification.
Vulnerability Analysis
Introduction
A “computer system” is more than hardware and software; it includes the policies, procedures, and
organization under which that hardware and software is used. Lapses in security can arise from any of
these areas or from any combination of these areas. Thus, it makes little sense to restrict the study of
vulnerabilities to hardware and software problems. When someone breaks into a computer system, that
person takes advantage of lapses in procedures, technology, or management (or some combination of
these factors), allowing unauthorized access or actions. The specific failure of the controls is called a
vulnerability or security flaw; using that failure to violate the site security policy is called exploiting the
vulnerability. One who attempts to exploit the vulnerability is called an attacker.
For example, many systems have special administrative users who are authorized to create new
accounts. Suppose a user who is not an administrative user can add a new entry to the database of
users, thereby creating a new account. This operation is forbidden to the non-administrative user.
However, such a user has taken advantage of an inconsistency in the way data in the database is
accessed. The inconsistency is the vulnerability; the sequence of steps that adds the new user is the
exploitation. A secure system should have no such problems. In practice, computer systems are so
complex that exploitable vulnerabilities (such as the one described above) exist; they arise from faulty
system design, implementation, operation, or maintenance. Formal verification and property-based
testing are techniques for detecting vulnerabilities. Both are based on the design and/or implementation
of the computer system, but a “computer system” includes policies, procedures, and an operating
environment, and these external factors can be difficult to express in a form amenable to formal
verification or property-based testing. Yet these factors determine whether or not a computer system
implements the site security policy to an acceptable degree. One can generalize the notion of formal
verification to a more informal approach (see Figure 20–1). Suppose a tester believes there to be flaws
in a system. Given the hypothesis (specifically, where the tester believes the flaw to be, the nature of
the flaw, and so forth), the tester determines the state in which the vulnerability will arise. This is the
precondition. The tester puts the system into that state and analyzes the system (possibly attempting to
exploit the vulnerability). After the analysis, the tester will have information about the resulting state of
the system (the postconditions) that can be compared with the site security policy. If the security policy
and the postconditions are inconsistent, the hypothesis (that a vulnerability exists) is correct.
Penetration testing is a testing technique, not a proof technique. It can never prove the absence of
security flaws; it can only prove their presence. In theory, formal verification can prove the absence of
vulnerabilities. However, to be meaningful, a formal verification proof must include all external factors.
Hence, formal verification proves the absence of flaws within a particular program or design and not the
absence of flaws within the computer system as a whole. Incorrect configuration, maintenance, or
operation of the program or system may introduce flaws that formal verification will not detect.
Figure 20–1: A comparison between formal verification and penetration testing. In formal verification, the
“preconditions” place constraints on the state of the system when the program (or system) is run, and
the “postconditions” state the effect of running the program. In penetration testing, the
“preconditions” describe the state of the system in which the hypothesized security flaw can be
exploited, and the “postconditions” are the result of the testing. In both verification and testing, the
postconditions must conform to the security policy of the system.
Penetration Studies
A penetration study is a test for evaluating the strengths of all security controls on the computer system.
The goal of the study is to violate the site security policy. A penetration study (also called a tiger team
attack or red team attack) is not a replacement for careful design and implementation with structured
testing. It provides a methodology for testing the system in toto, once it is in place. Unlike other testing
and verification technologies, it examines procedural and operational controls as well as technological
controls.
- Goals
A penetration test is an authorized attempt to violate specific constraints stated in the form of a
security or integrity policy. This formulation implies a metric for determining whether the study has
succeeded. It also provides a framework in which to examine those aspects of procedural, operational,
and technological security mechanisms relevant to protecting the particular aspect of system security in
question. Should goals be nebulous, interpretation of the results will also be nebulous, and the test will
be less useful than if the goals were stated precisely. Example goals of penetration studies are gaining of
read or write access to specific objects, files, or accounts; gaining of specific privileges; and disruption or
denial of the availability of objects.
- Layering of Tests
A penetration test is designed to characterize the effectiveness of security mechanisms and controls against
attackers. To this end, these studies are conducted from an attacker’s point of view, and the
environment in which the tests are conducted is that in which a putative attacker would function.
Different attackers, however, have different environments; for example, insiders have access to the
system, whereas outsiders need to acquire that access. This suggests a layering model for a penetration
study.
1. External attacker with no knowledge of the system. At this level, the testers know that the target
system exists and have enough information to identify it once they reach it. They must then determine
how to access the system themselves. This layer is usually an exercise in social engineering and/or
persistence because the testers try to trick the information out of the company or simply dial telephone
numbers or search network address spaces until they stumble onto the system. This layer is normally
skipped in penetration testing because it tells little about the security of the system itself.
2. External attacker with access to the system. At this level, the testers have access to the system and
can proceed to log in or to invoke network services available to all hosts on the network (such as
electronic mail). They must then launch their attack. Typically, this step involves accessing an account
from which the testers can achieve their goal or using a network service that can give them access to the
system or (if possible) directly achieve their goal. Common forms of attack at this stage are guessing
passwords, looking for unprotected accounts, and attacking network servers. Implementation flaws in
servers often provide the desired access.
3. Internal attacker with access to the system. At this level, the testers have an account on the system
and can act as authorized users of the system. The test typically involves gaining unauthorized privileges
or information and, from that, reaching the goal. At this stage, the testers acquire (or have) a good
knowledge of the target system, its design, and its operation. Attacks are developed on the basis of this
knowledge and access.
- Methodology at Each Layer
The penetration testing methodology springs from the Flaw Hypothesis Methodology. The
usefulness of a penetration study comes from the documentation and conclusions drawn from
the study and not from the success or failure of the attempted penetration. Many people
misunderstand this, thinking that a successful penetration means that the system is poorly
protected. Such a conclusion can only be drawn once the study is complete and when the study
shows poor design, poor implementation, or poor procedural and management controls. Also
important is the degree of penetration. If an attack obtains information about one user’s data,
it may be deemed less successful than one that obtains system privileges because the latter
attack can compromise many user accounts and damage the integrity of the system.
- Flaw Hypothesis Methodology
The Flaw Hypothesis Methodology was developed at System Development Corporation and
provides a framework for penetration studies. It consists of five steps.
1. Information gathering. In this step, the testers become familiar with the system’s
functioning. They examine the system’s design, its implementation, its operating procedures,
and its use. The testers become as familiar with the system as possible.
2. Flaw hypothesis. Drawing on the knowledge gained in the first step, and on knowledge of
vulnerabilities in other systems, the testers hypothesize flaws of the system under study.
3. Flaw testing. The testers test their hypothesized flaws. If a flaw does not exist (or cannot be
exploited), the testers go back to step 2. If the flaw is exploited, they proceed to the next step.
4. Flaw generalization. Once a flaw has been successfully exploited, the testers attempt to
generalize the vulnerability and find others similar to it. They feed their new understanding (or
new hypothesis) back into step 2 and iterate until the test is concluded.
5. Flaw elimination. The testers suggest ways to eliminate the flaw or to use procedural
controls to ameliorate it.
Vulnerability Classification
Vulnerability classification frameworks describe security flaws from various perspectives. Some
frameworks describe vulnerabilities by classifying the techniques used to exploit them. Others
characterize vulnerabilities in terms of the software and hardware components and interfaces
that make up the vulnerability. Still others classify vulnerabilities by their nature, in hopes of
discovering techniques for finding previously unknown vulnerabilities. The goal of vulnerability
analysis is to develop methodologies that provide the following abilities.
1. The ability to specify, design, and implement a computer system without vulnerabilities.
2. The ability to analyze a computer system to detect vulnerabilities (which feeds into the Flaw
Hypothesis Methodology step of penetration testing).
3. The ability to address any vulnerabilities introduced during the operation of the computer
system (possibly leading to a redesign or reimplementation of the flawed components).
4. The ability to detect attempted exploitations of vulnerabilities.
Ideally, one can generalize information about security flaws. From these generalizations, one
then looks for underlying principles that lead toward the desired goals. Because the
abstraction’s purpose is tied to the classifiers’ understanding of the goal, and of how best to
reach that goal, both of these factors influence the classification system developed. Hence, the
vulnerability frameworks covering design often differ from those covering the detection of
exploitation of vulnerabilities. Before we present several different frameworks, however, a
discussion of two security flaws will provide a basis for understanding several of the problems of
these frameworks.
Frameworks
The goals of a framework dictate the framework’s structure. For example, if the framework is to
guide the development of an attack detection tool, the focus of the framework will be on the
steps needed to exploit vulnerabilities. If the framework is intended to aid the software
development process, it will emphasize programming and design errors that cause
vulnerabilities. Each of the following classification schemes was designed with a specific goal in
mind. Each of the following frameworks classifies a vulnerability as an n-tuple, the elements of
the n-tuple being the specific classes into which the vulnerability falls. Some have a single set of
categories; others are multidimensional (n > 1) because they are examining multiple
characteristics of the vulnerabilities.
The RISOS Study
The RISOS (Research Into Secure Operating Systems) study was prepared to aid computer and
system managers and information processing specialists in understanding security issues in
operating systems and to help them determine the level of effort required to enhance their
system security. The investigators classified flaws into seven general classes.
1. Incomplete parameter validation
2. Inconsistent parameter validation
3. Implicit sharing of privileged/confidential data
4. Asynchronous validation/inadequate serialization
5. Inadequate identification/authentication/authorization
6. Violable prohibition/limit
7. Exploitable logic error
Figure: (a) The stack frame of fingerd when input is to be read. The arrow indicates the
location to which the parameter to gets refers (it is past the address of the input buffer). (b)
The same stack after the bogus input is stored. The input string overwrites the input buffer
and the parameter to gets, allowing a return to the contents of the input buffer. The arrow shows
that the return address of main was overwritten with the address of the input buffer. When
gets returns, it will pop its return address (now the address of the input buffer) and resume
execution at that address.