SOFTWARE TESTING
Software testing can be stated as the process of verifying and validating
whether a software or application is bug-free, meets the technical
requirements as guided by its design and development, and meets the user
requirements effectively and efficiently by handling all the exceptional and
boundary cases. The process of software testing aims not only at finding
faults in the existing software but also at finding measures to improve the
software in terms of efficiency, accuracy, and usability. The article focuses
on discussing Software Testing in detail.
What is Software Testing?
Software Testing is a method to assess the functionality of the software
program. The process checks whether the actual software matches the
expected requirements and ensures the software is bug-free. The purpose of
software testing is to identify the errors, faults, or missing requirements in
contrast to actual requirements. It mainly aims at measuring the
specification, functionality, and performance of a software program or
application.
Purpose of Software Testing
Testing consumes at least half of the time and effort required to produce a functional program.
o MYTH: Good programmers write code without bugs. (It's wrong!)
o History shows that even well-written programs still have 1-3 bugs per hundred statements.
The purpose of software testing is to ensure that the software product is bug-free and to enhance the software quality. It also helps to understand the usability of the developed software from an end-user perspective.
Software testing is essential to prevent starting development over from the
beginning: Sometimes, while testing a fully developed software product against
the user requirements, we discover that some basic functionality is missing.
That forces development of the software product to restart from Requirement
Analysis. The cause may be a mistake in requirement gathering or in the
coding phase. Fixing such problems is tedious, time-consuming, and
expensive. Therefore, it is always preferable to test the software during its
development phase.
Evaluating the ease of usability of the software: Testing helps determine how
easily users can use the final product. Software testing ensures the software
product is built in a way that meets the user's expectations regarding the
product's comfort and simplicity.
Verification of the software: Testing helps in the validation and verification of
all aspects of the software, such as checking the basic functionalities
documented in the SRS document. It also helps to check product behavior
under unexpected conditions, which may arise from incorrect data input or a
change in the environment. Testing therefore helps to make sure that the
system handles these situations well and, if there is an error, gives us the
option to correct it in advance.
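As a minimal sketch of testing boundary and exceptional cases, consider a hypothetical withdrawal function; the function, its name, and its rules are invented for illustration:

```python
# Hypothetical function under test: a withdrawal that must reject
# non-positive amounts and amounts above the balance.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Checks covering normal, boundary, and exceptional cases.
assert withdraw(100, 40) == 60          # normal case
assert withdraw(100, 100) == 0          # boundary: exact balance
for bad in (0, -5, 101):                # exceptional cases
    try:
        withdraw(100, bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Checks like these exercise exactly the boundary and exceptional inputs that requirement-level testing is meant to cover.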
Software tests help accelerate development: Testing helps developers find bugs and
provides the scenarios needed to reproduce them, which in turn helps developers
understand each bug and apply a fix. Besides, software testers can work in parallel
with the development team, thus understanding the design, implementation, and risks
in detail. This speeds up the development process as the chances of bug occurrence
are reduced.
Productivity and Quality in Software:
o In production of consumer goods and other products, every manufacturing stage is
subjected to quality control and testing from component to final stage.
o If flaws are discovered at any stage, the product is either discarded or cycled back
for rework and correction.
o Productivity is measured by the sum of the costs of the material, the rework, and
the discarded components, and the cost of quality assurance and testing.
o There is a tradeoff between quality assurance costs and manufacturing costs: If
sufficient time is not spent in quality assurance, the reject rate will be high and so will
be the net cost. If inspection is good and all errors are caught as they occur,
inspection costs will dominate, and again the net cost will suffer.
o Testing and quality assurance costs for 'manufactured' items can be as low as 2%
in consumer products or as high as 80% in products such as spaceships, nuclear
reactors, and aircraft, where failures threaten life. The manufacturing cost of
software, by contrast, is trivial.
o The biggest part of software cost is the cost of bugs: the cost of detecting them,
the cost of correcting them, the cost of designing tests that discover them, and the
cost of running those tests.
o For software, quality and productivity are indistinguishable because the cost of a
software copy is trivial.
Testing Vs Debugging:
Testing: Testing is the process of verifying and validating that a software or
application is bug-free, meets the technical requirements as guided by its
design and development, and meets the user requirements effectively and
efficiently by handling all the exceptional and boundary cases.
Debugging: Debugging is the process of fixing a bug in the software. It can
be defined as identifying, analyzing, and removing errors. This activity begins
after the software fails to execute properly and concludes by solving the
problem and successfully testing the software. It is considered to be an
extremely complex and tedious task because errors need to be resolved at
all stages of debugging.
The main differences between testing and debugging are:
1. Purpose: The purpose of testing is to identify defects or errors in the
software system, while the purpose of debugging is to fix those defects or
errors.
2. Timing: Testing is done before debugging, while debugging is done after
testing.
3. Approach: Testing involves executing the software system with test
cases, while debugging involves analyzing the symptoms of a problem
and identifying the root cause of the problem.
4. Tools and techniques: Testing can involve using automated or manual
testing tools, while debugging typically involves using tools and
techniques such as logging, tracing, and code inspection.
Testing vs. Debugging (summary):
1. Testing is the process to find bugs and errors; debugging is the process of correcting the bugs found during testing.
2. Testing identifies the failure of the implemented code; debugging resolves that code failure.
3. Testing is the display of errors; debugging is a deductive process.
4. Testing is done by the tester; debugging is done by the programmer or developer.
5. There is no need for design knowledge in the testing process; debugging cannot be done without proper design knowledge.
6. Testing can be done by insiders as well as outsiders; debugging is done only by insiders, as an outsider cannot debug.
7. Testing can be manual or automated; debugging is always manual and cannot be automated.
8. Testing is organized by testing levels, i.e. unit testing, integration testing, system testing, etc.; debugging is organized by the different types of bugs.
9. Testing is a stage of the software development life cycle (SDLC); debugging is not a separate SDLC stage, it occurs as a consequence of testing.
10. Testing is composed of the validation and verification of software; debugging seeks to match symptoms with their cause, thereby leading to error correction.
11. Testing is initiated after the code is written; debugging commences with the execution of a test case.
MODEL FOR TESTING:
The testing process can be described by a model with three parts:
a model of the environment,
a model of the program, and
a model of the expected bugs.
Environment:
o A Program's environment is the hardware and software required to make it run. For
online systems, the environment may include communication lines, other systems,
terminals and operators.
o The environment also includes all programs that interact with and are used to
create the program under test - such as OS, linkage editor, loader, compiler, utility
routines.
o Because the hardware and firmware are stable, it is not smart to blame the
environment for bugs.
Program:
o Most programs are too complicated to understand in detail.
o The concept of the program is to be simplified in order to test it.
o If simple model of the program doesnt explain the unexpected behavior, we may
have to modify that model to include more facts and details. And if that fails, we may
have to modify the program.
Bugs:
Bugs are more dangerous than we expect them to be. An unexpected test result may
lead us to change our notion of what a bug is and our model of bugs. A lot of
developers and testers have some preconceived notions about the bugs. These are
just myths and need to be removed from the mind in order to test the system
effectively. These are listed below:
1. Benign Bug Hypothesis: The belief that bugs are nice, tame, and logical.
(Benign: not dangerous.)
2. Bug Locality Hypothesis: The belief that a bug occurring in a particular module
affects only that module and not other modules. This is not the case with subtle
bugs: their consequences may be far removed in time and space from the
component in which they exist.
3. Control Bug Dominance: The belief that errors in the control structure (if, switch,
etc.) are dominant. Control-flow bugs can usually be traced easily, but bugs in
data flow and data structures are just as real and harder to find.
4. Code / Data Separation: The belief that bugs respect the separation of code and
data.
5. Corrections Abide: The belief that a corrected bug remains corrected. We might
have changed one of the interacting components believing it to be the cause;
however, the bug might re-occur because it was actually caused by some other
component that we did not change.
6. Silver Bullets: The mistaken belief that A (a language, design, representation,
environment, etc.) makes the program immune to bugs. It might reduce the severity
of bugs, but it cannot prevent bugs entirely.
7. Sadism Suffices: The belief that intuition and cunning are sufficient to detect
bugs. This is true for easy bugs, but tough bugs need proper methodology and
techniques for detection.
Tests:
o Tests are formal procedures: inputs must be prepared, outcomes must be
predicted, tests must be documented, commands must be executed, and results
must be observed. Every one of these steps is subject to error.
o We do three distinct kinds of testing on a typical software system. They are:
1. Unit / Component Testing: A Unit is the smallest testable piece of software that
can be compiled, assembled, linked, loaded etc. A unit is usually the work of one
programmer and consists of several hundred or fewer lines of code. Unit Testing is
the testing we do to show that the unit does not satisfy its functional specification or
that its implementation structure does not match the intended design structure. A
Component is an integrated aggregate of one or more units. Component Testing is
the testing we do to show that the component does not satisfy its functional
specification or that its implementation structure does not match the intended design
structure.
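A minimal illustration of unit testing using Python's built-in unittest framework; the unit under test, `word_count`, is a made-up example:

```python
import unittest

def word_count(text):
    """Unit under test (invented example): count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test checks the unit against its functional specification.
    def test_simple_sentence(self):
        self.assertEqual(word_count("testing finds bugs"), 3)

    def test_empty_string(self):
        # Boundary case: no words at all.
        self.assertEqual(word_count(""), 0)

# Run the unit tests programmatically and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If the unit failed its specification, `result.wasSuccessful()` would be False, which is exactly the outcome unit testing tries to produce for a buggy unit.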
2. Integration Testing: Integration is the process by which components are
aggregated to create larger components. Integration Testing is the testing we do to
show that even though the components were individually satisfactory (having
passed component testing), their combination is incorrect or inconsistent.
3. System Testing: A system is a big component. System Testing is aimed at
revealing bugs that cannot be attributed to components. It includes testing for
performance, security, accountability, configuration sensitivity, startup, and recovery.
Bugs:
The importance of bugs depends upon their frequency, correction cost, installation
cost and consequences.
Frequency: It is important to know so as to how regularly a bug occurs and
what are the bugs which occur most regularly.
Correction cost: It is important to have an idea about the cost to correct the
bug after it is found. This cost is the sum of two factors: (i) the cost of
discovery and (ii) the cost of correction. These costs go up drastically if the
bug is found in the later parts of the software life cycle. Correction of larger
programs incurs higher cost.
Installation cost: It depends upon the number of installations: small for a single-
user program, but consider a PC operating system bug. Installation cost can
dominate all other costs: fixing one simple bug and distributing the fix could
exceed the entire system's development cost.
Consequences: The consequences of the occurrence of bugs can be
measured by the mean size of the damages awarded to the victims of bugs.
The metric for the measurement of bug importance is:
Importance ($) = Frequency* (Correction cost + Installation cost + Consequential cost)
Frequency does not tend to depend upon application or environment but the other three do.
As designers, testers, and QA workers, one must be interested in bug importance and not
only the frequency. Thus, it is required to come up with your own importance model.
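The importance metric above can be applied directly; in this sketch the cost figures are invented purely for illustration:

```python
def bug_importance(frequency, correction_cost, installation_cost, consequential_cost):
    """Importance ($) = Frequency * (Correction + Installation + Consequential cost)."""
    return frequency * (correction_cost + installation_cost + consequential_cost)

# Invented figures: a bug that occurs 12 times, each occurrence costing
# $400 to correct, $150 to distribute the fix, and $50 in damages.
importance = bug_importance(12, 400, 150, 50)
print(importance)  # 7200
```

Plugging each bug's observed frequency and estimated costs into such a model is one way to rank bugs by importance rather than by frequency alone.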
How Bugs Affect Us (Consequences):
The consequences of bugs range from mild to catastrophic. They should be
measured in human terms and not in machine terms, because programs are written
to be used by human beings. Let us discuss these consequences in detail:
1. Mild: The symptoms of the bug offend us aesthetically e.g. a misspelled output, a
misaligned printout.
2. Moderate: Outputs are misleading or redundant. The bug impacts the system
performance.
3. Annoying: The system's behavior is dehumanizing because of the bug, e.g.
names are truncated or arbitrarily modified, or bills for Rs. 0.00 are sent.
4. Disturbing: The system refuses to handle legitimate transactions, e.g. an ATM
refusing to dispense money against a valid debit card and reporting the card as invalid.
5. Serious: The program loses track of transactions: a transaction occurred, but
accountability for it is lost.
6. Very serious: Rather than losing your paycheck, the system credits it to another
person's account.
7. Extreme: The problems are not limited to a few users or transaction types. They
are frequent and arbitrary rather than sporadic or confined to odd cases.
8. Intolerable: Long-term, unrecoverable corruption of data base, etc.
9. Catastrophic: The system fails and the decision to shut down is taken out of our
hands.
10. Infectious: A system that corrupts other systems, one that melts a nuclear reactor,
etc.; one whose malfunction has influence far beyond what was expected; a system that kills.
Types of Bugs or Taxonomy of bugs:
The major categories are:
(1) Requirements, Features and Functionality Bugs
(2) Structural Bugs
(3) Data Bugs
(4) Coding Bugs
(5) Interface, Integration and System Bugs
(6) Test and Test Design Bugs
1. Requirements, Features and Functionality Bugs:
Various categories in Requirements, Features and Functionality bugs include:
1. Requirements and Specifications Bugs:
Requirements, and the specifications developed from them, can be
incomplete, ambiguous, or self-contradictory. They can be
misunderstood or impossible to understand.
Even specifications that have no flaws may change while the design
is in progress. Features are added, modified, and deleted.
Requirements, especially, as expressed in specifications are a major source
of expensive bugs.
The range is from a few percent to more than 50%, depending on the
application and environment.
What hurts most about these bugs is that they are the earliest to invade the
system and the last to leave.
2. Feature Bugs:
Specification problems usually create corresponding feature problems.
A feature can be wrong, missing, or superfluous (serving no useful purpose).
A missing feature or case is easier to detect and correct. A wrong feature
could have deep design implications.
Removing features might complicate the software, consume more
resources, and foster more bugs.
3. Feature Interaction Bugs:
Providing correct, clear, implementable and testable feature specifications is
not enough.
Features usually come in groups of related features. The features of each
group and the interactions of features within the group are usually well tested.
The problem is unpredictable interactions between feature groups or even
between individual features. For example, your telephone is provided with call
holding and call forwarding. The interactions between these two features may
have bugs.
Every application has its own peculiar set of features and a much bigger set of
unspecified potential feature interactions, and these result in feature
interaction bugs.
Specification and Feature Bug Remedies:
Most feature bugs are rooted in human-to-human communication
problems. One solution is to use high-level, formal specification
languages or systems.
Such languages and systems provide short-term support but, in the
long run, do not solve the problem.
Short term Support: Specification languages facilitate formalization of
requirements and inconsistency and ambiguity analysis.
Long term Support: Assume that we have a great specification language
that can be used to create unambiguous, complete
specifications with unambiguous, complete tests and consistent test
criteria.
The specification problem has then been shifted to a higher level, but not
eliminated.
Testing Techniques for Functional Bugs: Most functional test techniques (those
based on a behavioral description of the software, such as transaction-flow testing,
syntax testing, domain testing, logic testing, and state testing) are useful in testing
for functional bugs.
2. Structural bugs: Various categories in Structural bugs include:
1. Control and Sequence Bugs:
Control and sequence bugs include paths left out, unreachable code,
improper nesting of loops, incorrect loop-back or loop-termination criteria,
missing process steps, duplicated processing, unnecessary processing,
rampaging GOTOs, ill-conceived (not properly planned) switches, spaghetti
code, and, worst of all, pachinko code.
One saving grace of control-flow bugs is that this area is amenable
(supportive) to theoretical treatment.
Most control-flow bugs are easily tested for and caught in unit testing.
Another source of control-flow bugs is old code: assembly-language and
COBOL code in particular are dominated by control-flow bugs.
Control and sequence bugs at all levels are caught by testing, especially structural
testing, more specifically path testing combined with a bottom line functional test
based on a specification.
2. Logic Bugs:
Logic bugs include those related to misunderstanding how case
statements and logic operators behave singly and in combination.
They also include mis-evaluation of boolean expressions in deeply nested
IF-THEN-ELSE constructs.
If the bugs are part of logical (i.e. boolean) processing not related to control
flow, they are characterized as processing bugs.
If the bugs are part of a logical expression (i.e. in a control-flow statement)
used to direct the control flow, then they are categorized as control-flow
bugs.
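A classic logic bug of this kind is misjudging operator precedence in a boolean expression; the range check below is a contrived example:

```python
# Intended spec: return True only when x lies outside the range [10, 20].
def outside_range_buggy(x):
    # Bug: 'not' negates only the first comparison, so this evaluates as
    # (not (x >= 10)) or (x <= 20): values inside the range are wrongly
    # accepted, and values above the range are wrongly rejected.
    return not x >= 10 or x <= 20

def outside_range_fixed(x):
    # Correct: negate the whole range test (De Morgan's laws).
    return not (10 <= x <= 20)

print(outside_range_buggy(15))  # True  (wrong: 15 is inside the range)
print(outside_range_buggy(25))  # False (wrong: 25 is outside the range)
print(outside_range_fixed(15))  # False
print(outside_range_fixed(25))  # True
```

Logic testing with truth-table-style cases (inside, below, and above the range) exposes such a bug immediately.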
3. Processing Bugs:
Processing bugs include arithmetic bugs, algebraic, mathematical function
evaluation, algorithm selection and general processing.
Examples of processing bugs include: incorrect conversion from one data
representation to another, ignoring overflow, improper use of greater-than-or-
equal, etc.
Although these bugs are frequent (12%), they tend to be caught in good
unit testing.
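One small, concrete processing bug of the "incorrect conversion between data representations" kind is comparing floating-point results for exact equality:

```python
import math

# Processing bug: 0.1 and 0.2 have no exact binary floating-point
# representation, so their sum is not exactly 0.3.
total = 0.1 + 0.2
print(total == 0.3)              # False: the sum is 0.30000000000000004

# Fix: compare within a tolerance instead of exactly.
print(math.isclose(total, 0.3))  # True
```

A unit test that exercises such arithmetic with known values catches this class of bug early.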
4. Initialization Bugs:
Initialization bugs are common. Initialization can be improper or
superfluous.
Superfluous initialization is generally less harmful but can affect performance.
Typical initialization bugs include: forgetting to initialize variables before
first use, assuming that they are initialized elsewhere, and initializing to the
wrong format, representation, or type.
Explicit declaration of all variables, as in Pascal, can reduce some
initialization problems.
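A typical initialization bug is forgetting to re-initialize an accumulator before each use; the row-summing function below is a contrived example:

```python
def row_sums_buggy(rows):
    sums = []
    total = 0                # Bug: initialized once, outside the loop.
    for row in rows:
        for value in row:
            total += value
        sums.append(total)   # Each sum carries the previous rows' residue.
    return sums

def row_sums_fixed(rows):
    sums = []
    for row in rows:
        total = 0            # Fix: re-initialize before each first use.
        for value in row:
            total += value
        sums.append(total)
    return sums

data = [[1, 2], [3, 4]]
print(row_sums_buggy(data))  # [3, 10]  (second sum is wrong)
print(row_sums_fixed(data))  # [3, 7]
```

The bug only shows up from the second row onward, which is why single-input tests miss it and multi-input unit tests catch it.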
5. Data-Flow Bugs and Anomalies:
Most initialization bugs are special cases of data-flow anomalies.
A data-flow anomaly occurs where there is a path along which we
expect to do something unreasonable with data, such as using an
uninitialized variable, attempting to use a variable before it exists,
modifying data and then not storing or using the result, or initializing
twice without an intermediate use.
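The classic data-flow anomalies can be shown in a few lines; a static analyzer or careful review would flag each (the function is a contrived example):

```python
def dataflow_anomalies():
    # Anomaly 1: use before definition (uninitialized variable).
    # count += 1        # would raise UnboundLocalError if uncommented

    # Anomaly 2: define-define without an intermediate use.
    limit = 10          # this value is dead: it is never read...
    limit = 20          # ...because it is immediately overwritten.

    # Anomaly 3: define without use (computing and discarding the result).
    unused = limit * 2  # result is never stored elsewhere or used.

    return limit

print(dataflow_anomalies())  # 20
```

Data-flow testing techniques are designed to select paths that expose exactly these define/use patterns.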
3. Data Bugs:
Data bugs include all bugs that arise from the specification of data
objects, their formats, the number of such objects, and their initial
values.
Data Bugs are at least as common as bugs in code, but they are
often treated as if they did not exist at all.
Code migrates data: Software is evolving towards programs in which
more and more of the control and processing functions are stored in
tables.
Because of this, there is an increasing awareness that bugs in code
are only half the battle and the data problems should be given equal
attention.
Dynamic Data vs. Static Data:
Dynamic data are transitory. Whatever their purpose, their lifetime is
relatively short, typically the processing time of one transaction. A
storage object may be used to hold dynamic data of different types,
with different formats, attributes, and residues.
Dynamic data bugs are due to leftover garbage in a shared
resource. This can be handled in one of three ways: (1) cleanup
after use by the user, (2) common cleanup by the resource
manager, or (3) no cleanup.
Static data are fixed in form and content. They appear in the source
code or database, directly or indirectly, for example as a number, a
string of characters, or a bit pattern.
Compile time processing will solve the bugs caused by static data.
Information, parameter, and control:
Static or dynamic data can serve in one of three roles, or in combination of roles: as
a parameter, for control, or for information.
Content, Structure and Attributes:
Content can be an actual bit pattern, character string, or number put into a
data structure. Content is a pure bit pattern and has no meaning unless it is
interpreted by a hardware or software processor. All data bugs result in the
corruption or misinterpretation of content.
Structure relates to the size, shape, and numbers that describe the data
object, that is, the memory layout used to store the content (e.g. a two-
dimensional array).
Attributes relate to the specification of meaning, that is, the semantics
associated with the contents of a data object (e.g. an integer, an
alphanumeric string, a subroutine). The severity and subtlety of bugs
increase as we go from content to attributes because things get less
formal in that direction.
4. Coding Bugs:
Coding errors of all kinds can create any of the other kinds of bugs.
Syntax errors are generally not important in the scheme of things if the source-
language translator has adequate syntax checking.
If a program has many syntax errors, then we should expect many logic and
coding bugs as well.
Documentation bugs are also considered coding bugs, because they may
mislead maintenance programmers.
5. Interface, Integration, and System Bugs:
Various categories of bugs in Interface, Integration, and System Bugs are:
1. External Interfaces:
The external interfaces are the means used to communicate with the world.
These include devices, actuators, sensors, input terminals, printers, and
communication lines.
The primary design criterion for an interface with outside world should be
robustness.
All external interfaces, human or machine should employ a protocol. The
protocol may be wrong or incorrectly implemented.
Other external interface bugs include: invalid timing or sequence
assumptions related to external signals, misunderstanding external input
or output formats, and insufficient tolerance to bad input data.
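Tolerance to bad input at an external interface can be sketched as defensive parsing; the "sensor_id,value" message format here is invented for illustration:

```python
def parse_reading(line):
    """Parse a 'sensor_id,value' text line from an external device.

    Returns (sensor_id, value) or None for malformed input, instead of
    letting bad data from the outside world crash the system.
    """
    parts = line.strip().split(",")
    if len(parts) != 2:
        return None                 # wrong field count
    sensor_id, raw_value = parts
    if not sensor_id:
        return None                 # empty identifier
    try:
        value = float(raw_value)
    except ValueError:
        return None                 # non-numeric payload
    return sensor_id, value

print(parse_reading("temp1,23.5"))   # ('temp1', 23.5)
print(parse_reading("garbage"))      # None
print(parse_reading("temp1,abc"))    # None
```

Rejecting malformed input explicitly, rather than assuming the protocol is honored, is the robustness the design criterion above calls for.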
2. Internal Interfaces:
Internal interfaces are in principle not different from external interfaces but
they are more controlled.
A good example of internal interfaces is communicating routines.
The external environment is fixed and the system must adapt to it but the
internal environment, which consists of interfaces with other components, can
be negotiated.
Internal interfaces have the same problem as external interfaces.
3. Hardware Architecture:
Bugs related to hardware architecture originate mostly from misunderstanding
how the hardware works.
Examples of hardware architecture bugs: address-generation errors, I/O-device
operation or instruction errors, waiting too long for a response, incorrect interrupt
handling, etc.
The remedy for hardware architecture and interface problems is twofold: (1)
Good Programming and Testing (2) Centralization of hardware interface
software in programs written by hardware interface specialists.
4. Operating System Bugs:
Program bugs related to the operating system are a combination of hardware-
architecture and interface bugs, mostly caused by a misunderstanding of what
the operating system does.
The remedy is to use operating-system interface specialists and to use explicit
interface modules or macros for all operating-system calls.
This approach may not eliminate the bugs but at least will localize them and
make testing easier.
5. Software Architecture:
Software architecture bugs are the kind often called 'interactive'.
Routines can pass unit and integration testing without revealing such bugs.
Many of them depend on load, and their symptoms emerge only when the
system is stressed.
Examples of such bugs: assuming that there will be no interrupts, failure to
block or unblock interrupts, and assuming that memory and registers were
initialized or not initialized.
Careful integration of modules and subjecting the final system to a stress test
are effective methods for catching these bugs.
6. Control and Sequence Bugs (Systems Level): These bugs include: ignored
timing, assuming that events occur in a specified sequence, starting to work on data
before all the data have arrived from disc, waiting for an impossible combination of
prerequisites, and missing, wrong, redundant, or superfluous process steps. The
remedy for these bugs is highly structured sequence control. Specialized, internal
sequence-control mechanisms are helpful.
7. Resource Management Problems:
Memory is subdivided into dynamically allocated resources such as buffer
blocks, queue blocks, task control blocks, and overlay buffers.
External mass storage units, such as discs, are subdivided into memory
resource pools.
Some resource management and usage bugs: required resource not
obtained, wrong resource used, resource already in use, resource
deadlock, etc.
Resource Management Remedies:
A design remedy that prevents bugs is always preferable to a test method
that discovers them.
The design remedy in resource management is to keep the resource
structure simple: the fewest different kinds of resources, the fewest pools, and
no private resource management.
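In a language with deterministic cleanup, the "clean up after use" discipline can be enforced rather than merely remembered. A Python context manager is one sketch of the idea; the buffer pool here is a toy:

```python
from contextlib import contextmanager

class BufferPool:
    """Toy fixed-size pool of buffers (the resource being managed)."""
    def __init__(self, size):
        self.free = [bytearray(64) for _ in range(size)]

    @contextmanager
    def acquire(self):
        if not self.free:
            raise RuntimeError("required resource not obtained")
        buf = self.free.pop()
        try:
            yield buf
        finally:
            buf[:] = bytes(len(buf))   # scrub leftover garbage (residue)
            self.free.append(buf)      # guaranteed release, even on error

pool = BufferPool(size=2)
with pool.acquire() as buf:
    buf[:5] = b"hello"
print(len(pool.free))  # 2: the buffer was scrubbed and returned automatically
```

Centralizing acquisition, scrubbing, and release in one place removes the "resource already in use" and leftover-garbage bugs that ad hoc private resource management invites.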
8. Integration Bugs:
Integration bugs are bugs having to do with the integration of, and with the
interfaces between, working and tested components.
These bugs result from inconsistencies or incompatibilities between
components.
The communication methods involved include data structures, call
sequences, registers, semaphores, communication links, and protocols;
mismatches in any of them result in integration bugs.
Although integration bugs do not constitute a big category (9%), they are an
expensive category, because they are usually caught late in the game and
because they force changes in several components and/or data structures.
9. System Bugs:
System bugs cover all kinds of bugs that cannot be ascribed to a
component or to simple interactions between components, but result from
the totality of interactions between many components such as programs,
data, hardware, and the operating system.
There can be no meaningful system testing until there has been thorough
component and integration testing.
System bugs are infrequent (1.7%) but very important because they are often
found only after the system has been fielded.
6. Test and Test Design Bugs:
Testing: testers have no immunity to bugs. Tests require complicated
scenarios and databases.
They require code or the equivalent to execute and consequently they can
have bugs.
Test criteria: Even if the specification is correct, is correctly interpreted and
implemented, and a proper test has been designed, the criterion by which
the software's behavior is judged may still be incorrect or impossible. So
proper test criteria have to be designed. The more complicated the criteria,
the likelier they are to have bugs.
Remedies: The remedies of test bugs are:
1. Test Debugging: The first remedy for test bugs is testing and debugging the
tests. Test debugging, compared to program debugging, is easier because tests,
when properly designed, are simpler than programs and do not have to make
concessions to efficiency.
2. Test Quality Assurance: Programmers have the right to ask how quality in
independent testing is monitored.
3. Test Execution Automation: The history of software bug removal and prevention
is indistinguishable from the history of programming automation aids. Assemblers,
loaders, compilers are developed to reduce the incidence of programming and
operation errors. Test execution bugs are virtually eliminated by various test
execution automation tools.
4. Test Design Automation: Just as much of software development has been
automated, much test design can be and has been automated. For a given
productivity rate, automation reduces the bug count - be it for software or be it for
tests.
Testing and Design Styles:
Bad designs lead to bugs and are difficult to test; therefore, the bugs remain. Good
designs inhibit bugs before they occur and are easy to test. The two factors are
multiplicative, which explains the productivity difference. The best test techniques
are useless when applied to abominable code; it is sometimes easier to redesign a
bad routine than to attempt to create tests for it. The labor required to produce new
code and a new design is often much less than the labor required to design
thorough tests for an undisciplined, unstructured monstrosity. Good testing works
best on good code and good designs.