Unit 6
Software Testing
Syllabus
• Introduction to Software Testing, Principles of
Testing, Testing Life Cycle, Phases of Testing,
Types of Testing, Verification & Validation,
Defect Management, Defect Life Cycle, Bug
Reporting, GUI Testing, Test Management and
Automation.
Software Testing
Testing is the process of exercising a
program with the specific intent of finding
errors prior to delivery to the end user.
Done by S/W developers for smaller projects.
Done by separate Testing team for large projects.
Testing Vs Debugging
Testing:
1. Process to find bugs.
2. Done by testers.
3. Aim is to find the maximum number of bugs.
Debugging:
1. Process to fix bugs.
2. Done by developers.
3. Aim is to make the software bug-free.
Characteristics of a good test
• High probability of finding an error.
• Not redundant
• Neither too simple nor too complex.
Principles of Testing
• Software testing is the procedure of executing software or an application in order to identify defects or bugs. To test an application or software, we follow a set of principles that help make the product defect-free and help test engineers use their effort and time well. In this section we look at the seven essential principles of software testing, one by one:
• Testing shows the presence of defects
• Exhaustive Testing is not possible
• Early Testing
• Defect Clustering
• Pesticide Paradox
• Testing is context-dependent
• Absence of errors fallacy
Testing shows the presence of defects
• The test engineer tests the application to make it as free of bugs and defects as possible, but testing can only show that defects are present; it cannot prove that the software is defect-free. The primary purpose of testing is to find as many unknown bugs as possible using various methods and testing techniques, and every test should be traceable to a customer requirement, so that we find the defects most likely to cause the product to fail to meet the client's needs.
Exhaustive Testing is not possible
• It is practically impossible to test every module and every feature with all effective and non-effective combinations of input data during the actual testing process.
• Exhaustive testing would take boundless effort, and most of that effort would be wasted. Instead, we test according to the importance and risk of each module, because product timelines do not permit us to cover every possible testing scenario.
Early Testing
• Early testing means that all testing activities should start in the early stages of the software development life cycle, from the requirement analysis stage onward. Bugs found at an early stage can be fixed in that stage itself, at a much lower cost than bugs identified in a later phase of the process.
• Testing also requires the requirement specification documents; if the requirements are defined incorrectly, they can be fixed right away rather than being fixed in a later stage, such as the development phase.
Defect clustering
• Defect clustering means that, throughout the testing process, most of the bugs we detect are related to a small number of modules. There are various reasons for this: the modules may be complicated, the code may be complex, and so on.
• Such software follows the Pareto Principle: roughly 80 percent of the problems are found in 20 percent of the modules. This helps us identify the risky modules, but the approach has a limitation: if the same tests are run repeatedly, they will eventually stop finding new defects.
Pesticide paradox
• This principle states that if we execute the same set of test cases again and again over a period of time, those tests will no longer find new bugs in the software or application. To overcome the pesticide paradox, it is important to review the test cases regularly and to write new and different tests that exercise other parts of the application, which helps us find more bugs.
Testing is context-dependent
• This principle states that testing depends on the context of the application. Many kinds of systems exist, such as e-commerce sites and commercial websites, and each application has its own needs, features, and functionality, so each is tested in its own way. To test a given type of application we choose appropriate kinds of testing, techniques, approaches, and methods. Therefore, testing is context-dependent.
Absence of errors fallacy
• If an application has been tested thoroughly and no bugs were found before release, we may call it 99 percent bug-free. But if the application was tested against incorrect requirements, then finding and fixing defects within the given time does not help, because the testing was done against the wrong specification and does not reflect the client's needs. The absence-of-errors fallacy means that identifying and fixing bugs is of no use if the application is unusable or fails to fulfil the client's requirements and needs.
Testing life cycle
1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Environment Setup
5. Test Execution
6. Test Cycle Closure
Phases of Testing
1. Unit
2. Integration
3. Functional
4. System
5. Performance
6. User acceptance
7. Regression
8. Smoke
9. Sanity
10. Usability
11. Recovery
12. Security
13. Load
14. Stress
Unit Testing
(Figure: test cases are applied to the module to be tested, exercising its interface, local data structures, boundary conditions, independent paths, and error-handling paths.)
Unit Test Environment
(Figure: a driver applies test cases to the module under test, whose subordinate modules are replaced by stubs; the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths are exercised and the results are collected.)
Unit Testing
• Algorithms and logic
• Data structures (global and local)
• Interfaces
• Independent paths
• Boundary conditions
• Error handling
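As an illustrative sketch (not from the original slides), the pytest functions below exercise a small invented function through its interface, at its boundary conditions, and along its error-handling path; the `apply_discount` name and its behavior are assumptions made for this example.

```python
import pytest

# Hypothetical module under test: applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percent")
    return round(price * (1 - percent / 100), 2)

def test_interface_typical_value():
    # Independent path: normal computation through the interface.
    assert apply_discount(200.0, 10) == 180.0

def test_boundary_conditions():
    # Boundary conditions: 0% and 100% discount.
    assert apply_discount(50.0, 0) == 50.0
    assert apply_discount(50.0, 100) == 0.0

def test_error_handling_path():
    # Error-handling path: invalid input must raise.
    with pytest.raises(ValueError):
        apply_discount(-1.0, 10)
```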
Integration Testing Strategies
Options:
• the “big bang” approach
• an incremental construction strategy
Why Integration Testing Is Necessary
• One module can have an adverse effect on
another
• Subfunctions, when combined, may not
produce the desired major function
Why Integration Testing Is Necessary
(cont’d)
• Interfacing errors not detected in unit
testing may appear
• Timing problems (in real-time systems) are
not detectable by unit testing
• Resource contention problems are not
detectable by unit testing
Top Down Integration
(Figure: module hierarchy with A at the top; B, F, G below it; then C; then D and E at the bottom.)
1. The top module is tested with stubs.
2. Stubs are replaced one at a time, "depth first"; as new modules are integrated, some subset of tests is re-run.
Top-Down Integration (cont’d)
3. Tests are run as each individual module is
integrated.
4. On the successful completion of a set of
tests, another stub is replaced with a real
module
5. Regression testing is performed to ensure
that errors have not developed as a result
of integrating new modules
Problems with Top-Down Integration
• Many times, calculations are performed in the
modules at the bottom of the hierarchy
• Stubs typically do not pass data up to the higher
modules
• Developing stubs that can pass data up is almost as
much work as developing the actual module
Bottom-Up Integration
• Integration begins with the lowest-level modules,
which are combined into clusters, or builds, that
perform a specific software subfunction
• Drivers (control programs for testing) are written
to coordinate test case input and output
• The cluster is tested
• Drivers are removed and clusters are combined
moving upward in the program structure
Bottom-Up Integration
(Figure: module hierarchy with A at the top; B, F, G below; C; and D, E grouped into a cluster at the bottom.)
• Worker modules are grouped into builds (clusters) and integrated.
• Drivers are replaced one at a time, "depth first."
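To make the roles of drivers and stubs concrete, here is a hedged sketch in Python; the module, the driver, and the stub are all invented for illustration and do not come from the slides.

```python
# Hypothetical low-level "worker" module in a cluster: parses an order record.
def parse_record(line: str) -> dict:
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

# Driver: a small control program that coordinates test-case input and output
# for the cluster, standing in for callers that have not been built yet.
def driver():
    cases = [
        ("bolt, 10", {"name": "bolt", "qty": 10}),
        ("nut, 3", {"name": "nut", "qty": 3}),
    ]
    for line, expected in cases:
        actual = parse_record(line)
        assert actual == expected, f"{line!r}: got {actual}, expected {expected}"
    print("cluster tests passed")

# Stub: a placeholder for a lower-level module that the unit under test calls
# during top-down integration; it simply returns a canned value.
def fetch_price_stub(item_name: str) -> float:
    return 1.0  # canned response instead of a real price lookup

if __name__ == "__main__":
    driver()
```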
Problems with Bottom-Up Integration
• The whole program does not exist
until the last module is integrated
• Timing and resource contention
problems are not found until late in
the process
Sandwich Testing
(Figure: module hierarchy with A at the top; B, F, G below; D and E grouped into a cluster at the bottom.)
• Top modules are tested with stubs.
• Worker modules are grouped into builds (clusters) and integrated.
White Box Testing
• All independent paths within a module are exercised at least once.
• All logical decisions are exercised on their true and false sides.
• All loops are executed at their boundaries and within their operational bounds.
• Internal data structures are exercised to ensure their validity.
White-Box Testing (Glass Box Testing)
... our goal is to ensure that all
statements and conditions have
been executed at least once ...
Basis Path Testing
Fig: Flow graph notation
Independent Program Paths
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
In terms of the flow graph, an independent path must traverse at least one edge that has not been traversed before.
For the example flow graph:
path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-8-9-10-1-11
path 4: 1-2-3-6-7-9-10-1-11
Cyclomatic Complexity
• A quantitative measure of the logical complexity of a program.
• It tells us how many independent paths make up the basis set, i.e., how many paths we must look for.
• Cyclomatic complexity can be computed in three ways:
– The number of regions of the flow graph corresponds to the cyclomatic complexity.
– Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
– Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes.
Cyclomatic Complexity
• For the example above (E = 11 edges, N = 9 nodes, P = 3 predicate nodes):
• V(G) = 11 - 9 + 2 = 4
• V(G) = 3 + 1 = 4
• i.e., V(G) = cyclomatic complexity = 4
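As a hedged sketch, the snippet below computes V(G) = E - N + 2 for a flow graph given as an edge list; the graph here is an invented example (not the figure from the slides), chosen so that V(G) also works out to 4.

```python
# Compute cyclomatic complexity V(G) = E - N + 2 for a flow graph
# given as a list of directed edges (node -> node).
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Invented example flow graph: node 1 is the entry, node 8 is the exit.
edges = [
    (1, 2), (2, 3), (2, 4),   # decision at node 2
    (3, 5), (3, 6),           # decision at node 3
    (5, 7), (6, 7), (4, 7),
    (7, 2),                   # loop back to node 2
    (7, 8),                   # decision at node 7: exit
]

print(cyclomatic_complexity(edges))  # 10 edges - 8 nodes + 2 = 4
# Cross-check with V(G) = P + 1: predicate nodes 2, 3 and 7 give 3 + 1 = 4.
```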
Basis Path Testing
(Figure: flow graph with nodes 1-8.)
Next, we derive the independent paths. Since V(G) = 4, there are four paths:
Path 1: 1, 2, 3, 6, 7, 8
Path 2: 1, 2, 3, 5, 7, 8
Path 3: 1, 2, 4, 7, 8
Path 4: 1, 2, 4, 7, 2, 4, ..., 7, 8
Finally, we derive test cases to exercise these paths.
Deriving a test case
Contd..
• Using the design or code as a foundation, draw a corresponding flow graph.
• Determine the cyclomatic complexity of the resultant flow graph.
• Determine a basis set of linearly independent paths.
• Prepare test cases that will force execution of each path in the basis set (see the sketch below).
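To make these steps concrete, here is a hedged sketch (the function and test names are invented): a tiny function with two simple decisions, so V(G) = P + 1 = 3, and one test case per basis path.

```python
# Invented example: a small function with two simple decisions.
def sign_label(x: int) -> str:
    if x > 0:            # predicate node 1
        return "positive"
    if x < 0:            # predicate node 2
        return "negative"
    return "zero"

# V(G) = 2 + 1 = 3, so the basis set contains three independent paths.
# Prepare one test case that forces execution of each path:
def test_path_positive():
    assert sign_label(5) == "positive"

def test_path_negative():
    assert sign_label(-3) == "negative"

def test_path_zero():
    assert sign_label(0) == "zero"
```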
Graph Matrices
A graph matrix is a square matrix whose size (i.e., number of
rows and columns) is equal to the number of nodes on a flow
graph
Each row and column corresponds to an identified node, and
matrix entries correspond to connections (an edge) between
nodes.
By adding a link weight to each matrix entry, the graph matrix
can become a powerful tool for evaluating program control
structure during testing
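As a hedged sketch, the code below builds a graph matrix for a small invented flow graph, using a link weight of 1 to mark a connection (edge) between nodes, and then derives the number of decisions from the matrix.

```python
# Build a graph matrix: a square matrix whose size equals the number of nodes,
# with entry [i][j] set to the link weight (1 = an edge from node i to node j).
def graph_matrix(nodes, edges):
    index = {n: k for k, n in enumerate(nodes)}
    matrix = [[0] * len(nodes) for _ in nodes]
    for src, dst in edges:
        matrix[index[src]][index[dst]] = 1
    return matrix

nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]

m = graph_matrix(nodes, edges)
for row in m:
    print(row)

# Summing (connections - 1) over each non-empty row and adding 1 gives the
# cyclomatic complexity, tying the matrix back to V(G) = P + 1.
v_g = sum(sum(row) - 1 for row in m if sum(row) > 0) + 1
print(v_g)  # 2: node 2 is the only predicate node in this invented graph
```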
Control Structure Testing
• Condition testing — a test case design
method that exercises the logical conditions
contained in a program module
• Data flow testing — selects test paths of a
program according to the locations of
definitions and uses of variables in the
program.
• Loop Testing
Loop Testing
• Simple loops
• Nested loops
• Concatenated loops
• Unstructured loops
Loop Testing: Simple Loops
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through
the loop
where n is the maximum number
of allowable passes
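As a hedged sketch of these minimum conditions, the tests below exercise an invented function whose loop makes at most n = 5 passes; all names are made up for illustration.

```python
# Invented function under test: sums at most max_passes items from a list.
def sum_first(items, max_passes=5):
    total = 0
    for i, value in enumerate(items):
        if i >= max_passes:   # loop is capped at max_passes iterations
            break
        total += value
    return total

N = 5  # maximum number of allowable passes through the loop

def test_skip_loop_entirely():
    assert sum_first([]) == 0

def test_one_pass_through_loop():
    assert sum_first([1]) == 1

def test_two_passes_through_loop():
    assert sum_first([1, 2]) == 3

def test_m_passes_where_m_less_than_n():
    assert sum_first([1, 2, 3]) == 6          # m = 3 < n

def test_n_minus_1_n_and_n_plus_1_passes():
    assert sum_first([1] * (N - 1)) == N - 1  # n - 1 passes
    assert sum_first([1] * N) == N            # n passes
    assert sum_first([1] * (N + 1)) == N      # attempted n + 1 passes, capped at n
```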
Loop Testing: Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1, and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another
then treat each as a simple loop
else* treat as nested loops
endif*
for example, the final loop counter value of loop 1 is
used to initialize loop 2.
Black-Box Testing
(Figure: black-box tests are derived from the requirements; inputs and events are applied to the system and the resulting output is checked.)
Black-Box Testing
• How is functional validity tested?
• How is system behavior and performance
tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain
input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system
tolerate?
• What effect will specific combinations of data
have on system operation?
Graph-Based Methods
• To understand the objects that are modeled in software and the relationships that connect these objects.
• In this context, we consider the term "objects" in the broadest possible context. It encompasses data objects, traditional components (modules), and object-oriented elements of computer software.
(Fig. (a): graph notation — object nodes (object #1, #2, #3) with node weights (values), connected by directed links with link weights, undirected links, and parallel links.
Fig. (b): example — a "new file" menu select generates a document window (generation time 1.0 sec); further links are labeled "allows editing of", "is represented as", and "contains", relating the window to document text and to attributes such as background color: white and text color: default color or preferences.)
Equivalence Partitioning
(Figure: the program's input domain — user queries, mouse picks, FK (function key) input, prompts, output formats, and data — is partitioned into equivalence classes from which test cases are derived.)
Sample Equivalence Classes
Valid data
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)
Invalid data
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
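As a hedged illustration of choosing one representative value per class, the sketch below partitions the input of an invented validator that accepts integer ages from 18 to 60; the function and its range are assumptions for this example.

```python
# Invented validator: accepts an integer age in the range 18..60 inclusive.
def is_valid_age(value) -> bool:
    return isinstance(value, int) and 18 <= value <= 60

# One representative test value per equivalence class.
def test_valid_class_inside_range():
    assert is_valid_age(35)            # valid: 18 <= age <= 60

def test_invalid_class_below_range():
    assert not is_valid_age(10)        # invalid: age < 18

def test_invalid_class_above_range():
    assert not is_valid_age(75)        # invalid: age > 60

def test_invalid_class_wrong_type():
    assert not is_valid_age("forty")   # invalid: data of the wrong kind
```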
Boundary Value Analysis
(Figure: the same inputs — user queries, mouse picks, FK input, prompts, output formats, and data — with test cases chosen at the boundaries of the input domain and the output domain.)
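Continuing the invented age example from the equivalence-partitioning sketch, boundary value analysis picks values at and just beyond each edge of the valid range.

```python
# Same invented validator: accepts integer ages in the range 18..60 inclusive.
def is_valid_age(value) -> bool:
    return isinstance(value, int) and 18 <= value <= 60

def test_lower_boundary():
    assert not is_valid_age(17)   # just below the lower boundary
    assert is_valid_age(18)       # on the lower boundary
    assert is_valid_age(19)       # just above the lower boundary

def test_upper_boundary():
    assert is_valid_age(59)       # just below the upper boundary
    assert is_valid_age(60)       # on the upper boundary
    assert not is_valid_age(61)   # just above the upper boundary
```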
Comparison Testing
• Used only in situations in which the reliability
of software is absolutely critical (e.g., human-
rated systems)
– Separate software engineering teams develop
independent versions of an application using the
same specification
– Each version can be tested with the same test
data to ensure that all provide identical output
– Then all versions are executed in parallel with real-
time comparison of results to ensure consistency
Verification vs. Validation
• Verification includes checking documents, design, code and programs; validation includes testing and validating the actual product.
• Verification is static testing; validation is dynamic testing.
• Verification does not include the execution of the code; validation includes the execution of the code.
• Methods used in verification are reviews, walkthroughs, inspections and desk-checking; methods used in validation are black-box testing, white-box testing and non-functional testing.
• Verification checks whether the software conforms to its specification; validation checks whether the software meets the requirements and expectations of the customer.
• Verification can find bugs in the early stages of development; validation can only find the bugs that the verification process could not find.
• The target of verification is the application and software architecture and the specification; the target of validation is the actual product.
• Verification is done by the quality assurance team; validation is executed on the software code with the help of the testing team.
• Verification comes before validation; validation comes after verification.
Verification means: Are we building the product right?
Validation means: Are we building the right product?
Debugging:
A Diagnostic Process
The Debugging Process
(Figure: test cases are executed and the results are evaluated; debugging proceeds from the observed results to suspected causes, then to identified causes, and finally to corrections, which are followed by regression tests and new test cases.)
Debugging Effort
(Figure: total debugging effort is split between the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.)
Symptoms & Causes
• The symptom and the cause may be geographically separated.
• The symptom may disappear when another problem is fixed.
• The cause may be due to a combination of non-errors.
• The cause may be due to a system or compiler error.
• The cause may be due to assumptions that everyone believes.
• The symptom may be intermittent (occurring at unpredictable times).
Consequences of Bugs
Damage caused by a bug ranges over a spectrum: mild, annoying (irritating), disturbing, serious, extreme, catastrophic (causing sudden damage), and infectious.
Bug Type
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
Debugging Techniques
• Brute force / testing
• Backtracking
• Cause elimination
Defect Management
• Identification
• Categorization
• Prioritization
• Assignment
• Resolution
• Verification
• Closure
• Management Reporting
Defect Life Cycle
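The slide presents the defect life cycle as a diagram. As a hedged sketch of a typical life cycle (exact state names and transitions vary by organization and defect-tracking tool), the code below models common states and the moves allowed between them.

```python
# Typical defect life cycle states and allowed transitions (assumed for
# illustration; real tools and teams define their own workflow).
DEFECT_LIFE_CYCLE = {
    "New":       ["Assigned", "Rejected", "Deferred", "Duplicate"],
    "Assigned":  ["Open"],
    "Open":      ["Fixed"],
    "Fixed":     ["Retest"],
    "Retest":    ["Verified", "Reopened"],
    "Reopened":  ["Assigned"],
    "Verified":  ["Closed"],
    "Deferred":  ["Assigned"],
    "Rejected":  [],
    "Duplicate": [],
    "Closed":    ["Reopened"],
}

def can_move(current: str, target: str) -> bool:
    """Check whether a defect may move from one status to another."""
    return target in DEFECT_LIFE_CYCLE.get(current, [])

print(can_move("Retest", "Reopened"))  # True: the fix did not work
print(can_move("New", "Closed"))       # False: a defect is not closed directly
```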
GUI Testing/Web Application Testing
• Links
• Forms
• CGI scripts
• Cookies
• Dynamic HTML
• Pop-up windows
• Client-side scripting
• Interactivity
• Time sensitivity
• Layout
• Aesthetics
• Display
• Personalization
• Accessibility
• Readability
Positive Testing
Positive testing is performed on the system by providing valid data as input. It checks whether the application behaves as expected with valid input; in other words, it checks that the application does what it is supposed to do.
For example:
A text box in an application accepts only numbers. Values from 0 to 99999 will be accepted by the system, and any other value should not be accepted. To do positive testing, supply valid input values from 0 to 99999 and check whether the system accepts them.
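A hedged sketch of this positive-testing example, assuming a hypothetical `accepts()` function standing in for the text box's validation logic.

```python
# Hypothetical stand-in for the text box: accepts only numbers from 0 to 99999.
def accepts(value: str) -> bool:
    return value.isdigit() and 0 <= int(value) <= 99999

# Positive testing: valid input values must be accepted.
def test_accepts_valid_numbers():
    for valid in ["0", "1", "500", "99999"]:
        assert accepts(valid)
```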
Negative Testing
Negative testing is performed on the system by providing invalid data as input. It checks whether the application behaves as expected with invalid input; in other words, it checks that the application does not do anything it is not supposed to do.
For example:
For the text box above, negative testing can be performed by entering alphabetic characters from A to Z or from a to z. The text box should either reject these values or display an error message for the invalid input.
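A matching hedged sketch of negative testing for the same hypothetical `accepts()` function: invalid inputs must be rejected.

```python
# Same hypothetical stand-in: accepts only numbers from 0 to 99999.
def accepts(value: str) -> bool:
    return value.isdigit() and 0 <= int(value) <= 99999

# Negative testing: invalid inputs must be rejected (or produce an error).
def test_rejects_invalid_inputs():
    for invalid in ["abc", "XYZ", "-1", "100000", "12a", ""]:
        assert not accepts(invalid)
```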
Test Plans
The goal of test planning is to establish the list of tasks to perform at the time of testing. The main work product is the test plan.
The test plan documents the overall approach to the test. In many ways, the test plan serves as a summary of the test activities that will be performed.
It shows how the tests will be organized, and outlines all of the testers' needs that must be met in order to properly carry out the test.
http://www.stellman-greene.com 67
Test Plan Outline
http://www.stellman-greene.com 68
Test Cases
A test case is a description of a specific interaction that a
tester will have in order to test a single behavior of the
software.
A typical test case is laid out in a table, and includes:
• A unique name and number
• A requirement which this test case is exercising
• Preconditions which describe the state of the
software before the test case (which is often a
previous test case that must always be run before
the current test case)
• Steps that describe the specific steps which make
up the interaction
• Expected Results which describe the expected state
of the software after the test case is executed
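As a hedged illustration of this layout, the snippet below records one invented test case with the fields listed above; every identifier and value is an assumption made for the example.

```python
# An invented test case recorded with the fields described above.
test_case = {
    "name":          "TC-17: Reject out-of-range quantity",
    "requirement":   "REQ-042 (hypothetical requirement identifier)",
    "preconditions": "Order form is open; TC-16 (add item to cart) has been run",
    "steps": [
        "Enter 100000 in the Quantity text box",
        "Click Save",
    ],
    "expected_results": [
        "An error message is displayed",
        "The order is not saved",
    ],
}
```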
http://www.stellman-greene.com 69
Test Cases – Good Example
http://www.stellman-greene.com 70
Test management & Automation
• Test management tools are used to store information on how testing is to be done, to plan testing activities, and to report the status of QA activities.
• They maintain and plan both manual and automated testing.
• They provide a collaborative environment intended to make test automation efficient.
Test Driven Development