
Software Testing

Dr. Rajib Mall
Professor, Department of Computer Science & Engineering
IIT Kharagpur
Faults and Failures
• A program may fail during testing:
▪ A failure is a manifestation of a fault (also called a defect or bug).
▪ The mere presence of a fault may not lead to a failure.
Errors, Faults, Failures
• Programming is human effort-intensive:
▪ Therefore, it is inherently error prone.
• IEEE Std 1044-1993 defined errors and faults as synonyms.
• The IEEE revision of Std 1044 in 2010 introduced finer distinctions:
▪ For more expressive communication, it distinguished between errors and faults.
(Figure: an error or mistake made in the specification or design becomes a fault, defect, or bug in the code, which may manifest as a failure.)
Error Tit-Bits
• Even experienced programmers make many errors:
▪ On average, 50 bugs per 1000 lines of source code.
• Extensively tested software contains:
▪ About 1 bug per 1000 lines of source code.
• Bug distribution by source:
▪ 60% specification/design, 40% implementation.
How are Bugs Reduced?
• Review
• Testing
• Formal specification and verification
• Use of a development process
Cost of Not Adequately Testing
• Can be enormous.
• Example: the Ariane 5 rocket self-destructed 37 seconds after launch.
• Reason: a software exception bug went undetected:
– A conversion from a 64-bit floating-point value to a 16-bit signed integer had caused an exception.
• The floating-point number was larger than 32767.
• Efficiency considerations had led to the disabling of the exception handler.
• Total cost: over $1 billion.
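The failure mode can be illustrated with a minimal sketch (the actual Ariane 5 code was in Ada; the function below is a hypothetical illustration): narrowing a 64-bit float to a 16-bit signed integer is only safe when the value fits in the target range, and an unhandled overflow is exactly what went undetected.

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(x: float) -> int:
    """Narrow a float to a 16-bit signed integer, raising on overflow
    (the condition left unhandled in the Ariane 5 failure)."""
    n = int(x)
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError(f"{x} does not fit in a 16-bit signed integer")
    return n

print(to_int16(123.4))  # 123: fits in the range
# to_int16(40000.0) raises OverflowError: the value exceeds 32767
```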


How to Test?
• Input test data to the program.
• Observe the output:
▪ Check if the program behaved as expected.
Examine Test Result…
• If the program does not behave as expected:
▪ Note the conditions under which it failed (test report).
▪ Later, debug and correct.
Testing Facts
• Consumes the largest effort among all development activities:
▪ Largest manpower among all roles.
▪ Implies more job opportunities.
• About 50% of development effort:
▪ But only about 10% of development time?
▪ How?
Testing Facts
• Testing is getting more complex and sophisticated every year:
▪ Larger and more complex programs
▪ Newer programming paradigms
▪ Newer testing techniques
▪ Test automation
Testing Perception
• Testing is often viewed as not very challenging and is less preferred by novices, but:
▪ Over the years, testing has taken center stage in all types of software development.
▪ “Monkey testing is passé”: a large number of innovations have taken place in the testing area, requiring testers to have good knowledge of test techniques.
▪ Challenges of test automation.
Testing Activities Now Spread Over the Entire Life Cycle
Test How Long?
• One way:
▪ Track the number of bugs detected over time and stop when the detection rate levels off. (Figure: # bugs detected vs. time.)
• Another way:
▪ Seed bugs, then run the test cases.
▪ See if all (or most) seeded bugs are getting detected.
Verification versus Validation
• Verification is the process of determining:
▪ Whether the output of one phase of development conforms to its previous phase.
• Validation is the process of determining:
▪ Whether a fully developed system conforms to its SRS document.
Verification versus Validation
• Verification is concerned with phase containment of errors:
▪ Whereas the aim of validation is that the final product is error free.
Verification and Validation Techniques
• Review
• Simulation
• Unit testing
• Integration testing
• System testing
Verification vs. Validation
• Verification: Are you building it right? Checks whether an artifact conforms to its previous artifact. Done by developers. Static and dynamic activities: reviews, unit testing.
• Validation: Have you built the right thing? Checks the final product against the specification. Done by testers. Dynamic activities: execute the software and check against the requirements.
Testing Levels
4 Testing Levels
• Software is tested at 4 levels:
▪ Unit testing
▪ Integration testing
▪ System testing
▪ Regression testing
Test Levels
• Unit testing
▪ Test each module (unit, or component) independently.
▪ Mostly done by the developers of the modules.
• Integration and system testing
▪ Test the system as a whole.
▪ Often done by a separate testing or QA team.
• Acceptance testing
▪ Validation of system functions by the customer.
Levels of Testing
• What users really need: acceptance testing
• Requirements: system testing
• Design: integration testing
• Code: unit testing
• Maintenance: regression testing
Overview of Activities During System and Integration Testing
• Tester: test suite design; run test cases; check results to detect failures; prepare failure list.
• Developer: debug to locate errors; correct errors.
Quiz 1
• As testing proceeds, more and more bugs are discovered.
▪ How do we know when to stop testing?
• Give examples of the types of bugs detected during:
▪ Unit testing?
▪ Integration testing?
▪ System testing?
Unit Testing
• During unit testing, functions (or modules) are tested in isolation:
▪ What if all modules were tested together (i.e., system testing)?
• It would become difficult to determine which module has the error.
Integration Testing
• After the modules of a system have been coded and unit tested:
▪ Modules are integrated in steps according to an integration plan.
▪ The partially integrated system is tested at each integration step.
Integration and System Testing
• An integration test evaluates a group of functions or classes:
▪ Identifies interface incompatibilities, unexpected parameter values or state interactions, and run-time exceptions.
• A system test tests the working of the entire system.
• Smoke test:
▪ A system test performed daily or several times a week, after every build.
Types of System Testing
• Based on the type of test:
▪ Functionality test
▪ Performance test
• Based on who performs the testing:
▪ Alpha
▪ Beta
▪ Acceptance test
Performance Test
• Determines whether a system or subsystem meets its non-functional requirements:
▪ Response times
▪ Throughput
▪ Usability
▪ Stress
▪ Recovery
▪ Configuration, etc.
User Acceptance Testing
• The user determines whether the system fulfills their requirements:
▪ Accepts or rejects the delivered system based on the test results.
Who Tests Software?
• Programmers:
▪ Unit testing
▪ Test their own or other programmers’ code
• Users:
▪ Usability and acceptance testing
▪ Volunteers are frequently used to test beta versions
• Test team:
▪ All types of testing except unit and acceptance
▪ Develop test plans and strategy
(Figure: V&V across the waterfall phases — feasibility study, requirements analysis, design, coding, testing, maintenance. Review and simulation apply to the early phases; testing by developers happens during design and coding; testing by testers happens during the testing phase.)
Pesticide Effect
• Errors that escape a fault detection technique:
▪ Cannot be detected by further applications of that technique.
(Figure: bugs passing through a series of four filters, each filter representing one fault detection technique.)
Capers Jones' Rule of Thumb
• Each software review, inspection, and test step will find about 30% of the bugs present.
(Capers Jones, in IEEE Computer, 1996)
Pesticide Effect
• Assume we start with 1000 bugs.
• We use 4 fault detection techniques:
▪ Each detects only 70% of the bugs existing at that time.
▪ How many bugs would remain at the end?
▪ 1000 × (0.3)^4 ≈ 8 bugs
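The residual-bug arithmetic above can be sketched directly: each technique leaves behind 30% of the bugs it is given.

```python
def residual_bugs(initial: int, detection_rate: float, rounds: int) -> float:
    """Bugs remaining after `rounds` applications of a technique that
    each remove `detection_rate` of the bugs existing at that time."""
    return initial * (1 - detection_rate) ** rounds

print(residual_bugs(1000, 0.7, 4))  # 1000 * 0.3**4 = about 8 bugs
```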
Quiz
1. When is verification undertaken in the waterfall model?
2. When is testing undertaken in the waterfall model?
3. When is validation undertaken in the waterfall model?
(Figure: waterfall phases — feasibility study, requirements analysis, design, coding, testing, maintenance.)
Quiz: Solution
1. When is verification undertaken in the waterfall model?
Ans: During the early phases — feasibility study, requirements analysis, and design.
2. When is testing undertaken in the waterfall model?
Ans: During the coding and testing phases.
3. When is validation undertaken in the waterfall model?
Ans: During the coding and testing phases.
(Figure: waterfall phases — feasibility study, requirements analysis, and design grouped as verification; coding and testing grouped as validation.)
V Life Cycle Model
V Model
• It is a variant of the waterfall model:
▪ Emphasizes verification and validation (V&V) activities.
▪ V&V activities are spread over the entire life cycle.
• In every phase of development:
▪ Testing activities are planned in parallel with development.
(Figure: the V model. Left arm, top to bottom: project planning; requirements analysis & specification; high-level design; detailed design; coding at the base. Right arm, bottom to top: unit testing; integration testing; system testing; production, operation & maintenance. Each development phase is paired with the test level opposite it.)
V Model Steps
• Planning
• Requirements specification and analysis
• Design
• Integration and testing
• System and acceptance testing
V Model: Strengths
• Starting from the early stages of software development:
▪ Emphasizes planning for verification and validation of the software.
• Each deliverable is made testable.
• Intuitive and easy to use.
V Model: Weaknesses
• Does not support overlapping of phases.
• Does not support iterations.
• Not easy to handle later changes in requirements.
• Does not support any risk handling method.
When to Use the V Model
• A natural choice for systems requiring high reliability.
• Example: embedded control applications:
▪ All requirements are known up-front.
▪ Solution and technology are known.


A Few More Basic Concepts on Testing…
How Many Latent Errors?
• Several independent studies ([Jones], [Schroeder], etc.) conclude:
▪ About 85% of errors get removed by the end of a typical testing process.
▪ Why not more?
▪ All practical test techniques are basically heuristics: they help to reduce bugs, but do not guarantee complete bug removal.
Evolution of Test Automation
• 1960–1990: manual test design and manual execution.
• 1990–2000: manual test design with automated execution (capture and replay, then scripting).
• 2000–: automated test design and execution (model-based testing).
Fault Model
• The types of faults possible in a program.
• Some types can be ruled out:
▪ For example, file-related problems in a program not using files.
Fault Model of an OO Program
(Figure: faults in OO programs are classified into structural faults — procedural faults, traceability faults, and OO faults — and algorithmic faults — incorrect result and inadequate performance.)
Hardware Fault Model
• Essentially only four types:
▪ Stuck-at 0
▪ Stuck-at 1
▪ Open circuit
▪ Short circuit
• Testing is therefore simple:
▪ Devise ways to test for the presence of each.
• Hardware testing is usually fault-based testing.
Test Cases
• Each test case typically tries to establish the correct working of some functionality:
▪ Executes (covers) some program elements.
▪ For certain restricted types of faults, fault-based testing can be used.
Test Data versus Test Cases
• Test data:
▪ Inputs used to test the system.
• Test cases:
▪ Inputs to test the system,
▪ the state of the software, and
▪ the predicted outputs from the inputs.
Test Cases and Test Suites
• A test case is a triplet [I, S, O], where:
▪ I is the data to be input to the system,
▪ S is the state of the system at which the data will be input,
▪ O is the expected output of the system.
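The [I, S, O] triplet can be captured in a small record type; the `TestCase` class and the max-of-two-integers example below are hypothetical illustrations, not part of any standard.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TestCase:
    """A test case as the triplet [I, S, O]."""
    inputs: Any    # I: data to be input to the system
    state: Any     # S: system state at which the data is input
    expected: Any  # O: expected output of the system

# Hypothetical example for a max-of-two-integers function:
tc = TestCase(inputs=(3, 2), state=None, expected=3)
print(tc)
```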
What are Negative Test Cases?
• Purpose:
▪ Help to ensure that the application gracefully handles invalid and unexpected user inputs and does not crash.
• Example:
▪ If the user types a letter in a numeric field, the application should not crash; it should display the message: “incorrect data type, please enter a number…”
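A minimal sketch of the behavior a negative test case checks for (the function name and message wording below are assumptions based on the example above):

```python
def read_quantity(text):
    """Gracefully handle a non-numeric entry in a numeric field
    instead of crashing."""
    try:
        return int(text)
    except ValueError:
        return "incorrect data type, please enter a number..."

print(read_quantity("42"))   # 42
print(read_quantity("abc"))  # the error message, not a crash
```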
Test Cases and Test Suites
• Test a software system using a set of carefully designed test cases:
▪ The set of all test cases is called the test suite.
Test Execution Example: Renew Book
• Test case [I, S, O]:
1. Set the program in the required state: book record created, member record created, book issued.
2. Give the defined input: select the renew book option and request renewal for a further 2-week period.
3. Observe the output:
▪ Compare it to the expected output.
Sample: Recording of Test Case & Results
• Test case number
• Test case author
• Test purpose
• Pre-condition
• Test inputs
• Expected outputs (if any)
• Post-condition
• Test execution history:
▪ Test execution date
▪ Test execution person
▪ Test execution result(s): pass/fail
▪ If failed: failure information, fix status
Test Team: Human Resources
• Test planning: experienced people
• Test scenario and test case design: experienced and test-qualified people
• Test execution: semi-experienced to inexperienced people
• Test result analysis: experienced people
• Test tool support: experienced people
• May include external people:
▪ Users
▪ Industry experts
Why Design Test Cases?
• Exhaustive testing of any non-trivial system is impractical:
▪ The input data domain is extremely large.
• Design an optimal test suite, meaning one that is:
▪ Of reasonable size, and
▪ Uncovers as many errors as possible.


Design of Test Cases
• If test cases are selected randomly:
▪ Many test cases would not contribute to the significance of the test suite,
▪ They would only detect errors that are already detected by other test cases in the suite.
• Therefore, the number of test cases in a randomly selected test suite:
▪ Does not indicate the effectiveness of the testing.
Design of Test Cases
• Testing a system using a large number of randomly selected test cases:
▪ Does not mean that most errors in the system will be uncovered.
• Consider the following example:
▪ Find the maximum of two integers x and y.
Design of Test Cases
• The code has a simple programming error:

if (x > y) max = x;
else max = x; /* bug: should be max = y; */

• The test suite {(x=3, y=2), (x=2, y=3)} can detect the bug.
• The larger test suite {(x=3, y=2), (x=4, y=3), (x=5, y=1)} does not detect the bug.
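A quick sketch confirming the point above: the two-case suite exposes the bug because it contains an input with y > x, while the three-case suite does not.

```python
def buggy_max(x, y):
    # The slide's bug: both branches yield x.
    if x > y:
        return x
    else:
        return x  # should be: return y

def detects_bug(suite):
    """True if any test case in the suite exposes the fault."""
    return any(buggy_max(x, y) != max(x, y) for x, y in suite)

print(detects_bug([(3, 2), (2, 3)]))          # True: (2, 3) exposes the bug
print(detects_bug([(3, 2), (4, 3), (5, 1)]))  # False: x > y in every case
```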
Test Plan
• Before testing activities start, a test plan is developed.
• The test plan documents the following:
▪ Features to be tested
▪ Features not to be tested
▪ Test strategy
▪ Test suspension and stopping criteria
▪ Test effort
▪ Test schedule
Design of Test Cases
• Systematic approaches are required to design an effective test suite:
▪ Each test case in the suite should target different faults.
Testing Strategy
• A test strategy primarily addresses:
▪ Which types of tests to deploy?
▪ How much effort to devote to which type of testing?
• Black-box: usage-based testing (based on customers’ actual usage patterns).
• White-box testing can be guided by black-box testing results.
Consider Past Bug Detection Data…
• Quiz: How would you use this data for planning test effort?
(Figure: share of bugs detected — reviews 40%, unit test 25%, integration test 15%, system test 10%, customer reported 10%.)
Consider Past Bug Detection Data…
• Quiz: How would you use this data for planning test effort?
(Figure: share of problems detected — technique 1: 50%, technique 2: 30%, technique 3: 10%, customer reported: 10%.)
Distribution of Error-Prone Modules
• Customer-reported bugs for Release 1.
• Quiz 6: How would you use this data for planning Release 2 testing?
(Figure: # bugs detected per module, M1 through M6.)
• Defect clustering: a few modules usually contain most of the defects.
Unit Testing
When and Why of Unit Testing?
• Unit testing is carried out:
▪ After coding of a unit is complete and it compiles successfully.
• Unit testing reduces debugging effort substantially.
Why Unit Test?
• Without unit testing:
▪ Errors become difficult to track down.
▪ Debugging cost increases substantially.
Unit Testing
• Testing of individual methods, modules, classes, or components in isolation:
▪ Carried out before integrating with other parts of the software being developed.
• Support required for unit testing:
▪ Driver:
• Simulates the behavior of a function that calls, and possibly supplies some data to, the function being tested.
▪ Stub:
• Simulates the behavior of a function that has not yet been written.
Unit Testing
(Figure: the driver calls the procedure under test, which in turn calls a stub; driver, procedure, and stub may all access nonlocal variables.)
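A minimal sketch of a driver and a stub (all names below are hypothetical): the unit under test depends on a function that has not yet been written, so a stub stands in for it, and a driver supplies the test data and checks the result.

```python
def discount_rate(member_id):
    """Stub: simulates a pricing function that has not yet been written."""
    return 0.10  # canned response

def final_price(price, member_id):
    """Unit under test: calls the (stubbed) dependency."""
    return price * (1 - discount_rate(member_id))

def driver():
    """Driver: calls the unit under test with test data and checks results."""
    assert final_price(100.0, "m-1") == 90.0
    print("unit test passed")

driver()
```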
Quiz
• Unit testing can be considered as which one of the following types of activities?
▪ Verification
▪ Validation
Design of Unit Test Cases
• There are essentially three main approaches to designing test cases:
▪ Black-box approach
▪ White-box (or glass-box) approach
▪ Grey-box approach
Black-Box Testing
• Test cases are designed using only the functional specification of the software:
▪ Without any knowledge of the internal structure of the software.
• Black-box testing is also known as functional testing.
(Figure: input → software → output.)
What is Hard about Black-Box Testing?
• The data domain is large.
• A function may take multiple parameters:
▪ We need to consider the combinations of the values of the different parameters.
What’s So Hard About Testing?
• Consider int check-equal(int x, int y).
• Assuming a 64-bit computer:
▪ Input space = 2^128 value pairs.
• Assuming it takes 10 seconds to key in an integer pair:
▪ It would take on the order of 10^32 years to enter all possible values!
▪ Automatic testing has its own problems!
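The arithmetic behind that estimate, as a quick check:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

pairs = 2 ** 128       # input space for a pair of 64-bit integers
seconds = pairs * 10   # 10 seconds to key in each pair
years = seconds // SECONDS_PER_YEAR
print(f"{years:.1e}")  # about 1.1e+32 years
```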
Solution
• Construct a model of the data domain:
▪ This is called domain-based testing.
▪ Select data based on the domain model.
Black-box Testing
Black-Box Testing
• Considers the software as a black box:
▪ Test data are derived from the specification.
• No knowledge of the code is necessary.
• Also known as:
▪ Data-driven, or
▪ Input/output-driven testing.
• The goal is to achieve the thoroughness of exhaustive input testing:
▪ With much less effort!
(Figure: input → system → output.)
Black-Box Testing
• Scenario coverage
• Equivalence class partitioning
• Special value (risk-based) testing
▪ Boundary value testing
▪ Cause-effect (decision table) testing
▪ Combinatorial testing
▪ Orthogonal array testing
Equivalence Class Testing
Equivalence Class Partitioning
• The input values to a program are:
▪ Partitioned into equivalence classes.
• Partitioning is done such that:
▪ The program behaves in similar ways for every input value belonging to an equivalence class.
▪ At the least, there should be as many equivalence classes as scenarios.
Why Define Equivalence Classes?
• Premise:
▪ Testing the code with any one representative value from an equivalence class:
▪ Is as good as testing it with any other value from that equivalence class.
(Figure: input domain partitioned into classes E1, E2, E3.)
Equivalence Class Partitioning
• How do you identify equivalence classes?
▪ Identify scenarios.
▪ Examine the input data.
▪ Examine the output.
• A few guidelines for determining the equivalence classes can be given…
Guidelines to Identify Equivalence Classes
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
• If an input condition is Boolean, one valid and one invalid class are defined.
• Examples:
▪ Area code: a range — value defined between 10000 and 90000.
▪ Password: a value — a six-character string.
(Jerry Gao, Ph.D., 2002. All Rights Reserved.)
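The range guideline, applied to the area-code example above, gives one valid class and two invalid ones; a minimal sketch testing one representative value per class:

```python
def is_valid_area_code(code):
    """Range condition from the slide: valid between 10000 and 90000."""
    return 10000 <= code <= 90000

# One representative value per equivalence class:
representatives = {
    "valid (10000..90000)":  50000,
    "invalid (below range)": 9999,
    "invalid (above range)": 90001,
}

for name, value in representatives.items():
    print(name, "->", is_valid_area_code(value))
```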
Equivalence Class Partitioning: Example
• Given three sides, determine the type of the triangle:
▪ Isosceles
▪ Scalene
▪ Equilateral, etc.
• Hint: in this case, the scenarios are expressed in terms of the output.
Equivalence Partitioning
• First-level partitioning:
▪ Valid vs. invalid test cases.
(Figure: input domain split into valid and invalid regions.)
Equivalence Partitioning
• Further partition the valid and invalid test cases into equivalence classes.
(Figure: valid and invalid regions each subdivided into equivalence classes.)
Equivalence Partitioning
• Create a test case for at least one value from each equivalence class.
(Figure: one representative value picked from each valid and invalid equivalence class.)
