
MODULE 2
CONTENTS

 Functional Testing
 Boundary Value Testing (BVT)
Boundary Value Analysis
Robustness Testing
(Robust) Worst Case Testing
Special Value Testing
Test Cases
 Equivalence Class Testing
Weak Normal and Strong Normal Equivalence Class Testing
Weak Robust and Strong Robust Equivalence Class Testing
Test Cases
 Decision Table Based Testing
Decision Table
Test Cases
OVERVIEW

Any program can be considered to be a function:
 Program inputs form its domain
 Program outputs form its range

 Boundary value analysis is the best-known functional testing technique.
 The objective of functional testing is to use knowledge of the functional nature of a program to identify test cases.
 Historically, functional testing has focused on the input domain, but it is a good supplement to consider test cases based on the range as well.
BOUNDARY VALUE ANALYSIS

 Boundary value analysis focuses on the boundary of the input space to identify test cases.
 The rationale behind boundary value analysis is that errors tend to occur near the extreme values of an input variable.
 Programs written in non-strongly typed languages are more appropriate candidates for boundary value testing.
 In our discussion we will assume a program P accepting two inputs y1 and y2 such that a ≤ y1 ≤ b and c ≤ y2 ≤ d.
Valid Input for Program P
Value Selection in Boundary Value Analysis

The basic idea in boundary value analysis is to select input variable values at their:
 Minimum
 Just above the minimum
 A nominal value
 Just below the maximum
 Maximum
Single Fault Assumption

 Boundary value analysis is also augmented by the single fault assumption principle: “failures rarely occur as the result of the simultaneous occurrence of two (or more) faults.”
 In this respect, boundary value analysis test cases can be obtained by holding the values of all but one variable at their nominal values, and letting that one variable assume its extreme values.
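Under the single fault assumption, boundary value test cases for a program like P can be generated mechanically. The following is a minimal illustrative sketch (the function names and ranges are assumptions, not part of the original example):

```python
# Minimal sketch of basic boundary value analysis under the
# single fault assumption; ranges are illustrative placeholders.

def boundary_values(lo, hi):
    """min, min+, nominal, max- and max for one integer variable."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def bva_test_cases(ranges):
    """Hold all variables at nominal except one, which takes each
    of its five boundary values; yields 4n + 1 distinct cases."""
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(nominals)}                 # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for value in boundary_values(lo, hi):
            case = list(nominals)
            case[i] = value
            cases.add(tuple(case))
    return sorted(cases)

# Program P with a <= y1 <= b and c <= y2 <= d, here a=1, b=100, c=1, d=200
print(len(bva_test_cases([(1, 100), (1, 200)])))   # 9 cases = 4*2 + 1
```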
BOUNDARY VALUE ANALYSIS FOR PROGRAM P
Example Test Case for Boundary Value Analysis
Generalizing Boundary Value Analysis

 The basic boundary value analysis can be generalized in two ways:
 By the number of variables: (4n + 1) test cases for n variables
 By the kinds of ranges of the variables:
 Programming-language dependent
 Bounded discrete
 Unbounded discrete (no upper or lower bound clearly defined)
 Logical variables
Limitations of Boundary Value Analysis

 Boundary value analysis works well when the program to be tested is a function of several independent variables that represent bounded physical quantities.
 Boundary value analysis selects test data with no consideration of the function of the program, nor of the semantic meaning of the variables.
 We can also distinguish between physical and logical types of variables (e.g. temperature, pressure, or speed versus PIN numbers, telephone numbers, etc.).
Robustness Testing
Robustness testing is a simple extension of boundary value analysis. In addition to the five boundary value analysis values of a variable, we add a value slightly greater than the maximum (max+) and a value slightly less than the minimum (min-).
 The main value of robustness testing is to force attention on exception handling.
 In some strongly typed languages, values beyond the predefined range will cause a run-time error.
 The choice is between a weakly typed language with exception handling and a strongly typed language with explicit logic to handle out-of-range values.
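As a sketch (again with assumed integer ranges), robustness testing only changes the per-variable value set; substituting the seven-value list below into the earlier generator yields 6n + 1 test cases instead of 4n + 1:

```python
def robust_values(lo, hi):
    """The five BVA values plus min- and max+, which deliberately
    step outside the valid range to exercise exception handling."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]
```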
Robustness Test Case for program P
Worst Case Testing
 In worst-case testing we reject the single fault assumption, which means we are interested in what happens when more than one variable has an extreme value.
 Considering that we have five different values per variable in boundary value analysis, we now take the Cartesian product of these possible values for 2, 3, …, n variables.
 In this respect we can have 5^n test cases for n input variables.
 The best application of worst-case testing is where physical variables have numerous interactions and failure of the program is costly.
 Worst-case testing can be further augmented by robust worst-case testing (i.e. adding the slightly out-of-bounds values to the five already considered, giving seven values per variable and 7^n test cases).
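A sketch of worst-case generation as a Cartesian product; the robust flag adds the two out-of-range values, giving 7^n instead of 5^n cases (ranges here are illustrative):

```python
from itertools import product

def worst_case_tests(ranges, robust=False):
    """Cartesian product of per-variable boundary values:
    5**n cases, or 7**n with the out-of-range values included."""
    per_variable = []
    for lo, hi in ranges:
        values = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
        if robust:
            values = [lo - 1] + values + [hi + 1]
        per_variable.append(values)
    return list(product(*per_variable))

print(len(worst_case_tests([(1, 100), (1, 200)])))                # 25 = 5**2
print(len(worst_case_tests([(1, 100), (1, 200)], robust=True)))   # 49 = 7**2
```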
Worst Case Testing for Program P
Robust-Worst Case Testing for Program P
Special Value Testing

 Special value testing is probably the most widely practiced form of functional testing; it is the most intuitive and the least uniform.
 It utilizes domain knowledge and engineering judgment about the program’s “soft spots” to devise test cases.
 It is also called ad hoc testing.
 Even though special value testing is highly subjective in how test cases are generated, it often results in a set of test cases that is more effective in revealing faults than the test sets generated by boundary value methods—testimony to the craft of software testing.
Examples
Test Cases for the Triangle Problem
For sides declared to lie between 1 and 200, the boundary (robust) test values are {0, 1, 2, 100, 199, 200, 201}.

Key Differences:
Number of Variables at Boundary:
• Normal BVA: Tests one variable at its boundary while others remain typical.
• Worst-Case BVA: Tests all variables at their boundaries in different
combinations.
Number of Test Cases:
• Normal BVA results in fewer test cases since you test each variable
individually.
• Worst-Case BVA results in more test cases since you're testing the cross-
product of all boundary values.

Test Cases for Next Date Function
• In worst-case testing, we generate all combinations of the boundary values. This gives five test values for each variable (minimum, just above minimum, nominal, just below maximum, maximum), resulting in 5×5×5 = 125 test cases.
• Worst-case testing generates a comprehensive set of test cases by covering every possible boundary combination of the variables, ensuring that the NextDate function is tested against all edge cases and extreme input conditions.
Examples of Worst-Case Test Values:
Day = 1, Month = 1, Year = 1900 (lowest boundary for all inputs)
Day = 31, Month = 12, Year = 2100 (highest boundary for all inputs)
Day = 0, Month = 6, Year = 1950 (invalid day, normal month, mid-range year: a robust worst-case value)
Day = 15, Month = 0, Year = 1950 (valid day, invalid month: a robust worst-case value)
Day = 29, Month = 2, Year = 2020 (valid leap-year case)

The Commission Problem
• Rifle salespersons in the Arizona Territory sold rifle locks,
stocks, and barrels made by a gunsmith in Missouri
• Lock = $45.00, stock = $30.00, barrel = $25.00
• Each salesperson had to sell at least one complete rifle per
month ($100)
• The most one salesperson could sell in a month was 70
locks, 80 stocks, and 90 barrels
• Each salesperson sent a telegram to the Missouri company
with the total order for each town (s)he visits
• 1 ≤ towns visited ≤ 10 per month
• Commission: 10% on sales up to $1000, 15% on the next
$800, and 20% on any sales in excess of $1800
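The commission schedule translates directly into code; a minimal sketch of the function under test (the names are illustrative, not from the original):

```python
LOCK_PRICE, STOCK_PRICE, BARREL_PRICE = 45.00, 30.00, 25.00

def commission(locks, stocks, barrels):
    """10% on the first $1000 of sales, 15% on the next $800,
    and 20% on anything over $1800."""
    sales = LOCK_PRICE * locks + STOCK_PRICE * stocks + BARREL_PRICE * barrels
    if sales <= 1000:
        comm = 0.10 * sales
    elif sales <= 1800:
        comm = 100 + 0.15 * (sales - 1000)       # 100 = 10% of first $1000
    else:
        comm = 100 + 120 + 0.20 * (sales - 1800) # 120 = 15% of next $800
    return sales, comm

print(commission(1, 1, 1))      # (100.0, 10.0): one complete rifle
print(commission(10, 10, 10))   # (1000.0, 100.0): the first threshold
```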

What You Should Test:
• Test near $1000: try different combinations of locks, stocks, and barrels that result in sales just below $1000, exactly $1000, and just above $1000. This helps ensure the system calculates commissions correctly when crossing that threshold.
• Test near $1800: similarly, try combinations that give sales just below, exactly at, and just above $1800. This checks that the change in commission percentage is handled properly when sales reach $1800.
• By testing around these threshold points, you can make sure the commission
calculation works as expected at the key points where the percentage changes,
without needing to test every single possible combination of locks, stocks, and
barrels.

Example Test Cases Using Output Range Values

[Figure: the valid input space for locks (up to 70), stocks (up to 80), and barrels (up to 90), showing where the $1000 and $1800 sales thresholds cut through it; for example, $1000 is reached by 22.2 locks, 33.3 stocks, or 40 barrels alone, and $1800 by 40 locks, 60 stocks, or 72 barrels.]
Output Boundary Value Test Cases
Case #  Locks  Stocks  Barrels  Sales  Comm.   Comment
1       1      1       1        100    10      min
2       10     10      9        975    97.5    border-
3       10     9       10       970    97      border-
4       9      10      10       955    95.5    border-
5       10     10      10       1000   100     border
6       10     10      11       1025   103.75  border+
7       10     11      10       1030   104.5   border+
8       11     10      10       1045   106.75  border+
• The goal is to find combinations of input values (locks, stocks, barrels) that
stress the system right around the threshold points (e.g., slightly below or
above $1000, $1800).
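One simple way to find such combinations is a brute-force scan of the valid order quantities (at most 70 locks, 80 stocks, 90 barrels); a sketch, using the prices from the problem statement:

```python
def near_threshold(target, tolerance=5):
    """Scan valid order quantities for sales within `tolerance`
    dollars of a threshold such as $1000 or $1800."""
    hits = []
    for locks in range(1, 71):
        for stocks in range(1, 81):
            for barrels in range(1, 91):
                sales = 45 * locks + 30 * stocks + 25 * barrels
                if abs(sales - target) <= tolerance:
                    hits.append((locks, stocks, barrels, sales))
    return hits

print(near_threshold(1000)[:2])   # [(1, 1, 37, 1000), (1, 2, 36, 1005)]
```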

Output Special Value Test Cases

Case #  Locks  Stocks  Barrels  Sales  Comm.   Comment
1       10     11      9        1005   100.75  border+
2       18     17      19       1795   219.25  border-
3       18     19      17       1805   221     border+
Guidelines for Boundary Value Testing
• With the exception of special value testing, the test methods based on the boundary values of a program are the most rudimentary.
• Issues in producing satisfactory test cases using boundary value testing:
– Truly independent variables versus dependent variables
– Normal versus robust values
– Single fault versus multiple fault assumption
• Boundary value analysis can also be applied to the output range of a program (e.g. error messages) and to internal variables (e.g. loop control variables, indices, and pointers).
Equivalence Class Testing
• The use of equivalence class testing has two
motivations:
– Sense of complete testing
– Avoid redundancy
• Equivalence classes form a partition of a set; that is, a collection of mutually disjoint subsets whose union is the entire set.
• Two important implications for testing:
1. The fact that the entire set is represented provides a
form of completeness
2. The disjointedness assures a form of non-redundancy
Equivalence Classes
• The idea of equivalence class testing is to identify
test cases by using one element from each
equivalence class.
• If the equivalence classes are chosen wisely this
greatly reduces the potential redundancy among
test cases.
• The key point in equivalence class testing is the
choice of the equivalence relation that determines
the classes (partitions).
Types of Equivalence testing
• There are four types of equivalence testing:
– Weak Normal equivalence class testing
– Strong Normal equivalence class testing
– Weak Robust equivalence class testing
– Strong Robust equivalence class testing
Weak Normal Equivalence Class Testing
• Weak equivalence class testing is accomplished by using one variable
from each equivalence class in a test case (single fault assumption).
• The minimum number of test cases is equal to the number of classes in
the partition with the largest number of subsets.
Strong Normal Equivalence Class Testing
• Strong equivalence class testing is based on the Cartesian Product
of the partition subsets (multiple fault assumption).
• Generates more test cases which test for any interaction between
the representative values from each of the subsets.
Weak Robust Equivalence Class Testing
• Valid Inputs
– For valid inputs, use one value from each valid class (as in what we have
called weak equivalence class testing). In this context, each input in these
test cases will be valid.
• Invalid Inputs
– For invalid inputs, a test case will have one invalid value and the remaining
values will be valid. In this context, a “single failure” should cause the test
case to fail.
Strong Robust Equivalence Class Testing
• “Robust” comes from the consideration of invalid values, and “strong” comes from the Cartesian product of all the equivalence classes.
Example
• For example consider a program with two input
variables size and weight:
– valid ranges:
S1: 0 < size < 200
W1: 0 < weight < 1500
– corresponding invalid ranges might be:
S2: size ≥ 200
S3: size ≤ 0
W2: weight ≥ 1500
W3: weight ≤ 0
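These classes translate mechanically into test inputs. The sketch below builds the weak normal, weak robust, and strong robust cases; the representative values (100, 750, etc.) are arbitrary picks from each class, chosen to match the table that follows:

```python
from itertools import product

VALID = {"size": 100, "weight": 750}                 # from S1 and W1
INVALID = {"size": [200, -1], "weight": [1500, -1]}  # S2/S3 and W2/W3

# Weak normal: a single test with every variable valid (WR1)
weak_normal = [dict(VALID)]

# Weak robust: one invalid value at a time, rest valid (WR2-WR5)
weak_robust = []
for variable, bad_values in INVALID.items():
    for bad in bad_values:
        case = dict(VALID)
        case[variable] = bad
        weak_robust.append(case)

# Strong robust: Cartesian product over all classes, valid and invalid
strong_robust = [dict(zip(("size", "weight"), combo))
                 for combo in product([100, 200, -1], [750, 1500, -1])]

print(len(weak_robust), len(strong_robust))   # 4 and 9 (= 3 x 3 classes)
```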
Test Cases Example (Traditional View)

Test Case  size  weight  Expected Output
WR1        100   750     whatever it should be
WR2        100   -1      invalid input
WR3        100   1500    invalid input
WR4        -1    750     invalid input
WR5        200   750     invalid input

Equivalence Test Cases for the Triangle Problem (Output Domain)
• In the problem statement we note that there are four possible outputs:
– Not a Triangle
– Isosceles
– Equilateral
– Scalene
• We can use these to identify output (range) equivalence classes:

R1 = {<a, b, c> | the triangle with sides a, b, c is equilateral}
R2 = {<a, b, c> | the triangle with sides a, b, c is isosceles}
R3 = {<a, b, c> | the triangle with sides a, b, c is scalene}
R4 = {<a, b, c> | sides a, b, c do not form a triangle}

• These classes yield the following simple set of test cases:


Sample Test Cases Based on Output Domain

Test Case  a  b  c  Expected Output
WN1        5  5  5  Equilateral
WN2        2  2  3  Isosceles
WN3        3  4  5  Scalene
WN4        4  1  2  Not a Triangle

Table 1: Weak and Strong Normal class test cases


Table 2: Weak Robust class test cases
Table 3: Strong Robust class test cases
Equivalence Test Cases for the Triangle Problem (Input Domain)
• If we base the equivalence classes on the input domain, we will obtain a larger
set of test cases. We can define the sets:

D1= {<a,b,c> | a=b=c}


D2= {<a,b,c> | a=b, a≠c}
D3= {<a,b,c> | a=c, a≠b}
D4= {<a,b,c> | b=c, a≠b}
D5= {<a,b,c> | a≠b, a≠c, b≠c}

• As a separate property we can apply the triangle inequality to check whether the inputs form a triangle at all:
D6= {<a, b, c> | a ≥ b+c}
D7= {<a, b, c> | b ≥ a+c}
D8= {<a, b, c> | c ≥ a+b}

• If we wanted, we could also split D6 into:
D6′ = {<a, b, c> | a = b + c} and
D6′′ = {<a, b, c> | a > b + c}
Equivalence Test Cases for the NextDate Problem (Input Domain)
• NextDate is a function of three variables (month, day, and year) with ranges defined as:

1 ≤ month ≤ 12
1 ≤ day ≤ 31
1812 ≤ year ≤ 2012

• We will examine below the valid and invalid equivalence classes, and strong and weak equivalence class testing.
Traditional Test Cases
• The valid equivalence classes are:
M1= {month | 1 ≤ month ≤ 12}
D1= {day | 1 ≤ day ≤ 31}
Y1= {year | 1812 ≤ year ≤ 2012}

The invalid equivalence classes are:


M2= {month | month < 1}
M3= {month | month > 12}
D2= {day | day < 1}
D3= {day | day > 31}
Y2= {year | year < 1812}
Y3= {year | year > 2012}

These classes yield the following test cases, where the valid inputs are
mechanically selected from the approximate middle of the valid range:

Traditional Test Cases
Case ID Month Day Year Expected
Output
TE1 6 15 1912 6/16/1912
TE2 -1 15 1912 Invalid
TE3 13 15 1912 Invalid
TE4 6 -1 1912 Invalid
TE5 6 32 1912 Invalid
TE6 6 15 1811 Invalid
TE7 6 15 2013 Invalid
Summary of Test Case Strategy
Weak Normal and Strong Normal Test Cases:
• Focus on validating the basic functionality of the NextDate function without
testing for invalid inputs.
• Example: Testing a date like 6/15/2000 to ensure the next date is computed
correctly.
Weak Robust and Strong Robust Test Cases:
• Include both valid and invalid inputs, ensuring the function handles erroneous
data gracefully.
• Example: Testing for a negative day or month should produce an appropriate
error message.

Choice of Equivalence Classes
• If we choose the equivalence relation more carefully, the resulting equivalence classes will be more useful:

M1= {month | month has 30 days}
M2= {month | month has 31 days}
M3= {month | month is February}
D1= {day | 1 ≤ day ≤ 28}
D2= {day | day = 29}
D3= {day | day = 30}
D4= {day | day = 31}
Y1= {year | year = 1900}
Y2= {year | 1812 ≤ year ≤ 2012 AND year ≠ 1900 AND year mod 4 = 0}
Y3= {year | 1812 ≤ year ≤ 2012 AND year mod 4 ≠ 0}
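These classes mirror the calendar logic itself. A minimal sketch of a NextDate implementation (assuming the standard Gregorian leap-year rule, under which 1900 is a common year) makes the expected outputs in the tables below easy to check:

```python
def is_leap(year):
    """Gregorian rule: divisible by 4, except centuries not
    divisible by 400 (so 1900 is common, 2000 is leap)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def next_date(month, day, year):
    days_in_month = [31, 29 if is_leap(year) else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31]
    if not (1 <= month <= 12 and 1812 <= year <= 2012
            and 1 <= day <= days_in_month[month - 1]):
        return "ERROR"
    if day < days_in_month[month - 1]:
        return (month, day + 1, year)   # increment day
    if month < 12:
        return (month + 1, 1, year)     # reset day, increment month
    return (1, 1, year + 1)             # reset day and month, increment year

print(next_date(6, 15, 1912))   # (6, 16, 1912), as in test case TE1
print(next_date(2, 29, 1900))   # ERROR: 1900 is not a leap year (SE28)
```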

Strong Equivalence Test Cases
CASE ID Month Day Year Output
SE1 6 14 1900 6/15/1900
SE2 6 14 1912 6/15/1912
SE3 6 14 1913 6/15/1913
SE4 6 29 1900 6/30/1900
SE5 6 29 1912 6/30/1912
SE6 6 29 1913 6/30/1913
SE7 6 30 1900 7/1/1900
SE8 6 30 1912 7/1/1912
SE9 6 30 1913 7/1/1913
SE10 6 31 1900 ERROR
SE11 6 31 1912 ERROR
SE12 6 31 1913 ERROR
SE13 7 14 1900 7/15/1900
SE14 7 14 1912 7/15/1912
SE15 7 14 1913 7/15/1913
SE16 7 29 1900 7/30/1900
SE17 7 29 1912 7/30/1912
SE18 7 29 1913 7/30/1913

Strong Equivalence Test Cases (continued)
CASE ID Month Day Year Output
SE19 7 30 1900 7/31/1900
SE20 7 30 1912 7/31/1912
SE21 7 30 1913 7/31/1913
SE22 7 31 1900 8/1/1900
SE23 7 31 1912 8/1/1912
SE24 7 31 1913 8/1/1913
SE25 2 14 1900 2/15/1900
SE26 2 14 1912 2/15/1912
SE27 2 14 1913 2/15/1913
SE28 2 29 1900 ERROR
SE29 2 29 1912 3/1/1912
SE30 2 29 1913 ERROR
SE31 2 30 1900 ERROR
SE32 2 30 1912 ERROR
SE33 2 30 1913 ERROR
SE34 2 31 1900 ERROR
SE35 2 31 1912 ERROR
SE36 2 31 1913 ERROR

Commission Problem
Weak Robust Test Cases (Valid + Invalid Inputs):
• These test cases ensure that the system handles both valid and invalid input
classes.
Strong Robust Test Cases (Combinations of Invalid Inputs):
• These test cases test combinations of invalid inputs, ensuring the system's
robustness in handling errors.
Output Range-Based Test Cases (Testing Sales and Commission Ranges):
• These test cases focus on ensuring the commission function outputs correct
results for various sales ranges based on the values of locks, stocks, and
barrels.

Guidelines and Considerations
• The traditional form of equivalence testing is generally not as thorough as weak equivalence testing, which in its turn is not as thorough as strong equivalence testing
• If error conditions are a priority, we can extend strong equivalence testing to include invalid classes
• Equivalence class testing is appropriate when input data is defined in terms of ranges and sets of discrete values
• The logic of the functionality of the program can help define the equivalence classes
• Strong equivalence class testing presumes that the variables are independent; when they are not, it generates some “error” test cases
• Equivalence class testing can be strengthened by combining it with boundary value (domain) testing:
– reuse the work done to define the ranges
– equivalence class testing alone does not consider elements at class boundaries
– expand the equivalence test cases to include BVT-like requirements (domain testing)
Decision Table-Based Testing
Decision Tables - General
 Decision tables have been used to represent and analyze complex logical relationships since the 1960s.
 Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform.
 A decision table has four portions: the left-most columns form the stub portion, and to the right is the entry portion.
 The condition portion is noted by c’s, and the action portion is noted by a’s.
 Thus, a decision table consists of four areas: the condition stub, the condition entry, the action stub, and the action entry.
Decision Tables - Structure

Conditions (condition stub)  |  Condition alternatives (condition entry)
Actions (action stub)        |  Action entries

• Each condition corresponds to a variable, relation, or predicate
• The possible values for conditions are listed among the condition alternatives:
  • Boolean values (true/false, yes/no, 0/1): Limited Entry Decision Tables
  • Several values: Extended Entry Decision Tables
  • Don’t-care values
• Each action is a procedure or operation to perform
• The entries specify whether (or in what order) the action is to be performed
 A column in the entry portion is a rule.
 Rules indicate which actions, if any, are taken for the circumstances indicated in the condition portion of the rule.
• The condition portion of a decision table is a truth table that has been rotated 90°.
Decision Table - Example

Conditions                            R1 R2 R3 R4 R5 R6 R7 R8
Printer does not print                Y  Y  Y  Y  N  N  N  N
A red light is flashing               Y  Y  N  N  Y  Y  N  N
Printer is unrecognized               Y  N  Y  N  Y  N  Y  N
Actions
Check the power cable                 X (rule 3)
Check the printer-computer cable      X (rules 1, 3)
Ensure printer software is installed  X (rules 1, 3, 5, 7)
Check/replace ink                     X (rules 1, 2, 5, 6)
Check for paper jam                   X (rules 2, 4)

Printer Troubleshooting
Decision Tables - Usage

 The use of the decision-table model is applicable when:
 the specification is given or can be converted to a decision table;
 once a rule is satisfied and the action selected, no other rule need be examined;
 the order of executing actions in a satisfied rule is of no consequence.
Decision Tables - Issues
Before using the tables, ensure:

• Rules must be complete:
every combination of predicate truth values, plus default cases, is explicit in the decision table
• Rules must be consistent:
every combination of predicate truth values results in only one action or set of actions
Test Case Design

To identify test cases with decision tables, we interpret conditions as inputs and actions as outputs.
Sometimes conditions end up referring to equivalence classes of inputs, and actions refer to major functional processing portions of the item being tested.
The rules are then interpreted as test cases.
One helpful style of producing a decision table is to add an action to show when a rule is logically impossible.
• Mutually exclusive conditions are situations
where only one condition can be true at any
time. For instance, if you have conditions
like "Is it Day?" and "Is it Night?", both
can't be true simultaneously.

Decision Table with Mutually Exclusive Conditions

Conditions        R1  R2  R3
C1: month in M1?  T   –   –
C2: month in M2?  –   T   –
C3: month in M3?  –   –   T
a1
a2
a3
Decision table with rule count
Stub                R1  R2  R3  R4  R5  R6  R7  R8  R9  R10  R11
C1: a < b+c?        F   T   T   T   T   T   T   T   T   T    T
C2: b < a+c?        –   T   T   T   T   T   T   T   T   F    T
C3: c < a+b?        –   T   T   T   T   T   T   T   T   –    F
C4: a = b?          –   T   T   T   T   F   F   F   F   –    –
C5: a = c?          –   T   T   F   F   T   T   F   F   –    –
C6: b = c?          –   T   F   T   F   T   F   T   F   –    –
Rule count          32  1   1   1   1   1   1   1   1   16   8
A1: not a triangle  X (rules R1, R10, R11)
A2: scalene         X (rule R9)
A3: isosceles       X (rules R5, R7, R8)
A4: equilateral     X (rule R2)
A5: impossible      X (rules R3, R4, R6)
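The eleven rules collapse into straightforward code. The sketch below reads conditions c1-c6 exactly as in the table; the function name is illustrative:

```python
def classify_triangle(a, b, c):
    """Direct reading of the decision table: c1-c3 check the
    triangle property, c4-c6 the pairwise equalities."""
    if not (a < b + c and b < a + c and c < a + b):
        return "Not a Triangle"          # rules R1, R10, R11
    equalities = (a == b) + (a == c) + (b == c)
    if equalities == 3:
        return "Equilateral"             # rule R2
    if equalities == 1:
        return "Isosceles"               # rules R5, R7, R8
    if equalities == 0:
        return "Scalene"                 # rule R9
    return "Impossible"                  # rules R3, R4, R6 cannot occur

print(classify_triangle(4, 1, 2))   # Not a Triangle (test case DT1)
print(classify_triangle(2, 2, 3))   # Isosceles (test case DT7)
```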
Decision Table with Rule Counts for Mutually Exclusive Conditions

Conditions        R1  R2  R3
C1: month in M1?  T   –   –
C2: month in M2?  –   T   –
C3: month in M3?  –   –   T
Rule count        4   4   4
a1
a2
a3
Expanded Version of the Previous Table

Conditions        1.1 1.2 1.3 1.4 2.1 2.2 2.3 2.4 3.1 3.2 3.3 3.4
C1: month in M1?  T   T   T   T   T   T   F   F   T   T   F   F
C2: month in M2?  T   T   F   F   T   T   T   T   T   F   T   F
C3: month in M3?  T   F   T   F   T   F   T   F   T   T   T   T
Rule count        1   1   1   1   1   1   1   1   1   1   1   1
a1
Mutually Exclusive Conditions with Impossible Rules

Conditions        1.1 1.2 1.3 1.4 2.3 2.4 3.4 (F,F,F)
C1: month in M1?  T   T   T   T   F   F   F   F
C2: month in M2?  T   T   F   F   T   T   F   F
C3: month in M3?  T   F   T   F   T   F   T   F
Rule count        1   1   1   1   1   1   1   1
A1: impossible    X (rules 1.1, 1.2, 1.3, 2.3, and F,F,F)
• A redundant decision table has duplicate or
unnecessary rules. Identifying and
removing these helps simplify decision-
making and clarifies the outcomes. In the
example, we removed the redundant rule,
leading to a clearer decision table.

A Redundant Decision Table

Conditions  1–4  5  6  7  8  9
C1          T    F  F  F  F  T
C2          –    T  T  F  F  F
C3          –    T  F  T  F  F
a1          X    X  X  –  –  X
a2          –    X  X  X  –  –
a3          X    –  X  X  X  X
• An inconsistent decision table contains
rules that conflict with one another, leading
to ambiguity in decision-making. This
means that the same set of conditions can
lead to different actions, which can cause
confusion.

An Inconsistent Decision Table

Conditions  1–4  5  6  7  8  9
C1          T    F  F  F  F  T
C2          –    T  T  F  F  F
C3          –    T  F  T  F  F
a1          X    X  X  –  –  –
a2          –    X  X  X  –  X
a3          X    –  X  X  X  –
Test Cases for the Triangle Problem
Case ID  a  b  c  Expected Output
DT1 4 1 2 Not a Triangle
DT2 1 4 2 Not a Triangle
DT3 1 2 4 Not a Triangle
DT4 5 5 5 Equilateral
DT5 ? ? ? Impossible
DT6 ? ? ? Impossible
DT7 2 2 3 Isosceles
DT8 ? ? ? Impossible
DT9 2 3 2 Isosceles
DT10 3 2 2 Isosceles
DT11 3 4 5 Scalene
Decision Table for NextDate (First Attempt)
• Let us consider the following equivalence classes:

M1= {month | month has 30 days}
M2= {month | month has 31 days}
M3= {month | month is February}
D1= {day | 1 ≤ day ≤ 28}
D2= {day | day = 29}
D3= {day | day = 30}
D4= {day | day = 31}
Y1= {year | year = 1900}
Y2= {year | 1812 ≤ year ≤ 2012 AND year ≠ 1900 AND year mod 4 = 0}
Y3= {year | 1812 ≤ year ≤ 2012 AND year mod 4 ≠ 0}
Decision Table for NextDate (1)

Conditions     1   2   3   4   5   6   7   8
C1: month in   M1  M1  M1  M1  M2  M2  M2  M2
C2: day in     D1  D2  D3  D4  D1  D2  D3  D4
C3: year in    –   –   –   –   –   –   –   –
Rule count     3   3   3   3   3   3   3   3
Actions
A1: impossible       X (rule 4)
A2: increment day    X (rules 1, 2, 5, 6, 7)
A3: reset day        X (rules 3, 8)
A4: increment month  X (rule 3), ? (rule 8)
A5: reset month      ? (rule 8)
A6: increment year   ? (rule 8)
Decision Table for NextDate (2)

Conditions     9   10  11  12  13  14  15  16
C1: month in   M3  M3  M3  M3  M3  M3  M3  M3
C2: day in     D1  D1  D1  D2  D2  D2  D3  D4
C3: year in    Y1  Y2  Y3  Y1  Y2  Y3  –   –
Rule count     1   1   1   1   1   1   3   3
Actions
A1: impossible       X (rules 12, 14, 15, 16)
A2: increment day    X (rule 10)
A3: reset day        X (rules 9, 11, 13)
A4: increment month  X (rules 9, 11, 13)
A5: reset month
A6: increment year
Decision Table for NextDate (Third Attempt)
• Let us consider the following equivalence classes:

M1= {month | month has 30 days}
M2= {month | month has 31 days}
M3= {month | month is December}
M4= {month | month is February}
D1= {day | 1 ≤ day ≤ 27}
D2= {day | day = 28}
D3= {day | day = 29}
D4= {day | day = 30}
D5= {day | day = 31}
Y1= {year | year is a leap year}
Y2= {year | year is a common year}
Decision Table for NextDate (1)

Conditions     1   2   3   4   5   6   7   8   9   10
C1: month in   M1  M1  M1  M1  M1  M2  M2  M2  M2  M2
C2: day in     D1  D2  D3  D4  D5  D1  D2  D3  D4  D5
C3: year in    –   –   –   –   –   –   –   –   –   –
Actions
A1: impossible       X (rule 5)
A2: increment day    X (rules 1, 2, 3, 6, 7, 8, 9)
A3: reset day        X (rules 4, 10)
A4: increment month  X (rules 4, 10)
A5: reset month
A6: increment year
Decision Table for NextDate (2)

Conditions     11  12  13  14  15  16  17  18  19  20  21  22
C1: month in   M3  M3  M3  M3  M3  M4  M4  M4  M4  M4  M4  M4
C2: day in     D1  D2  D3  D4  D5  D1  D2  D2  D3  D3  D4  D5
C3: year in    –   –   –   –   –   –   Y1  Y2  Y1  Y2  –   –
Actions
A1: impossible       X (rules 20, 21, 22)
A2: increment day    X (rules 11, 12, 13, 14, 16, 17)
A3: reset day        X (rules 15, 18, 19)
A4: increment month  X (rules 18, 19)
A5: reset month      X (rule 15)
A6: increment year   X (rule 15)
Guidelines and Observations
• Decision Table testing is most appropriate for programs
where
– there is a lot of decision making
– there are important logical relationships among input variables
– There are calculations involving subsets of input variables
– There are cause and effect relationships between input and output
– There is complex computation logic (high cyclomatic complexity)

• Decision tables do not scale up very well

• Decision tables can be iteratively refined

FAULT BASED TESTING
• Fault based testing uses a fault model directly to hypothesize potential faults in a program under test, and to create or evaluate test suites based on their efficacy in detecting those hypothetical faults.
•Fault Model: Testers make an educated guess about the kinds of
common mistakes that might be in the program (e.g., off-by-one errors,
wrong conditions).
•Hypothesizing Faults: Based on those guesses, testers come up with
test cases designed specifically to check for those mistakes.
•Test Creation: They create tests to see if the program has any of those
guessed faults.
•Evaluating Tests: The tests are then run, and if they find the guessed
faults, the tests are considered effective.

Overview
• The basic concept of fault based testing is to select test cases that would distinguish the program under test from alternative programs that contain hypothetical faults.
• It is approached by modifying the program under test to actually produce the hypothetical faulty programs.
•Alternative Programs (Faulty Programs): These are versions of the
original program where small mistakes have been purposely introduced.
These mistakes represent possible faults that might occur in the real
program, like a misplaced condition or an incorrect calculation.
•Test Cases: The goal is to select or design test cases that can detect
these faulty programs by showing that they behave differently from the
original, correct program. If a test case causes the faulty program to fail but
the correct program to pass, it’s an effective test.
•Mutation Testing (Producing Faulty Programs): Often, the program
under test is modified (mutated) to intentionally introduce small errors (like
changing an operator or a condition). These modified versions are called
"mutants." The test suite is then used to "kill" these mutants by identifying
the difference between the original and faulty versions.

Fault seeding
• Fault seeding is a technique for evaluating
the effectiveness of a testing process. One
or more faults are deliberately introduced
into a code base, without informing the
testers.
• Fault seeding can be used to evaluate the
thoroughness of a test suite, or for selecting
test cases to augment a test suite, or to
estimate the number of faults in a program

Assumptions in Fault Based Testing
• The effectiveness of fault based testing depends on:
– the quality of the fault model, and
– some basic assumptions about the relation of the seeded faults to faults that might actually be present.
• Seeded faults are small syntactic changes. For example:
– replacing one variable reference by another in an expression
– changing a comparison from < to <=
• We may hypothesize that these are representative of faults actually present in the program.
Example of Seeded Faults:
• Replacing one variable reference by another: In a code
expression, if a variable a is mistakenly replaced by b, this
seeded fault simulates a potential real mistake.
• Changing a comparison operator: Modifying x < y to x <=
y could represent a common boundary condition error.
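Seeded faults of this kind can be generated mechanically. Below is a toy sketch of one mutation operator (relational operator replacement) working on source text; production tools operate on syntax trees or bytecode instead:

```python
def mutate(source, pattern=" < ", replacement=" <= "):
    """Yield one mutant per occurrence of `pattern`, each with a
    single occurrence replaced: one small syntactic change apiece."""
    start = 0
    while (i := source.find(pattern, start)) != -1:
        yield source[:i] + replacement + source[i + len(pattern):]
        start = i + 1

program = "def max2(x, y):\n    return y if x < y else x\n"
for mutant in mutate(program):
    print(mutant)   # one mutant, with 'x < y' changed to 'x <= y'
```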

Competent Programmer Hypothesis
• An assumption that the program under test is “close to” a correct program.
• If the program under test has an actual fault, we may hypothesize that it differs from another, corrected program by only a small textual change.
• If so, then we need merely distinguish the program from all such variants to ensure detection of all such faults.
Coupling effect hypothesis
• Sometimes, an error of logic will result in much more complex differences in program text.
• This may not invalidate fault based testing with a simpler fault model, provided that test cases sufficient for detecting the simpler faults are also sufficient for detecting the more complex fault.
• This is known as the coupling effect.
• The coupling effect hypothesis can be justified by appeal to a more plausible hypothesis about the interaction of faults: a complex change is equivalent to several smaller changes in program text.
• Fault based testing can guarantee fault detection only if the competent programmer hypothesis and the coupling effect hypothesis hold.
• These testing techniques can be useful even if we decline to take the leap of faith required to fully accept their underlying assumptions.
• It is essential to recognize that these techniques, and any inferences about software quality based on fault based testing, depend on the quality of the fault model.
• This also implies that developing better fault models, based on hard data about real faults rather than guesses, is a good investment of effort.
Fault based testing: Terminology
• Original program
– The program unit (e.g., a C function or Java class) to be tested
• Program location
– A region in the source code
– Typical locations are statements, arithmetic and Boolean expressions, and procedure calls
• Alternate expression
– Source code text that can be legally substituted for the text at a program location
– A substitution is legal if the resulting program is syntactically correct (i.e., it compiles without errors)
• Alternate program
– A program obtained from the original program by substituting an alternate expression for the text at some program location
• Distinct behavior of an alternate program R for a test t
– The behavior of an alternate program R is distinct from the behavior of the original program P for a test t if R and P produce a different result for t, or if the output of R is not defined for t
• Distinguished set of alternate programs for a test suite T
– A set of alternate programs is distinguished by T if each alternate program in the set can be distinguished from the original program by at least one test in T
Mutation analysis
• Mutation analysis is the most common form of software fault-based testing.
• A fault model is used to produce hypothetical faulty programs by creating variants of the program under test.
• Variants are created by “seeding” faults, i.e., by making a small change to the program under test following a pattern in the fault model.
• The patterns for changing program text are called mutation operators.
• Each variant program is called a mutant.
• Mutants should be acceptable as faulty programs.
• Mutant programs that are rejected by a compiler, or that fail almost all tests, are not good models of the faults we seek to uncover with systematic testing.
Mutation Analysis: Terminology
• Original program under test
– The program or procedure (function) to be tested
• Mutant
– A program that differs from the original program in one syntactic element (e.g., a statement, a condition, a variable, a label)
• Distinguished mutant
– A mutant that can be distinguished from the original program by executing at least one test case
• Equivalent mutant
– A mutant that cannot be distinguished from the original program
• Mutation operator
– A rule for producing a mutant program by syntactically modifying the original program
Valid mutant
• A mutant is valid if it is syntactically correct.
Useful mutant
• A mutant is useful if, in addition to being valid, its behavior differs from the behavior of the original program for no more than a small subset of program test cases.
Fault based adequacy criteria
Given a program and a test suite T, mutation analysis consists of the following steps:
1. Select mutation operators
– the specific classes of faults to be seeded
2. Generate mutants
– apply the mutation operators to the original program
3. Distinguish mutants
– execute the original program and each generated mutant with the test cases in T
– a mutant is killed when it can be distinguished from the original program
Consider, for example, a test suite TS = {1U, 1D, 2U, 2D, 2M, End, Long} and mutants Mi, Mj, Mk, Ml:
• TS kills Mj
• Mi, Mk, and Ml are not distinguished from the original program by any test in TS
• Mutants not killed by a test suite are live
• A mutant can remain live for two reasons:
– the mutant can be distinguished from the original program, but the test suite T does not contain a test case that distinguishes them
– the mutant cannot be distinguished from the original program by any test case (an equivalent mutant)
• Given a set of mutants SM and a test suite T, the fraction of non-equivalent mutants killed by T measures the adequacy of T with respect to SM.
Variations on mutation analysis
• When mutants are killed based on the outputs produced by execution of test cases, the process is known as strong mutation.
• Each mutant must be compiled and executed with each test case until it is killed.
• The time and space required for compiling all mutants and for executing all test cases against each mutant may be impractical.
• The computational effort required for mutation analysis can be reduced by decreasing the number of mutants generated and the number of test cases to be executed.
• Weak mutation analysis decreases the number of test executions by killing mutants when they produce a different intermediate state, rather than waiting for a difference in the final result or observable program behavior.
• With weak mutation, a single program can be seeded with many faults.
• A “meta-mutant” program is divided into segments containing original as well as mutated source code, with a mechanism to select which segments to execute.
• Mutation analysis can be used either to judge the thoroughness of a test suite or to guide selection of additional test cases.
• Statistical mutation analysis
– Statistical sampling may keep the sample of mutants small enough to permit careful examination of equivalent mutants.
– The limitation is that partial coverage is meaningful only to the extent that the generated mutants are a valid statistical model of the occurrence frequencies of actual faults.
• Fault seeding can be used statistically in another way: to estimate the number of faults remaining in a program.
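As an illustrative (invented) calculation: if 25 faults are seeded and the testing process finds 20 of them (80%) along with 120 unseeded faults, then, assuming seeded and real faults are equally detectable, the program contains roughly 120 / 0.8 = 150 real faults, of which about 30 remain undetected.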
