SE 433/333 Software Testing
& Quality Assurance
Dennis Mumaugh, Instructor
dmumaugh@[Link]
Office: CDM, Room 428
Office Hours: Tuesday, 4:00 – 5:30
April 18, 2017
Administrivia
Comments and feedback
Announcements
Solutions to Assignments 1 and 2 have been posted to D2L
Hints
Look at the Java documentation. For example:
[Link]
Look at the examples ([Link]) mentioned on the reading list and
provided in D2L.
In solving a problem, try getting the example working first.
SE 433 – Class 4
Topic:
Black Box Testing, JUnit Part 2
Reading:
Pezze and Young: Chapters 9-10
JUnit documentation: [Link]
An example of parameterized test: [Link] in D2L
See also reading list
Assignments 4 and 5
Assignment 4: Parameterized Test
The objective of this assignment is to develop a parameterized JUnit
test and run JUnit tests using Eclipse IDE. [See [Link]]
Due Date: April 25, 2017, 11:59pm
Assignment 5: Black Box Testing – Part 1: Test Case Design
The objective of this assignment is to design test suites using black-box
techniques to adequately test the programs specified below. You may use
any combination of black-box testing techniques to design the test cases.
Due date: May 2, 2017, 11:59pm
Thought for the Day
“More than the act of testing, the act of designing tests is one of the
best bug preventers known. The thinking that must be done to create a
useful test can discover and eliminate bugs before they are coded –
indeed, test-design thinking can discover and eliminate bugs at every
stage in the creation of software, from conception to specification, to
design, coding and the rest.” – Boris Beizer
Assignment 1 Lessons Learned
The assignment requirements may have some unwritten
items:
Being able to test the program
Programs need to “defend” themselves from
Illegal input
Boundary conditions
Assignment 1 Part I, Grading rubric
Part I: 10 Points
Compiles and runs and handles standard tests
Handles invalid cases -1
Not a triangle (a + b < c)
Degenerate triangle (a + b = c), aka line
Handles illegal input -1
Negative numbers
Floating point numbers
Extra large integers and MAXINT [See above triangle test].
Text
If the code handles most of the special conditions, give it the point.
Assignment 1 Part I, Grading rubric
For triangles:
10 + 20 = 30 hence it is a line
12 + 8 < 30 hence it is not a triangle
Negative length triangles are bad.
Also triangles with large values such as 2147483647. This number
is 2^31 - 1, i.e., Integer.MAX_VALUE: “A constant holding the maximum
value an int can have, 2^31 - 1.”
» The problem is that in testing for illegal triangles one must check a + b
<= c, and we get arithmetic overflow. [See example on next
slide.]
» I got bit the first time myself.
The problem in using 21474836471 is simply that it is too large to be an int at all.
Assignment 1 Lessons Learned
Consider:
public class Triangle {
    public static void main(String[] args) {
        int a = 2147483647;
        int b = 2147483647;
        int c = 2147483647;
        if (a + b <= c) // check a + b <= c; here a + b overflows
        {
            System.out.println("Not a triangle");
        } else {
            System.out.println("Triangle");
        }
    }
}
Result: Not a triangle
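One way to avoid the overflow (a sketch, not part of the original assignment) is to widen the operands to long before comparing, so the 32-bit sum cannot wrap around:

    long sum = (long) a + b;   // 64-bit arithmetic: no overflow for any int inputs
    if (sum <= c) {
        System.out.println("Not a triangle");
    } else {
        System.out.println("Triangle");
    }

With a = b = c = 2147483647 this correctly prints “Triangle”.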
Case Study – Knight Capital
High Frequency Trading (HFT)
Case Study – Knight Capital: High Frequency Trading (HFT)
The Knight Capital Group is an American global financial
services firm.
Through its high-frequency trading algorithms, Knight was the largest
trader in U.S. equities,
with a market share of 17.3% on NYSE and 16.9% on NASDAQ.
Case Study – Knight Capital
Aug 1, 2012.
In the first 45 minutes, Knight Capital's computers executed a series
of unusually large automatic orders.
“… spit out duplicate buy and sell orders, jamming the market with
high volumes of trades that caused the wild swings in stock prices.”
By the end of day: $460 million loss
“Trading Program Ran Amok, With No ‘Off’ Switch”
In two days, the company's market value plunged by 75%
Case Study – Knight Capital: What Happened?
"Zombie Software" Blamed for Knight Capital Trading Snafu
A new algorithmic trading program had just been
installed, and began operation on Aug 1.
A dormant legacy program was somehow "inadvertently
reactivated"
Once activated, the dormant system started multiplying
stock trades by one thousand
Sent 4 million orders when attempting to fill just 212
customer orders
“Knight’s staff looked through eight sets of software
before determining what happened.”
Case Study – Knight Capital: The Investigation and Findings
SEC launched an investigation in Nov 2012.
Findings:
Code changes in 2005 introduced defects. Although
the defective function was not meant to be used, it
was kept in.
New code was deployed in late July 2012. The defective
function was triggered under the new rules, and was unable
to recognize when orders had been filled.
Ignored system-generated warning emails.
Inadequate controls and procedures for code
deployment and testing.
Charges filed in Oct 2013
Knight Capital settled the charges for $12 million
Regression Test
Software Evolution
Change happens throughout the software
development life cycle.
Before and after delivery
Change can happen to every aspect of
software
Changes can affect unchanged areas
» break code, introduce new bugs
» uncover previously unknown bugs
» reintroduce old bugs
Regression Test
Testing of a previously tested program
following modification to ensure that new
defects have not been introduced or
uncovered in unchanged areas of the
software, as a result of the changes made.
It should be performed whenever the
software or its environment is changed.
It applies to testing at all levels.
Regression Test
Keep a test suite
Use the test suite after every change
Compare output with previous tests
Understand all changes
If new tests are needed, add to the test suite.
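A minimal sketch of one way to package such a suite in JUnit 4 (the member class names here are hypothetical):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({ TriangleTest.class, CalculatorTest.class })
    public class RegressionSuite {
        // intentionally empty: the annotations above collect the tests
        // to re-run after every change
    }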
Test Driven Development
(TDD)
Test Early
Testing should start as early as possible
design test cases
Testing early has several advantages
independence from design & code
discover inconsistencies and
incompleteness of the specifications
serve as a compendium of the
specifications
Test Driven Development
Test driven development (TDD) is one of the cornerstones
of agile software development
Agile, iterative, incremental development
Small iterations, a few units
Verification and validation carried out for each iteration.
Design & implement test cases before implementing the functionality
Run automated regression test of whole system continuously
Process of Test Driven Development
Tests should be written first (before any code)
Execute all test cases => all fail
Implement some functions
Execute all test cases => some pass
Repeat implement and re-execute all test cases
Until all test cases => pass
Refactoring, to improve design & implementation
re-execute all test cases => all pass
Every time changes are made
re-execute all test cases => all pass
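A sketch of one red-green cycle, using the Calculator.factorial example that appears later in this lecture:

    // 1. Red: write the test first; it fails because factorial does not exist yet.
    @Test
    public void factorialOfZeroIsOne() {
        assertEquals(1, new Calculator().factorial(0));
    }

    // 2. Green: implement just enough to pass, then refactor and re-run all tests.
    public long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }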
Black Box Testing
Black Box View
The system is viewed as a black box
Provide some input
Observe the output
Functional Testing: A.k.a.: Black Box Testing
Derive test cases from the functional specifications
functional refers to the source of information
not to what is tested
Also known as:
specification-based testing (from specifications)
black-box testing (no view of the code)
Functional specification = description of intended
program behavior
either formal or informal
Systematic vs. Random Testing
Random (uniform) testing
Pick possible inputs randomly and uniformly
Avoids designer bias
But treats all inputs as equally valuable
Systematic (non-uniform) testing
Select inputs that are especially valuable
Choose representatives
Black box testing is systematic testing
Why Not Random Testing?
Non-uniform distribution of defects
Example:
Program: solve a quadratic equation: ax^2 + bx + c = 0
Defect: incomplete implementation logic
does not properly handle the special cases b^2 - 4ac = 0 and
a = 0
Failing values are sparse in the input space —
needles in a very big haystack.
Random sampling is unlikely to choose a = 0.0 and b = 0.0
Systematic Partition of Input Space
Failures are sparse in the space of possible inputs ...
... but dense in some parts of the space.
[Figure: the space of possible input values, with small regions of failing inputs ("Failure") inside a mostly failure-free ("No failure") space]
If we systematically test some cases from each part, we will
include the dense parts.
Functional testing is one way of drawing lines to isolate regions
with likely failures.
The Partition Principle
Exploit knowledge in problem domain to choose samples for
testing
Focus on “special” or trouble-prone regions of the input
space
Failures are sparse in the whole input space ...
... but we may find regions in which they are dense
(Quasi*-) Partition testing
Separates the input space into classes whose union is
the entire space
*Quasi because the classes may overlap
The Partition Principle
Desirable case for partitioning
Input values that lead to failures are dense (easy
to find) in some classes of input space
Sampling each class in the quasi-partition selects at least one
input value that leads to a failure, revealing the defect.
This is seldom guaranteed; it depends on experience-based
heuristics.
Black Box Testing
Exploiting the functional specification
Uses the specification to partition the input
space
e.g., specification of “roots” program suggests division
between cases with zero, one, and two real roots
Test each partition, and boundaries between
partitions
No guarantees, but experience suggests failures often lie
at the boundaries (as in the “roots” program)
Why Black Box Testing?
Early.
can start before code is written
Effective.
find some classes of defects, e.g., missing logic
Widely applicable
any description of program behavior as spec
at any level of granularity, from module to system testing.
Economical
less expensive than structural (white box) testing
The base-line technique for designing test cases
Early Black Box Testing
Program code is not necessary
Only a description of intended behavior is needed
Even incomplete and informal specifications can be used
» Although precise, complete specifications lead to
better test suites
Early test design has side benefits
Often reveals ambiguities and inconsistency in spec
Useful for assessing testability
» And improving test schedule and budget by improving
spec
Useful as an explanation of the specification
» or in the extreme case (as in XP), test cases are the
spec
Functional versus Structural: Classes of faults
Different testing strategies (functional, structural, fault-based,
model-based) are most effective for different classes of faults
Functional testing is best for missing logic faults
A common problem: Some program logic was simply
forgotten
Structural (code-based) testing will never focus on code
that isn’t there!
Functional vs. Structural Test
Functional test is applicable in testing at all
granularity levels:
Unit test (from module interface spec)
Integration test (from API or subsystem spec)
System test (from system requirements spec)
Regression test (from system requirements + bug
history)
Structural test is applicable in testing relatively
small parts of a system:
Unit test
Steps: From specification to test cases
1. Decompose the specification
If the specification is large, break it into independently testable
features to be considered in testing
2. Select representatives
Representative values of each input, or
Representative behaviors of a model
• Often simple input/output transformations don’t describe a
system. We use models in program specification, in program
design, and in test design
3. Form test specifications
• Typically: combinations of input values, or model behaviors
4. Produce and execute actual tests
From specification to test cases
[Figure: Functional Specifications → Independently Testable Features → Representative Values / Model → Test Case Specifications → Test Cases]
An Example: Postal Code Lookup
Input: ZIP code (5-digit US Postal code)
Output: List of cities
What are some representative values to test?
Example: Representative Values
Simple example with one input, one output
Correct zip code
» with 0, 1, or many cities
Malformed zip code
» empty; 1-4 characters; 6 characters; very long
» non-digit characters
» non-character data
Note the prevalence of boundary values (0 cities, 6 characters) and error cases
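A hedged sketch of a few of these cases as JUnit tests, assuming a hypothetical ZipLookup.cities(String) that returns a List<String> of city names and throws IllegalArgumentException on malformed input:

    @Test
    public void validZipReturnsCities() {
        assertFalse(new ZipLookup().cities("60604").isEmpty());  // well-formed 5-digit code
    }

    @Test(expected = IllegalArgumentException.class)
    public void sixCharacterZipIsRejected() {
        new ZipLookup().cities("606040");  // boundary case: one character too long
    }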
Summary
Functional testing, i.e., generation of test cases from
specifications is a valuable and flexible approach to
software testing
Applicable from very early system specs right through module
specifications
(quasi-)Partition testing suggests dividing the input space
into (quasi-)equivalent classes
Systematic testing is intentionally non-uniform to address special
cases, error conditions, and other small but trouble-prone parts of the input space
Dividing a big haystack into small, hopefully uniform piles where the
needles might be concentrated
Basic Techniques of
Black Box Testing
Single Defect Assumption
Failures are rarely the result of the
simultaneous effects of two (or more)
defects.
Functional Testing Concepts
The four key concepts in functional testing are:
Precisely identify the domain of each input and each output
variable
Select values from the data domain of each variable having
important properties
Consider combinations of special values from different input
domains to design test cases
Consider input values such that the program under test
produces special values from the domains of the output
variables
Developing Test Cases
Consider: Test cases for an input box accepting numbers
between 1 and 1000
If you are testing an input box accepting numbers from 1 to 1000,
there is no use in writing a thousand test cases for all 1000 valid
input numbers, plus other test cases for invalid data.
Using the equivalence partitioning method, the test cases
can be divided into three sets of input data, called
classes. Each test case is a representative of its
class.
We can divide our test cases into three equivalence classes
of valid and invalid inputs.
Developing Test Cases
1. One input data class with all valid inputs. Pick a single
value from the range 1 to 1000 as a valid test case. If you
select other values between 1 and 1000, the result is going
to be the same, so one test case for valid input data should be
sufficient.
2. An input data class with all values below the lower limit, i.e., any
value below 1, as an invalid input test case.
3. Input data with any value greater than 1000, to represent the
third, invalid input class.
So using equivalence partitioning you have categorized all
possible test cases into three classes. Test cases with other
values from any class should give you the same result.
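A minimal sketch in JUnit 4, assuming a hypothetical InputBox class with an accepts(int) validator:

    @Test
    public void oneRepresentativePerEquivalenceClass() {
        InputBox box = new InputBox();    // hypothetical class under test
        assertTrue(box.accepts(500));     // class 1: valid input, 1..1000
        assertFalse(box.accepts(0));      // class 2: below the lower limit
        assertFalse(box.accepts(1001));   // class 3: above the upper limit
    }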
Equivalence Classes
Equivalence classes are the sets of values in a (quasi-)
partition of the input, or output domain
Values in an equivalence class cause the program to
behave in a similar way:
failure or success
Motivation:
gain a sense of complete testing and avoid redundancy
First determine the boundaries … then determine the
equivalencies
Determining Equivalence Classes
Look for ranges of numbers or values
Look for memberships in groups
Some may be based on time
Include invalid inputs
Look for internal boundaries
Don’t worry if they overlap with each other —
better to be redundant than to miss something
However, test cases will easily overlap with
boundary value test cases
Selecting Data Points
Determining equivalence classes for each
input variable or field
Single input variable
Normal test
» Select one data point from each valid equivalence
class
Robustness test
» Include invalid equivalence class
Selecting Data Points
Multiple input variables
Weak normal test:
» Select one data point from each valid equivalence
class
Strong normal test:
» Select one data point from each combination of (the
cross product of) the valid equivalence classes
Weak/strong robustness test:
» Include invalid equivalence classes
How many test cases do we need?
Example of Selecting Data Points
Suppose a program has 2 input variables, x and y
Suppose x can lie in 3 valid equivalence classes:
a ≤ x < b
b ≤ x < c
c ≤ x ≤ d
Suppose y can lie in 2 valid equivalence classes:
e ≤ y < f
f ≤ y ≤ g
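A quick worked count for this example: a weak normal test needs only
max(3, 2) = 3 test cases, since each case can cover one x class and one
y class at the same time, while a strong normal test needs the full cross
product, 3 × 2 = 6 test cases.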
Weak Normal Test
Every normal, i.e., valid, equivalence class of every input
variable is tested in at least one test case.
A representative value of each normal equivalence class of
each input variable appears in at least one test case.
Economical, requires few test cases if the values are
selected prudently.
Complete.
Weak Normal Test
[Figure: the x-y input space with x classes bounded at a, b, c, d; one sample point in each valid x class and each valid y class]
Strong Normal Test
Every combination of normal equivalence classes of every
input variable is tested in at least one test case.
More comprehensive.
Requires more test cases.
May not be practical for programs with a large number of input
variables.
Strong Normal Test
[Figure: the x-y input space with one sample point in every combination of valid x and y classes]
Weak Robustness Test
Add robustness test cases to weak normal test suite.
Every invalid equivalence class of every input variable is
tested in at least one robustness test case.
Each robustness test case includes only one invalid input
value.
No combination of invalid input values.
Weak Robustness Test
[Figure: the weak normal sample points plus one point outside each valid range of x and y]
Strong Robustness Test
Add robustness test cases to strong normal test suite.
Every invalid equivalence class of an input variable is tested
with all combinations of valid equivalence classes of the other
input variables.
Each robustness test case includes only one invalid input
value.
No combination of invalid input values.
Strong Robustness Test Cases
[Figure: each invalid value of one variable combined with all valid classes of the other variable]
Summary
For Multiple input variables
Weak normal test:
» Select one data point from each valid equivalence
class
Strong normal test:
» Select one data point from each combination of (the
cross product of) the valid equivalence classes
Weak/strong robustness test:
» Include invalid equivalence classes
Example: nextDate() Function
This program reads a date in the format of
mm/dd/yyyy
and prints out the next date.
For example, an input of
03/31/2014
gives an output of
04/01/2014
A constraint (arbitrary, for illustration purposes only)
The year is between 1800 and 2200 inclusive
Example: nextDate(): Valid Equivalence Classes
The valid equivalence classes for the Day
{ 1 ≤ Day ≤ 28 }
{ Day = 29 }
{ Day = 30 }
{ Day = 31 }
The valid equivalence classes for the Month
{ Month has 30 days }
{ Month has 31 days }
{ Month = February }
The valid equivalence classes for the Year
{ Year is not a leap year }
{ Year is a leap year }
Example: nextDate(): Invalid Equivalence Classes
The invalid equivalence classes for the Day
{ Day < 1 } { Day > 31 }
{ Incorrect format of Day } { Illegal characters of Day }
The invalid equivalence classes for the Month
{ Month < 1 } { Month > 12 }
{ Incorrect format of Month } { Illegal characters of Month }
The invalid equivalence classes for the Year
{ Year < 1800 } { Year > 2200 }
{ Incorrect format of Year } { Illegal characters of Year }
Other invalid equivalence classes
{ Incorrect order of Day, Month, Year }
{ Missing Day, Month, or Year }
{ Extra number or character }
Example: nextDate(): Test Cases: Weak Normal
Valid equivalence classes and data points:
Day:
» { 1 ≤ Day ≤ 28 }: data point 10
» { Day = 29 }: data point 29
» { Day = 30 }: data point 30
» { Day = 31 }: data point 31
Month:
» { Month has 30 days }: data point 04
» { Month has 31 days }: data point 03
» { Month = February }: data point 02
Year:
» { Year is not a leap year }: data point 2009
» { Year is a leap year }: data point 2008
Weak normal test cases (4 cases):
1. 02/10/2009
2. 04/29/2009
3. 03/30/2008
4. 03/31/2008
Example: nextDate(): Test Cases: Strong Normal
Strong normal test cases (17 cases)
02/10/2008 02/29/2008
02/10/2009
03/10/2008 03/29/2008 03/30/2008 03/31/2008
03/10/2009 03/29/2009 03/30/2009 03/31/2009
04/10/2008 04/29/2008 04/30/2008
04/10/2009 04/29/2009 04/30/2009
Note: some combinations are invalid, thus excluded
e.g., 02/30/2008
Example: nextDate(): Test Cases: Weak Robustness
Add a test case for each invalid equivalence
class
{ Day < 1 } 02/00/2008
{ Day > 31 } 03/36/2009
{ Incorrect format of Day } 02/7/2008
{ Illegal characters of Day } 02/First/2008
{ Month < 1 } 00/10/2009
{ Month > 12 } 15/10/2008
{ Incorrect format of Month } 3/10/2008
{ Illegal characters of Month } Mar/10/2009
{ Year < 1800 } 02/10/1745
{ Year > 2200 } 02/10/2350
{ Incorrect format of Year } 02/10/10
{ Illegal characters of Year } 02/10/’00
{ Incorrect order of Day, Month, Year } 29/03/2008
{ Missing Day, Month, or Year } 02/10
{ Extra number or character } 02/20/2008/2009
Example: nextDate(): Test Cases: Strong Robustness
Add invalid test cases resulting from combination of valid
equivalence classes
04/31/2008
02/29/2009 02/30/2009 02/31/2009
02/30/2008 02/31/2008
Ensure each invalid test case contains only one invalid
value.
Single defect assumption
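A hedged sketch of encoding a few of these cases in JUnit 4, assuming a hypothetical nextDate(String) that returns the next date as a string and throws IllegalArgumentException on invalid input:

    @Test
    public void weakNormalCases() {
        assertEquals("02/11/2009", nextDate("02/10/2009"));
        assertEquals("03/01/2008", nextDate("02/29/2008"));  // leap-year boundary
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsApril31() {
        nextDate("04/31/2008");  // invalid combination of valid day and month classes
    }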
Boundary Value Testing
Test values, sizes, or quantities near the design
limits
value limits, length limits, volume limits
null strings vs. empty strings
Errors tend to occur near the extreme values of
inputs (off by one is an example)
Robustness:
How does the software react when boundaries are
exceeded?
Use input values
at their minimum, just above the minimum
at a nominal value,
at the maximum, just below the maximum
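A small helper (an assumed sketch, not from the lecture) that enumerates these five values for an integer range:

    // Boundary values for a variable constrained to [min, max]
    static int[] boundaryValues(int min, int max) {
        int nom = min + (max - min) / 2;  // a nominal, mid-range value
        return new int[] { min, min + 1, nom, max - 1, max };
    }

For the 1..1000 input box this yields { 1, 2, 500, 999, 1000 }.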
Input Boundary Values
Test cases for a variable x, where a ≤ x ≤ b
[Figure: the interval from a to b on the x axis, with test points x(min), x(min+), x(nom), x(max-), x(max)]
Experience shows that errors occur more frequently for
extreme values of a variable.
Input Boundary Values – 2 Variables
Test cases for two variables x1 and x2, where a ≤ x1 ≤ b and
c ≤ x2 ≤ d
[Figure: boundary test points in the x1-x2 plane; each test varies one variable at a time while the other is held at its nominal value]
Single defect assumption
Example: nextDate() – Test Cases: Boundary Values
Additional test cases, valid input
04/01/2009 04/30/2009
03/01/2009 03/31/2009
02/01/2009 02/28/2009
02/29/2008
01/01/2008 12/31/2008
01/01/1800 12/31/2200
Robustness Testing
Test cases for a variable x, where a ≤ x ≤ b
[Figure: the interval from a to b on the x axis, with additional test points x(min-) and x(max+) just outside the valid range]
Stress input boundaries
Acceptable response for invalid inputs?
Leads to exploratory testing (test hackers)
Can discover hidden functionality
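A quick count under the single defect assumption: robustness testing uses
seven values per variable, x(min-), x(min), x(min+), x(nom), x(max-), x(max),
and x(max+), so a program with n input variables needs 6n + 1 test cases:
six non-nominal values for each variable while the others stay nominal,
plus one all-nominal case.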
Robustness Testing – 2 Variables
[Figure: robustness test points in the x1-x2 plane, including points just outside each valid range]
Example: nextDate() – Test Cases: Boundary Values
Additional robustness test cases, invalid
input
04/00/2009 04/31/2009
03/00/2009 03/32/2009
02/00/2009 02/29/2009
02/30/2008
01/00/2008 12/32/2008
12/31/1799 01/01/2201
00/01/2009 13/01/2009
Worst-Case Testing
Discard the single-defect assumption
Worst-case boundary testing:
Allow the input values to simultaneously approach their boundaries
Worst-case robustness testing:
Allow the input values to simultaneously approach and exceed their
boundaries
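A quick worked count: worst-case boundary testing takes the cross product
of the five boundary values of each variable, giving 5^n test cases (25 for
two variables); worst-case robustness testing uses all seven values, giving
7^n test cases (49 for two variables).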
Worst Case Boundary Testing – 2 Variables
[Figure: the full 5 × 5 grid of boundary-value combinations in the x1-x2 plane]
Worst Case Robustness Testing – 2 Variables
[Figure: the full 7 × 7 grid of robustness-value combinations in the x1-x2 plane, including invalid values]
Limitations of Boundary Value Testing
Doesn’t require much thought
May miss internal boundaries
Usually assumes the variables are independent
Values at the boundary may not have meaning
Special Value Testing
The most widely practiced form of functional testing
The tester uses his or her domain knowledge, experience,
or intuition to probe areas of probable errors
Other terms: “hacking”, “out-of-box testing”, “ad hoc testing”,
“seat of the pants testing”, “guerilla testing”
Uses of Special Value Testing
Complex mathematical (or algorithmic) calculations
Worst case situations (similar to robustness)
Problematic situations from past experience
“Second guess” the likely implementation
Characteristics of Special Value Testing
Experience really helps
Frequently done by the customer or user
Defies measurement
Highly intuitive
Seldom repeatable
Often, very effective
Summary: Key Concepts
Black-box testing
vs. random testing, white-box testing
Partitioning principle
Black box testing techniques
Equivalence class
Boundary value testing
Special value testing
Single defect assumption
Normal vs. robustness testing
Weak and strong combinations
Guidelines and observations
Equivalence Class Testing is appropriate when input data is
defined in terms of intervals and sets of discrete values.
Equivalence Class Testing is strengthened when combined
with Boundary Value Testing
Strong equivalence class testing presumes that the variables are
independent. If that is not the case, redundant test cases
may be generated
An Introduction to JUnit
Part 2
JUnit Best Practices
Each test case should be independent.
Test cases should be independent of
execution order.
No dependencies on the state of previous
tests.
JUnit Test Fixtures
The context in which a test case is executed.
Typically include:
Common objects or resources that are available
for use by any test case.
Activities to manage these objects
Set-up: object and resource allocation
Tear-down: object and resource de-allocation
Set-Up
Tasks that must be done prior to each test
case
Examples:
Create some objects to work with
Open a network connection
Open a file to read/write
Tear-Down
Tasks to clean up after execution of each
test case.
Ensures
Resources are released
the system is in a known state for the next test
case
Clean up should not be done at the end of a
test case,
since a failure ends execution of a test case at
that point
Method Annotations for Set-Up and Tear-Down
@Before annotation: set-up
code to run before each test case.
@After annotation: tear-down
code to run after each test case.
will run regardless of the verdict, even if exceptions are
thrown in the test case or an assertion fails.
Multiple annotations are allowed
all methods annotated with @Before will be run before
each test case
but no guarantee of execution order
Example: Using a File as a Test Fixture
public class OutputTest {
    private File output;

    @Before
    public void createOutputFile() {
        output = new File(...);
    }

    @After
    public void deleteOutputFile() {
        output.delete();
    }

    @Test
    public void test1WithFile() {
        // code for test case
        …
    }

    @Test
    public void test2WithFile() {
        // code for test case
        …
    }
}
Method Execution Order
1. createOutputFile()
2. test1WithFile()
3. deleteOutputFile()
4. createOutputFile()
5. test2WithFile()
6. deleteOutputFile()
Not guaranteed:
test1WithFile runs before test2WithFile
Once-Only Set-Up
@BeforeClass annotation on a static method
one method only
Run the method once only for the entire test class
before any of the tests, and
before any @Before method(s)
Useful for starting servers, opening connections,
etc.
No need to reset/restart for each test case
Shared, non-destructive
@BeforeClass
public static void anyName() {
// class setup code here
}
Once-Only Tear-Down
@AfterClass annotation on a static method
one method only
Run the method once only for the entire test
class
after any of the tests
after any @After method(s)
Useful for stopping servers, closing connections,
etc.
@AfterClass
public static void anyName() {
// class clean up code here
}
Timed Tests
Useful for simple performance tests
Network communication
Complex computation
The timeout parameter of @Test annotation
in milliseconds
@Test(timeout=5000)
public void testLengthyOperation() {
...
}
The test fails
if timeout occurs before the test method completes
Parameterized Tests
Repeat a test case multiple times with different data
Define a parameterized test
Class annotation, defines a test runner
» @RunWith(Parameterized.class)
Define a constructor
» Input and expected output values for one data point
Define a static method that returns a Collection of data
points
» Annotated with @Parameters
» Each data point:
an array whose elements match the constructor
arguments
Running a Parameterized Test
Use a parameterized test runner
For each data point provided by the
parameter method
Construct an instance of the class with the data
point
Execute all test methods defined in the class
Parameterized Test Example – Program Under Test
public class Calculator {
public long factorial(int n) {
…
return result;
}
}
See [Link]
Parameterized Test Example – The Test Class
@RunWith(Parameterized.class)
public class CalculatorTest {
    private long expected; // expected output
    private int value;     // input value

    public CalculatorTest(long expected, int value) {
        this.expected = expected;
        this.value = value;
    }
Parameterized Tests Example – The Parameter Method
@Parameters
public static Collection<Integer[]> data() {
    return Arrays.asList(new Integer[][] {
        { 1, 0 },  // expected, value
        { 1, 1 },
        { 2, 2 },
        { 24, 4 },
        { 5040, 7 }, });
}
Parameterized Tests Example – The Test Method
private long expected; // expected output
private int value; // input value
….
@Test
public void factorialTest() {
    Calculator calc = new Calculator();
    assertEquals(expected, calc.factorial(value));
}
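Putting the pieces together, a complete minimal version of the test class (a sketch; only the imports and the assembly are added beyond what the slides show):

    import static org.junit.Assert.assertEquals;
    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class CalculatorTest {
        private long expected; // expected output
        private int value;     // input value

        public CalculatorTest(long expected, int value) {
            this.expected = expected;
            this.value = value;
        }

        @Parameters
        public static Collection<Integer[]> data() {
            return Arrays.asList(new Integer[][] {
                { 1, 0 }, { 1, 1 }, { 2, 2 }, { 24, 4 }, { 5040, 7 } });
        }

        @Test
        public void factorialTest() {
            assertEquals(expected, new Calculator().factorial(value));
        }
    }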
Readings and References
Chapter 10 of the textbook.
JUnit documentation
[Link]
An example of parameterized test
[Link] in D2L
Next Class
Topic:
Black Box Testing Part 2, JUnit & Ant
Reading:
Chapter 10 of the textbook.
Articles on the class page and reading list
Assignment 4 – Parameterized Test
Due April 25, 2017
Assignment 5 – Black Box Testing – Part 1: Test Case
Design
Due May 2, 2017