Software Quality
Assurance
Introduction
Software quality assurance (SQA) is a means
and practice of monitoring all software
engineering practices, methods, and work
products to ensure compliance with defined
standards.
It is a set of activities that verifies that everyone
involved with the project has correctly
implemented all procedures and processes.
It encompasses the entire SDLC.
The goal of SQA is to catch a product's problems,
shortcomings, and missed concepts before it is
released to the general public.
SQA encompasses code reviews and testing
activities conducted by the engineering team.
What is Testing?
Several definitions:
“Testing is the process of establishing confidence that a
program or system does what it is supposed to.” (Hetzel
1973)
“Testing is any activity aimed at evaluating an attribute
or capability of a program or system and determining that
it meets its required results.” (Hetzel 1983)
“Testing is the process of executing a program or system
with the intent of finding errors.” (Myers 1979)
The process of operating a system or component under
specified conditions, observing or recording the results,
and making an evaluation of some aspect of the system
or component (IEEE)
Testing is not
the process of demonstrating that errors are not present
in a software entity.
Introduction
The main objectives of a software project are to be
productive and to produce high-quality software
products
Quality has many dimensions: reliability, maintainability,
portability etc.
Reliability is perhaps the most important
Reliability refers to the chance of the software failing
More defects imply a higher chance of failure, which in
turn means lower reliability
Hence to develop high quality software, minimize
bugs/errors as much as possible in the delivered software
Generally, testing involves
Demonstrating to the customer that the software meets
its requirements (validation testing)
Discovering faults or defects in the software, where its
behavior is incorrect or not in conformance with its
specification (defect testing)
Cont’d…
Testing only reveals the presence of defects
It doesn’t identify the nature and location of defects
Identifying & removing the defect is role of
debugging and rework
Testing is expensive
Preparing test cases, performing testing, and defect
identification & removal all consume effort
Overall, testing is very expensive: 30–50% of
development cost
Who is involved in testing?
Software Test Engineers and Testers
Test manager
Development Engineers
Quality Assurance Group and Engineers
Software Testing:
Terminologies
Error, Mistake, Bug, Fault and Failure
People make errors/mistakes.
These may be syntax errors, misunderstandings
of specifications, or logical errors.
Bugs are coding mistakes/errors.
A fault/defect is the representation of an error,
where representation is the mode of expression,
such as narrative text, data flow diagrams, ER
diagrams, source code etc.
A failure occurs when a fault executes.
A particular fault may cause different failures,
depending on how it has been exercised.
Software Testing:
Terminologies…
Test Case and Test Suite
Test cases are inputs to test the system and the predicted
outputs from these inputs if the system operates according
to its specification
During testing, a program is executed with a set of test
cases
A failure during testing shows the presence of defects.
Test Suite: A set of one or more test cases
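As a concrete sketch (not from the slides), a test case pairs inputs with predicted outputs, and a test suite groups several test cases; the `add` function below is a hypothetical system under test:

```python
import unittest

def add(a, b):
    # Hypothetical system under test.
    return a + b

class AddTests(unittest.TestCase):
    # Each test case: an input and the predicted output.
    def test_positive_inputs(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_inputs(self):
        self.assertEqual(add(-2, -3), -5)

# A test suite is a set of one or more test cases.
suite = unittest.TestSuite()
suite.addTest(AddTests("test_positive_inputs"))
suite.addTest(AddTests("test_negative_inputs"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
# A failure during the run would signal the presence of a defect.
```

A failing assertion here plays the role of a failure during testing: it reveals a defect without locating it.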
Verification and Validation
Verification is the process of evaluating a system or
component to determine whether the products of a given
development phase satisfy the conditions imposed at the
start of that phase.
Validation is the process of evaluating a system or
component during or at the end of development process
to determine whether it satisfies the specified
requirements .
Testing = Verification + Validation
Testing and the life cycle
Testing takes place in every software development
phase
Requirements engineering
Requirements should be tested for completeness,
consistency, feasibility, testability, … through reviews & the
like
Typical errors that could be discovered are missing,
wrong, or extra information, …
Design
The design itself can be tested using reviews or formal
verification techniques to check whether the design
conforms to the requirements
Implementation
Check consistency between the implementation and previous
documents using all kinds of functional and structural test techniques
Maintenance
Regression testing: either retest all, or a more selective
retest
Software Testing Life Cycle
(STLC)
Software testing has its own life cycle that
intersects with every stage of the SDLC.
It identifies what test activities to carry out
and when (what is the best time) to
accomplish those test activities.
Even though testing differs between
organizations, STLC consists of the following
(generic) phases:
Test Planning
Test Analysis
Test Design
Construction and verification
Testing Execution
Final Testing
Post Implementation.
Test Plan
Testing usually starts with a test plan and ends
with acceptance testing
Test plan is a general document that defines the
scope and approach for testing for the whole
project
Inputs are SRS, project plan, design, code, …
Test plan identifies what levels of testing will be
done, what units will be tested, etc in the project
It usually contains
Test unit specifications: what units need to be tested
separately
Features to be tested: these may include functionality,
performance, usability,…
Approach: criteria to be used, when to stop, how to
evaluate, etc
Test deliverables
Schedule and task allocation
Test case design
Test case design involves designing the test
cases (inputs and outputs) used to test the
system.
The goal of test case design is to create a set
of tests that are effective in validation and
defect testing.
Two approaches to design test cases are
Functional/ behavioral/ black box testing
Structural or white box testing
Spending sufficient time in test case design
helps to get “good” test cases.
Black Box testing
In black box testing the software to be tested is
treated as a black box
the structure of the program is not considered
The test cases are decided solely on the basis of
the requirements or specifications of the program
or module
the internals of the module or the program are not considered
for selection of test cases.
the tester only knows the inputs that can be given to the
system and what output the system should give.
The most obvious functional testing procedure is
exhaustive testing
which involves testing the software with all elements in
the input space
However, it is infeasible because of the very high cost
Black Box testing…
So a better method for selecting test cases is
needed
Different approaches have been proposed
Advantages
Tester can be non-technical.
Test cases can be designed as soon as the functional
specifications are complete
Disadvantages
The tester can never be sure of how much of the
system under test has been tested.
i.e. chances of having unidentified paths during
this testing
The test inputs need to be drawn from a large sample space.
Requirements-based testing
A general principle of requirements engineering
is that requirements should be testable.
Requirements-based testing is a validation
testing technique where you consider each
requirement and derive a set of tests for that
requirement.
Example: LIBSYS requirements
Given the following requirements, the test cases are
The user shall be able to search either all of the
initial set of databases or select a subset from it.
Test cases (Descriptions)
Initiate user searches for items that are
known to be present and known not to be present,
where the set of databases includes 1 database.
Requirements-based testing …
Initiate user searches for items that are known to be
present and known not to be present, where the set
of databases includes 2 databases
Initiate user searches for items that are known to be
present and known not to be present where the set
of databases includes more than 2 databases.
Select one database from the set of databases and
initiate user searches for items that are known to be
present and known not to be present.
Select more than one database from the set of
databases and initiate searches for items that are
known to be present and known not to be present.
Formally, the test cases should be written in the
following way:

Test Case No. | Inputs (Item, Databases)           | Expected Output
1             | "Item in database", "One Database" | "Found!"
Equivalence Class
partitioning
Divide the input space into equivalence classes
If the software works for a test case from a
class, then it is likely to work for all
Can reduce the set of test cases if such
equivalence classes can be identified
Getting ideal equivalence classes is
impossible without looking at the internal
structure of the program
For robustness, include equivalence classes for
invalid inputs also
Example: Look at the following taxation table

Income                             | Tax Percentage
Up to and including 500            | 0
More than 500, but less than 1,300 | 30
From 1,300 up to 5,000             | 40
Equivalence Class
partitioning…
Based on the above table, 3 valid and 4 invalid
equivalence classes can be found
Valid Equivalence Classes
Values between 0 to 500, 500 to 1300, and 1300
to 5000
Invalid Equivalence Classes
Values less than 0, greater than 5000, no input at
all, and inputs containing letters
From these classes we can generate the following
test cases:

Test Case ID | Income     | Tax
1            | 200        | 0
2            | 1000       | 300
3            | 3500       | 1400
4            | -4500      | Income can't be negative
5            | 6000       | Tax rate not defined
6            | (no input) | Please enter income
7            | 98ty       | Invalid income
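The test cases above can be made executable. The sketch below assumes a hypothetical `compute_tax` function that applies the flat rates implied by the expected outputs (1000 → 300 is 30% of the whole income, 3500 → 1400 is 40%):

```python
def compute_tax(income):
    """Hypothetical tax function matching the taxation table:
    0% up to 500, 30% up to 1,300, 40% up to 5,000, applied as a
    flat rate on the whole income (as outputs 300 and 1400 imply)."""
    if not isinstance(income, (int, float)):
        return "Invalid income"          # e.g. the "98ty" case
    if income < 0:
        return "Income can't be negative"
    if income <= 500:
        return 0
    if income < 1300:
        return income * 0.30
    if income <= 5000:
        return income * 0.40
    return "Tax rate not defined"

# One representative test case per equivalence class.
cases = [
    (200, 0),
    (1000, 300),
    (3500, 1400),
    (-4500, "Income can't be negative"),
    (6000, "Tax rate not defined"),
    ("98ty", "Invalid income"),
]
```

The "no input at all" class is exercised at the user-interface level rather than by calling the function, so it is omitted here.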
Boundary value analysis
It has been observed that programs that work correctly
for a set of values in an equivalence class fail on some
special values.
These values often lie on the boundary of the
equivalence class.
A boundary value test case is a set of input data that
lies on the edge of an equivalence class of input/output
Example
Using an example in ECP generate test cases that
provides 100% BVA coverage.
< 0 | 0 < income < 500 | 500 < income < 1300 | 1300 < income < 5000 | > 5000
So, we need 12–14 test cases (2 for the no-input and character
entries) to achieve the aforementioned coverage
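A minimal sketch of generating the boundary values, assuming integer incomes and the class boundaries 0, 500, 1300, and 5000 from the taxation example:

```python
def boundary_values(boundaries):
    """For each class boundary b, test b itself and its neighbours
    (a sketch: integer inputs, step of 1 on either side)."""
    values = set()
    for b in boundaries:
        values.update({b - 1, b, b + 1})
    return sorted(values)

# Boundaries from the taxation example.
tests = boundary_values([0, 500, 1300, 5000])
# 12 numeric cases; the no-input and character cases are added by hand.
```

This yields the 12 numeric boundary cases; adding the no-input and character entries gives the 14 mentioned above.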
White box testing
Black box testing focuses only on functionality
What the program does; not how it is implemented
White box testing on the other hand focuses on
implementation
The aim of white box testing is to exercise different
program structures with the intent of uncovering
errors
To test the structure of a program, structural
testing aims to achieve test cases that will force
the desired coverage of different structures.
Unlike the criteria for functional testing, which are
frequently imprecise, the criteria for structural
testing are
generally quite precise as they are based on program
structure, which is formal and precise.
There are different approaches for structural testing
Control flow based criteria
Considers the program as a control flow graph - Nodes
represent code blocks, i.e. sets of statements always
executed together
An edge (i, j) represents a possible transfer of control
from node i to node j.
Any control flow graph has a start node and an end
node
A complete path (or a path) is a path whose first node
is the start node and the last node is an exit node.
Control flow graph has a number of coverage criteria.
These are
Statement Coverage Criterion
Branch coverage
Linearly Independent paths
(ALL) Path coverage criterion
Statement Coverage Criterion
The simplest coverage criterion is statement
coverage;
Which requires that each statement of the program be
executed at least once during testing.
I.e. set of paths executed during testing should include
all nodes
This coverage criterion is not very strong, and can
leave errors undetected.
Because it does not require a
decision to evaluate to false if there is no else clause
E.g., abs(x): if (x >= 0) x = -x; return(x)
The set of test cases {x = 0} achieves 100%
statement coverage, but error not detected
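The buggy abs example, translated into Python as a sketch, shows why statement coverage alone is weak (the defect is deliberate: the condition should be x < 0):

```python
def buggy_abs(x):
    # Defective on purpose: the condition should be x < 0.
    if x >= 0:
        x = -x
    return x

# {x = 0} executes every statement (the if-branch is taken),
# so statement coverage is 100%, yet the output is still correct
# (-0 == 0) and the defect goes undetected.
coverage_case = buggy_abs(0)   # correct result, bug hidden
revealing_case = buggy_abs(5)  # wrong result, abs(5) should be 5
```

Only an input like x = 5 reveals the failure, even though it adds no new statement coverage.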
Guaranteeing 100% coverage is not always possible due to the
possibility of unreachable nodes
Branch coverage
A little more general coverage criterion is branch
coverage
which requires that each edge in the control flow graph be
traversed at least once during testing.
i.e. branch coverage requires that each decision in the program
be evaluated to true and false values at least once during
testing.
Branch coverage implies statement coverage, as each
statement is a part of some branch.
The trouble with branch coverage comes if a decision has
many conditions in it (a Boolean expression
with Boolean operators AND and OR).
In such situations, a decision can evaluate to true and false
without actually exercising all the conditions.
This problem can be resolved by requiring that all
conditions evaluate to true and false (Condition Coverage)
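A small illustration of this branch-vs-condition coverage gap, using a hypothetical compound decision:

```python
def grant_access(logged_in, is_admin):
    # Hypothetical compound decision with two conditions.
    if logged_in and is_admin:
        return "granted"
    return "denied"

# Branch coverage: the decision evaluates to true once and false once
# with just these two tests...
branch_tests = [(True, True), (False, False)]
# ...yet is_admin alone never forces the false outcome; condition
# coverage additionally requires cases like (True, False).
condition_tests = branch_tests + [(True, False), (False, True)]
```

With only the two branch tests, a defect such as writing `or` instead of `and` could go unnoticed; the condition-coverage cases expose it.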
Limitations of Testing
Testing has its own limitations.
You cannot test a program completely - Exhaustive
testing is impossible
You cannot test every path
You cannot test every valid input
You cannot test every invalid input
We can only test against system requirements
- May not detect errors in the requirements.
- Incomplete or ambiguous requirements may lead to
inadequate or incorrect testing.
Time and budget constraints
Compromise between thoroughness and budget.
You will run out of time before you run out of test
cases
Even if you do find the last bug, you’ll never know it
Limitations of Testing …
These limitations require that additional care be
taken while performing testing.
As testing is the costliest activity in software
development, it is important that it be done
efficiently
Test Efficiency – Relative cost of finding a bug in the system under test (SUT)
Test Effectiveness –ability of testing strategy to find bugs
in a software
Testing should not be done on-the-fly, as is
sometimes done.
It has to be carefully planned and the plan has to
be properly executed.
The testing process focuses on how testing should
proceed for a particular project.
Various methods of selecting test cases are discussed
Levels of Testing
Execution-based software testing, especially for
large systems, is usually carried out at different
levels.
In most cases there will be 3–4 levels, or major
phases of testing: unit test, integration test,
system test, and some type of acceptance test
The code contains requirement defects, design
defects, and coding defects
Nature of defects is different for different injection
stages
One type of testing will be unable to detect the
different types of defects
different levels of testing are used to uncover these
defects
Levels of Testing
User needs                →  Acceptance testing
Requirement specification →  System testing
Design                    →  Integration testing
Code                      →  Unit testing
The major testing levels are similar for both
object-oriented and procedural-based software
systems.
Basically the levels differ in
the element to be tested
responsible individual
testing goals
Different Levels of Testing…
Unit Testing
Element to be tested : individual component
(method, class or subsystem)
Responsible individual: Carried out by developers
Goal: Confirm that the component or subsystem is
correctly coded and carries out the intended
functionality
Focuses on defects injected during coding: coding
phase sometimes called “coding and unit testing”
Integration Testing
Element to be tested : Groups of subsystems
(collection of subsystems) and eventually the entire
system
Responsible individual: Carried out by developers
Goal: Test the interfaces among the subsystems.
i.e. for problems that arise from component interactions.
Different Levels of Testing…
System Testing
Element to be tested : The entire system
Responsible individual: Carried out by separate test
team
Goal: Determine if the system meets the requirements
(functional and nonfunctional)
Most time consuming test phase
Acceptance Testing
Element to be tested : Evaluates the system delivered by
developers
Responsible individual: Carried out by the client. May
involve executing typical transactions on site on a trial basis
Goal: Demonstrate that the system meets/satisfies user
needs
Only after successful acceptance testing is the software
deployed
Different Levels of Testing…
If the software has been developed for the mass
market (shrink wrapped software), then testing it
for individual users is not practical or even
possible in most cases.
Very often this type of software undergoes two
stages of acceptance test: Alpha and Beta testing
Alpha test.
This test takes place at the developer’s site.
Testing done using simulated data in a lab setting
Developers observe the users and note problems.
Beta testing
the software is sent to a cross-section of users who
install it and use it under real world working conditions
with real data.
The users send records of problems with the software to
the development organization
Different Levels of Testing…
The levels discussed earlier are performed when a system is
being built from the components that have been coded
Another level of testing, called regression testing, is
performed when some changes are made to an existing
system.
Regression testing usually refers to testing activities during software
maintenance phase.
Regression testing
makes sure that the modification has not introduced new errors
ensures that the desired behavior of the old services is maintained
Uses some test cases that have been executed on the old system
Since regression testing is supposed to test all functionality
and all previously done changes, regression tests are usually
large.
Thus, regression testing needs automatic execution & checking
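The automatic execution and checking can be sketched as a minimal regression harness: (input, expected output) pairs recorded from the old system are re-run against the modified code. The `add` function and the recorded suite are hypothetical:

```python
# (input, expected output) pairs recorded from the old system.
regression_suite = [
    ((2, 3), 5),
    ((-2, -3), -5),
    ((0, 0), 0),
]

def add(a, b):
    # Modified implementation under regression test (hypothetical).
    return a + b

def run_regression(suite, fn):
    """Re-run every recorded case; an empty failure list means the
    desired behavior of the old services is maintained."""
    return [(args, expected, fn(*args))
            for args, expected in suite
            if fn(*args) != expected]
```

Because regression suites are large, a harness like this is typically run after every modification, e.g. from a continuous-integration job.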
Why test at different levels?
Implementing all of these levels of testing
require a large investment in time and
organizational resources.
However, it has the following advantages
Goes with software development phases because
software is naturally divided into phases
Especially true for some software process model
V & W models
Makes tracking bugs easy
Ensures a working subsystem/ component/ library
Makes software reuse more practical
Other Forms of testing
Top-down and Bottom-up testing
System is hierarchy of modules - modules coded
separately
Integration can start from bottom or top
Bottom-up requires test drivers while top-down requires
stubs
Drivers and stubs are code pieces written only for testing
Both may be used, e.g. for user interfaces top-down, for
services bottom-up
Incremental Testing
Incremental testing involves adding untested parts
incrementally to tested portion
Increased testing can catch more defects, but the cost also
goes up
Testing of large systems is always incremental
Incremental testing can be done in top-down or bottom-up
fashion
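The drivers and stubs mentioned above can be sketched as follows; the module names (`billing_total`, `tax_service_stub`) are hypothetical:

```python
def tax_service_stub(amount):
    # Stub: stands in for a not-yet-integrated lower-level module
    # (used in top-down integration); returns a canned answer.
    return amount * 0.1

def billing_total(amount, tax_service):
    # Unit under integration: depends on a lower-level tax service.
    return amount + tax_service(amount)

def driver():
    # Driver: stands in for a not-yet-integrated caller (used in
    # bottom-up integration); it invokes the unit and checks results.
    return billing_total(100, tax_service_stub)
```

Both pieces are throwaway code written only for testing, which is part of why integration testing consumes effort.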
Object Oriented Testing
Testing begins by evaluating the OOA and OOD
models
OOA models (requirements and use cases) & OOD models
(class and sequence diagrams) are tested using
Structured walk-throughs, prototypes
Formal reviews of correctness, completeness and consistency
In OO programs the components to be tested are
object classes that are instantiated as objects
Larger grain than individual functions so approaches to
white-box testing have to be extended
conventional black box methods can still be used
No obvious ‘top’ to the system for top-down integration and
testing
Object-oriented testing levels
Testing operations associated with objects
Testing object classes
Testing clusters of cooperating objects
Testing the complete OO system
Static testing
Static testing is defined as:
"Testing of a component or system at specification or
implementation level without execution of that software
(e.g., reviews or static code analysis)." (ISEB/ISTQB)
In contrast dynamic testing is testing of software where the
object under testing, the code, is being executed on a
computer.
Static testing is primarily syntax checking of the
code and/or manually reading the code or any
document to find errors
There are a number of different static testing types
or techniques:
Management review
Informal reviews
Walk-through
Inspection
Technical review
Audit
The difference between these techniques is depicted in the
next slide
Test automation
Testing is an expensive process phase.
Testing workbenches provide a range of tools to
reduce the time required and total testing costs.
Systems such as Junit support the automatic
execution of tests.
Most testing workbenches are open systems
because testing needs are organisation-specific.
They are sometimes difficult to integrate with
closed design and analysis workbenches.
A testing workbench
Debugging
Debugging is the process of locating and
fixing or bypassing bugs (errors) in computer
program code
To debug a program is to start with a problem,
isolate the source of the problem, and then fix it
Testing does not include efforts associated
with tracking down bugs and fixing them.
The separation of debugging from testing was
initially introduced by Glenford J. Myers in 1979
Debugging typically happens during three
activities in software development:
Coding
Testing
Production/deployment
Types of bugs
Compile time: syntax, spelling, static type mismatch.
Usually caught with compiler
Design: flawed algorithm.
Incorrect outputs
Program logic (if/else, loop termination, select case, etc).
Incorrect outputs
Memory nonsense: null pointers, array bounds, bad types,
leaks.
Runtime exceptions
Interface errors between modules, threads, programs (in
particular, with shared resources: sockets, files, memory,
etc).
Runtime Exceptions
Off-nominal conditions: failure of some part of the software or
underlying machinery (network, etc.).
Incomplete functionality
Deadlocks: multiple processes fighting for a resource.
Freeze ups, never ending processes
Debugging…
Debugging, in general, consists of the following
main stages
Describe the bug - Maybe this isn't part of debugging itself
Get the program snapshot when the bug 'appears'.
Try to reproduce the bug and catch the state (variables,
registers, files, etc.) and action (what the program is
doing at the moment, which function is running).
Analyze the snapshot (state/action) and search for the
cause of the bug.
Fix the bug.
Debugging Techniques
Execution tracing
running the program
Print statements
trace utilities - follow the program through
execution; breakpoints, watches, …
Debugging…
single stepping in debugger
hand simulation
Interface checking
check procedure parameter number/type (if not enforced
by compiler) and value
defensive programming: check inputs/results from other
modules
document assumptions about caller/callee relationships
in modules, communication protocols, etc
Assertions: include range constraints or other information
with data.
Skipping code: comment out suspect code, then check if
error remains.
Debugging tools (called debuggers) help identify coding
errors at various development stages.
Some programming language packages include a facility
for checking the code for errors as it is being written.
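The defensive-programming and assertion techniques listed above can be sketched as follows; the `average` function is a hypothetical example:

```python
def average(values):
    # Defensive programming: check inputs coming from other modules.
    assert isinstance(values, list), "values must be a list"
    assert len(values) > 0, "values must be non-empty"
    result = sum(values) / len(values)
    # Assertion documenting a range constraint on the result:
    # an average always lies between the min and max of its inputs.
    assert min(values) <= result <= max(values)
    return result
```

When an assumption is violated, the failing assertion localizes the bug near its cause instead of letting a bad value propagate through the program.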
Debugging vs testing
Testing and debugging go together like
peas in a pod:
Testing finds errors; debugging localizes
and repairs them.
Together these form the “testing/debugging
cycle”: we test, then debug, then repeat.
Any debugging should be followed by a
reapplication of all relevant tests,
particularly regression tests.
This avoids (reduces) the introduction of new
bugs when debugging.
Testing and debugging need not be done by
the same people (and often should not be).