SQA :: Defect Detection
Through Testing
by
Dr. Rizwan
Defect Detection Process
Defect detection is the identification of defects after they have been injected into the software system; in fact, most traditional QA activities fall into this category. It is the process of finding deviations of the observed behaviour from the expected behaviour.
[Diagram] Apply input → Software → Observe output → Validate the observed output: is the observed output the same as the expected output?
Defect Detection Process
The basic idea of testing involves:
Execution of the software, and
Observation of its behaviour/outcome
If a failure is observed, resolve it
Otherwise, quality is demonstrated
Two primary purposes:
To detect and fix problems
To demonstrate quality or proper behavior
Example
The function should return the parameter multiplied by 2:
1. int doubleValue(int param) {
2.   int result;
3.   result = param * param;
4.   return result;
5. }
• If param = 2, the result is 4 (appears correct)
• If param = 3, the result is 9 (expected 6)
The result 9 is a failure
The failure is caused by Line 3 – the fault
The fault is caused by an error made by the developer/designer
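A minimal, self-contained sketch (the class and helper names are hypothetical) of how executing the slide's faulty function and comparing observed against expected output exposes the failure for param = 3:

// Executes the faulty code and compares observed vs. expected output.
public class DoubleValueTest {

    // Faulty implementation from the slide: multiplies param by itself instead of by 2.
    static int doubleValue(int param) {
        return param * param;
    }

    // Expected behaviour from the specification: param multiplied by 2.
    static int expected(int param) {
        return param * 2;
    }

    public static void main(String[] args) {
        int[] inputs = {2, 3};
        for (int param : inputs) {
            int observed = doubleValue(param);
            int exp = expected(param);
            // param = 2 passes by coincidence (2*2 == 2+2); param = 3 reveals the failure.
            System.out.printf("param=%d observed=%d expected=%d -> %s%n",
                    param, observed, exp, observed == exp ? "PASS" : "FAIL");
        }
    }
}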
Testing
Testing is the act of checking if a part or a product performs as
expected
Its goal is to maximize the number and severity of defects found per
dollar spent … thus: test early
We need to test our work, because we will make mistakes.
What testing is not: testing can only determine the presence of defects, never their absence
Who should test: Someone other than the developer.
–Why?
Testing Philosophy
An activity in which a system or component is executed under
specified conditions, the results are observed or recorded, and an
evaluation is made of some aspect of the system or component.
Testing has a different philosophy compared to other activities in the development life cycle, e.g.
Requirements, design, implementation (constructive activities)
Testing:
Is aimed at breaking the software
Should attempt to find the cases where the system behaviour deviates from
the specified behaviour.
Testing Philosophy
Quality expectations of a user are that the software performs the right functions as specified, and performs these specified functions correctly over repeated use or over a long period of time.
One type of testing is validation in which a function needed and
expected by the customers is checked for its presence in a software
product.
The absence of an expected function or feature is clearly linked to a deviation from expected behavior, i.e. to a software failure.
When an unexpected function is present, it can be considered a failure of this kind as well, because a customer is unlikely to be willing to pay for something that is not needed.
Functional Vs. Structural Testing: What to Test?
Main difference: Perspective/view & focus
Functional testing (External behavior)
Views the object to be tested as black-box
Focus is on input-output relationship
Without involving internal knowledge in making the test cases
Structural testing (Internal Implementation)
Views the object to be tested as white-box
Focus is on internal elements
Develops test cases on the basis of knowledge of the internal implementation, i.e. the code
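A minimal sketch (the abs function and the chosen values are illustrative assumptions) contrasting the two views: the black-box cases are derived only from the input-output specification, while the white-box cases are chosen to exercise both branches of the implementation.

// Hypothetical function under test: returns the absolute value of x.
public class AbsTesting {

    static int abs(int x) {
        if (x < 0) {        // internal branch visible only to structural (white-box) testing
            return -x;
        }
        return x;
    }

    public static void main(String[] args) {
        // Functional (black-box) cases: derived from the specification
        // "abs(x) >= 0 and abs(x) equals x or -x", without looking at the code.
        int[] blackBox = {0, 5, -5, Integer.MAX_VALUE};

        // Structural (white-box) cases: chosen so that both branches of the
        // if statement are executed at least once.
        int[] whiteBox = {-1, 1};

        for (int x : blackBox) {
            System.out.printf("black-box: abs(%d) = %d%n", x, abs(x));
        }
        for (int x : whiteBox) {
            System.out.printf("white-box: abs(%d) = %d%n", x, abs(x));
        }
    }
}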
Levels of Testing
There are different levels of testing corresponding to
different views of the code and different levels of abstraction
At the most detailed level → White box
Individual statements, data items, functions, subroutines, methods
At the intermediate level
Components, sub systems
At the most abstract level
Whole software systems can be treated as a “black-box”
Higher levels of abstraction → Black box
Test planning: Goals, strategies, and
techniques
High-level task
To set goals & To determine a general testing strategy
The test strategy specifies
The techniques that will be used to accomplish the test mission
How the techniques will be used
Strategy based on following decisions:
Overall objectives and goals
Objects to be tested and the specific focus
If External functions→ ?
If Internal implementations → ?
Levels of Testing
Corresponding to these different levels of abstraction
Actual testing for large software systems is divided into various sub-phases
Starting from the coding phase up to post-release product support
Including unit testing, component testing, integration testing, system testing,
acceptance testing, beta testing, etc.
Component
Definitions may vary, but a component is generally a collection of smaller units that together accomplish something
Unit
Is the smallest testable part of an application
In procedural programming, a unit may be an individual function or procedure
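A minimal sketch (the leap-year function is a hypothetical example) of unit testing: the unit under test is a single function, exercised in isolation against expected outputs.

// The unit under test is an individual function, the smallest testable part.
public class LeapYearUnitTest {

    // Unit under test: decides whether a year is a leap year.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        // Each case pairs an input with its expected output.
        int[] years =      {2000, 1900, 2024, 2023};
        boolean[] expect = {true, false, true, false};

        for (int i = 0; i < years.length; i++) {
            boolean observed = isLeapYear(years[i]);
            System.out.printf("isLeapYear(%d) = %b, expected %b -> %s%n",
                    years[i], observed, expect[i], observed == expect[i] ? "PASS" : "FAIL");
        }
    }
}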
Software Testing Principles
Testing shows the presence of defects, not their absence
Testing helps find bugs, but it can't definitively prove software is bug-free
Exhaustive testing is impossible
There are always ways software can fail that tests haven't covered
Early testing saves time and money
The sooner you find bugs, the cheaper they are to fix
Finding bugs late in development can be very expensive
Defects cluster together
Some parts of the code are more bug-prone than others
Testing should focus on these areas.
Software Testing Principles
Pesticide paradox
Repeating the same tests over and over is unlikely to find new
bugs. Testers need to vary their tests to find new issues.
Testing is context dependent
The way you test software will vary depending on the type of
software, its purpose, and who will be using it.
Absence-of-errors fallacy
Just because software works without errors doesn't mean it
meets user needs
Testing should ensure the software fulfills its intended purpose.
Test Concerns
Basic questions about testing
What artifacts are tested?
What to test, and what kind of faults are found?
When, or at what defect level, to stop testing?
Testing techniques
What is the specific testing technique used?
What is the underlying model used in a specific testing technique?
Are techniques for testing in other domains applicable to software testing? etc.
Test Concerns
Test activities and management
Who performs which specific activities?
When can specific test activities be performed?
What process is followed for these test activities? etc.
Informal vs Formal Testing
Informal testing
Could require minimal prior knowledge
A simple way is “run-and-observe” by testers
Some formal forms of testing, such as usability testing, can be
performed with little prior knowledge as well
Novice user is asked to use the product and related information is recorded for
usability assessment and improvement
May also involve experienced testers who observe and record the testing
information.
Deceptively easy, but not all failures or problems are easy to recognize.
So?
Good knowledge, technical skills and experience required.
Informal vs Formal Testing
Formal testing
Model the software system, operational environment, users, usage scenarios,
sequences, patterns etc.
Derive test cases from the models
Who performs these activities
Individual testers or teams can perform these activities
Various other development personnel also need to be involved
Example: Designers/Developers whose code is being tested - to
resolve the issues
Developers can play dual roles – developer & tester
For overall operation - Professional testers are typically employed
3rd party independent verification and validation (IV&V) testing
Testing Teams
Testers and testing teams can be organized into
A vertical model
A horizontal model
Vertical Model – Product Oriented
Where dedicated people perform one or more testing tasks for the
product
E.g., one or more teams doing different tests, such as unit and acceptance testing, for the same product
Staffing/resource management is hard (mismatch between people’s expertise and their assignments …)
Testing Teams
Horizontal Model – Task Oriented
A testing team only performs one kind of testing for many different
products
E.g., many products may share the same system testing team
Better management of staff/resources, because different projects have different schedules and demands
Practically a mix of both is used
Low level testing by dedicated testers/teams
System testing shared across similar products
Testing Plan
A test plan is a document describing the approach to be taken for
intended testing activities and serves as an agreement between
the quality assurance and other interested parties, such as
development.
Test Planning and preparation will set the goals for testing, select
an overall testing strategy, and prepare specific test cases and the
general test procedure.
Testing Plan
A document describing the scope, approach, resources, and schedule of
intended test activities.
It identifies test items, the features to be tested, the testing tasks, who will
do each task, and any risks requiring contingency planning.
A test plan defines:
Scope of testing
Magnitude of the testing effort, whole system or part of system.
Test approach/ Strategy
Basis of the test design approach, black box, white box etc.
Level of tests
Unit, integration, system, or acceptance
Testing Plan
A test plan is designed against goals.
However, it is generally more concrete here, because the quality
views and attributes have been decided by the overall quality
engineering process.
What remains to be done is the specific testing goals, such as
reliability or coverage goals, to be used as the exit criteria,
“When to stop testing?”.
Test planning is the most important activity in the generic testing process
Most of the key decisions about testing are made during this
stage
Testing Process
The overall organization of these activities can be described
by a generic testing process
Testing Process
Basic concepts of testing can be best described in the context of
major activities involved
Test planning and preparation
Set goals
Select testing strategy
Prepare test cases
Prepare test procedure
Test execution
Also includes observation and measurement
Analysis and follow-up
Also includes result checking to determine if a failure has been observed, and
if so, follow-up activities to remove the underlying cause
Activities, People, and Management
Test Management – Responsibilities and Roles
Test Manager
Has overall responsibility for the test effort's success and is the primary person in charge of advocating and assessing product quality
Test Analyst
Is responsible for initially identifying and defining the required tests, and
subsequently evaluating the results of the test effort.
Test Designer
Is responsible for defining the test approach and ensuring its successful
implementation.
Testing Cycle
Although variations exist in the testing life cycle, there is a typical cycle for testing.
The sample below is common among organizations employing the
Linear development model.
Testing should begin in the requirements phase of the software
development life cycle.
Test Planning
Test strategy, test bed creation, etc.
Since many activities will be carried out during testing, a plan is needed.
Test Development:
Test procedures, test scenarios, test cases, test datasets, test scripts to use in
testing software.
Testing Cycle
Test Execution:
Testers execute the software based on the plans and test documents, then report any errors found to the development team.
Test Reporting:
Once testing is completed, testers generate metrics and make final reports
on their test effort and whether or not the software tested is ready for
release.
Test Result Analysis (or Defect Analysis):
Is done by the development team, usually along with the client, in order to decide which defects should be treated: fixed, rejected (i.e. the software is found to be working properly), or deferred to be dealt with later.
Test Closure
Once testing meets the exit criteria, key outputs such as lessons learned, results, logs, and documents related to the project are captured, archived, and used as a reference for future projects.
Software Testing Lifecycle – Another Way
Types of Testing
Software verification activities check the conformance of a
software system to its specifications.
In performing verification activities, we assume that we have a well
defined set of specifications.
A deviation from the specification indicates a fault or an error, depending on whether the behavior itself is specified or other software-related entities are specified, for example through coding standards, design patterns, etc.
When a function or feature expected by the customers is present, the activity of determining whether it performs or behaves as expected is a verification activity.
Types of Testing
When we are checking specifications, non-conformance indicates
the presence of faults or errors.
For example, a wrong algorithm or an inappropriate data structure is
used, some coding standard is violated, etc.
These problems are typically associated with various types of software faults which, if triggered, may cause system failures.
Similarly, not following prescribed processes or selected methodologies, or misunderstanding needed algorithms and data structures, is associated with errors or error sources that cause the injection of faults.
Types of Testing
Validation checks the conformance to quality expectations of
customers and users in the form of whether the expected functions
or features are present or not. Therefore, validation deals directly with
users and their requirements; while verification deals with internal
product specifications.
Types of Testing
Verification is a process performed by the developers to ensure that
the software is correctly developed.
Are we building the product right?
Validation is a process performed by the users (acceptance testing) to ensure that the software is to their satisfaction.
Are we building the right product?
Requirement Specifications
This can be some kind of formal or informal specification that claims to be comprehensive.
Often it is much more informal comprising a short English
description of the inputs and their relationship with the
outputs.
For some classes of systems the specification is hard to provide (e.g. a GUI, since many of the important properties relate to hard-to-formalize issues like usability).
Requirement Specifications
Functional aspects
Business processes
Services
Non-functional aspects
Reliability
Performance
Security
Usability
…
There could be specific functional as well as non-functional
requirements
Non Functional Requirement (NFR) Testing
Usability (human factors) testing
Since the purpose of a software test is to demonstrate that the program does not meet its objectives, test cases must be designed to show that the program does not satisfy its usability objectives, e.g. that HCI principles are properly followed or documented and that de facto standards are met
To find human usability problems in terms of:
Understandability
Operability
Learnability, etc.
Non Functional Requirement (NFR) Testing
Performance testing
Since the purpose of a software test is to demonstrate that the program
does not meet its objectives, test cases must be designed to show that the
program does not satisfy its performance objectives:
Many programs have specific performance or efficiency objectives/requirements, e.g. load, stress, response time
Worst case, best case, average case time to complete a specified set of
operations, e.g.,
Transactions per second
Memory usage (wastage)
Handling extraordinary situations
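A minimal sketch (the operation, transaction count, and the 500 ms budget are illustrative assumptions) of a simple performance check: the time to complete a fixed set of operations is measured and compared with a stated response-time objective.

// Measures elapsed time for a batch of operations and checks it against an
// assumed performance objective, e.g. "1,000 transactions in under 500 ms".
public class ResponseTimeCheck {

    // Stand-in for the operation whose performance is being tested.
    static void processTransaction(int id) {
        Math.sqrt(id * 31.0);  // trivial placeholder work
    }

    public static void main(String[] args) {
        final int transactions = 1_000;
        final long budgetMillis = 500;   // assumed performance objective

        long start = System.nanoTime();
        for (int i = 0; i < transactions; i++) {
            processTransaction(i);
        }
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        double tps = elapsedMillis > 0
                ? transactions * 1000.0 / elapsedMillis
                : Double.POSITIVE_INFINITY;
        System.out.printf("%d transactions in %d ms (%.0f tx/s) -> %s%n",
                transactions, elapsedMillis, tps,
                elapsedMillis <= budgetMillis ? "meets objective" : "misses objective");
    }
}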
Performance testing
Load Testing
Since the purpose of a software test is to demonstrate that the program
does not meet its objectives, test cases must be designed to show that
the program does not satisfy its Load/stress objectives.
A heavy load is a peak volume of data, or activity, encountered over a
short span of time.
The system is subjected to peak loads to identify the conditions under which the program fails to handle
Transactions
Processing
Parallel connections
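A minimal sketch (the request handler, connection count, and timeout are illustrative assumptions) of a load test: a burst of parallel requests is submitted to see whether the system still handles the peak volume.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Subjects a stand-in request handler to many parallel requests and counts
// how many complete successfully within a time limit.
public class LoadTestSketch {

    static void handleRequest(int id) throws InterruptedException {
        Thread.sleep(5);  // placeholder for real processing work
    }

    public static void main(String[] args) throws InterruptedException {
        final int parallelConnections = 50;   // assumed peak-load figure
        final int totalRequests = 500;
        AtomicInteger completed = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(parallelConnections);
        for (int i = 0; i < totalRequests; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    handleRequest(id);
                    completed.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        boolean finished = pool.awaitTermination(30, TimeUnit.SECONDS);

        System.out.printf("completed %d/%d requests under %d parallel connections (%s)%n",
                completed.get(), totalRequests, parallelConnections,
                finished ? "finished in time" : "timed out");
    }
}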
Performance testing
Stress Testing
Resources are denied or withheld from the system
Finding out the boundary conditions where the system would
crash
Finding situations where software usage would become
harmful
We test the system’s behavior when resources are lacking
Non Functional Requirement (NFR) Testing
Security testing
Since the purpose of a software test is to demonstrate that the
program does not meet its objectives, test cases must be designed
to show that the program does not satisfy its security objectives.
Security testing is the process of attempting to devise test cases that
challenge the program’s security checks.
For example, we can formulate test cases that get around an operating
system’s memory protection mechanism.
We can try to challenge a database management system’s data security
mechanisms.
Non Functional Requirement (NFR) Testing
One way to devise such test cases is to study known security
problems in similar systems and generate test cases that attempt
to demonstrate similar problems in the software you are testing.
Here we try to generate errors while moving data from one
system to another.
For example, when upgrading a DBMS we want to ensure that our
existing data fit inside the new system.
Non Functional Requirement (NFR) Testing
Reliability Testing
Since the purpose of a software test is to demonstrate that the
program does not meet its objectives, test cases must be designed to
show that the program does not satisfy its reliability objectives.
The goal of all types of testing is the improvement of the program reliability,
but if the program’s objectives contain specific statements about reliability,
specific reliability tests might be devised.
Reliability Testing
For example, a modern online system such as a corporate wide area network (WAN) or an Internet service provider (ISP) generally has a targeted uptime of 99.97 percent over the life of the system.
There is no known way that we could test this objective with a
test period of months or even years.
Tests performed on software before it is released to a large user community:
Alpha testing
Conducted at a developer’s site by a user
Tests conducted in a controlled environment
Beta testing
Conducted at one or more user sites by the end user
It is live use of the product in an environment over which the developer has no control
Tests performed on software before it is released to a large user community:
Regression testing
Re-running previous tests to ensure that software already tested has not regressed (gone back) to an earlier error level after changes are made to the software:
Bug regression (Show that a bug was not fixed.)
Old fix regression (Show that an old bug fix was broken.)
General functional regression (Show that a change caused a
working area to break.)
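A minimal sketch (the function and test cases are illustrative) of regression testing: previously passing test cases are stored and re-run after every change, so that a change that breaks an already working area is caught immediately.

// Re-runs a stored set of previously passing test cases after a change,
// to check that already tested behaviour has not regressed.
public class RegressionSuite {

    // Function under test; imagine it was recently modified.
    static int doubleValue(int param) {
        return param * 2;
    }

    public static void main(String[] args) {
        // Previously passing cases (input -> expected output), kept from earlier runs.
        int[][] cases = { {0, 0}, {2, 4}, {3, 6}, {-5, -10} };

        int failures = 0;
        for (int[] c : cases) {
            int observed = doubleValue(c[0]);
            if (observed != c[1]) {
                failures++;
                System.out.printf("REGRESSION: doubleValue(%d) = %d, expected %d%n",
                        c[0], observed, c[1]);
            }
        }
        System.out.println(failures == 0
                ? "All previous tests still pass: no regression detected."
                : failures + " regression(s) detected after the change.");
    }
}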
Testing
• Testing Strategies
Black Box
White Box
Gray Box
• Types of Testing
Usability
Reliability
Performance
• Levels of testing / Performed by
o Unit (Module) Testing – Programmer
o Integration Testing – Development team
o Function Testing and Non-functional Testing – Independent test group
o Acceptance Testing – Customer
When to stop Testing?
• We want to find out when to stop testing.
• Unlike when to start testing, it is difficult to determine when to stop testing.
• This is because testing is a never-ending process.
• It can never be guaranteed that any software is 100% tested, i.e. error-free.
When to stop Testing?
The two most common criteria are these:
Resource Constraints:
Budget
Time
These are not good criteria
When to stop Testing?
State the test completion criteria in terms of the number of errors to be found.
This includes:
An estimate of the total number of errors in the program.
An estimate of the percentage of errors that can be found through testing.
Estimates of what fraction of errors originate in particular design processes and during what phases of testing they are detected.
When to stop Testing?
Plot the number of errors found per unit time during the test
phase.
Stop when the rate of error detection falls below a specified threshold.
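A minimal sketch (all counts, estimates, and thresholds are made up for illustration) combining the two stopping criteria above: the cumulative error count is compared with the estimated number of findable errors, and the weekly detection rate is compared with a threshold.

// Tracks errors found per week and applies two illustrative stopping criteria:
//  (1) cumulative errors found reaches the estimated findable total, or
//  (2) the weekly detection rate drops below a threshold.
public class StopTestingCriterion {

    public static void main(String[] args) {
        int estimatedTotalErrors = 100;   // assumed estimate for the program
        double findableFraction = 0.95;   // assumed fraction findable through testing
        int target = (int) Math.round(estimatedTotalErrors * findableFraction);
        int rateThreshold = 3;            // assumed minimum errors/week to keep testing

        int[] errorsPerWeek = {30, 25, 18, 10, 7, 4, 2, 1};   // made-up detection data
        int cumulative = 0;

        for (int week = 0; week < errorsPerWeek.length; week++) {
            cumulative += errorsPerWeek[week];
            System.out.printf("week %d: found %d, cumulative %d/%d%n",
                    week + 1, errorsPerWeek[week], cumulative, target);

            if (cumulative >= target) {
                System.out.println("Stop: estimated findable errors have been found.");
                return;
            }
            if (errorsPerWeek[week] < rateThreshold) {
                System.out.println("Stop: detection rate fell below the threshold.");
                return;
            }
        }
        System.out.println("Exit criteria not yet met: continue testing.");
    }
}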
When to stop Testing?
[Chart: number of errors found per week during the test phase, showing the detection rate declining toward the stopping threshold]