
Defect Taxonomies,

Checklist Testing,
Error Guessing and
Exploratory Testing
Ivan Stanchev
QA Engineer
System Integration Team

Telerik QA Academy
Table of Contents
Defect Taxonomies
Popular Standards and Approaches
An Example of a Defect Taxonomy
Checklist Testing
Error Guessing
Improving Your Error Guessing Techniques
Designing Test Cases
Exploratory Testing
Defect Taxonomies
Using Predefined Lists of Defects
Classify Those Cars!
Possible Solution?
Possible Solution? (2)
By color: Black, White, Red, Green, Blue, Another color
By power: Up to 33 kW, 34-80 kW, 81-120 kW, Above 120 kW
By existence: Real, Imaginary
Testing Techniques Chart
Testing
  Static
    Review
    Static analysis
  Dynamic
    Dynamic analysis
    Black-box
      Functional
      Non-functional
    White-box
    Experience-based
    Defect-based
Defect Taxonomy
Defect Taxonomy
Many different contexts
Does not have a single definition
A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects
Defect Taxonomy
A Good Defect Taxonomy for
Testing Purposes
1. Is expandable and ever-evolving
2. Has enough detail for a motivated,
intelligent newcomer to be able to
understand it and learn about the
types of problems to be tested for
3. Can help someone with moderate
experience in the area (like me)
generate test ideas and raise
issues
Defect-based Testing
We are doing defect-based testing
anytime the type of the defect
sought is the basis for the test
The underlying model is some list of
defects seen in the past
If this list is organized as a
hierarchical taxonomy, then the
testing is defect-taxonomy based

The Defect-based
Technique
The Defect-based Technique
A procedure to derive and/or select
test cases targeted at one or more
defect categories
Tests are developed from what is known about the specific defect category
Defect Based Testing
Coverage
Creating a test for every defect
type is a matter of risk
Does the likelihood or impact of the
defect justify the effort?
Creating tests might not be
necessary at all
Sometimes several tests might be
required

The Bug Hypothesis
The underlying bug hypothesis is
that programmers tend to
repeatedly make the same
mistakes
I.e., a team of programmers will
introduce roughly the same types of
bugs in roughly the same
proportion from one project to the
next
Allows us to allocate test design and execution effort based on the likelihood and impact of the bugs (see the sketch below)
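To make that last point concrete, here is a minimal sketch (not from the original slides) of ranking defect categories by likelihood times impact; the category names, the 1-5 scales and the history values are assumptions for illustration only.

```python
# Sketch: allocating test effort by likelihood x impact per defect category.
# Categories, scales (1-5) and values are illustrative assumptions.
bug_history = {
    "Initialization": {"likelihood": 4, "impact": 4},
    "Internal Interfaces": {"likelihood": 3, "impact": 5},
    "Arithmetic": {"likelihood": 2, "impact": 3},
}

def risk_score(entry: dict) -> int:
    """Simple risk rating: likelihood of the defect type times its impact."""
    return entry["likelihood"] * entry["impact"]

# Spend test design and execution effort on the riskiest defect types first.
for category, entry in sorted(bug_history.items(),
                              key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{category}: risk score {risk_score(entry)}")
```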
Practical Implementation
The most practical use of defect taxonomies is brainstorming test ideas in a systematic manner
Ask: how does the functionality fail with respect to each defect category?
Taxonomies need to be refined or adapted to the specific domain and project environment
An Example of a
Defect Taxonomy
Example of a Defect
Taxonomy
Here we can review an example of a defect taxonomy
Provided by Rex Black
See "Advanced Software Testing Vol. 1" (ISBN: 978-1-933952-19-2)
The example is focused on the root causes of bugs
Example Taxonomy Categories
Functional
Specification
Function
Test

Example Taxonomy Categories (2)
System
Internal Interfaces
Hardware Devices
Operating System
Software Architecture
Resource Management

Example Taxonomy Categories (3)
Process
Arithmetic
Initialization
Control of Sequence
Static Logic
Other

Example Taxonomy Categories (4)
Data
Type
Structure
Initial Value
Other
Code
Documentation

Standards

Example Taxonomy Categories (5)
Other
Duplicate

Not a Problem

Bad Unit

Root Cause Needed

Unknown

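As an illustration of how such a taxonomy can be put to work, here is a minimal Python sketch (mine, not part of the deck) that keeps the example categories above as data and uses them to prompt defect-based test ideas; the helper function and the feature name are hypothetical.

```python
# Sketch: the example taxonomy kept as plain data and used to prompt test ideas.
# Category names follow the slides; brainstorm_test_ideas() is hypothetical.
DEFECT_TAXONOMY = {
    "Functional": ["Specification", "Function", "Test"],
    "System": ["Internal Interfaces", "Hardware Devices", "Operating System",
               "Software Architecture", "Resource Management"],
    "Process": ["Arithmetic", "Initialization", "Control of Sequence",
                "Static Logic", "Other"],
    "Data": ["Type", "Structure", "Initial Value", "Other"],
    "Code": [],
    "Documentation": [],
    "Standards": [],
    "Other": ["Duplicate", "Not a Problem", "Bad Unit",
              "Root Cause Needed", "Unknown"],
}

def brainstorm_test_ideas(feature: str) -> list[str]:
    """For each defect (sub)category, ask how `feature` could fail that way."""
    ideas = []
    for category, subcategories in DEFECT_TAXONOMY.items():
        if not subcategories:
            ideas.append(f"How could '{feature}' fail due to a {category} defect?")
        for sub in subcategories:
            ideas.append(f"How could '{feature}' fail due to a {category} / {sub} defect?")
    return ideas

if __name__ == "__main__":
    for idea in brainstorm_test_ideas("Login form"):
        print(idea)
```

Each generated question can then be kept or discarded based on the likelihood and impact of that defect type, as discussed earlier.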
Testing Techniques Chart
Testing
  Static
    Review
    Static analysis
  Dynamic
    Dynamic analysis
    Black-box
      Functional
      Non-functional
    White-box
    Experience-based
    Defect-based
Experience-based
Techniques
Tests are based on people's skills,
knowledge, intuition and
experience with similar
applications or technologies
Knowledge of testers, developers,
users and other stakeholders
Knowledge about the software, its
usage and its environment
Knowledge about likely defects and
their distribution

Checklist Testing
What is Checklist
Testing?
Checklist-based testing involves testers using checklists to guide their testing
The checklist is basically a high-level list (a guide or reminder list) of:
Issues to be tested
Items to be checked
Lists of rules
Particular criteria
Data conditions to be verified
What is Checklist
Testing? (2)
Checklists are usually developed over time on the basis of:
The experience of the tester
Standards
Previous trouble areas
Known usage
The Bug Hypothesis
The underlying bug hypothesis in
checklist testing is that bugs in the
areas of the checklist are likely,
important, or both
So what is the difference from quality risk analysis?
The checklist is predetermined rather than developed by an analysis of the system
Theme Centered
Organization
A checklist is usually organized
around a theme
Quality characteristics
User interface standards
Key operations
Etc.

Checklist Testing in
Methodical Testing
The list should not be static
Generated at the beginning of the project
Periodically refreshed during the project through some sort of analysis, such as quality risk analysis
Example Checklist
A checklist for usability of a system
could be:
Simple and natural dialog
Speak the user's language
Minimize user memory load
Consistency
Feedback

Example Checklist (2)
A checklist for usability of a system
could be:
Clearly marked exits
Shortcuts
Good error messages
Prevent errors
Help and documentation

Real-Life Example

A good example of a real-life checklist:
http://www.eply.com/help/eply-form-testing-checklist.pdf

Usability checklist:
http://userium.com/
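As a small illustration (not part of the original deck), a checklist like the usability one above can be kept as plain data and walked through during a session; the pass/fail recording structure shown here is an assumption.

```python
# Sketch: a usability checklist kept as data, with a simple result record per item.
# The checklist items come from the slides; the recording structure is assumed.
USABILITY_CHECKLIST = [
    "Simple and natural dialog",
    "Speak the user's language",
    "Minimize user memory load",
    "Consistency",
    "Feedback",
    "Clearly marked exits",
    "Shortcuts",
    "Good error messages",
    "Prevent errors",
    "Help and documentation",
]

def new_checklist_run() -> dict[str, dict]:
    """Create an empty record with one entry per checklist item."""
    return {item: {"status": "not checked", "notes": ""} for item in USABILITY_CHECKLIST}

run = new_checklist_run()
run["Good error messages"] = {"status": "fail", "notes": "Stack trace shown to the end user"}
run["Feedback"] = {"status": "pass", "notes": ""}

# Anything still "not checked" at the end of the session is a coverage gap.
gaps = [item for item, result in run.items() if result["status"] == "not checked"]
print(f"{len(gaps)} checklist items not yet covered")
```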
Advantages of
Checklist Testing
Checklists can be reused
Saving time and energy
Help in deciding where to
concentrate efforts
Valuable in time-pressure circumstances
Prevent forgetting important issues
Offer a good structured base for testing
Recommendations
Checklists should be tailored according to the specific situation
Use checklists as an aid, not as a mandatory rule
Standards for checklists should be flexible
Evolving according to new experience
Error Guessing
Using the Tester's Intuition
What is Error Guessing?
It is not actually guessing. Good testers do not guess
They build hypotheses about where a bug might exist, based on:
Previous experience
Early cycles
Similar systems
Understanding of the system under test
Design method
Implementation technology
Knowledge of typical implementation errors
Gray Box Testing
Error guessing is sometimes called gray-box testing
It requires the tester to have some basic programming understanding of:
Typical programming mistakes
How those mistakes become bugs
How those bugs manifest themselves as failures
How we can force failures to happen (see the sketch below)
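A minimal sketch of that last point, under my own assumptions: a hypothetical function with a typical off-by-one mistake, and a test input chosen specifically to force the resulting failure.

```python
# Sketch: a typical programming mistake (off-by-one) and a test input chosen
# to force the failure it causes. average_of_first_n() is hypothetical.
def average_of_first_n(values: list[float], n: int) -> float:
    total = 0.0
    for i in range(n - 1):  # deliberate bug: should be range(n), last value is skipped
        total += values[i]
    return total / n

# A gray-box tester who suspects off-by-one mistakes picks n == len(values):
result = average_of_first_n([2.0, 4.0, 6.0], 3)
assert result == 4.0, f"expected 4.0, got {result}"  # fails here, exposing the bug

# Another "forced failure" guess: what happens when n == 0? (ZeroDivisionError)
```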
Objectives of Error
Guessing
Focus the testing activity on areas
that have not been handled by the
other more formal techniques
E.g., equivalence partitioning and
boundary value analysis
Intended to compensate for the inherent incompleteness of other techniques
Complements equivalence partitioning and boundary value analysis
Experience Required
Testers who are effective at error
guessing use a range of experience
and knowledge:
Knowledge about the tested
application
E.g., used design method or
implementation technology
Knowledge of the results of any
earlier testing phases
Particularly important in Regression
Testing
Experience Required
(2)
Testers who are effective at error
guessing use a range of experience
and knowledge:
Experience of testing similar or
related systems
Knowing where defects have arisen
previously in those systems
Knowledge of typical
implementation errors
E.g., division by zero errors
General testing rules
More Practical Definition
Error guessing involves asking "What if?"
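One way to make such "what if" questions executable is to turn each one into a suspicious test input, as in this sketch (my illustration, using pytest-style parametrization; parse_quantity is a hypothetical function under test).

```python
# Sketch: "what if" questions turned into concrete error-guessing test cases.
import pytest

def parse_quantity(text: str) -> int:
    """Hypothetical function under test: parse a non-negative integer quantity."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

@pytest.mark.parametrize("suspicious_input", [
    "",            # what if the field is empty?
    "   ",         # what if it is only whitespace?
    "-1",          # what if the value is negative?
    "2147483648",  # what if it is larger than a 32-bit int?
    "1e3",         # what if the user types scientific notation?
    "٣",           # what if the digits are non-ASCII?
])
def test_parse_quantity_error_guessing(suspicious_input):
    # We only guess that these inputs are risky; each should either parse
    # cleanly or raise ValueError -- never crash with an unrelated exception.
    try:
        parse_quantity(suspicious_input)
    except ValueError:
        pass
```

Each input either parses cleanly or raises the documented ValueError; anything else (a crash, a hang, a wrong value) is a finding.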
How to Improve Your
Error Guessing
Techniques?
Improve your technical understanding
Go into the code, see how things are implemented
Learn about the technical context in which the software is running, special conditions in your OS, DB or web server
Talk with developers
How to Improve Your Error Guessing Techniques? (2)
Look for errors not only in the code, but also:
Errors in requirements
Errors in design
Errors in coding
Errors in build
Errors in testing
Errors in usage
Effectiveness
Different people with different
experience will show different
results
Different experience with different parts of the software will show different results
As the tester advances in the project and learns more about the system, he/she may become better at error guessing
Why Use It?
Advantages of Error Guessing
Highly successful testers are very effective at quickly evaluating a program and running an attack that exposes defects
Can be used to complement other testing approaches
It is more a skill than a technique, and one that is well worth cultivating
It can make testing much more effective
Exploratory Testing
Learn, Test and Execute
Simultaneously
What is Exploratory
Testing?
What is Exploratory Testing?
"Simultaneous test design, test execution, and learning."
James Bach, 1995
What is Exploratory
Testing? (2)
What is Exploratory Testing?
"Simultaneous test design, test execution, and learning, with an emphasis on learning."
Cem Kaner, 2005
The term "exploratory testing" was coined by Cem Kaner in his book "Testing Computer Software"
What is Exploratory Testing?
What is Exploratory Testing?
"A style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
2007
What is Exploratory
Testing? (3)
Exploratory testing is an approach to software testing that involves simultaneously exercising three activities:
Learning
Test design
Test execution
Control as You Test
In exploratory testing, the tester
controls the design of test cases as
they are performed
Rather than days, weeks, or even
months before
Information the tester gains from
executing a set of tests then
guides the tester in designing and
executing the next set of tests

When Do We Use ET?
When do we use exploratory
testing (ET)?
Anytime the next test we do is
influenced by the result of the last
test we did
We become more exploratory when
we can't tell what tests should be
run, in advance of the test cycle

Exploratory vs. Ad hoc
Testing
Exploratory testing is to be
distinguished from ad hoc testing
The term "ad hoc testing" is often
associated with sloppy, careless,
unfocused, random, and unskilled
testing

Is This a Technique?
Exploratory testing is not a testing
technique
It's a way of thinking about testing
Capable testers have always been
performing exploratory testing
Widely misunderstood and foolishly
disparaged
Any testing technique can be used
in an exploratory way

Scripted Testing vs.
Exploratory
We can say the opposite of exploratory testing is scripted testing
A script (low-level test case) specifies:
The test operations
The expected results
The comparisons the human or machine should make
These comparison points are useful in general, but they are often fallible and incomplete criteria for deciding whether the program behaves properly
Scripted Testing vs.
Exploratory
In contrast with scripting, in exploratory testing you:
Execute the test at the time of design
Design the test as needed
Vary the test as appropriate
The exploratory tester is always responsible for managing the value of her own:
Reusing old tests
Creating and running new tests
Creating test-support artifacts, such as failure mode lists

Scripted vs. Exploratory Testing

Scripted Testing           | Exploratory Testing
Directed from elsewhere    | Directed from within
Determined in advance      | Determined in the moment
Is about confirmation      | Is about investigation
Is about controlling tests | Is about improving test design
Emphasizes predictability  | Emphasizes adaptability
Emphasizes decidability    | Emphasizes learning
Issues with Exploratory
Testing
Depends heavily on the testing skills
and domain knowledge of the tester
Limited test reusability
Limited reproducibility of failures
Cannot be managed
Low Accountability
Most of them are myths!

Session-Based Test
Management

A software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting
Session-Based Test
Management
Elements:
Charter - A charter is a goal or agenda
for a test session
Session - An uninterrupted period of
time spent testing
Session report - The session report
records the test session
Debrief - A debrief is a short
discussion between the manager and
tester (or testers) about the session
report
Session-Based Test
Management
CHARTER
Analyze View menu functionality and report on areas of potential risk

AREAS
OS | Windows 7
Menu | View
Strategy | Function Testing

Start: 08.07.2013 10:00
End: 07.08.2013 11:00
Tester:
Session-Based Test
Management
TEST NOTES
I touched each of the menu items, below, but focused mostly on zooming behavior with various combinations of map elements displayed.
View: Welcome Screen, Navigator, Locator Map, Legend, Map Elements, Highway Levels, Zoom Levels
Risks:
- Incorrect display of a map element
- Incorrect display due to interrupted

BUGS
#BUG 1321 Zooming in makes you put in the CD 2 when you get to a certain level of granularity (the street names level) -- even if CD 2 is already in the drive.

ISSUES
How do I know what details should show up at what zoom levels?
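The session report above could be captured in a lightweight structure such as the sketch below (my illustration, not prescribed by SBTM); the field names mirror the charter, areas, test notes, bugs and issues sections of the example.

```python
# Sketch: a session-based test management report as a small data structure.
# Field names mirror the SBTM elements above; the values echo the example session.
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    charter: str                       # goal or agenda for the session
    tester: str
    start: str
    end: str
    areas: list[str] = field(default_factory=list)
    test_notes: str = ""
    bugs: list[str] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)

report = SessionReport(
    charter="Analyze View menu functionality and report on areas of potential risk",
    tester="",
    start="08.07.2013 10:00",
    end="07.08.2013 11:00",
    areas=["OS | Windows 7", "Menu | View", "Strategy | Function Testing"],
    bugs=["#BUG 1321 Zooming in asks for CD 2 even when it is already in the drive"],
    issues=["How do I know what details should show up at what zoom levels?"],
)
print(f"{len(report.bugs)} bug(s), {len(report.issues)} issue(s) to discuss at the debrief")
```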
Scripted vs. Exploratory
Testing usually falls in between the two sides:
Depends on the context of the project
[Continuum chart, from scripted to exploratory: pure scripted test cases, vague scripts, fragments/sandboxes, charters (scenarios), role-based, freestyle]
References
Testing Computer Software, 2nd Edition
Lessons Learned in Software Testing: A Context-Driven Approach
A Practitioner's Guide to Software Test Design
http://www.satisfice.com/articles.shtml
http://kaner.com
http://searchsoftwarequality.techtarget.com/tip/Finding-software-flaws-with-error-guessing-tours
http://en.wikipedia.org/wiki/Error_guess
Defect Taxonomies, Checklist Testing, Error Guessing, and Exploratory Testing
Questions?
