Module 2: Test Manager + Test Analyst
Presented by: Tạ Thị Thinh
Email: thinhtt0204@[Link]
Skype: ta.thinh0204
Mobile: 0986.775.464
Topics: Process, Metrics, Issues, Risk, Improvement
You must work through all the exercises in the book
You must work through all the sample exam questions
in the book
You must read the ISTQB glossary term definitions
where they occur in the chapters
You must read every chapter of this book and the entire
ISTQB Advanced syllabus
Sequential Lifecycle Models
Planning and designing tests start early in the project
The planning, analysis, and design work might identify
defects in the requirements, making testing a preventive
activity
Failure detection would start much later in the
lifecycle, once system test execution began.
The activities of each test level occur concurrently with
project activities
Issues for testing that the test manager must manage:
Schedule compression during testing at the end of the
project
Development groups, likewise pressured to meet dates,
deliver unstable and often untestable systems to the test
team
The test team is involved late
Iterative and Incremental Lifecycle Models
The test team won't receive a complete set of requirements
early in the project.
When analyzing requirements at the outset of the project, the
best the test team can do is identify and prioritize key quality
risk areas
Specific test designs and implementation will occur
immediately before test execution, potentially reducing the
preventive role of testing
Defect detection starts very early in the project, at the end of
the first sprint, and continues in repetitive, short cycles
throughout the project
Testing processes overlap and are concurrent with each other as
well as with major activities in the software lifecycle
Test issues:
Regression testing all the functions and capabilities after
the first iteration
Failure to plan for bugs and how to handle them
The lack of rigor in and respect for testing
The designs of the system will change
Schedules can be quite unpredictable
Agile Methods
Agile teams use a less formalized process and a much closer
working relationship that allows changes to occur more easily
within the project
Less comprehensive test documentation in favor of
having a more rapid method of communication such as
daily “stand up” meetings
Agile requires the earliest involvement of the Test Analyst,
continuing throughout the project lifecycle:
◦ Working with the developers as they do their initial architecture
and design work
◦ Reviews may not be formalized but are continuous as the
software evolves
Good change management and configuration
management are critical for testing
Foundation test levels:
Unit or component
Integration
System
Acceptance
Integration testing takes two forms:
Component integration testing—integrating a set of
components to form a system, testing the builds throughout
that process.
System integration testing—integrating a set of systems to
form a system of systems, testing the system of systems as
it emerges from the conglomeration of systems.
Additional test levels in Advanced level:
Hardware-software integration testing
Feature interaction testing
Customer product integration testing
You should expect to find most, if not all, of the following
for each level:
Clearly defined test goals and scope
Traceability to the test basis (if available)
Entry and exit criteria, as appropriate both for the level and
for the system lifecycle
Test deliverables, including results reporting that will be
expected
Test techniques that will be applied, as appropriate for the
level, for the team, and for the risks inherent in the system
Measurements and metrics
Test tools, where applicable and as appropriate for the level
And, if applicable, compliance with organizational or other
standards
System of systems: Multiple heterogeneous, distributed systems
that are embedded in networks at multiple levels and in multiple
domains and are interconnected, addressing large-scale
interdisciplinary common problems and purposes.
The integration of commercial off-the-shelf (COTS)
software, along with some amount of custom development,
often taking place over a long period.
Significant technical, lifecycle, and organizational
complexity and heterogeneity.
Different development lifecycles and other processes among
disparate teams, especially—as is frequently the case—when
distributed work, insourcing, and outsourcing are involved.
Serious potential reliability issues due to intersystem
coupling, where one inherently weaker system creates
ripple-effect failures across the entire system of systems.
System integration testing, including interoperability
testing, is essential.
Safety-critical system: A system whose failure
or malfunction may result in death or serious
injury to people, or loss or severe damage to
equipment, or environmental harm.
Safety-critical systems are those systems
upon which lives depend
Defects can cause death, and deaths can cause civil and
criminal penalties, so proof of adequate testing can be
and often is used to reduce liability.
Focus on quality as a very important project priority.
Various regulations and standards often apply to safety-critical
systems
Traceability all the way from regulatory requirements to
test results helps demonstrate compliance.
Measure: The number or category assigned to an
attribute of an entity by making a measurement.
Measurement: The process of assigning a number or
category to an entity to describe an attribute of that
entity.
Measurement scale: A scale that constrains the type of
data analysis that can be performed on it.
Metric: A measurement scale and the method used for
measurement.
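To make the distinction concrete, here is a minimal Python sketch (illustrative only; the numbers and names are assumptions, not from the syllabus): the defect-density metric is a scale plus a method, applying it is a measurement, and the resulting value is a measure.

```python
# Illustrative sketch: the "defect density" metric as a method plus a
# ratio scale (defects per KLOC). All values are hypothetical.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """The metric: a measurement scale (defects/KLOC) and its method."""
    return defects_found / size_kloc

# Measurement: the process of applying the method to an entity.
measure = defect_density(defects_found=42, size_kloc=10.0)

# Measure: the number assigned to the attribute, here 4.2 defects/KLOC.
print(f"Defect density: {measure:.1f} defects/KLOC")
```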
Measurement objectives:
Analysis, to discover what trends and causes may be
discernible via the test results
Reporting, to communicate test findings to interested
project participants and stakeholders
Control, to change the course of the testing or the project
as a whole and to monitor the results of that course
correction
Metric benefits:
Enable testers to report results in a consistent way
Enable coherent tracking of progress over time
Help determine the overall success of a project
Control actions based on measurements:
Revising the quality risk analysis, test priorities, and/or
test plans
Adding resources or otherwise increasing the project or
test effort
Delaying the release date
Relaxing or strengthening the test exit criteria
Changing the scope (functional and/or non-functional) of
the project
Main project metrics:
Scope: size, goals, requirements
Time: task durations, dependencies, critical path
Cost: time spent (people, equipment, materials, license fees)
Quality: defects, stakeholder feelings
Test progress monitoring and control dimensions
Measurement process:
Defining: objectives, goals
Tracing: use tools, subjective analyses
Reporting: visible, understandable
Validity: verify, review
Metrics related to product risks include (see the sketch after this list):
Percentage of risks completely covered by passing tests
Percentage of risks for which some or all tests fail
Percentage of risk not yet completely tested
Percentage of risks covered, sorted by risk category
Percentage of risks identified after the initial quality risk
analysis
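A minimal sketch of how the first three risk metrics might be computed from per-risk test status; the data model and numbers are assumptions for illustration, not part of the syllabus.

```python
# Hypothetical per-risk test status: (risk id, category, passed, failed, not run)
risks = [
    ("R1", "performance", 5, 0, 0),
    ("R2", "security",    2, 1, 0),
    ("R3", "functional",  0, 0, 3),
    ("R4", "functional",  4, 0, 1),
]

total = len(risks)
fully_covered = sum(1 for _, _, p, f, n in risks if p > 0 and f == 0 and n == 0)
with_failures = sum(1 for _, _, p, f, n in risks if f > 0)
not_complete  = sum(1 for _, _, p, f, n in risks if n > 0)

print(f"Risks completely covered by passing tests: {100 * fully_covered / total:.0f}%")
print(f"Risks with some or all tests failing:      {100 * with_failures / total:.0f}%")
print(f"Risks not yet completely tested:           {100 * not_complete / total:.0f}%")
```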
Metrics related to defects include:
Cumulative number reported (found) versus cumulative number
resolved (fixed)
Mean time between failure or failure arrival rate
Breakdown of the number or percentage of defects categorized by
the following:
◦ Particular test items or components
◦ Root causes
◦ Source of defect (e.g., requirement specification, new feature, regression,
etc.)
◦ Test releases
◦ Phase introduced, detected, and removed
◦ Priority/severity
◦ Reports rejected or duplicated
Trends in the lag time from defect reporting to resolution
Number of defect fixes that introduced new defects (sometimes
called daughter bugs)
[Chart: "Trends in the lag time"—average defect reporting-to-resolution time in days (y-axis roughly 0 to 1.6), trended over weeks W1–W17; a computation sketch follows.]
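A minimal sketch for two of the defect metrics above: cumulative reported versus resolved, and the reporting-to-resolution lag that the chart trends. The defect records are hypothetical.

```python
from datetime import date

# Hypothetical defect records: (date reported, date resolved or None if open).
defects = [
    (date(2024, 1, 2), date(2024, 1, 4)),
    (date(2024, 1, 3), date(2024, 1, 10)),
    (date(2024, 1, 9), None),
]

resolved = [(rep, res) for rep, res in defects if res is not None]
lags = [(res - rep).days for rep, res in resolved]

print(f"Cumulative reported: {len(defects)}, cumulative resolved: {len(resolved)}")
print(f"Average reporting-to-resolution lag: {sum(lags) / len(lags):.1f} days")
```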
Metrics related to tests include (see the sketch after this list):
Total number of tests planned, specified (implemented),
run, passed, failed, blocked, and skipped
Regression and confirmation test status, including trends
and totals for regression test and confirmation test
failures
Hours of testing planned per day versus actual hours
achieved
Availability of the test environment (percentage of
planned test hours when the test environment is usable
by the test team)
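A minimal sketch for two of the test-execution metrics above, using hypothetical counts.

```python
from collections import Counter

# Hypothetical per-test execution statuses for one reporting period.
statuses = ["passed", "passed", "failed", "blocked", "skipped", "passed"]
planned = 10  # tests planned for the period

counts = Counter(statuses)
run = counts["passed"] + counts["failed"]
print(f"Planned: {planned}, run: {run}, passed: {counts['passed']}, "
      f"failed: {counts['failed']}, blocked: {counts['blocked']}, "
      f"skipped: {counts['skipped']}")

# Environment availability: usable hours as a share of planned test hours.
planned_hours, usable_hours = 40.0, 34.0
print(f"Test environment availability: {100 * usable_hours / planned_hours:.0f}%")
```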
Metrics related to test coverage include (see the sketch after this list):
Requirements and design elements coverage
Risk coverage
Environment/configuration coverage
Code coverage
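As an example of the first item, a minimal sketch computing requirements coverage from an assumed traceability mapping (requirement to the statuses of its tests); the mapping is hypothetical.

```python
# Hypothetical traceability: requirement id -> statuses of tests tracing to it.
trace = {
    "REQ-1": ["passed", "passed"],
    "REQ-2": ["failed"],
    "REQ-3": [],  # no tests designed or run yet
}

# A requirement counts as covered only if it has tests and all of them pass.
covered = sum(1 for tests in trace.values()
              if tests and all(status == "passed" for status in tests))
print(f"Requirements covered by passing tests: {100 * covered / len(trace):.0f}%")
```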
Metric types:
Project metrics: measure progress toward established exit criteria (e.g., the percentage of test cases executed, passed, and failed)
Product metrics: measure some attribute of the product (e.g., coverage, defect density)
Process metrics: measure the capability of the testing process (e.g., percentage of defects detected by testing)
People metrics: measure the capability of individuals or groups (e.g., implementation of test cases within a given schedule)
Quantitative values:
Finding defects
Reducing risk by running tests
Delivering information on project, process, and product status
Qualitative values:
Improved reputation for quality
Smoother and more predictable releases
Increased confidence
Protection from legal liability
Reducing risk of loss of whole missions or even lives
Costs of quality:
Costs of prevention: training, early testing, build process
Costs of detection: writing test cases, reviewing documents, executing tests
Costs of internal failure: fixing bugs prior to delivery, re-testing
Costs of external failure: supporting customers, fixing bugs after delivery, regression testing