Module 3: Fundamentals of testing; white-box and black-box testing; test coverage analysis and test case design techniques; mutation testing; static and dynamic analysis; software reliability metrics; reliability growth modeling. Each topic is explained with examples aimed at B.Tech students.

1. Fundamentals of Testing

What is it?

Software testing is the process of evaluating a software application to identify gaps, errors, or missing requirements compared with the actual requirements. It's not just about finding bugs; it's about building confidence that the software works as intended.

Key Objectives:

 To find defects before users do.
 To ensure quality and reliability.
 To meet business and user requirements.
 To provide information about the quality level to stakeholders.

Core Principles:

1. Testing shows the presence of defects: Testing can prove that defects are
present, but it cannot prove that there are no defects.
2. Exhaustive testing is impossible: It is impossible to test all combinations of
inputs and preconditions. Instead, we analyze risks and prioritize them to focus
the testing effort, for example with a Risk Priority Number: RPN = Severity ×
Occurrence × Detection (a small sketch follows this list).
3. Early testing: Testing activities should start as early as possible in the software
development life cycle (SDLC).
4. Defect clustering: A small number of modules usually contain most of the
defects discovered. This is the "80/20 rule" of software bugs.

Pareto Principle (80/20 Rule): roughly 80% of the problems are found in 20% of the modules.
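
A minimal sketch of risk-based prioritization using the RPN formula above. The risk areas and the 1-10 scores are made-up illustration values, not data from any real project:

python
# Hypothetical risks, each scored 1-10 for Severity, Occurrence, and Detection difficulty.
risks = [
    {"area": "Payment gateway", "S": 9, "O": 6, "D": 7},
    {"area": "Login",           "S": 5, "O": 3, "D": 2},
    {"area": "Search filters",  "S": 4, "O": 5, "D": 4},
]

# RPN = S * O * D; a higher RPN means the area deserves more testing effort.
for r in risks:
    r["RPN"] = r["S"] * r["O"] * r["D"]

for r in sorted(risks, key=lambda r: r["RPN"], reverse=True):
    print(f'{r["area"]}: RPN = {r["RPN"]}')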
Why Does This Happen?

There are several reasons why bugs tend to cluster in specific areas:

1. Complexity: Some modules are inherently more complex. A simple "Login" module will
have far fewer bugs than a "Recommendation Engine" that uses complex machine
learning algorithms.
2. Frequent Changes: Modules that undergo frequent requirement changes or last-
minute fixes are more prone to errors.
3. New Code: Brand new code, especially if it's a new technology for the team, is often less
stable and has more defects.
4. Dependencies: Modules that interact with many other parts of the system or external
services have more points of potential failure.
5. Developer Skill/Experience: A complex module might be assigned to a less
experienced developer, or a developer might be working under extreme time pressure,
leading to more bugs.

A Detailed Example: E-Commerce Website

Imagine a team is building an online store. The website has several main modules:

1. User Registration & Login
2. Product Catalog & Search
3. Shopping Cart
4. Payment Gateway Integration
5. Order History & Tracking

Let's analyze their complexity:

 Module 1 (Login): Relatively simple. Validates email/password. Well-understood problem.
 Module 2 (Catalog): Medium complexity. Involves searching, filtering, and displaying products.
 Module 3 (Cart): Medium complexity. Handles adding/removing items, calculating quantities and prices.
 Module 4 (Payment): HIGHLY COMPLEX. This module has to:
  o Communicate securely with external banks (e.g., Visa, Mastercard).
  o Handle multiple payment methods (credit card, PayPal, net banking).
  o Manage sensitive customer data (card numbers, CVV).
  o Process transactions correctly and handle failures (e.g., "Insufficient Funds", "Network Timeout").
  o Be incredibly robust: any bug here directly leads to lost sales.
 Module 5 (History): Low complexity. Mostly just displays data from the database.

Testing and Defect Discovery

During testing, the team logs all the bugs they find. The bug distribution might look like
this:

Module              | Number of Defects Found | % of Total Defects
1. Login            | 5                       | 10%
2. Catalog          | 8                       | 16%
3. Shopping Cart    | 7                       | 14%
4. Payment Gateway  | 28                      | 56%
5. Order History    | 2                       | 4%
Total               | 50                      | 100%

Conclusion from the Data:

 The Payment Gateway module (just one out of five modules, or 20% of the modules) is
responsible for 56% of all defects.
 This is a clear example of defect clustering.
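
A tiny sketch of the arithmetic behind the percentages in the table above (the defect counts are the ones shown there):

python
# Defect counts per module, taken from the table above.
defects = {
    "Login": 5,
    "Catalog": 8,
    "Shopping Cart": 7,
    "Payment Gateway": 28,
    "Order History": 2,
}

total = sum(defects.values())  # 50
for module, count in defects.items():
    print(f"{module}: {count} defects ({100 * count / total:.0f}% of total)")
# Payment Gateway alone (20% of the modules) accounts for 56% of the defects.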

Implications and How Testers Use This Principle

1. Focus Testing Efforts: A smart testing manager will see this data and assign their best
testers and more time to test the Payment module rigorously (e.g., security testing,
performance testing, negative testing). They might spend less time retesting the stable
Login module.
2. Risk-Based Testing: This principle is the foundation of risk-based testing. You identify
the most complex, critical, and change-prone areas of the application and prioritize
testing there.
3. Not an Excuse to Ignore Other Modules: It's a guideline, not a law. While focus is on
"clusters," you cannot ignore the rest of the system. A serious bug might still exist in a
supposedly "simple" module. The goal is efficient allocation of limited resources.
4. Feedback for Development: This data is valuable for the development team too. It
shows them which areas might need code refactoring, better design patterns, or more
code reviews in the future to improve quality.

In short, Defect Clustering tells testers: "Don't spread your effort evenly. Be a
detective, find the bug-prone areas, and hunt there first."
2. White-Box vs. Black-Box Testing

Feature     | White-Box Testing                                                | Black-Box Testing
Perspective | Internal view. Tester looks at the code.                         | External view. Tester does NOT look at the code.
Knowledge   | Requires knowledge of internal logic, code, and implementation.  | Requires knowledge of requirements and specifications, not code.
Alias       | Structural Testing, Glass Box Testing                            | Functional Testing, Behavioral Testing
Goal        | Validate internal structure, logic paths, and conditions.        | Validate functionality against requirements.
Example     | Checking if all branches of an if-else statement are executed.   | Entering a username and password to see if login works.

Analogy:

 Black-Box: Testing a car by driving it (using its functions without knowing how
the engine works).
 White-Box: A mechanic opening the hood to test the engine's components
directly.

Code Example:

python
def calculate_discount(amount, is_member):
    if is_member and amount > 100:
        return amount * 0.9  # 10% discount
    else:
        return amount
 Black-Box Test Cases:
  o Input: (amount=150, is_member=True) -> Expected Output: 135
  o Input: (amount=50, is_member=True) -> Expected Output: 50
  o Input: (amount=150, is_member=False) -> Expected Output: 150
 White-Box Test Cases: (designed by looking at the code's lines and branches)
  o Test the True path of the if condition (is_member is True and amount > 100).
  o Test the False path of the if condition. Because the condition is compound, make it false in both ways: once with is_member False and once with amount <= 100.

These cases are written out as runnable tests in the sketch below.
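
A minimal sketch of the black-box cases as executable tests. The test function names are made-up, and any test runner (e.g., pytest) can execute them since they rely only on plain asserts:

python
# Function under test (same as the earlier example).
def calculate_discount(amount, is_member):
    if is_member and amount > 100:
        return amount * 0.9  # 10% discount
    else:
        return amount

def test_member_over_threshold_gets_discount():
    assert calculate_discount(150, True) == 135    # exercises the True branch

def test_member_below_threshold_pays_full_price():
    assert calculate_discount(50, True) == 50      # False branch via amount <= 100

def test_non_member_pays_full_price():
    assert calculate_discount(150, False) == 150   # False branch via is_member False

Note that these three black-box cases also happen to cover both white-box paths of this small function.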

3. Test Coverage Analysis & Test Case Design Techniques

What is Test Coverage?


A metric that measures the amount of testing performed. It answers the question:
"How much of our code have we tested?"

 Common Type: Statement Coverage: The percentage of executable statements in the source code that have been executed by tests. Aim for high coverage, but 100% coverage doesn't mean 0 bugs. (A small worked example follows.)
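
A rough sketch of what the statement-coverage number means, using the discount function again (the statement numbering is informal):

python
def calculate_discount(amount, is_member):
    if is_member and amount > 100:   # statement 1
        return amount * 0.9          # statement 2
    else:
        return amount                # statement 3

# A single test, calculate_discount(150, True), executes statements 1 and 2 only:
# statement coverage = 2/3, roughly 67%.
# Adding calculate_discount(50, True) also executes statement 3, giving 100%,
# yet bugs can still remain: full coverage does not mean zero defects.
print(calculate_discount(150, True), calculate_discount(50, True))

In practice a tool such as coverage.py measures this automatically while your tests run.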

Test Case Design Techniques:


These are systematic ways to create effective test cases.

a) White-Box Technique: Control Flow Testing

 Basis Path Testing: Creating tests to ensure every independent path through a
code module is executed at least once. It uses control flow graphs.
 Example: For the calculate_discount function above, a basis path test would
design tests for both paths through the if-else statement.
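
As a rough sketch of the arithmetic behind basis path testing: a simplified control flow graph for calculate_discount has 4 nodes (the if decision, the two return statements, and an exit node) and 4 edges, so the cyclomatic complexity is V(G) = E - N + 2 = 4 - 4 + 2 = 2. Two independent paths means a basis path test suite needs at least two test cases, matching the two paths listed above.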

b) Black-Box Techniques:

 Equivalence Partitioning (EP): Inputs are divided into groups (partitions) that are expected to behave similarly. You test one representative value from each group.
  o Example: A field that accepts ages 1-120.
    Valid partition: 1-120 -> Test value: 50
    Invalid partitions: <1 -> Test value: -5; >120 -> Test value: 150
  o So instead of 120+ individual tests, you need just 3.

 Boundary Value Analysis (BVA): Testing at the boundaries between partitions, because bugs often lurk at the edges.
  o Example: For the age field (1-120), test: 0, 1, 2, 119, 120, 121.

A combined sketch of EP and BVA appears below.
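
A minimal sketch applying EP and BVA to a hypothetical age validator (the function name and chosen test values are illustrative):

python
def is_valid_age(age):
    # Hypothetical validator: accepts whole ages from 1 to 120 inclusive.
    return 1 <= age <= 120

# Equivalence Partitioning: one representative value per partition.
assert is_valid_age(50) is True      # valid partition 1-120
assert is_valid_age(-5) is False     # invalid partition < 1
assert is_valid_age(150) is False    # invalid partition > 120

# Boundary Value Analysis: values at and around each boundary.
for age, expected in [(0, False), (1, True), (2, True),
                      (119, True), (120, True), (121, False)]:
    assert is_valid_age(age) is expected

print("All EP and BVA checks passed")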

4. Mutation Testing

What is it?

An advanced white-box testing technique used to evaluate the quality of your test
cases. The idea is to deliberately introduce small faults (mutants) into the code and then
check if your test cases can kill (detect) them.

Mutation Testing: The "Practice Zombie Attack" for Your Tests


Imagine you've built an awesome video game level and you have a team of superheroes
(your test cases) who are supposed to defend it from bugs.

You want to know: Are my superheroes actually strong enough to stop a real
attack?

How can you find out without letting real, dangerous bugs in?

You send in practice zombies!

Mutation Testing: The "Spell Check" for Your Tests


Imagine you have a spell-check test for the word "CAT."

Your test is simple: "If I type 'CAT', the spell checker should say it's CORRECT."

That seems like a good test, right? But is it a strong test? Let's find out with mutation
testing!
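
A hand-written sketch of the same idea on the earlier discount function. Real mutation tools (e.g., mutmut for Python or PIT for Java) generate and run mutants automatically; here the mutant and the checks are spelled out manually for illustration:

python
# Original code under test.
def calculate_discount(amount, is_member):
    if is_member and amount > 100:
        return amount * 0.9
    return amount

# Mutant: the operator '>' has been changed to '>='.
def calculate_discount_mutant(amount, is_member):
    if is_member and amount >= 100:
        return amount * 0.9
    return amount

def suite_passes(func):
    """Return True if every test in the suite passes for the given implementation."""
    return (func(150, True) == 135 and
            func(50, True) == 50 and
            func(150, False) == 150)

print(suite_passes(calculate_discount))         # True  - suite passes on the original
print(suite_passes(calculate_discount_mutant))  # True  - mutant SURVIVES: suite is too weak

# Adding a boundary test (exactly 100 should get no discount) kills the mutant.
def stronger_suite_passes(func):
    return suite_passes(func) and func(100, True) == 100

print(stronger_suite_passes(calculate_discount))         # True
print(stronger_suite_passes(calculate_discount_mutant))  # False - mutant is KILLED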

5. Static vs. Dynamic Analysis

Feature | Static Analysis | Dynamic Analysis
When?   | Performed without executing the program. | Performed by executing the program.
What?   | Examining code, requirements, or design documents. | Testing the software with specific inputs.
Finds   | Syntax errors, coding standard violations, potential bugs (e.g., unused variables), security vulnerabilities. | Logical errors, calculation errors, runtime crashes, performance issues.
Tools   | Linters, static analyzers (e.g., SonarQube, ESLint, Pylint). | Unit testing frameworks (JUnit, pytest), debugging tools, performance monitors.
Example | A tool flags if (x = 5) as a potential bug (it's an assignment, not a comparison). | Running the program and entering a letter where a number is expected causes it to crash.
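
A small sketch contrasting the two on a toy Python function (the exact linter message depends on the tool, so treat it as illustrative):

python
def average(values):
    unused_total = 0  # a static analyzer (e.g., Pylint) flags this unused
                      # variable without ever running the code
    return sum(values) / len(values)

# Dynamic analysis only reveals problems when the code actually runs:
print(average([2, 4, 6]))   # 4.0 - works fine for this input
print(average([]))          # ZeroDivisionError at runtime - a dynamic finding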

6. Software Reliability Metrics

These are quantitative measures used to estimate the reliability (how failure-free it is) of
software.
Key Metrics:

1. Mean Time To Failure (MTTF): The average operating time before a failure occurs.
  o Formula: MTTF = (Total Operational Time) / (Number of Failures)
  o Example: A system runs for 1000 hours with 5 failures. MTTF = 1000 / 5 = 200 hours.

2. Mean Time To Repair (MTTR): The average time it takes to fix a failure.
  o Example: The 5 failures took 2, 1, 3, 2, and 2 hours to fix. MTTR = (2+1+3+2+2)/5 = 2 hours.

3. Mean Time Between Failures (MTBF): The average time from one failure to the next, including the repair time: MTBF = MTTF + MTTR.
  o Example: MTBF = 200 + 2 = 202 hours.

4. Failure Rate (λ): The frequency with which the system fails. λ = 1 / MTTF.
  o Example: λ = 1 / 200 = 0.005 failures per hour.
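
A minimal sketch that reproduces the numbers above (the operational hours and repair times are exactly the example values from this section):

python
operational_hours = 1000
repair_hours = [2, 1, 3, 2, 2]        # time taken to fix each of the 5 failures
failures = len(repair_hours)          # 5

mttf = operational_hours / failures   # 200.0 hours
mttr = sum(repair_hours) / failures   # 2.0 hours
mtbf = mttf + mttr                    # 202.0 hours
failure_rate = 1 / mttf               # 0.005 failures per hour

print(f"MTTF={mttf} h, MTTR={mttr} h, MTBF={mtbf} h, lambda={failure_rate} failures/h")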

7. Reliability Growth Modeling

What is it?

A model that predicts the future reliability of a software system based on the failure
data observed during testing. It assumes that as bugs are found and fixed, the
software's reliability improves (grows) over time.

Process:

1. During testing, record the times between failures (e.g., T1, T2, T3, ...).
2. Fit this data to a mathematical model (e.g., the Goel-Okumoto model).
3. The model can then predict:
  o How many bugs are left.
  o How much testing is needed to reach a target reliability.
  o The expected failure rate in the future.

Simple Example:
Imagine you are testing a website:

 Week 1: You find 20 bugs. MTTF is 1 hour.
 Week 2: After fixes, you find 10 new bugs. MTTF is 5 hours. Reliability is growing.
 Week 3: After more fixes, you find 2 new bugs. MTTF is 50 hours. Reliability is still growing.

A reliability growth model would fit a curve to this failure data and extrapolate it to forecast what the MTTF might be in Week 4, helping managers decide whether the software is ready for release.
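
A rough sketch of one common model, the Goel-Okumoto model, whose mean value function is m(t) = a * (1 - e^(-b*t)), where a is the total number of defects expected to be found eventually and b is the detection rate. The parameter values below are assumed for illustration (chosen so the curve roughly tracks the weekly counts above), not the result of an actual fit:

python
import math

a = 35.0   # assumed total defects that will eventually be found
b = 0.9    # assumed defect-detection rate per week

def expected_cumulative_failures(t):
    """Goel-Okumoto mean value function m(t) = a * (1 - e^(-b*t))."""
    return a * (1 - math.exp(-b * t))

for week in range(1, 5):
    found = expected_cumulative_failures(week)
    print(f"Week {week}: ~{found:.1f} defects found, ~{a - found:.1f} predicted remaining")

The flattening of this curve, and the predicted number of remaining defects, is what a manager compares against a release target. In practice the parameters a and b are estimated from the recorded failure times rather than assumed.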
