UNIT-1
Define software testing
Software testing is an important process in the Software Development Lifecycle (SDLC). It
involves verifying and validating that a software application is free of defects, meets the
technical requirements set during its design and development, and satisfies user requirements
efficiently and effectively.
Software Testing Can Be Divided into Two Steps:
Software testing mainly divides into two parts that are used throughout the software
development process:
1. Verification: This step checks whether the software is doing what it is supposed to do. It's
like asking, "Are we building the product the right way?"
2. Validation: This step verifies that the software actually meets the customer's needs and
requirements. It's like asking, "Are we building the right product?"
Principles of software testing
Software testing is an important aspect of software development, ensuring that applications
function correctly and meet user expectations. From test planning to execution and analysis,
understanding these principles helps testers create a more structured and focused approach to
software testing, resulting in a higher-quality product.
Here are the seven principles of software testing:
1. Testing shows the Presence of Defects
The goal of software testing is to make the software fail. Software testing shows the presence
of defects; it does not prove their absence. Testing can demonstrate that defects are present,
but it cannot prove that the software is defect-free. Even multiple rounds of testing can never
ensure that software is 100% bug-free. Testing can reduce the number of defects, but it cannot
remove all of them.
2. Exhaustive Testing is not Possible
Testing the functionality of the software with all possible inputs (valid or invalid) and
pre-conditions is known as exhaustive testing. Exhaustive testing is impossible: software can
never be tested with every possible test case. Only some test cases can be executed, with the
assumption that the software will produce correct output for the remaining cases as well.
Testing every possible case would take prohibitive cost and effort, which is impractical.
3. Early Testing
To find defects in the software, test activities shall start as early as possible. A defect
detected in the early phases of the SDLC is much less expensive to fix. For better software
quality, testing should start at the initial phase, i.e., testing should be performed at the
requirement analysis phase.
4. Defect Clustering
In a project, a small number of modules can contain most of the defects. The Pareto Principle
applied to software testing states that 80% of software defects come from 20% of modules.
5. Pesticide Paradox
Repeating the same test cases again and again will not find new bugs. So it is necessary to
review the test cases regularly and to add or update them to find new bugs.
6. Testing is Context-Dependent
The testing approach depends on the context of the software being developed. Different types
of software need different types of testing. For example, testing an e-commerce site is
different from testing an Android application.
7. Absence of Errors Fallacy
If the software built is 99% bug-free but does not follow the user requirements, it is unusable.
It is not enough for software to be 99% bug-free; it is also mandatory that it fulfill all the
customer requirements.
Conclusion
Software testing is essential for ensuring applications meet user expectations and function
correctly. Understanding key principles like detecting defects early and recognizing the
impossibility of exhaustive testing is vital for delivering reliable software.
Role of tester in software development organizations
In a software development organization, a tester, often part of a Quality Assurance (QA) team,
plays a crucial role in ensuring the quality and reliability of software products. They are
responsible for designing, executing, and reporting on tests to identify defects and ensure the
software meets requirements and user expectations.
Key Responsibilities of a Software Tester:
Test Planning and Design:
Testers create test plans, outlining the scope, schedule, and resources for testing efforts. They
also design test cases, specifying the steps and expected results for evaluating various
software functionalities.
Test Execution and Defect Reporting:
Testers execute test cases, carefully observing the software's behavior and documenting any
deviations or defects found. They log defects with detailed information to help developers
understand and fix the issues.
Environment and Data Setup:
Testers prepare the necessary test environments, including hardware, software, and data, to
ensure accurate and reliable testing.
Collaboration and Communication:
Testers collaborate with developers, product owners, and other stakeholders to ensure that
testing efforts align with project goals and that feedback is effectively communicated.
Regression Testing:
After bug fixes or new feature implementations, testers perform regression testing to ensure
that previously working functionalities are not negatively impacted.
Providing Feedback:
Testers provide valuable feedback to developers on the quality, usability, and performance of
the software, helping to improve the overall product.
Types of testing
1. Manual Testing
Manual Testing is a technique to test the software that is carried out using the functions and
features of an application. This means the tester checks for defects manually, trying each
function one by one to verify that it works as expected.
2. Automation Testing
Automated Testing is a technique where the tester writes scripts independently and uses
suitable software or automation tools to test the software. It automates the manual testing
process, allowing repetitive tasks to be executed without a manual tester.
Manual vs. Automated testing
Definition: In manual testing, the test cases are executed by a human tester. In automation
testing, the test cases are executed by software tools.
Processing Time: Manual testing is time-consuming. Automation testing is faster than manual
testing.
Resources Requirement: Manual testing takes up human resources. Automation testing takes up
automation tools and trained employees.
Exploratory Testing: Exploratory testing is possible in manual testing. It is not possible in
automation testing.
Framework Requirement: Manual testing doesn't use frameworks. Automation testing uses
frameworks like Data-Driven, Keyword-Driven, etc.
Types of Manual Testing
1. White Box Testing
White Box Testing is a software testing technique that involves testing the internal structure
and workings of a software application. The tester has access to the source code and uses this
knowledge to design test cases that can verify the correctness of the software at the code level.
2. Black Box Testing
Black-Box Testing is a type of software testing in which the tester is not concerned with the
internal knowledge or implementation details of the software but rather focuses on validating
the functionality based on the provided specifications or requirements.
3. Gray Box Testing
Gray Box Testing is a software testing technique that is a combination of the Black Box
Testing technique and the White Box Testing technique.
In the Black Box Testing technique, the tester is unaware of the internal structure of the
item being tested and in White Box Testing the internal structure is known to the tester.
The internal structure is partially known in Gray Box Testing.
This includes access to internal data structures and algorithms to design the test cases.
Types of Black Box Testing
1. Functional Testing
Functional Testing is a type of Software Testing in which the system is tested against the
functional requirements and specifications. Functional testing ensures that the requirements or
specifications are properly satisfied by the application. This type of testing is particularly
concerned with the result of processing. It focuses on simulating actual system usage but makes
no assumptions about the system's internal structure.
2. Non-Functional Testing
Non-Functional Testing is a type of software testing performed to verify the non-functional
requirements of the application. It verifies whether the behavior of the system meets those
requirements, covering all the aspects that are not tested in functional testing. It checks the
non-functional attributes of the system and tests the readiness of a system against
non-functional parameters that are never addressed by functional testing. Non-functional
testing is as important as functional testing.
Types of Functional Testing
1. Unit Testing
Unit Testing is a method of testing individual units or components of a software application. It
is typically done by developers and is used to ensure that the individual units of the software
are working as intended. Unit tests are usually automated and are designed to test specific parts
of the code, such as a particular function or method. Unit testing is done at the lowest level of
the software development process where individual units of code are tested in isolation.
Note: Unit testing is basically included in both white box testing and black box testing.
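As a minimal sketch (the add() function below is a hypothetical unit under test, not from the
source), a unit test can be a small program that asserts expected outputs for specific inputs:

#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test: adds two integers. */
static int add(int a, int b) {
    return a + b;
}

int main(void) {
    /* Each assertion checks one expected behavior of the unit in isolation. */
    assert(add(2, 3) == 5);    /* typical values */
    assert(add(-1, 1) == 0);   /* mixed signs */
    assert(add(0, 0) == 0);    /* boundary: zeros */
    printf("All unit tests passed.\n");
    return 0;
}

In practice such tests are usually written with a unit testing framework, but the structure,
one small check per behavior, is the same.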
2. Integration Testing
Integration Testing is a method of testing how different units or components of a software
application interact with each other. It is used to identify and resolve any issues that may arise
when different units of the software are combined. Integration testing is typically done after
unit testing and before functional testing and is used to verify that the different units of the
software work together as intended.
Different Ways of Performing Integration Testing:
Different ways of Integration Testing are discussed below.
Top-down integration testing: It starts with the highest-level modules and progressively
integrates them with lower-level modules, using stubs in place of modules that are not yet ready.
Bottom-up integration testing: It starts with the lowest-level modules and integrates them
with higher-level modules.
Big-Bang integration testing: It combines all the modules and integrates them all at once.
Incremental integration testing: It integrates the modules in small groups, testing each
group as it is added.
1. Black Box testing: It is used for validation. In this, we ignore internal working
mechanisms and focus on "what is the output?"
2. White box testing: It is used for verification. In this, we focus on internal mechanisms
i.e. how the output is achieved.
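As a rough sketch of how these integration approaches look in code (the module names below are
hypothetical): in top-down integration a missing lower-level module is replaced by a stub that
returns canned answers, while in bottom-up integration a driver program exercises a lower-level
module directly:

#include <stdio.h>

/* Stub standing in for a not-yet-integrated lower-level module
   (top-down integration). It returns a fixed, known value. */
static int get_tax_rate_stub(void) {
    return 10;  /* canned response: 10 percent */
}

/* Higher-level module under test, calling the stubbed dependency. */
static int compute_price(int base) {
    return base + (base * get_tax_rate_stub()) / 100;
}

/* Driver exercising the combined units (bottom-up integration would
   use a driver like this against the real lower-level module). */
int main(void) {
    printf("price = %d\n", compute_price(100));  /* expected: 110 */
    return 0;
}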
3. System Testing
System Testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets
the specified requirements and if it is suitable for delivery to the end-users. This type of testing
is performed after the integration testing and before the acceptance testing.
4. End-to-end Testing
End-to-End Testing is the type of software testing used to test the entire software from start
to finish, along with its integration with external interfaces. The main purpose of end-to-end
testing is to identify system dependencies and to ensure data integrity and correct
communication with other systems, interfaces, and databases, exercising a complete,
production-like flow.
5. Acceptance Testing
Acceptance Testing is formal testing according to user needs, requirements, and business
processes conducted to determine whether a system satisfies the acceptance criteria or not and
to enable the users, customers, or other authorized entities to determine whether to accept the
system or not.
Types of Non-functional Testing
Here are the Types of Non-Functional Testing
1. Performance Testing
Performance Testing is a type of software testing that ensures software applications perform
properly under their expected workload. It is a testing technique carried out to determine
system performance in terms of sensitivity, reactivity, and stability under a particular
workload.
Ex: Measuring how quickly the login page loads with 500 concurrent users.
2. Usability Testing
Usability Testing is a type of testing done from an end user's perspective to determine
whether the system is easily usable. It is generally the practice of testing how easy a design
is to use with a group of representative users. Several such tests are performed on a product
before deploying it.
Ex: Verifying that new users can easily navigate and use the app.
3. Compatibility Testing
Compatibility Testing comes under the non-functional testing category. It is performed on an
application to check its compatibility (running capability) on different platforms and
environments. This testing is done only when the application becomes stable. Simply put, a
compatibility test aims to check the developed software application's functionality on various
software and hardware platforms, networks, browsers, etc. Compatibility testing is very
important from a product production and implementation point of view, as it is performed to
avoid future compatibility issues.
Ex: Ensuring the app works on Chrome, Firefox, Safari, and Edge.
Types of Performance Testing
Here are the Types of Performance testing:
1. Load Testing
Load Testing determines the behavior of the application when multiple users use it at the same
time. It is the response of the system measured under varying load conditions.
1. Load testing is carried out under both normal and extreme load conditions.
2. Load testing is a type of performance testing that simulates a real-world load on a system
or application to see how it performs under stress.
3. The goal of load testing is to identify bottlenecks and determine the maximum number of
users or transactions the system can handle.
4. It is an important aspect of software testing as it helps ensure that the system can handle
the expected usage levels and identify any potential issues before the system is deployed
to production.
2. Stress Testing
Stress Testing is defined as a type of software testing that verifies the stability and reliability
of the system. This test particularly determines the system’s robustness and error handling
under the burden of some load conditions. It tests beyond the normal operating point and
analyses how the system works under extreme conditions.
Example:
1. Test cases that require maximum memory or other resources are executed.
2. Test cases that may cause thrashing in a virtual operating system.
3. Test cases that may cause excessive disk space requirements.
3. Scalability Testing
Scalability Testing is a type of non-functional testing in which the performance of a software
application, system, network, or process is tested in terms of its capability to scale the user
request load or other such performance attributes up or down. It can be carried out at the
hardware, software, or database level.
4. Stability Testing
Stability Testing is a type of Software Testing to check the quality and behavior of the software
in different environmental parameters. It is defined as the ability of the product to continue to
function over time without failure.
Stability testing assesses stability problems. This testing is mainly intended to check whether
the application will crash at any point in time or not.
Test case, Test Scenario, Test Scripts
Test Case: Test cases are a series of actions executed during software development to verify a
particular feature or function. A test case consists of test steps, test data, preconditions,
and postconditions designed to verify a specific requirement.
Test Scenario: Usually, a test scenario consists of a set of test cases covering the end-to-end
functionality of a software application. A test scenario provides a high-level overview of what
needs to be tested.
Test Scripts: When it comes to software testing, a test script refers to the set of instructions that
will be followed in order to verify that the system under test performs as expected. The
document outlines each step to be taken and the expected results.
What is a defect in Software Testing?
In software testing, a defect is a deviation between the observed and the expected behavior
of a software application. It is usually called a bug. Defects are faults, errors, or flaws in
software that make it produce incorrect, unexpected, or unintended results, and thus impact its
functionality, performance, security, or usability.
Common Origins of Defects
A defect can come from a very wide variety of sources. Appreciating the origin of the different
defects is key to preventing them in future development cycles:
Incomplete Requirements: Poorly defined or ambiguous requirements are subject to
misinterpretation causing a defect in the software.
Design Errors: Problems in the software architecture or design can cause defects.
Coding Mistakes: Human errors during the coding phase of the software, that is, syntax,
logic, or algorithm errors, can introduce defects.
Unidentified Defects: Inadequate or poor testing means defects are not detected before
the software is delivered.
Defect classes, the defect repository and test design
Defects can be classified in many ways. It is important for an organization to adopt a single
classification scheme and apply it to all projects. No matter which classification scheme is
selected, some defects will fit into more than one class or category. Because of this problem,
developers, testers, and SQA staff should try to be as consistent as possible when recording
defect data. The four classes of defects are as follows,
Requirements and specifications defects,
Design defects,
Code defects,
Testing defects
1. Requirements and Specifications Defects
The beginning of the software life cycle is important for ensuring high quality in the
software being developed. Defects injected in early phases can be very difficult to
remove in later phases. Since many requirements documents are written using a natural
language representation, they may become
Ambiguous,
Contradictory,
Unclear,
Redundant,
Imprecise.
Some specific requirements/specification defects are:
1.1 Functional Description Defects
The overall description of what the product does, and how it should behave
(inputs/outputs), is incorrect, ambiguous, and/or incomplete.
1.2 Feature Defects
A feature is described as a distinguishing characteristic of a software component or system.
Feature defects are due to feature descriptions that are missing, incorrect, incomplete, or
unnecessary.
1.3 Feature Interaction Defects
These are due to an incorrect description of how the features should interact with each
other.
1.4 Interface Description Defects
These are defects that occur in the description of how the target software is to interface
with external software, hardware, and users.
2. Design Defects
Design defects occur when the following are incorrectly designed,
System components,
Interactions between system components,
Interactions between the components and outside software/hardware, or users
It includes defects in the design of algorithms, control, logic, data elements, module
interface descriptions, and external software/hardware/user interface descriptions. The design
defects are,
2.1 Algorithmic and Processing Defects
These occur when the processing steps in the algorithm as described by the pseudo code
are incorrect.
2.2 Control, Logic, and Sequence Defects
Control defects occur when logic flow in the pseudo code is not correct.
2.3 Data Defects
These are associated with incorrect design of data structures.
2.4 Module Interface Description Defects
These defects occur because of incorrect or inconsistent usage of parameter types,
incorrect number of parameters or incorrect ordering of parameters.
2.5 Functional Description Defects
The defects in this category include incorrect, missing, or unclear design elements.
2.6 External Interface Description Defects
These are derived from incorrect design descriptions for interfaces with COTS
components, external software systems, databases, and hardware devices.
3. Coding Defects
Coding defects are derived from errors in implementing the code. Coding defects classes
are similar to design defect classes. Some coding defects come from a failure to understand
programming language constructs, and miscommunication with the designers.
3.1 Algorithmic and Processing Defects
Code related algorithm and processing defects include
Unchecked overflow and underflow conditions,
Comparing inappropriate data types,
Converting one data type to another,
Incorrect ordering of arithmetic operators,
Misuse or omission of parentheses,
Precision loss,
Incorrect use of signs.
3.2 Control, Logic and Sequence Defects
This type of defects include incorrect expression of case statements, incorrect iteration of
loops, and missing paths.
3.3 Typographical Defects
These are mainly syntax errors, for example, incorrect spelling of a variable name that are
usually detected by a compiler or self-reviews, or peer reviews.
3.4 Initialization Defects
This type of defect occurs when initialization statements are omitted or are incorrect. This
may occur because of misunderstandings or lack of communication between programmers and
designers, carelessness, or misunderstanding of the programming environment.
3.5 Data-Flow Defects
Data-Flow defects occur when the code does not follow the necessary data-flow
conditions.
3.6 Data Defects
These are indicated by incorrect implementation of data structures.
3.7 Module Interface Defects
Module interface defects occur because of using incorrect or inconsistent parameter
types, an incorrect number of parameters, or improper ordering of the parameters.
3.8 Code Documentation Defects
When the code documentation does not describe what the program actually does, or is
incomplete or ambiguous, it is called a code documentation defect.
3.9 External Hardware, Software Interfaces Defects
These defects occur because of problems related to
System calls,
Links to databases,
Input/output sequences,
Memory usage,
Resource usage,
Interrupts and exception handling,
Data exchanges with hardware,
Protocols,
Formats,
Interfaces with build files,
Timing sequences.
4. Testing Defects
Test plans, test cases, test harnesses, and test procedures can also contain defects. These
defects are called testing defects. Defects in test plans are best detected using review techniques.
4.1 Test Harness Defects
In order to test software, at the unit and integration levels, auxiliary code must be
developed. This is called the test harness or scaffolding code. The test harness code should be
carefully designed, implemented, and tested since it is a work product and this code can be
reused when new releases of the software are developed.
4.2 Test Case Design and Test Procedure Defects
These consist of incorrect, incomplete, missing, or inappropriate test cases and test
procedures.
DEFECT EXAMPLES
The Coin Problem
Specification for the program calculate_coin_value
This program calculates the total rupee value for a set of coins. The user inputs the number of
25p, 50p, and ₹1 coins. There are six different denominations of coins. The program outputs the
total rupees and paise value of the coins to the user.
Input : number_of_coins is an integer
Output : number_of_rupees is an integer
number_of_paise is an integer
This is a sample informal specification for a simple program that calculates the total value of a
set of coins. The program could be a component of an interactive cash register system. This
simple example shows
Requirements/specification defects,
Functional description defects,
Interface description defects.
The functional description defects arise because the functional description is ambiguous and
incomplete. It does not state that the input and the output should be zero or greater and cannot
accept negative values. Because of these ambiguities and specification incompleteness, a
checking routine may be omitted from the design. A more formally stated set of preconditions
and post conditions is needed with the specification.
A precondition is a condition that must be true in order for a software component to operate
properly.
A postcondition is a condition that must be true when a software component completes its
operation properly.
The functional description is unclear about the maximum number of coins of each
denomination allowed, and the maximum number of rupees and paise allowed as output values.
It is not clear from the specification how the user interacts with the program to provide input,
and how the output is to be reported.
1. Design Description for the Coin Problem
Design Description for Program calculate_coin_values
Program calculate_coin_values
  number_of_coins is integer
  total_coin_value is integer
  number_of_rupees is integer
  number_of_paise is integer
  coin_values is array of six integers representing each coin value in paise,
    initialized to: 25, 25, 100
begin
  initialize total_coin_value to zero
  initialize loop_counter to one
  while loop_counter is less than six
  begin
    output "enter number of coins"
    read (number_of_coins)
    total_coin_value = total_coin_value + number_of_coins * coin_values[loop_counter]
    increment loop_counter
  end
  number_of_rupees = total_coin_value / 100
  number_of_paise = total_coin_value - 100 * number_of_rupees
  output (number_of_rupees, number_of_paise)
end
2. Design Defects in the Coin Problem
Control, logic, and sequencing defects. The defect in this subclass arises from an incorrect
“while” loop condition (should be less than or equal to six)
Algorithmic, and processing defects. These arise from the lack of error checks for incorrect
and/or invalid inputs, lack of a path where users can correct erroneous inputs, lack of a path for
recovery from input errors.
Data defects. This defect relates to an incorrect value for one of the elements of the integer
array, coin_values, which should be 25, 50, 100.
External interface description defects. These are defects arising from the absence of input
messages or prompts that introduce the program to the user and request inputs.
3. Coding Defects in the Coin Problem
Control, logic, and sequence defects. These include the loop variable increment step which is
out of the scope of the loop. Note that incorrect loop condition (i<6) is carried over from
design and should be counted as a design defect.
Algorithmic and processing defects. The division operator may cause problems if negative
values are divided, although this problem could be eliminated with an input check.
Data Flow defects. The variable total_coin_value is not initialized. It is used before it is
defined.
Data Defects. The error in initializing the array coin_values is carried over from design and
should be counted as a design defect.
External Hardware, Software Interface Defects. The call to the external function “scanf” is
incorrect. The address of the variable must be provided.
Code Documentation Defects. The documentation that accompanies this code is incomplete
and ambiguous. It reflects the deficiencies in the external interface description and other
defects that occurred during specification and design.
The poor quality of this small program is due to defects injected during several phases of the
life cycle because of different reasons such as lack of education, a poor process, and oversight
by the designers and developers.
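For illustration, here is one possible corrected C implementation with the defects discussed
above repaired. This is a sketch under the assumption of the three denominations named in the
specification (25p, 50p, ₹1), not the definitive fix:

#include <stdio.h>

#define NUM_DENOMINATIONS 3

int main(void) {
    /* Data defect fixed: 25p, 50p, and Rs.1, expressed in paise. */
    int coin_values[NUM_DENOMINATIONS] = {25, 50, 100};
    int total_coin_value = 0;  /* data-flow defect fixed: initialized before use */
    int number_of_coins;

    /* Loop condition fixed: covers every denomination exactly once. */
    for (int i = 0; i < NUM_DENOMINATIONS; i++) {
        printf("Enter number of %d paise coins: ", coin_values[i]);
        /* Interface defect fixed: scanf receives the variable's address.
           Algorithmic defect fixed: invalid or negative input is rejected. */
        if (scanf("%d", &number_of_coins) != 1 || number_of_coins < 0) {
            printf("Invalid input; please enter a non-negative number.\n");
            return 1;
        }
        total_coin_value += number_of_coins * coin_values[i];
    }

    int number_of_rupees = total_coin_value / 100;
    int number_of_paise = total_coin_value % 100;
    printf("Total: %d rupees and %d paise\n", number_of_rupees, number_of_paise);
    return 0;
}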
Common Defect Types in the Coin Problem
Below is a list of typical defects that may arise in a coin-operated vending machine system,
grouped by defect category:
1. Functional Defects
These occur when the system does not perform as expected based on the requirements.
❌ Invalid Coin Accepted: System accepts unsupported denominations. Example: accepts a ₹3 coin,
which is invalid.
❌ Valid Coin Rejected: System rejects a correct denomination. Example: a ₹5 coin is not
accepted.
❌ Incorrect Amount Calculation: Wrong total is calculated. Example: ₹2 + ₹5 is counted as ₹8
instead of ₹7.
❌ Incorrect Change Dispensed: Gives wrong change after excess input. Example: insert ₹12 for a
₹10 item; the machine returns ₹0 instead of ₹2.
2. Boundary Defects
These occur when the system misbehaves at the edges of valid input.
❌ Below Lower Limit: Accepts a coin even when the total is < 0 or otherwise invalid. Example:
allows ₹-1 or ₹0 coins.
❌ Above Upper Limit: Accepts coins even after the maximum limit is exceeded. Example: inserts
coins totaling ₹101 when the maximum allowed is ₹100.
3. Logical Defects
Errors in the logic or algorithm implemented in the system.
❌ Infinite Loop: System keeps waiting for more coins despite the target being met. Example:
₹10 reached, but the machine still asks for more coins.
❌ Wrong Path in State Transition: System doesn't move to the correct next state. Example: ₹10
reached, but the machine doesn't move to the "Ready to Vend" state.
4. UI/UX Defects
If the machine has a display or app interface, these relate to user experience issues.
❌ Incorrect Display: Shows the wrong amount inserted. Example: ₹5 inserted, but ₹3 is
displayed.
❌ No Error Message: No message when an invalid coin is inserted. Example: a ₹3 coin is
inserted, but no "Invalid Coin" message is shown.
5. Performance Defects
Issues related to speed, memory, or timing.
❌ Slow Coin Processing: Takes too long to process a coin. Example: takes 10 seconds to detect
each coin.
❌ Memory Overflow: Accepting too many coins causes a crash. Example: the machine hangs after
inserting many ₹1 coins.
6. Security Defects
Unintended system behavior that could be exploited.
❌ Coin Replay Attack: The same coin is reused (e.g., via a fake RFID). Example: insert ₹5,
pull it back, and repeat; the amount keeps increasing.
❌ Tampering Bypass: The coin detector is tampered with to fake the amount. Example:
short-circuiting the sensor to simulate a ₹10 coin.
7. Compatibility Defects (If digital coin systems or apps are used)
If the vending machine works with digital wallets, QR scanners, etc.
❌ OS/Device Compatibility: The app doesn't work on some phones. Example: the coin app fails on
Android 12.
❌ QR Code Scan Error: A QR code doesn't scan correctly. Example: scanning crashes the system
or freezes the app.
DEVELOPER/TESTER SUPPORT FOR DEVELOPING A DEFECT REPOSITORY
It is important, if you are a member of a test organization, to illustrate to management and
your colleagues the benefits of developing a defect repository to store defect information. As
software engineers and test specialists, we should follow the example of engineers in other
disciplines who have realized the usefulness of defect data. A requirement for repository
development should be part of testing and/or debugging policy statements.
You begin with the development of a defect classification scheme and then initiate the
collection of defect data from organizational projects. Forms and templates will need to be
designed to collect the data. You will need to be conscientious about recording each defect
after testing, and also about recording the frequency of occurrence for each defect type.
Defect monitoring should continue for each ongoing project. The distribution of defects will
change as you make changes in your processes.
The defect data is useful for test planning, a TMM level 2 maturity goal. It helps you to
select applicable testing techniques, design (and reuse) the test cases you need, and allocate
the amount of resources you will need to devote to detecting and removing these defects. This
in turn will allow you to estimate testing schedules and costs.
The defect data can support debugging activities as well. In fact, a defect repository can
help to support the achievement and continuous implementation of several TMM maturity goals,
including controlling and monitoring of test, software quality evaluation and control, test
measurement, and test process improvement.
TEST CASE DESIGN STRATEGIES
Test case design strategies are methods used to derive effective test cases that ensure thorough
testing of software applications. These strategies help testers identify potential defects by
systematically exploring different input values, system states, and scenarios. Key techniques
include equivalence partitioning, boundary value analysis, decision table testing, and state
transition testing.
Here's a breakdown of common test case design strategies:
1. Black Box Testing Techniques:
Equivalence Partitioning:
Divides input data into classes where all members are expected to be processed the same way
by the system. For example, if an input field accepts numbers between 1 and 100, you might
create partitions for valid numbers (e.g., 1-100), negative numbers, zero, and numbers greater
than 100.
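A minimal sketch of equivalence partitioning for this 1-100 field, in C (the is_valid()
validator is an assumed stand-in for the system under test):

#include <assert.h>

/* Assumed validator for the example field: valid range is 1..100. */
static int is_valid(int x) {
    return x >= 1 && x <= 100;
}

int main(void) {
    /* One representative value is tested per equivalence class. */
    assert(is_valid(50)  == 1);  /* valid class: 1..100 */
    assert(is_valid(-7)  == 0);  /* invalid class: negative numbers */
    assert(is_valid(0)   == 0);  /* invalid class: zero */
    assert(is_valid(150) == 0);  /* invalid class: numbers greater than 100 */
    return 0;
}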
Boundary Value Analysis:
Focuses on testing the boundaries of equivalence partitions, including the minimum and
maximum values, and values just above and below those boundaries. For instance, if the
valid input range is 1-100, you'd test 0, 1, 2, 99, 100, and 101.
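Reusing the same assumed validator, boundary value analysis exercises the values on and just
around each boundary; a sketch of the six values named above:

#include <assert.h>

/* Same assumed validator as above: valid range is 1..100. */
static int is_valid(int x) { return x >= 1 && x <= 100; }

int main(void) {
    assert(is_valid(0)   == 0);  /* just below the lower boundary */
    assert(is_valid(1)   == 1);  /* lower boundary */
    assert(is_valid(2)   == 1);  /* just above the lower boundary */
    assert(is_valid(99)  == 1);  /* just below the upper boundary */
    assert(is_valid(100) == 1);  /* upper boundary */
    assert(is_valid(101) == 0);  /* just above the upper boundary */
    return 0;
}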
Decision Table Testing:
Organizes inputs and their corresponding outputs in a table format to cover all possible
combinations and ensure comprehensive testing. This is particularly useful for systems with
complex logic involving multiple inputs.
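As a rough illustration (the login conditions and names here are hypothetical, not from the
source), a decision table can be coded directly as a table of condition combinations and
expected actions:

#include <stdio.h>

/* One row of the decision table: two conditions -> one expected action. */
struct rule {
    int user_ok;          /* condition 1: username is valid */
    int pass_ok;          /* condition 2: password is valid */
    const char *expected; /* expected system action */
};

int main(void) {
    /* The table covers all four combinations of the two conditions. */
    struct rule table[] = {
        {1, 1, "grant login"},
        {1, 0, "reject"},
        {0, 1, "reject"},
        {0, 0, "reject"},
    };
    for (int i = 0; i < 4; i++)
        printf("user_ok=%d pass_ok=%d -> expected: %s\n",
               table[i].user_ok, table[i].pass_ok, table[i].expected);
    return 0;
}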
State Transition Testing:
Evaluates how the system behaves when transitioning between different states based on user
inputs and events. It's useful for systems with defined states and transitions, like a login
process or a shopping cart.
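A small sketch of the idea, using the vending-machine states mentioned earlier in this unit
(the state names and the next_state() transition function are assumptions for illustration):

#include <assert.h>

enum state { IDLE, COLLECTING, READY_TO_VEND };

/* Assumed transition function for a machine with a Rs.10 (1000 paise) target. */
static enum state next_state(enum state s, int total_paise) {
    if (total_paise >= 1000)
        return READY_TO_VEND;  /* target met: ready to vend */
    if (total_paise > 0)
        return COLLECTING;     /* partial amount inserted */
    return s;                  /* nothing inserted: remain in current state */
}

int main(void) {
    /* Each assertion checks one transition, including the Rs.10 boundary. */
    assert(next_state(IDLE, 0)          == IDLE);
    assert(next_state(IDLE, 500)        == COLLECTING);
    assert(next_state(COLLECTING, 1000) == READY_TO_VEND);
    return 0;
}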
Use Case Testing:
Focuses on testing the system from the end-user's perspective by simulating real-world
scenarios and workflows.
2. White Box Testing Techniques:
Statement Coverage:
Ensures that every line of code in the application is executed at least once during testing.
Decision Coverage:
Verifies that all possible outcomes of decision points (e.g., if-else statements) are tested.
Condition Coverage:
Tests all conditions within a decision point to ensure they are evaluated correctly.
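The difference between these coverage levels can be seen on a function with a single decision
point (a made-up example):

#include <stdio.h>

/* Example function containing one decision point. */
static int classify(int x) {
    int result = 0;
    if (x > 0)       /* decision point */
        result = 1;
    return result;
}

int main(void) {
    /* classify(5) alone executes every statement (statement coverage),
       but only the TRUE outcome of the decision. Adding classify(-5)
       also exercises the FALSE outcome, as decision coverage requires.
       With compound conditions, condition coverage would further require
       each sub-condition to evaluate to both true and false. */
    printf("%d %d\n", classify(5), classify(-5));
    return 0;
}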
3. Experience-Based Techniques:
Error Guessing:
Relies on the tester's experience and intuition to identify potential error-prone areas and
design test cases to target those areas.
Exploratory Testing:
A dynamic approach where testers explore the application, design tests, and execute them
simultaneously, often uncovering unexpected defects
TEST PLANNING
A Test Plan is an important document for carrying out the software testing activities. It is
created with the intent to detect as many defects as possible in the initial stages of the software
development life cycle (SDLC).
A Test Plan has multiple components as listed below −
Test Objectives
The Test Objectives section contains the direction of testing and the standard processes and
methodologies that will be followed. It mainly focuses on detecting the maximum number of
defects and enhancing quality. This section can be divided into various modules and contains
information about testing the functionality and performance of each module.
Scope
The Scope section lists all the items to be tested, as well as the items that will not be
included in the testing phase.
Test Methodology
The Test Methodology section contains information on the testing types, tools, and
methodologies that will be adopted.
Approach
The Approach section contains the high-level test scenarios and flow of events from one
module to the next.
Assumptions
The Assumptions section contains the assumptions taken into consideration while testing the
software; for example, that the test team will get all the required knowledge, support, and
assistance from the development team, and that there will be enough resources to carry out the
testing process.
Risks
The Risks section contains all the possible risks, for example, wrong budget estimation,
production defects, resource attrition, etc., that may come up, along with the mitigation plans
for these risks.
Roles and Responsibilities
The Roles and Responsibilities section contains information about individual roles and
responsibilities to be carried by test team members.
Schedule
The Schedule section contains information about timelines for every testing activity, for
example, test cases creation, test execution etc.
Defect Logging
The Defect Logging section contains all the information about the defect logging and tracking
activities.
Test Environment
The Test Environment section contains information on the environment specifications, for
example, hardware, software, configurations, installation steps etc on which test will be
performed.
Entry and Exit Condition
The Entry and Exit Condition section contains information about the requirements or checklists
that need to be satisfied before test activities begin and before they end.
Automation
The Automation section contains information about which features of the software are part of
the automation effort.
Effort Estimation
The Effort Estimation section contains information about the effort estimation of the testing
team.
Deliverables
The Deliverables section contains information about the list of test deliverables, namely the
test plan, test strategy, test scenarios, test cases, test data, defects, logs, reports, etc.
Template
The Template section contains information about the templates that will be used for creating
the test deliverables, so that uniformity and standards are maintained across all deliverables.
Path testing
Path Testing is a method used to design test cases. In the path testing method, the
control flow graph of a program is drawn to find a set of linearly independent paths of
execution. Cyclomatic Complexity is used to determine the number of linearly independent paths,
and then test cases are generated for each path.
Path testing gives complete branch coverage, but achieves it without covering all possible
paths of the control flow graph. McCabe's Cyclomatic Complexity, V(G) = E - N + 2 (edges minus
nodes plus two, for a single connected graph), is used in path testing. It is a structural
testing method that uses the source code of a program to find every possible executable path.
Example:
Step 1: Input a, b, c > 0
Step 2: If (a >= (b + c) or b >= (a + c) or c >= (a + b))
Step 3:   Output = "Not a triangle"
Step 4: Else if (a == b and a == c) then
Step 5:   Output = "Equilateral triangle"
Step 6: Else if (a == b or a == c or b == c)
Step 7:   Output = "Isosceles triangle"
Step 8: Else
Step 9:   Output = "Scalene triangle"
Step 10: Return output
Test cases: Input (a, b, c) → Expected Output (Reason)
TC1: (3, 3, 3) → Equilateral triangle (all three sides are equal)
TC2: (5, 5, 8) → Isosceles triangle (two sides are equal (5, 5), the third is different)
TC3: (7, 5, 6) → Scalene triangle (all three sides are different, valid triangle)
TC4: (1, 2, 3) → Not a triangle (1 + 2 = 3 violates the triangle inequality)
TC5: (10, 4, 5) → Not a triangle (10 ≥ 4 + 5, invalid)
TC6: (0, 5, 5) → Not a triangle (zero-length side is invalid)
TC7: (-1, 4, 5) → Not a triangle (negative side is invalid)
TC8: (6, 6, 10) → Isosceles triangle (two equal sides, forms a valid triangle)
TC9: (10, 15, 25) → Not a triangle (10 + 15 = 25 violates the triangle property)
TC10: (8, 10, 15) → Scalene triangle (all sides different and valid)
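For reference, a direct C translation of the pseudocode above (a sketch; the positivity check
from Step 1 is made explicit), which can be run against the test cases in the table:

#include <stdio.h>

/* Triangle classification following the pseudocode steps. */
static const char *classify_triangle(int a, int b, int c) {
    if (a <= 0 || b <= 0 || c <= 0)              /* Step 1: sides must be > 0 */
        return "Not a triangle";
    if (a >= b + c || b >= a + c || c >= a + b)  /* Step 2 */
        return "Not a triangle";
    if (a == b && a == c)                        /* Step 4 */
        return "Equilateral triangle";
    if (a == b || a == c || b == c)              /* Step 6 */
        return "Isosceles triangle";
    return "Scalene triangle";                   /* Step 8 */
}

int main(void) {
    /* A few of the table's test cases, one per outcome. */
    printf("TC1: %s\n", classify_triangle(3, 3, 3));   /* Equilateral triangle */
    printf("TC2: %s\n", classify_triangle(5, 5, 8));   /* Isosceles triangle */
    printf("TC3: %s\n", classify_triangle(7, 5, 6));   /* Scalene triangle */
    printf("TC4: %s\n", classify_triangle(1, 2, 3));   /* Not a triangle */
    return 0;
}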