Comparison: Conventional vs Object-Oriented Software Testing
Conventional Software Testing | Object-Oriented Software Testing
Tests are based on functions or procedures. | Tests are based on objects, classes, and methods.
Top-down or bottom-up testing approaches are used. | Uses class testing, interaction testing, and system testing.
Focus is on inputs and outputs of functions. | Focus is on object behavior and interactions.
White-box and black-box testing are common. | Includes state-based, mutation, and interaction testing.
Less focus on reusability and inheritance. | More focus on inheritance, polymorphism, and encapsulation.
Easier to test independent functions. | Difficult to test due to tight coupling between objects.
Test cases are written for each procedure. | Test cases are written for each class and object relationship.
Debugging is usually straightforward. | Debugging is complex due to object interactions and states.
Comparison: Black Box Testing vs White Box Testing
Black Box Testing | White Box Testing
Also called behavioral testing. | Also called structural or glass box testing.
Tester doesn’t need to know the code or logic. | Tester needs full knowledge of the code and logic.
Focuses on input and output of the software. | Focuses on how the code works internally.
Tests the functionality of the application. | Tests the flow, logic, and internal structure.
Performed by testers or QA team. | Usually performed by developers.
Useful for high-level testing like system testing. | Useful for low-level testing like unit testing.
Easier to create test cases from requirements. | Test cases are based on code structure and paths.
Can’t find hidden errors in code logic. | Can find hidden logical errors, loops, and conditions.
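To make the contrast concrete, here is a minimal sketch in Python (assuming pytest is available); the grade() function, its pass mark, and both test functions are hypothetical. The black-box test is derived only from the stated requirement, while the white-box test is chosen so that every branch of the code runs at least once.

```python
import pytest

# Hypothetical function under test: scores of 60 and above pass.
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 60:
        return "Pass"
    return "Fail"

# Black-box style: test cases come only from the requirement,
# without looking at the code (boundary values around the pass mark).
def test_black_box_boundaries():
    assert grade(60) == "Pass"
    assert grade(59) == "Fail"
    assert grade(100) == "Pass"

# White-box style: test cases are chosen so that every branch in the
# code (invalid input, pass path, fail path) is executed at least once.
def test_white_box_branch_coverage():
    with pytest.raises(ValueError):   # invalid-input branch
        grade(-5)
    assert grade(75) == "Pass"        # pass branch
    assert grade(10) == "Fail"        # fail branch
```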
Comparison: Manual Testing vs Automation Testing
Manual Testing | Automation Testing
Testing is done manually by a human tester. | Testing is done using automation tools or scripts.
Time-consuming and slow process. | Fast and efficient once scripts are ready.
Best for short-term projects or exploratory testing. | Best for long-term projects and repeated testing.
No programming skills required. | Requires knowledge of programming or scripting.
More chance of human error. | Very accurate and consistent results.
Low initial cost, but high long-term effort. | High initial cost, but saves time and cost later.
Good for UI/UX and usability testing. | Good for performance, regression, and load testing.
Tools are not used, or their use is minimal. | Uses tools like Selenium, QTP, TestNG, JUnit, etc.
Verification and Validation (V&V) Model
Verification and Validation are two important activities in software testing that ensure the quality
and correctness of a software product.
Verification is the process of checking whether the software is being built correctly according to the
specifications and design documents. It answers the question:
"Are we building the product right?"
Verification is mainly a static process. It involves activities like reviews, inspections, and
walkthroughs of documents and code without actually running the software. It is done during the
development phase to catch errors early.
Validation is the process of checking whether the software meets the actual needs and requirements
of the user. It answers the question:
"Are we building the right product?"
Validation is a dynamic process that requires executing the software and testing it in real conditions
to ensure it works correctly. It is usually done after the software is developed to confirm that it
satisfies the user’s expectations.
Key Differences Between Verification and Validation
Aspect | Verification | Validation
Meaning | Checking if the software is built correctly | Checking if the right software is built
Focus | Are we building the product right? | Are we building the right product?
When | During the development process | After the software is developed
How | Reviews, inspections, walkthroughs (no code execution) | Testing by running the software
Goal | Ensure product follows specifications and design | Ensure product meets user needs
Example | Checking if design matches requirements | Testing if login feature works for users
Activity Type | Static (no execution) | Dynamic (execution involved)
Integration Testing and System Testing
1. Integration Testing
Definition: Integration testing checks how different modules or units of the software work
together when combined.
Purpose: To find errors in the interaction between modules.
When: After unit testing and before system testing.
Focus: Interfaces between modules, data flow, and communication.
Types of Integration Testing:
o Big Bang Integration: All modules are combined and tested at once.
o Top-Down Integration: Testing starts from the top module and moves downwards.
o Bottom-Up Integration: Testing starts from the bottom modules and moves
upwards.
o Incremental Integration: Modules are integrated and tested step-by-step.
Integration Testing: Top-Down and Bottom-Up Integration
1. Top-Down Integration Testing
What is it?
Testing starts from the top-level module (main module) and gradually moves down to lower-
level modules.
How it works:
o The highest-level module is tested first.
o If lower-level modules are not ready yet, stubs (dummy modules) are used to
simulate their behavior.
o As lower modules become ready, stubs are replaced with real modules and tested.
o Testing proceeds downwards step-by-step until all modules are integrated and
tested.
Advantages:
o Early testing of main control and flow of the software.
o Design defects are found early.
o Provides early working software with limited functionality.
Disadvantages:
o Stubs need to be written, which requires extra effort.
o Lower modules are tested late, so bugs in lower levels may be found late.
Example:
In an online shopping system, start testing the main menu module first, then move to
shopping cart module, and later to payment module, using stubs for modules not yet ready.
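A minimal sketch of this idea in Python, using hypothetical names (MainMenu, PaymentStub, checkout): the stub returns a canned "approved" result so the top-level checkout flow can be tested before the real payment module exists.

```python
# Stub: a dummy stand-in for the payment module, which is not ready yet.
class PaymentStub:
    def pay(self, amount):
        # Canned behaviour only; no real payment gateway is contacted.
        return {"status": "approved", "amount": amount}

# Top-level module under test; in production it would get the real payment module.
class MainMenu:
    def __init__(self, payment_module):
        self.payment = payment_module

    def checkout(self, cart_total):
        result = self.payment.pay(cart_total)
        return "Order placed" if result["status"] == "approved" else "Payment failed"

# Top-down integration test: exercise the main control flow first,
# plugging the stub in where the lower-level module will later go.
def test_checkout_flow_with_payment_stub():
    menu = MainMenu(PaymentStub())
    assert menu.checkout(499) == "Order placed"
```

When the real payment module is ready, it replaces PaymentStub and the same checkout flow is tested again.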
2. Bottom-Up Integration Testing
What is it?
Testing starts from the lowest-level modules and moves upwards towards the top-level
module.
How it works:
o The lowest-level modules (which don’t depend on others) are tested first.
o As higher-level modules become ready, they are integrated and tested together with
the already tested lower modules.
o Drivers (special programs) are used to simulate higher-level modules that are not
ready yet.
o Testing continues upward until the top-level module is tested.
Advantages:
o Lower-level modules are tested thoroughly early.
o No need to write stubs.
o Defects in lower-level modules are found early.
Disadvantages:
o Drivers need to be developed, which takes time.
o The main control module is tested late, so major design flaws might be found late.
o Early working software may not be available.
Example:
In the online shopping system, test the payment processing module first, then integrate it
with the shopping cart module, and finally the main menu module using drivers to simulate
higher modules.
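A minimal sketch of a driver in Python, again with hypothetical names: the driver temporarily plays the role of the higher-level shopping cart and feeds test inputs directly into the already-built payment module.

```python
# Low-level module under test (already implemented).
class PaymentModule:
    def pay(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"status": "approved", "amount": amount}

# Driver: a small throwaway program that simulates the higher-level
# shopping cart module (not ready yet) by calling the payment module directly.
def payment_driver():
    payment = PaymentModule()

    # A valid payment should be approved.
    assert payment.pay(250)["status"] == "approved"

    # An invalid amount should be rejected.
    try:
        payment.pay(0)
        assert False, "expected a ValueError for a zero amount"
    except ValueError:
        pass

    print("Payment module passed all driver checks")

if __name__ == "__main__":
    payment_driver()
```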
Summary Table
Feature | Top-Down Integration | Bottom-Up Integration
Testing starts from | Top-level modules | Bottom-level modules
Test aids used | Stubs (dummy lower-level modules) | Drivers (dummy higher-level callers)
Modules tested early | Main control and flow | Detailed low-level modules
Modules tested late | Lower-level modules | Top-level module
Early working software | Yes | No
Effort needed for test aids | Creating stubs | Creating drivers
Example: If you have a login module and a payment module, integration testing checks if
after login, the payment module works correctly.
2. System Testing
Definition: System testing tests the complete and fully integrated software system to verify it
meets all requirements.
Purpose: To check the behavior of the entire system as a whole.
When: After integration testing is complete.
Focus: Overall functionality, performance, security, and usability of the software.
Types of System Testing:
o Functional Testing: Checks if the software functions correctly.
o Performance Testing: Checks speed and stability.
o Security Testing: Ensures software is secure from attacks.
o Usability Testing: Checks if the software is user-friendly.
1. Recovery Testing
Definition:
Recovery Testing checks how well a system can recover after it faces failures like crashes or
power outages.
Purpose:
To ensure the system can return to normal working condition quickly and safely after an
unexpected failure.
How it’s done:
Simulate failures such as system crash, network failure, or power outage, then check if the
system restores data and resumes operation without errors.
Example:
In a banking app, if the app crashes during a transaction, recovery testing ensures that after
restarting, the transaction either completes successfully or is safely rolled back without data
loss.
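A simplified sketch of this idea in Python, using a hypothetical in-memory account store instead of a real banking database: the test simulates a crash in the middle of a transfer and then checks that the balances were rolled back.

```python
# Hypothetical in-memory account store standing in for the banking database.
accounts = {"alice": 100, "bob": 50}

def transfer(src, dst, amount, crash=False):
    """Move money between accounts; crash=True simulates a failure mid-transaction."""
    snapshot = dict(accounts)       # state the system can recover to
    try:
        accounts[src] -= amount
        if crash:
            raise RuntimeError("simulated crash during transaction")
        accounts[dst] += amount
    except Exception:
        accounts.clear()
        accounts.update(snapshot)   # recovery: roll back to the saved state
        raise

# Recovery test: after a simulated crash, no money may be lost or duplicated.
def test_transfer_rolls_back_after_crash():
    try:
        transfer("alice", "bob", 30, crash=True)
    except RuntimeError:
        pass
    assert accounts == {"alice": 100, "bob": 50}   # balances unchanged
```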
2. Security Testing
Definition:
Security Testing checks whether the system protects data and prevents unauthorized access
or attacks.
Purpose:
To make sure user data is safe, private, and secure from hackers or unauthorized users.
How it’s done:
Test for vulnerabilities like weak passwords, unauthorized access, data encryption, and
protection against hacking attacks (e.g., SQL injection).
Example:
Testing if users can only access their own bank accounts and cannot view or modify other
users’ information.
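A minimal sketch in Python using the standard sqlite3 module; the table, user data, and login() function are hypothetical. The test checks that a classic SQL injection string cannot be used to log in.

```python
import sqlite3

# Hypothetical login check backed by an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret123')")

def login(username, password):
    # Parameterised query ("?") keeps user input from being executed as SQL.
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None

# Security test: a classic SQL injection payload must not bypass authentication.
def test_sql_injection_does_not_bypass_login():
    assert login("alice", "secret123") is True       # normal login works
    assert login("alice", "' OR '1'='1") is False    # injection attempt fails
```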
3. Performance Testing
Definition:
Performance Testing measures how fast and efficiently the software operates under normal
conditions.
Purpose:
To check if the system responds quickly and uses resources like CPU and memory
efficiently.
How it’s done:
Measure response time, processing speed, and resource usage while performing typical user
operations.
Example:
Testing how fast a website loads when 100 users browse it at the same time.
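A rough sketch of such a check in Python, assuming the third-party requests library is installed; the URL is a placeholder. One hundred threads load the page at the same time, and the average and slowest response times are reported.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed to be installed

URL = "https://example.com/"   # placeholder for the page under test

def timed_request(_):
    """Load the page once and return how long it took, in seconds."""
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Simulate 100 users browsing the page at the same time.
    with ThreadPoolExecutor(max_workers=100) as pool:
        durations = list(pool.map(timed_request, range(100)))

    print(f"average response time: {sum(durations) / len(durations):.2f}s")
    print(f"slowest response time: {max(durations):.2f}s")
```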
4. Stress Testing
Definition:
Stress Testing evaluates how the system performs under extreme or heavy loads beyond
normal use.
Purpose:
To find the system’s breaking point and see how it behaves under heavy stress.
How it’s done:
Increase the number of users, transactions, or data until the system slows down or crashes.
Example:
Simulating thousands of users trying to book tickets at once on an online ticketing website
during a big sale.
Example: Testing an entire online shopping website including search, add to cart, checkout,
payment, and logout as one complete system.
3. Differences
Integration Testing | System Testing
Tests interaction between modules. | Tests the whole system’s functionality.
Focus on interfaces and data flow. | Focus on overall system behavior.
Performed after unit testing. | Performed after integration testing.
Can be done step-by-step or all at once. | Tests software as a complete package.
Alpha Testing and Beta Testing
Alpha Testing
Definition:
Alpha Testing is the first phase of testing done by the developers or testing team inside the
organization before releasing the software to real users.
Purpose:
To find bugs and fix them before the software is shown to actual users.
Where:
It is done in the developer’s environment or testing lab.
Who performs it?
Performed by the developers or internal testers.
When:
After the software is developed but before releasing it to customers.
How:
The testers use the software in a controlled environment and try to find as many bugs as
possible.
Example:
A company developing a mobile app tests it internally for bugs before giving it to users.
Beta Testing
Definition:
Beta Testing is the second phase of testing done by actual users or customers outside the
organization.
Purpose:
To get real user feedback and find bugs that may have been missed during alpha testing.
Where:
Done in the real user environment (customer’s location).
Who performs it?
Performed by real users or customers.
When:
After successful alpha testing and just before the final release.
How:
The software is given to users to try in their own environment, and they report problems or
suggestions.
Example:
A software company releases a trial version of a new app to some customers for testing and
feedback.
Difference Between Alpha and Beta Testing
Aspect | Alpha Testing | Beta Testing
Where | Inside the organization (lab) | Outside the organization (real environment)
Who performs it? | Developers or internal testers | Actual users or customers
Purpose | Find and fix bugs before release | Get user feedback and find real-world bugs
Environment | Controlled testing environment | Real user environment
When done? | Before beta testing | After alpha testing, before final release
Duration | Usually short | Usually longer
Focus | Technical bugs and issues | Usability, reliability, and user experience
Software Testing Life Cycle (STLC)
Software Testing Life Cycle (STLC) is a step-by-step process followed to ensure the quality of the
software through effective testing. It defines the testing activities that need to be done in each phase
of software testing.
Phases of STLC
1. Requirement Analysis
In this phase, testers study and understand the requirements of the software from the
customer or business.
They identify what needs to be tested and prepare a list of testable requirements.
Any unclear or missing requirements are clarified with the stakeholders.
2. Test Planning
Test planning is about deciding how to test the software.
The test manager creates a test plan document which includes:
o What to test
o How to test
o Resources needed (testers, tools)
o Testing schedule and deadlines
o Types of testing to perform (like functional, performance)
o Risk assessment and mitigation plan
3. Test Case Development
Testers write test cases based on the requirements and test plan.
A test case is a detailed step-by-step instruction to check a specific feature or functionality.
Test data needed to execute test cases is also prepared in this phase.
4. Test Environment Setup
A test environment is the setup where testing will be executed.
This includes hardware, software, network configurations, and tools.
It should simulate the real environment as closely as possible.
Sometimes this phase runs in parallel with test case development.
5. Test Execution
Testers execute the test cases on the software.
They compare the actual results with expected results to identify defects or bugs.
All defects found are logged for fixing.
Retesting and regression testing are done after fixes.
6. Test Closure
After testing is complete, a test closure report is prepared.
It includes:
o Summary of testing done
o Number of defects found and fixed
o Test coverage and quality status
o Lessons learned and recommendations
The test team meets to discuss what went well and what could be improved for future
projects.
Summary Table of STLC Phases
Phase | What Happens
Requirement Analysis | Understand what to test
Test Planning | Prepare test strategy and schedule
Test Case Development | Write test cases and prepare test data
Test Environment Setup | Prepare testing hardware and software setup
Test Execution | Run tests and report defects
Test Closure | Analyze results and document testing summary
Bottom-Up Testing
What is Bottom-Up Testing?
Bottom-Up Testing is an integration testing approach where testing begins with the smallest parts of
the software and moves up towards the bigger parts until the whole system is tested.
How does it work?
You start by testing the lowest-level modules first, which are the basic building blocks of the
software.
Since the higher modules (which call these low modules) may not be ready yet, you use
drivers—small helper programs that simulate these higher modules.
After testing the bottom modules, you combine them with the next higher-level modules and
test again.
Keep doing this step by step until you reach and test the top-level module (main control).
Why is this useful?
It makes sure the basic parts of the software work properly before testing complex parts.
You don’t need stubs (dummy lower modules) because testing starts from the bottom.
It helps to find and fix problems early in the small modules.
Advantages
Thorough testing of basic modules.
No need to build stubs.
Easier to write drivers than stubs.
Great when low-level modules are very important.
Disadvantages
The main or top-level module is tested last, so high-level problems are found late.
You need to create drivers, which requires extra effort.
You don’t get a working version of the whole software early.
Some errors in overall system flow may appear late.
Example
If you are testing an online shopping system:
First, test the payment module (a basic part) using drivers to simulate other parts.
Next, integrate it with the shopping cart module and test together.
Finally, combine with the main user interface and test the full system.
Summary Table
Feature | Description
Testing order | From basic (bottom) modules to top modules
Test helpers used | Drivers (simulate top modules)
Best for | Critical or complex low-level modules
Advantage | Tests basic modules first, no stubs needed
Disadvantage | Top module tested late, drivers needed
Early working software | Not available
Types of Software Testing
1. Unit Testing
o Tests individual parts (modules) of the software.
o Example: Testing the login function separately.
2. Integration Testing
o Tests how different modules work together.
o Example: Testing login module works correctly with the payment module.
3. System Testing
o Tests the complete software as a whole.
o Example: Testing the entire online shopping website.
4. Acceptance Testing
o Tests if the software meets user needs and requirements.
o Example: Customer tries the software to see if it fits their needs.
5. Regression Testing
o Tests that new changes don’t break existing features.
o Example: After fixing a bug, check other features still work fine.
6. Performance Testing
o Tests how fast and responsive the software is under load.
o Example: Checking if a website loads quickly with many users.
7. Security Testing
o Tests if the software is safe from threats and unauthorized access.
o Example: Making sure users cannot hack or access others' data.
Objectives of Software Testing
Find defects (bugs): Identify errors before software release.
Ensure quality: Make sure the software works as expected.
Validate requirements: Confirm software meets user needs.
Improve reliability: Make software stable and dependable.
Prevent failures: Avoid crashes or security issues in real use.
Save cost and time: Fixing bugs early is cheaper than after release.
Strategies in Web Application Testing
Testing a web application requires special strategies because web apps run on different browsers,
devices, and networks. Here are some key strategies:
1. Cross-Browser Testing
o Test the app on multiple browsers (Chrome, Firefox, Safari, Edge) to ensure
consistent behavior.
2. Responsive Testing
o Check if the app works well on different screen sizes (mobile, tablet, desktop).
3. Functionality Testing
o Verify all features (like forms, buttons, links) work correctly.
4. Performance Testing
o Test speed and loading times under different network conditions and user loads.
5. Security Testing
o Check for vulnerabilities such as data leaks, SQL injection, and unauthorized access.
6. Usability Testing
o Ensure the app is user-friendly and easy to navigate.
7. Compatibility Testing
o Test the app on different devices, operating systems, and browsers.
Use of Automation Tools in Web Application Testing
Automation tools help testers perform repetitive and complex tests faster and more accurately.
Benefits include:
Speed: Automated tests run faster than manual tests.
Repeatability: Tests can be reused for multiple versions.
Accuracy: Reduces human errors in testing.
Coverage: Can test many scenarios and data combinations.
Continuous Integration: Easily integrates with development pipelines for frequent testing.
Popular Automation Tools for Web App Testing
Selenium: Open-source tool to automate web browsers.
Cypress: Fast and reliable testing for modern web apps.
TestComplete: User-friendly tool supporting many scripting languages.
Jest: Mainly for JavaScript unit and integration testing.
Katalon Studio: Complete automation tool with record and playback features.
Example of Automation Use
Automating login tests across different browsers to ensure the login works everywhere
without manual repetition.
Running performance tests automatically every night to catch slowdowns early.
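As a hedged illustration of the first point above, here is a minimal sketch using the Selenium Python bindings (Selenium 4), assuming the Chrome and Firefox drivers are installed; the URL and element IDs are placeholders. The same scripted login check runs unchanged on both browsers, which is exactly the repetition that would be tedious to do by hand.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_URL = "https://example.com/login"   # placeholder URL

def run_login_test(driver):
    """Fill in the login form and check that the account page is reached."""
    try:
        driver.get(LOGIN_URL)
        driver.find_element(By.ID, "username").send_keys("testuser")   # placeholder IDs
        driver.find_element(By.ID, "password").send_keys("testpass")
        driver.find_element(By.ID, "login-button").click()
        assert "account" in driver.current_url.lower()
    finally:
        driver.quit()

if __name__ == "__main__":
    # Run the same automated login test on two different browsers.
    run_login_test(webdriver.Chrome())
    run_login_test(webdriver.Firefox())
```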