Module 5-Notes

SE module 5

Software Testing

Software testing:
Software testing is the process of evaluating a software application to ensure that it works as intended, is
free of defects, and meets user requirements.

Testing checks “Does the software do what it is supposed to do?” and “Does it do it correctly?”

Why do we do testing?

 Detect errors/bugs in the software.


 Ensure quality and reliability.
 Verify that the software meets the client’s requirements.
 Prevent costly failures after deployment.
 Increase user satisfaction and trust in the product.

Types of Testing:

Based on Functionality:

 Black Box Testing – Focuses on input-output behavior. Tester doesn’t know the internal code.
 White Box Testing – Focuses on internal logic & code structure. Tester knows how the system
works inside.
 Grey Box Testing – Combination of both.

Based on Execution:

 Manual Testing – Human testers execute test cases without tools.


 Automation Testing – Tools/scripts automatically execute test cases.

Other Classifications:

 Unit Testing – Testing individual modules/functions.


 Integration Testing – Testing combined modules.
 System Testing – Complete system testing as per requirements.
 Acceptance Testing – Done by client/end-users before final delivery.
 Regression Testing – Ensures old features still work after changes.

Unit testing is a fundamental software testing practice that focuses on verifying the functionality of
individual components or units of code, such as functions, methods, or classes.

Unit tests are run in the developer's environment using unit-test frameworks such as JUnit, pytest, and NUnit.

It is typically performed during the early stages of the software development lifecycle to ensure that each unit
works as intended before integration with other components.
Multiple tests are written for a single function to cover different possible scenarios, and these are called test cases. While it is ideal to cover all expected behaviors, it is not always necessary to test every
scenario.
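For example, a minimal sketch with pytest (one of the frameworks named above), using a hypothetical calculate_discount function as the unit under test:

# test_discount.py -- illustrative pytest unit tests (calculate_discount is hypothetical)
import pytest

def calculate_discount(amount, percent):
    # Unit under test: returns the amount after discount; rejects invalid percentages.
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return amount - (amount * percent / 100)

def test_typical_discount():
    assert calculate_discount(1000, 10) == 900   # normal scenario

def test_zero_discount():
    assert calculate_discount(500, 0) == 500     # boundary scenario

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):              # error scenario
        calculate_discount(500, 150)

Each test function is one test case; running pytest executes them all and reports failures immediately, which gives the fast feedback described below.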

Unit Testing is a software testing method where individual units or modules of code (like functions,
methods, or classes) are tested in isolation to ensure they work as expected.

"Unit" = the smallest testable part of software (e.g., a function).

To verify that each small piece of the program performs correctly before integrating it with others.

Unit Testing is about verifying that each small part of your software works independently and correctly. It
improves code quality, makes debugging easier, and builds confidence in refactoring and scaling software.

In unit testing, the tester knows the internal design of the software; unit testing is a form of white box testing.

Characteristics of Unit Testing

1. Smallest scope – focuses only on one component.


2. White-box testing – tester usually knows the internal structure.
3. Automated – often written and run automatically using testing frameworks.
4. Fast feedback – detects bugs early, right where they are introduced.
5. Repeatable – tests can be rerun whenever code changes.

Advantages of Unit Testing

1. Early bug detection → Fixing at the unit level is cheaper.


2. Code quality → Encourages writing modular, reusable, and maintainable code.
3. Documentation → Tests act as living documentation of how functions should behave.
4. Refactoring confidence → When changing code, you know if you broke something.
5. Saves cost & time → Errors are found early before they spread.

Limitations of Unit Testing

 Cannot catch all bugs → especially integration issues.


 Time-consuming if not automated.
 Overhead → writing tests is extra effort.
 Not a substitute for higher-level testing (system or acceptance).

Integration Testing :
It mainly tests the interfaces between software units or modules. It focuses on determining the correctness
of the interface. Once all the modules have been unit-tested, integration testing is performed.

Integration testing is important because it verifies that individual software modules or components work
together correctly as a whole system.

Integration testing doesn’t know the internal design of the software. Integration testing is black box testing.

Integration testing is performed after unit testing and before system testing.
Integration testing is performed by the tester.

Types of Integration Testing


There are four main strategies for executing integration testing: big-bang, top-down, bottom-up, and
sandwich (or hybrid) testing. Each of these methods comes with its own set of advantages and challenges,
so it's important to choose the right one based on the specific needs of your project. Those approaches are
the following:

1. Big-Bang Integration Testing


It is the simplest integration testing approach, where all the modules are combined and the functionality is
verified after the completion of individual module testing.

All modules are integrated at once (no step-by-step).

In simple words, all the modules of the system are simply put together and tested. This approach is
practicable only for very small systems.

If an error is found during the integration testing, it is very difficult to localize the error as the error
may potentially belong to any of the modules being integrated.

The goal of big-bang integration testing is to verify the overall functionality of the system and to identify
any integration problems that arise when the components are combined.

Eg. In an Online Shopping System:

Modules:

1. Login Module – verifies user credentials


2. Product Catalog Module – displays items
3. Cart Module – adds/removes products
4. Payment Module – processes payment

In Big Bang testing, instead of integrating and testing step by step (like Top-Down or Bottom-Up), all four
modules are combined together in one go, and the system is tested end-to-end.

The tester directly tests a complete workflow:

 User logs in → selects product → adds to cart → pays.


2. Bottom-Up Integration Testing
Testing starts from the lowest-level modules (the leaf modules).
These are tested first, then combined step by step, moving upwards toward the main system.
Drivers (temporary programs) are used to simulate higher-level modules until they are developed.

Eg.Online shopping website

Modules:

1. Database Module – stores products, users, orders


2. Product Catalog Module – fetches products from DB
3. Cart Module – manages selected items
4. Checkout/Payment Module – processes order

Testing process (Bottom-Up):

1. Start with Database Module → test storing/retrieving data.


2. Integrate Product Catalog Module with Database → test product listing.
3. Integrate Cart Module with Catalog → test adding/removing items.
4. Finally, integrate Checkout/Payment Module → test full purchase flow.

3. Top-Down Integration Testing

First, the high-level modules are tested, then the low-level modules, and finally the high-level modules are
integrated with the low-level ones to ensure the system works as intended.

Stubs are used as temporary replacements for the lower-level modules until they are developed.

 Start with Checkout (top-level) – Test checkout flow using stub for Order Processing, Inventory,
and Cart.
 Integrate Checkout + Order Processing – Replace stub of Order Processing with the real module.
 Then add Inventory + Cart modules – Integrate step by step, replacing stubs with actual code.

Stub

 A dummy module used in Top-Down testing.


 Replaces a lower-level module that is not ready yet.
 It provides fake output so the higher-level module can be tested.
 Eg in online shopping system : Checkout (Top) → Order Processing (Mid) → Inventory (Low).
 If Inventory is not ready, we create a stub for Inventory that just returns "Item Available".
 This lets us test Checkout flow even though the real Inventory module isn’t built.

Driver

 A dummy program used in Bottom-Up testing.


 Replaces a higher-level module to test a lower-level module.
 It calls the lower module and supplies input.

Example (Shopping System – Bottom-Up):

 Inventory (Low) is ready, but Checkout (Top) is not.


 To test Inventory, we write a driver that simulates Checkout calling Inventory to fetch product stock.
 The driver provides input like "Check stock of Item A" and verifies output.
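As a hedged sketch (the class and function names below, such as InventoryStub and check_stock, are illustrative and not taken from a real system), a stub and a driver might look like this in Python:

# Stub: dummy lower-level module used in Top-Down testing while the real Inventory is not ready.
class InventoryStub:
    def check_stock(self, item_id):
        # Always reports availability so the higher-level Checkout flow can be exercised.
        return "Item Available"

# Driver: dummy higher-level caller used in Bottom-Up testing to exercise the real Inventory
# module while Checkout is not ready.
def inventory_driver(inventory):
    # Supplies the input ("Check stock of Item A") and verifies the output.
    result = inventory.check_stock("Item A")
    assert result in ("Item Available", "Out of Stock")
    return result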

4. Mixed Integration Testing


 Mixed integration testing is also called sandwich integration testing. It follows a combination of the top-down and bottom-up testing approaches.
 In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. In the bottom-up approach, testing can start only after the bottom-level modules are ready.
 The sandwich (mixed) approach overcomes these shortcomings of the top-down and bottom-up approaches. It is also called hybrid integration testing.
 Mixed approach allows testing both high-level flows and low-level details in parallel, saving time.

 Eg.

Modules (Fully Developed)

1. Checkout Module (Top-level)


2. Order Processing Module (Middle-level)
3. Inventory Module (Low-level)
4. Payment Gateway Module (Low-level)

 Checkout → Order Processing → Inventory → Payment → Order Confirmation.

Validation Testing
 Validation testing ensures that the developed software meets the customer’s actual requirements
(the right product is built).
 Here we check: "Are we building the right product?"
 It is requirement-based testing, usually performed after verification activities.
 Even if the software is coded correctly (verification), it may still fail customer needs.
 Example: You built a payroll system that calculates salary correctly, but the client wanted a tax
calculator as well → system is "correct" but not valid.
 Validation testing is about user needs, not just correctness.
 Ensures software does what customer expects.
 Usually involves system + acceptance testing.

Aspect | Verification | Validation
Focus | Checks if the software is built correctly (according to design/specifications); this question arises at every phase of development | Checks if the right product is built (according to customer needs); this question arises after the software has been developed
Question | "Are we building the product right?" | "Are we building the right product?"
Activity | Reviews, walkthroughs, inspections | Actual testing against requirements
Stage | During development; done by the developer | After development / before release; done by the tester
Example | Checking the design document against the code | Checking the final product against user requirements
- | Checks whether an artefact conforms to its previous artefacts | Checks the final product against the specification
Verification activities:

1. Reviews

 Definition: A broad term for any type of evaluation of documents or code by one or more people.
 Purpose: Detect defects, improve quality, ensure compliance with standards.
 Types of Reviews:

 Informal Review: Simple checking (e.g., peer review, email review).


 Technical Review: Experts evaluate technical content.
 Formal Review: Structured process with roles and documentation.

Example: A developer shares a design document with teammates for feedback.

2. Walkthroughs

 Definition: A type of review (an informal meeting) where the author leads the team through the work product.
 Purpose: To explain the logic, design, or code to others and gather feedback.
 Characteristics:

 Conducted by the author.


 Focus is on understanding and knowledge sharing.
 Not very formal, defects may or may not be documented.

Example: A programmer walks the team through a new algorithm to ensure everyone understands it.

3. Inspections
 Definition: The most formal type of review, with a well-defined process, roles, and checklists.
 Purpose: To find defects systematically before testing begins.
 Characteristics:

 Moderator, Author, Reviewer, Scribe (defined roles).


 Defects are recorded and follow-up is required.
 Uses checklists (e.g., coding standards, requirement completeness).

Example: A formal inspection of a requirement specification document to catch ambiguities before design
starts.

Methods of Validation Testing

1. Functional Testing

 Check if each function behaves as per requirements.


 Example: Login should accept correct credentials and reject wrong ones.

2. System Testing

 Test the entire integrated system against requirements.


 Example: In an e-commerce app, test complete order placement workflow.

3. Acceptance Testing

 Final phase, performed by end users or clients.


 Example: UAT (User Acceptance Testing) → client tests whether payroll software matches
their real needs.

Steps in Validation Testing


1. Collect requirements (SRS – Software Requirement Specification).
2. Identify test scenarios from requirements.
3. Design test cases (positive + negative).
4. Execute test cases on the integrated software.
5. Compare results with requirements.
6. Report mismatches → fix → re-test.

System testing:
System Testing is a type of software testing that is performed on a completely integrated system to
evaluate the compliance of the system with the corresponding requirements.

Here we check: "Does the complete system work as intended?"

It is a high-level testing phase performed after integration testing.

Purpose

 Validate system-level behavior (functional and non-functional).


 Ensure the product meets specifications.
 Verify that the system meets functional requirements.
 Check non-functional requirements like performance, security, and usability.
 Ensure the software works correctly in the real environment.
 Detect defects in interactions between components and the system as a whole.
 Performed by QA/testers, not developers.
 Does not focus on individual modules—focuses on the complete system.

Types of System Testing

1. Functional Testing – verifies system behavior against functional requirements.


o Example: Online Shopping → Checkout calculates total correctly, applies discounts.
2. Performance Testing – checks system speed, load, and responsiveness.
o Example: Can the system handle 500 users placing orders simultaneously?
3. Security Testing – ensures system protects data and resists attacks.
o Example: Unauthorized user cannot access orders or payment data.
4. Usability Testing – evaluates if the system is user-friendly.
o Example: Can a new user easily place an order?
5. Compatibility Testing – checks system works on different devices, browsers, or OS.
o Example: Shopping website works on Chrome, Firefox, and mobile devices.
6. Regression Testing – ensures new changes do not break existing functionality.
7. Acceptance Testing – sometimes overlaps → done by end-users to approve system for release.

Steps in System Testing

1. Prepare test plan and test cases based on requirements.


2. Set up the real test environment (hardware, software, network).
3. Execute test cases covering all functional and non-functional requirements.
4. Record actual results and compare with expected results.
5. Report defects → developers fix → retest.
6. Repeat until system meets all requirements.

Types of System Testing :

 Functional system testing (all features).


 Non-functional: performance (load, stress), security, usability, compatibility, reliability

Example: Online Shopping System

Functional Test Cases:

TC ID | Test Scenario | Expected Result
SYS-01 | User login with valid credentials | Access granted to dashboard
SYS-02 | Add items to cart | Items added correctly
SYS-03 | Checkout & Payment | Order confirmed, receipt generated
SYS-04 | Apply discount coupon | Correct discount applied

Non-Functional Test Cases:

TC ID | Test Scenario | Expected Result
SYS-05 | Load test with 500 users | System responds within 5 seconds
SYS-06 | Security test: access unauthorized page | Access denied, error shown
SYS-07 | Browser compatibility | Works on Chrome, Firefox, Edge, Safari

Example (CMS)

 System test: Create a project, assign resources, upload contract documents, run budget reports, close
project — workflow end-to-end.
 Non-functional: Simulate 100 concurrent users creating/viewing projects (JMeter).
 Run an end-to-end scenario and show result.
 Run a performance test and display response-time graphs.
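In practice the 100-user load test would typically be scripted in JMeter; purely as an illustration, the same idea can be sketched in Python (assuming a hypothetical /projects endpoint and the third-party requests package):

import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client (assumed available)

URL = "http://localhost:8080/projects"  # hypothetical CMS endpoint

def timed_request(_):
    # Issues one request and returns its response time in seconds.
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

# Simulate 100 concurrent users viewing projects and report response times.
with ThreadPoolExecutor(max_workers=100) as pool:
    times = list(pool.map(timed_request, range(100)))

print(f"avg: {sum(times) / len(times):.2f}s, max: {max(times):.2f}s")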

Black box testing :


A functional testing method where the tester does not know the internal code/structure.

Focuses on:
 Inputs → Processing → Outputs
 Validating system behavior as per requirements/specifications.

Common techniques:

1. Graph-Based Testing
2. Equivalence Partitioning
3. Boundary Value Analysis (BVA)

1.Graph-Based Testing
 Idea: Model software as a graph of nodes (states/modules) and edges (transitions/relationships).
 Ensures all links, relationships, and flows are tested.
 We imagine the software as a graph.
 Nodes = screens/modules.
 Edges = transitions (links, relationships, or flows between modules).
 Testing is done by traversing all possible paths.
 Ensures that workflow/navigation works correctly

Steps

1. Identify all modules/screens.


2. Draw them as nodes.
3. Connect them with arrows (edges) to show possible transitions.
4. Write test cases for valid and invalid paths.

CMS has modules:

 User Login → Project Dashboard → Task Assignment → Progress Report

Graph representation:

 Nodes: {Login, Dashboard, Task, Report}


 Edges: {Login→Dashboard, Dashboard→Task, Task→Report}

Test Cases:

 Login → Dashboard (Valid credentials → access granted).


 Dashboard → Task (Assign a task to contractor).
 Task → Report (Generate progress report).
 Invalid edges (e.g., Login → Task directly) → should fail.

TC ID | Test Scenario / Path | Precondition | Steps to Execute | Expected Result
GB-01 | Login → Dashboard | CMS running | 1. Open CMS. 2. Enter valid login credentials. 3. Click Login. 4. System redirects to Dashboard. | User successfully lands on Project Dashboard.
GB-02 | Dashboard → Task Assignment | User logged in and on Dashboard | 1. From Dashboard, select Task Assignment. 2. Assign a task to a contractor. 3. Click Save. | Task is assigned successfully.
GB-03 | Task Assignment → Report | Task exists | 1. From Task Assignment, select Generate Report. 2. System displays the task progress report. | Progress Report is generated correctly.
GB-04 | Login → Task Assignment (Invalid Path) | CMS running | 1. Open CMS. 2. Enter valid credentials. 3. Try to access the Task Assignment URL directly (bypassing Dashboard). | System blocks access and shows an error (not allowed/access denied).
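The navigation graph itself can be captured in a small test sketch (the node names follow the example above; the edge set is an assumption about the CMS workflow):

# Allowed transitions (edges) in the CMS navigation graph from the example above.
EDGES = {
    ("Login", "Dashboard"),
    ("Dashboard", "Task"),
    ("Task", "Report"),
}

def path_is_valid(path):
    # A path is valid only if every consecutive pair of nodes is an allowed edge.
    return all((a, b) in EDGES for a, b in zip(path, path[1:]))

def test_valid_workflow():
    assert path_is_valid(["Login", "Dashboard", "Task", "Report"])

def test_invalid_shortcut_is_blocked():
    # Login -> Task directly (bypassing Dashboard) must not be a valid path (GB-04).
    assert not path_is_valid(["Login", "Task"])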

2.Equivalence Partitioning
 Idea: Divide input data into equivalence classes (valid & invalid), test only one value from each
class.
 Reduces number of test cases while keeping coverage high.

Example (CMS Task Deadline Entry):

 Input: Task duration (in days: 1–365)


 Equivalence Classes:
o Valid: [1–365]
o Invalid: [<1], [>365]

Test Cases:

 Valid: 30 (within range).


 Invalid: 0 (too small).
 Invalid: 500 (too large).

TC ID | Test Scenario | Precondition | Steps to Execute | Input Value (Days) | Expected Result
TC_EP_01 | Valid input (within range) | Application is running, user logged in | 1. Navigate to Task Creation page. 2. Enter task duration as 30. 3. Click Save/Submit. | 30 | System should accept the task duration (Valid Input).
TC_EP_02 | Invalid input (below minimum) | Application is running, user logged in | 1. Navigate to Task Creation page. 2. Enter task duration as 0. 3. Click Save/Submit. | 0 | System should reject the input and display an error message (Invalid Input).
TC_EP_03 | Invalid input (above maximum) | Application is running, user logged in | 1. Navigate to Task Creation page. 2. Enter task duration as 500. 3. Click Save/Submit. | 500 | System should reject the input and display an error message (Invalid Input).
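The three equivalence-class test cases above map directly onto a parametrized pytest sketch (validate_task_duration is a hypothetical validation function standing in for the real Task Creation logic):

import pytest

def validate_task_duration(days):
    # Hypothetical CMS rule: task duration must be between 1 and 365 days.
    return 1 <= days <= 365

@pytest.mark.parametrize("days, expected", [
    (30, True),    # valid class: within range
    (0, False),    # invalid class: below minimum
    (500, False),  # invalid class: above maximum
])
def test_task_duration_equivalence_classes(days, expected):
    assert validate_task_duration(days) == expected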
3.Boundary Value Analysis (BVA)
 Idea: Errors are often found at the boundaries of input ranges.
 Test values just below, at, and just above the boundary.
 BVA focuses on edges of input range. Most defects occur just inside or outside valid limits.

Example (CMS Project Budget Entry):

 Input: Budget limit = 10,000 – 1,000,000 INR


 Boundaries: 10,000 (min), 1,000,000 (max)

Test Cases:

 9,999 (just below min → invalid).


 10,000 (at min → valid).
 10,001 (just above min → valid).
 999,999 (just below max → valid).
 1,000,000 (at max → valid).
 1,000,001 (just above max → invalid).

TC ID | Test Scenario | Precondition | Steps to Execute | Input Value (Budget in INR) | Expected Result
TC_BVA_01 | Just below minimum boundary | CMS application is running, user is logged in | 1. Navigate to Budget Entry page. 2. Enter budget amount as 9,999. 3. Click Save/Submit. | 9,999 | System should reject (Invalid Input, show error message).
TC_BVA_02 | At minimum boundary | CMS application is running, user is logged in | 1. Navigate to Budget Entry page. 2. Enter budget amount as 10,000. 3. Click Save/Submit. | 10,000 | System should accept (Valid Input, saved successfully).
TC_BVA_03 | Just above minimum boundary | CMS application is running, user is logged in | 1. Navigate to Budget Entry page. 2. Enter budget amount as 10,001. 3. Click Save/Submit. | 10,001 | System should accept (Valid Input, saved successfully).
TC_BVA_04 | Just below maximum boundary | CMS application is running, user is logged in | 1. Navigate to Budget Entry page. 2. Enter budget amount as 999,999. 3. Click Save/Submit. | 999,999 | System should accept (Valid Input, saved successfully).
TC_BVA_05 | At maximum boundary | CMS application is running, user is logged in | 1. Navigate to Budget Entry page. 2. Enter budget amount as 1,000,000. 3. Click Save/Submit. | 1,000,000 | System should accept (Valid Input, saved successfully).
TC_BVA_06 | Just above maximum boundary | CMS application is running, user is logged in | 1. Navigate to Budget Entry page. 2. Enter budget amount as 1,000,001. 3. Click Save/Submit. | 1,000,001 | System should reject (Invalid Input, show error message).
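The six boundary values above can be exercised the same way (validate_budget is a hypothetical check; the limits of 10,000 and 1,000,000 INR come from the example):

import pytest

def validate_budget(amount):
    # Hypothetical CMS rule: budget must be between 10,000 and 1,000,000 INR inclusive.
    return 10_000 <= amount <= 1_000_000

@pytest.mark.parametrize("amount, expected", [
    (9_999, False),      # just below minimum -> invalid
    (10_000, True),      # at minimum -> valid
    (10_001, True),      # just above minimum -> valid
    (999_999, True),     # just below maximum -> valid
    (1_000_000, True),   # at maximum -> valid
    (1_000_001, False),  # just above maximum -> invalid
])
def test_budget_boundaries(amount, expected):
    assert validate_budget(amount) == expected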

White box testing :


 White box testing is a Software Testing Technique that involves testing the internal
structure and workings of a Software Application

 The tester has access to the source code and uses this knowledge to design test cases that
can verify the correctness of the software at the code level.
 White box testing is also known as Structural Testing or Code-based Testing, and it is
used to test the software’s internal logic, flow, and structure.
 The tester creates test cases to examine the code paths and logic flows to ensure they
meet the specified requirements.

Unlike black-box testing, which focuses on user interactions without knowledge of the
underlying code, white-box testing involves examining the software's source code directly.
Methods of White Box Testing

1. Basis Path Testing


Definition

Basis Path Testing is a white-box testing technique that uses the program’s control flow graph to identify
independent execution paths. It ensures that all logical paths in a program are executed at least once.

Steps

1. Construct the Control Flow Graph (CFG) of the program.


o Nodes → statements/blocks
o Edges → flow of control
2. Construct DD graph
3. Calculate Cyclomatic Complexity (V(G)) = E – N + 2.
4. Determine the number of independent paths.
5. Prepare test cases to cover each path.

Advantages

 Guarantees coverage of all program paths.


 Finds errors in logic, conditions, and loops

Eg

Step 1 : Logic flow : Project Approval (Construction Management System)

1. PROGRAM: Project Approval


2. INPUT: projectCost
3. INPUT: approvalFromManager
4. IF (projectCost > 500000)
5. IF (approvalFromManager == True)
6. PRINT "Project Approved by Manager"
7. ELSE PRINT "Project Rejected - No Manager Approval"
8. ELSE PRINT "Project Auto Approved - Low Cost"
9. End if
10. End if
11. Exit

Step 2: Control Flow Graph (CFG)

In CFG:

 Nodes = statements / blocks


 Edges = flow of control (arrows)
CFG(Control Flow Graph) :
A CFG shows flow of logic with nodes and edges.

Step 3 : DD graph : (Decision to Decision graph)


DD Graph focuses only on decisions (branches) and their connectivity.
Step 4 : Calculate cyclomatic complexity :

Cyclomatic Complexity is a software metric used in White-Box Testing to measure the logical complexity
of a program.

Method 1:

 E = number of edges = 11 edges


 N = number of nodes = 10 nodes
 P = number of connected components (usually 1 for a single program/module) = 1

CC = E – N + 2P = 11 – 10 + 2 = 3

Method 2:

Using Predicate Nodes

CC = number of predicate (decision) nodes + 1
CC = 2 + 1 = 3

Method 3:

Using DD Graph Formula

CC = number of regions in the DD graph = 3

Cyclomatic Complexity = 3 → Minimum 3 independent paths to test.

Step 5 : Find independent paths

Path 1 : 1 → 2 → 3 → 4 → 5 → 6→ 9→ 11
Path 2: 1 → 2 → 3 → 4 → 5 → 7→ 9→ 11
Path 3 : 1 → 2 → 3 → 4 → 8 → 10→ 11

Step 6 : Write test cases to check each path

TC ID | Test Scenario | Input (projectCost, approvalFromManager) | Path ID | Expected Output
TC1 | Low-cost project → auto approved | 200000, True | P3 | "Project Auto Approved - Low Cost"
TC2 | High-cost project → approved by manager | 600000, True | P1 | "Project Approved by Manager"
TC3 | High-cost project → rejected by manager | 600000, False | P2 | "Project Rejected - No Manager Approval"
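A direct Python translation of the pseudocode, with one test per independent path, might look like the sketch below (the return strings follow the pseudocode; the function name project_approval is illustrative):

def project_approval(project_cost, approval_from_manager):
    # Mirrors the Project Approval pseudocode above.
    if project_cost > 500000:
        if approval_from_manager:
            return "Project Approved by Manager"          # Path 1
        return "Project Rejected - No Manager Approval"   # Path 2
    return "Project Auto Approved - Low Cost"             # Path 3

def test_path_1_high_cost_approved():
    assert project_approval(600000, True) == "Project Approved by Manager"

def test_path_2_high_cost_rejected():
    assert project_approval(600000, False) == "Project Rejected - No Manager Approval"

def test_path_3_low_cost_auto_approved():
    assert project_approval(200000, True) == "Project Auto Approved - Low Cost"

Running all three tests executes every independent path identified in Step 5, which is exactly the coverage that the cyclomatic complexity of 3 calls for.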
CONTROL STRUCTURE TESTING (CST)

Definition:

 Control Structure Testing is a white-box testing technique that focuses on testing the logical structures,
decisions, and flow of control in a program.
 The goal is to ensure that all possible paths, conditions, and variable uses are exercised during testing.
 CST is especially useful for programs with decision statements, loops, and nested conditions.

Types of Control Structure Testing:

1. Conditional Structure Testing


2. Data Flow Testing
3. Loop Testing

Conditional Structure Testing

Definition:

 Tests all logical conditions and branches in decision statements.


 Tests all IF, ELSE IF, ELSE, and nested IF conditions.
 Ensures that every possible branch of a decision is executed at least once

Eg : Nested Ifs

Here we use two conditional statements:

 IF projectCost > 500000 → True / False


 IF approvalFromManager == True → True / False

Test Cases (Cover all condition outcomes):


Condition | True Input | False Input | Expected Output
projectCost > 500000 | 600000 | 200000 | High cost branch / Low cost branch
approvalFromManager == True | True | False | Approved / Rejected

Data Flow Testing

Definition:

 Focuses on variables: where they are defined (assigned), used, and killed (no longer used).
 Ensure every variable is defined before use.
 Ensure all definitions reach a use.
 Detects issues like undefined or unused variables.
Steps in Data Flow Testing:

1. Identify all variables in the program.


2. Determine where each variable is defined (assigned).
3. Determine where each variable is used.
4. Prepare test cases to cover all definition-use pairs.

Test Scenario | Input (projectCost, approvalFromManager) | Variable Def-Use Pair Tested | Expected Output
Low-Cost Project → Auto Approved | 200000, True | projectCost: defined → used | "Project Auto Approved - Low Cost"
High-Cost Project → Manager Approved | 600000, True | projectCost & approvalFromManager: defined → used | "Project Approved by Manager"
High-Cost Project → Manager Not Approved | 600000, False | projectCost & approvalFromManager: defined → used | "Project Rejected - No Manager Approval"
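The def-use pairs can be made explicit by annotating the same illustrative function from the basis path example (comments mark where each variable is defined and used):

def project_approval(project_cost, approval_from_manager):  # def: project_cost, approval_from_manager
    if project_cost > 500000:                               # use: project_cost
        if approval_from_manager:                           # use: approval_from_manager
            return "Project Approved by Manager"
        return "Project Rejected - No Manager Approval"
    return "Project Auto Approved - Low Cost"

# The three inputs above each exercise a definition-use pair:
#   (200000, True)  -> project_cost defined then used; outer condition is False
#   (600000, True)  -> both variables defined then used; inner condition is True
#   (600000, False) -> both variables defined then used; inner condition is False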

Loop Testing
 Tests FOR, WHILE, DO-WHILE loops in a program.
 Ensures loops execute correctly for minimum, typical, and maximum iterations.

Types :

Simple Loop

Definition:

 A loop with a single entry and exit.


 Not nested inside another loop.
 Usually has known bounds (upper and lower).

Nested Loop

Definition:

 A loop inside another loop.


 Inner loop executes completely for each iteration of the outer loop.

Concatenated Loop (Sequential Loops)

Definition:

 Two or more loops one after the other, not nested.


 Each loop executes independently.
Unstructured Loop

Definition:

 A loop that does not have a simple structure (e.g., loops with break, exit, or complex conditions)
 Often seen in while loops with multiple exit points.
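Loop testing of a simple loop typically exercises zero, one, typical, and maximum iterations; a small sketch (sum_first_n is a hypothetical helper used only for illustration):

def sum_first_n(values, n):
    # Simple loop with a known upper bound: sums at most the first n items.
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

def test_zero_iterations():      # loop body skipped entirely
    assert sum_first_n([1, 2, 3], 0) == 0

def test_one_iteration():        # minimum useful pass through the loop
    assert sum_first_n([1, 2, 3], 1) == 1

def test_typical_iterations():   # a typical number of passes
    assert sum_first_n(list(range(10)), 5) == 10

def test_maximum_iterations():   # n larger than the list: loop bounded by len(values)
    assert sum_first_n([1, 2, 3], 100) == 6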

Software Maintenance:
Even after a software product is delivered to the customer, it cannot remain unchanged. Over time,
software must be modified due to:

1. Bug Fixing

 Errors may be discovered after release that need correction.


 Example: incorrect tax calculation in billing software.

2. Changing Environment

 Software must adapt to new hardware, operating systems, databases, browsers, or


APIs.
 Example: updating an app to work with the latest iOS/Android version.

3. New Requirements from Users

 Users often request new features or enhancements once they start using the software.
 Example: adding a "dark mode" option to an application.
4. Performance Improvements

 Over time, software may slow down or become inefficient.


 Example: optimizing a search engine for faster results.

5. Prevent Future Failures

 Old code, outdated libraries, or poor documentation can lead to future problems.
 Example: refactoring code to prevent system crashes.

Types of Software Maintenance


 Corrective = Fix problems
 Adaptive = Adjust to environment
 Perfective = Add / improve features
 Preventive = Avoid future issues

1.Corrective Maintenance (reactive approach)

Definition:

 Performed to fix defects or errors found in the software after it has been delivered.
 These errors may be in logic, design, or coding that were not detected during testing.

Objective:

 To correct bugs and restore normal operation of the system.

Example:

 Notes are uploaded but not visible to students due to a bug.


 Correcting wrong calculation in a billing system.

2.Adaptive Maintenance

Definition:

 Performed when the environment of the software changes, requiring the software to adapt.
 The environment can include operating systems, hardware, business rules, regulations, or
external APIs.

Objective:

 To ensure the software continues working in the new environment.

Example:

 Modifying a banking application to comply with new government tax regulations.


 Updating an app to run on a new mobile OS version.
3.Perfective Maintenance

Definition:

 Enhancing the software to improve performance, maintainability, or add new features as


requested by users.
 Focuses on user satisfaction and future-proofing the application.

Objective:

 To make the software more efficient and user-friendly.

Example:

 Adding a new report generation module in a school management system.


 Improving response time of a search function in an e-commerce website.

4.Preventive Maintenance-( Proactive approach )

Definition:

 Performed to anticipate future problems and prevent software degradation.


 Focuses on code optimization, restructuring, and documentation updates to reduce risks.

Objective:

 To improve reliability and reduce the chances of failures in the future.

Example:

 Refactoring old code to follow modern coding standards.


 Updating libraries or dependencies before they become obsolete.

Type of Maintenance | Purpose | Example in Notes Sharing System
Corrective | Fix errors/bugs | Fixing the bug where uploaded notes are not visible
Adaptive | Adjust to environment | Updating the system for a new browser or mobile OS
Perfective | Add new features / improve performance | Adding search, rating, or commenting features
Preventive | Prevent future issues | Updating libraries, securing login, removing unused code
Re-engineering
Software re-engineering is the disciplined process of analysing and changing an existing software system to
reconstitute it in a new form.
It’s more than fixing bugs — it’s about recovering, improving, and modernizing a system so it’s easier to
maintain, performs better, or runs on current platforms.

Goals :

Re-engineering is done to:

 Extend life of valuable functionality without a full rewrite.


 Improve maintainability (cleaner code, modular design).
 Reduce long-term maintenance cost.
 Adapt to new environments (OS, browsers, mobile).
 Improve performance, security, and scalability.
 Enable reuse of components in other projects.

Consider re-engineering when:


 Core business rules are correct and valuable.
 Code is hard to modify (spaghetti/legacy) but functionality must be preserved.
 Documentation is poor or missing but the system is critical.
 Rewriting from scratch would be too risky/costly.

If the system is small and the architecture is fundamentally wrong, a rewrite could be better. Re-engineer
when preserving behavior + reducing risk is the priority.

Re-engineering activities

1. Reverse engineering (analysis & recovery)

 Recover design, architecture, behavior and requirements from code, binaries, logs.
 Produce models: architecture map, component diagrams, data model, use cases.

2. Restructuring / Refactoring

 Clean and reorganize code without changing external behavior: rename, modularize,
remove dead code, improve naming.
 Add tests to lock behavior.

3. Forward engineering / Modernization

 Replace components, migrate to new platform / language / architecture (microservices, cloud).


 Migrate data, integrate new modules, and deploy.
Reverse Engineering :
Reverse Engineering in software is done to analyze, understand, and document an existing system when
proper documentation or design is missing.

Purpose
1.Learning Level (Understanding the System)

 Used to study and understand how a system works, especially when documentation is missing.
 Helps developers, students, or new team members to quickly grasp architecture and logic.

Example (Notes Sharing System):

 Students analyze the upload module to understand how notes are stored in the database and linked to
subjects.

2.Security & Quality

 Helps find vulnerabilities, weaknesses, and bugs in existing software.


 Ensures software is secure, reliable, and meets quality standards.

Example (Notes Sharing System):

 Reverse engineering reveals that passwords are stored in plain text.


 Fixing it by introducing encryption improves both security and quality.

3.Enabling Additional Features

 By understanding existing code, new enhancements and functionalities can be added without
breaking old features.

Example (Notes Sharing System):

 After analyzing the search function, a new filter by subject and semester feature is added.
 Helps students quickly find the right notes.

4.Developing Compatible & Cheaper Products

Reverse engineering is used to create compatible systems or modules that work with existing products but
at a lower cost.

 Often applied in software migration or when reusing components.

Example (Notes Sharing System):

 A lightweight mobile app is developed as a cheaper alternative to the full web version, but still
compatible with the same database.
Levels :
1.Abstraction Level
 You analyze the program at a high level, without looking at every detail of the source code.
 Understand overall structure, architecture, or design.
 Info Available : Only abstracted info — modules, interfaces, or high-level workflow

Example:

 Understanding that the program has a login module, a file upload module, and a download module,
without seeing the actual code.
 Good for creating system diagrams or high-level documentation

2.Completeness Level

 You analyze the program with full access to the source code or detailed implementation.
 Understand all internal details of the program.
 Info Available: All source code, logic, variables, loops, and functions.
 Example:
o Inspecting the login module code to see exactly how the password is checked.
o Detecting hidden logic, bugs, or vulnerabilities.

3.Directionality Level

 Definition: Focuses on flow of control and data within the program.


 Purpose: Understand how information moves and how different parts of the program interact.
 Info Available: Control flow graphs, data flow paths, and dependencies between modules.
 Example:

 Tracking how uploaded notes move from the client to the server.
 Checking if certain conditions can bypass security checks.

 Helps reverse engineers see paths and relationships in the system.


 You can report this flow to developers if they need to optimize or fix the module
