Advanced Software Engineering – Detailed Notes
UNIT 1 – INTRODUCTION
1.1 Problem Domain
Definition: The problem domain is the environment or area of application in which the
software system will be used.
It includes:
o Business rules (e.g., banking rules in a banking application).
o Processes (workflow to be automated).
o Users (who will use the software).
o Hardware/Software environment (platform, operating systems).
Importance: Understanding the domain ensures the system addresses real user needs.
1.2 Software Engineering Challenges
1. Complexity – Software systems are often more complex than comparable physical systems;
thousands of interacting parts must be managed.
2. Change – Requirements evolve due to business needs, regulations, or user feedback.
3. Quality – Must satisfy attributes like reliability, performance, security, usability.
4. Cost & Time Constraints – Projects must meet deadlines & budgets.
5. Team Management – Software often developed by large, distributed teams →
coordination issues.
6. Customer Expectations – Deliver value that meets or exceeds customer needs.
1.3 Software Engineering Approach
Definition: A disciplined, systematic, and quantifiable approach to software
development.
Goals: High quality, within budget, on time, and easy to maintain.
Key Principles:
1. Abstraction – focusing only on essential details.
2. Modularity – breaking software into small manageable modules.
3. Encapsulation – hiding internal details, exposing only necessary operations.
4. Reusability – using existing tested components to reduce cost/time.
5. Maintainability – system should be easy to fix, update, and improve.
1.4 Software Process
Definition: A software process is a structured set of activities required to develop a
software product.
Basic Activities:
1. Specification – defining the functionality and constraints.
2. Development – design and coding.
3. Validation – testing, ensuring software meets requirements.
4. Evolution – adapting and maintaining after release.
1.5 Characteristics of a Good Software Process
Repeatability: Same process → same results.
Measurability: Progress & quality can be tracked.
Predictability: Future outcomes estimated.
Flexibility: Can adapt to new conditions.
Quality-focused: Ensures reliability and user satisfaction.
1.6 Software Development Process Models
1. Waterfall Model
Sequential phases: Requirements → Design → Implementation → Testing →
Maintenance.
The Waterfall Model is the oldest and one of the most widely known software development models.
It follows a linear sequential approach, where each phase must be completed before the
next begins.
Like a waterfall, progress flows downward step by step.
Phases / Steps of Waterfall Model
1. Requirement Analysis
Collect and analyze the requirements of the system.
Understand what the system should do.
Prepare SRS (Software Requirement Specification).
Output: SRS document (approved by client).
Example: For a Student Result System → requirements include entering marks,
calculating grades, generating reports.
2. System Design
Based on requirements, design the architecture of the system.
High-Level Design (HLD): Overall system architecture, modules, database design.
Low-Level Design (LLD): Detailed module specifications, algorithms, interface design.
Output: Design Documents (HLD & LLD).
3. Implementation (Coding)
Convert design into source code.
Divide work into small modules assigned to developers.
Use programming languages and tools to build the system.
Output: Source code of the system.
4. Integration and Testing
Combine individual modules → integrate into a complete system.
Perform different types of testing:
o Unit Testing
o Integration Testing
o System Testing
o Acceptance Testing
Output: Tested software ready for delivery.
5. Deployment (Installation)
Deliver the tested system to the client.
Install in customer’s environment.
Provide user training & system documentation.
Output: Working system delivered to end-users.
6. Maintenance
After delivery, software may need updates or bug fixes.
Types of maintenance:
o Corrective (bug fixing).
o Adaptive (adapting to new OS, hardware).
o Perfective (improving performance/features).
o Preventive (reducing future risks).
Output: Improved, updated version of software.
Advantages of Waterfall Model
1. Simple and easy to understand.
2. Well-structured and systematic.
3. Clear documentation at every stage.
4. Easy to manage (each phase has specific deliverables).
5. Works well for small projects with clear requirements.
Disadvantages of Waterfall Model
1. Inflexible – difficult to handle requirement changes.
2. Testing happens late in the cycle → costly to fix errors.
3. Not suitable for complex, long-term projects.
4. Customer cannot see working product until late stages.
5. High risk and uncertainty.
2. Incremental Model
o Build software in small increments, each adding functionality.
Introduction
The Incremental Model is a software development approach where the system is built
in small parts (increments) instead of developing the entire system at once.
Each increment delivers a working portion of the software.
New features are added incrementally until the complete system is developed.
It combines the advantages of the Waterfall Model (structured approach) and
Prototyping (customer feedback).
Steps / Phases of Incremental Model: Each increment goes through the following phases
(mini-waterfall for each increment):
1. Requirement Analysis
Identify overall system requirements.
Break down the system into small, manageable modules/features.
Prioritize requirements (decide what goes in first increment, second, etc.).
Output: Requirement specification for the first increment.
2. System & Software Design
Design the architecture to accommodate future increments.
Design the first increment in detail.
Plan how additional increments will integrate with the existing system.
Output: System architecture + design for current increment.
3. Implementation (Coding)
Develop the first increment based on design.
Deliver a working module to the customer.
Use programming tools and practices.
Output: Partial software product with limited features.
4. Testing
Test the increment individually (unit + integration + system).
Verify that new module works correctly with previously developed modules.
Output: Tested, working software increment.
5. Deployment & Feedback
Deliver the increment to the customer.
Gather customer feedback on functionality and usability.
Plan improvements and additional features for next increment.
6. Next Increment Development
Repeat all phases (analysis → design → coding → testing → deployment) for each
increment.
Add new features while keeping old features intact.
Continue until the full system is developed.
Example (Student Result Management System)
Increment 1: Entering student marks.
Increment 2: Generating grade sheet.
Increment 3: Generating overall class performance report.
Increment 4: Integration with university database.
Advantages of Incremental Model
1. Customer gets early working software.
2. Easier to test and debug smaller increments.
3. Less costly to change requirements (since increments are flexible).
4. Users give feedback after each increment → improves quality.
5. Risks are lower compared to Waterfall (early partial delivery).
6. Faster delivery of useful software (time-to-market reduced).
Disadvantages of Incremental Model
1. Requires good planning and design to integrate increments smoothly.
2. Cost may be higher if too many increments.
3. Not suitable for very small projects (overhead).
4. Each delivery must be well tested → more effort in testing.
5. Requires customer involvement at every increment.
3. Spiral Model
o Combines iterative development with risk analysis.
o Four phases per cycle: Planning → Risk Analysis → Engineering → Evaluation
Introduction:
Proposed by Barry Boehm (1986).
Combines Iterative Development (like Incremental model) and Systematic Risk
Analysis (unique feature).
Best suited for large, complex, and high-risk projects.
Called "spiral" because development proceeds in loops (spirals) instead of a straight line.
Each spiral (loop) = one phase of development.
Each loop consists of 4 main phases:
1. Planning
2. Risk Analysis
3. Engineering
4. Evaluation
Phases / Steps of Spiral Model
1. Planning (Determine Objectives)
Identify system objectives, alternatives, and constraints.
Collect requirements from customer.
Decide what to build in this iteration.
Output: Requirement plan for the cycle.
2. Risk Analysis
Analyze and identify potential risks (technical, cost, schedule).
Develop strategies to reduce/avoid risks.
If risks are too high → project may be terminated early.
Output: Risk assessment + mitigation plan.
3. Engineering (Development & Testing)
Develop prototype / increment.
Design, coding, and testing of selected features.
Verify whether risks identified earlier are handled.
Output: A working model/prototype of the system.
4. Customer Evaluation (Review & Planning for Next Cycle)
Deliver the prototype/increment to the customer.
Get feedback from customer and stakeholders.
Plan next iteration with refined requirements.
Output: Updated plan for next spiral loop.
Spiral Model – Diagram Explanation
The spiral diagram has four quadrants:
1. Top-Left (Planning) – Define objectives and alternatives.
2. Top-Right (Risk Analysis) – Evaluate risks, plan solutions.
3. Bottom-Right (Engineering) – Develop, code, and test.
4. Bottom-Left (Evaluation) – Get feedback, plan next cycle.
Each loop in the spiral = iteration of the software with improved features.
Radius of spiral → cumulative cost incurred so far.
Angular dimension → progress made in the current cycle.
Advantages of Spiral Model
1. Best for large and complex projects.
2. Risk analysis reduces chances of failure.
3. Customer gets early prototypes for feedback.
4. Flexibility – new requirements can be added in next iteration.
5. Development is systematic + iterative.
Disadvantages of Spiral Model
1. Very expensive compared to other models.
2. Requires highly experienced risk analysts.
3. Complex to manage.
4. Not suitable for small projects (overhead too high).
4. Agile Methods (e.g., Scrum, XP)
o Iterative, incremental development with customer collaboration.
Introduction
Agile Software Development is an iterative and incremental approach.
Emphasizes flexibility, collaboration, customer satisfaction, and rapid delivery.
Focuses on delivering working software quickly, in short cycles called iterations
(sprints).
Agile is based on the Agile Manifesto (2001), which values:
1. Individuals & interactions over processes & tools
2. Working software over comprehensive documentation
3. Customer collaboration over contract negotiation
4. Responding to change over following a plan
Key Principles of Agile (from Agile Manifesto – 12 principles)
1. Early & continuous delivery of valuable software.
2. Welcome changing requirements, even late in development.
3. Deliver working software frequently (every 2–4 weeks).
4. Business people & developers must work together.
5. Build projects around motivated individuals.
6. Face-to-face conversation is most effective.
7. Working software = primary measure of progress.
8. Sustainable development – constant pace.
9. Continuous attention to technical excellence & good design enhances agility.
10. Simplicity is essential.
11. Self-organizing teams produce best designs.
12. Regular reflection & adaptation.
Agile Development Cycle (Steps)
1. Planning – High-level requirements identified.
2. Requirements Analysis – User stories (short descriptions of features).
3. Design & Development – Build small increment of product.
4. Testing – Continuous testing of new features.
5. Delivery – Working increment delivered to customer.
6. Feedback – Customer feedback used to refine next iteration.
This cycle repeats in iterations (sprints) until project completion.
Popular Agile Methods
1. Scrum
Most widely used Agile framework.
Project divided into Sprints (2–4 weeks).
Roles:
o Product Owner – defines product backlog (requirements list).
o Scrum Master – ensures Scrum practices are followed.
o Development Team – builds the product.
Artifacts:
o Product Backlog (list of features).
o Sprint Backlog (tasks for current sprint).
o Increment (working product).
Meetings:
o Daily Stand-up (15 min progress meeting).
o Sprint Review & Retrospective.
2. Extreme Programming (XP)
Focuses on high-quality code and customer satisfaction.
Practices:
o Pair programming (two programmers work together).
o Test-driven development (TDD).
o Continuous integration.
o Small releases.
o Refactoring code for improvement.
o On-site customer involvement.
3. Crystal Methodology
Family of lightweight methodologies.
Emphasizes people over processes.
Tailored for project size (Crystal Clear, Crystal Yellow, etc.).
4. Feature-Driven Development (FDD)
Breaks project into features (“client-valued functions”).
Steps: Develop model → Build feature list → Plan → Design → Build by feature.
5. Lean Software Development
Inspired by lean manufacturing (Toyota).
Principles: Eliminate waste, deliver fast, respect people, continuous learning.
Advantages of Agile Methods
1. Early and continuous delivery of working software.
2. Customer feedback incorporated regularly.
3. Reduces risk of failure (short iterations).
4. High adaptability to changing requirements.
5. Improved team collaboration & communication.
6. Higher customer satisfaction.
Disadvantages of Agile Methods
1. Requires skilled and experienced team.
2. Less emphasis on documentation (may cause issues later).
3. Not suitable for very large, distributed projects without modifications.
4. Requires continuous customer involvement.
5. May lead to scope creep if not managed properly.
1.7 Other Software Processes
Prototyping – quick working model, refine based on feedback.
Rapid Application Development (RAD) – short development cycles, heavy prototyping.
V-Model – each development phase has a corresponding testing phase.
UNIT 2 – SOFTWARE REQUIREMENTS
2.1 Requirement Engineering (RE)
Definition: A systematic process of discovering, analyzing, documenting, validating, and
managing the requirements of a software system.
Why Important?
o Requirements are the foundation of software.
o Errors in requirements = very costly later.
o Good requirements reduce rework and improve software quality.
Phases of Requirement Engineering
1. Elicitation – Gathering requirements from stakeholders.
2. Analysis – Identifying conflicts, prioritizing, modeling requirements.
3. Specification – Writing requirements in a structured format (SRS).
4. Validation – Checking correctness & completeness.
5. Management – Handling changes to requirements during project lifecycle.
2.2 Types of Requirements
1. Functional Requirements (FR)
o Define what the system must do.
o Expressed as functions, inputs, outputs, and interactions.
o Example: ATM system must allow cash withdrawal.
2. Non-Functional Requirements (NFR)
o Define system qualities & constraints.
o Examples:
Performance – “System should process 1000 requests/sec.”
Usability – “Interface must be easy for non-technical users.”
Reliability – “System uptime must be 99.9%.”
Security – “Users must log in with password + OTP.”
3. Domain Requirements
o Come from the application domain (laws, standards, business rules).
o Example: Banking system must comply with RBI regulations.
2.3 Feasibility Studies
Before gathering requirements, check whether project is practical & beneficial.
Types of feasibility:
1. Technical Feasibility – Can we build it with available technology?
2. Operational Feasibility – Will it fit in organization workflow?
3. Economic Feasibility – Cost vs. benefits.
4. Schedule Feasibility – Can we deliver within the deadline?
5. Legal Feasibility – Does it follow laws & policies?
2.4 Requirements Elicitation
Definition: Process of gathering requirements from stakeholders (users, clients,
managers).
Techniques:
Interviews – one-on-one Q&A with users.
Questionnaires – useful for large user groups.
Observation – watch how users perform tasks.
Workshops / Brainstorming – group discussions to discover requirements.
Prototyping – build a quick model → get user feedback.
Document analysis – study existing manuals, business documents.
Challenges in Elicitation:
Stakeholders don’t know what they want.
Communication gap between developers & users.
Conflicting requirements from different stakeholders.
2.5 Requirement Analysis
After gathering requirements → analyze them for quality.
Activities:
Conflict resolution – resolve differences between stakeholders.
Prioritization – which requirements are most important.
Modeling – using diagrams (DFD, UML, Use-case).
Checking feasibility – technical & cost-wise.
2.6 Requirement Documentation (SRS)
Software Requirement Specification (SRS) – a written, structured document of
requirements.
Contents of SRS:
1. Introduction (purpose, scope, definitions).
2. Overall description (product perspective, features, constraints).
3. Specific requirements (functional + non-functional).
4. External interfaces (hardware/software/user).
5. Appendices (glossary, references).
Characteristics of good SRS:
Correct
Unambiguous
Complete
Consistent
Verifiable
Modifiable
Traceable
2.7 Requirement Validation
Ensures requirements are accurate & acceptable.
Methods:
o Reviews & Walkthroughs (peer checking).
o Prototyping (user feedback).
o Model validation (use-case, simulation).
o Test-case generation (check testability of each requirement).
2.8 Requirement Management
Since requirements change, we need to manage & control them.
Activities:
Maintain a requirement repository.
Version control (track old/new requirements).
Change management (approve/reject changes).
Traceability (each requirement linked to design, code, and test cases).
2.9 Formal System Specification
Definition: Using mathematical models to define requirements.
Why? To avoid ambiguity.
Methods:
1. Axiomatic Specification – define system properties using logic statements.
o Example: For all x, if x is a student, then x has a roll number.
2. Algebraic Specification – specify operations using equations.
o Example: balance(deposit(account, amount)) = balance(account) + amount.
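The algebraic axiom above can be read as an executable property. The short Python sketch below is illustrative only – the names Account, deposit, and balance are assumptions, not part of any particular system – but it shows how such an equation can be checked directly against an implementation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Account:
        funds: float = 0.0

    def balance(account: Account) -> float:
        # Observer operation: report the current balance.
        return account.funds

    def deposit(account: Account, amount: float) -> Account:
        # Constructor operation: return a new account state with the amount added.
        return Account(account.funds + amount)

    # Check the axiom balance(deposit(account, amount)) = balance(account) + amount
    for amount in (0.0, 50.0, 123.45):
        acc = Account(100.0)
        assert balance(deposit(acc, amount)) == balance(acc) + amount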
2.10 Case Study: Student Result Management System
Functional Requirements:
o Enter student marks.
o Generate grade sheet.
o Calculate pass/fail.
Non-functional Requirements:
o Accessible only by authorized staff.
o Response time < 2 seconds.
Domain Requirement:
o Follow university exam regulations.
2.11 Software Quality Management
Definition: Ensures software meets customer expectations & follows standards.
(i) Software Quality
Attributes: Reliability, Efficiency, Usability, Maintainability, Portability.
(ii) Software Quality Management System
Policies, processes, and documentation to ensure quality.
(iii) ISO 9000 Standards
International quality standards.
Focus on process quality, not just product quality.
(iv) SEI-CMM (Capability Maturity Model)
Framework for process improvement.
Levels:
1. Initial – ad hoc, chaotic.
2. Repeatable – basic project management.
3. Defined – standard process across organization.
4. Managed – metrics used for control.
5. Optimizing – continuous process improvement.
UNIT 3 – PROJECT MANAGEMENT
3.1 Software Project Management
Definition: Application of knowledge, skills, tools, and techniques to plan, monitor, and
control software projects.
Goal: Deliver a high-quality software product within time, cost, and resource
constraints.
Responsibilities of Project Manager (PM):
1. Planning – Define scope, schedule, cost, resources.
2. Organizing – Allocate tasks to teams.
3. Leading – Motivate and guide team.
4. Controlling – Monitor progress, manage risks.
5. Communication – Bridge between stakeholders and development team.
3.2 Project Planning
The foundation of project management → creates a roadmap for development.
Steps in Project Planning:
1. Define Objectives – What is to be achieved?
2. Scope Definition – Boundaries of the project (in-scope / out-of-scope).
3. Identify Deliverables – End products (documents, software, reports).
4. Resource Planning – Human resources, tools, machines, budget.
5. Schedule Planning – Timeline of tasks.
6. Risk Planning – Identify potential risks and solutions.
3.3 Project Size Estimation
To plan time and cost, we must estimate project size.
Metrics:
1. Lines of Code (LOC) – Estimate number of program lines.
o Easy but language-dependent.
o Example: 10,000 LOC system.
2. Function Point (FP) – Based on functionality delivered (inputs, outputs, files, inquiries).
o Independent of language.
o Example: ATM system FP = sum of input screens, outputs, reports.
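As a rough illustration, the sketch below computes a function point count in Python. The average complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7) and the adjustment formula FP = UFP × (0.65 + 0.01 × TDI) follow the commonly taught Albrecht/IFPUG scheme; the ATM counts and TDI value are invented for the example.

    # Average complexity weights for the five function types (assumed standard values).
    WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

    def function_points(counts: dict, tdi: int) -> float:
        ufp = sum(WEIGHTS[kind] * n for kind, n in counts.items())  # unadjusted FP
        vaf = 0.65 + 0.01 * tdi                                     # value adjustment factor
        return ufp * vaf

    # Hypothetical ATM system: 5 inputs, 4 outputs/reports, 2 inquiries,
    # 3 internal logical files, 1 external interface file, TDI = 30.
    atm = {"EI": 5, "EO": 4, "EQ": 2, "ILF": 3, "EIF": 1}
    print(function_points(atm, tdi=30))   # 85 UFP * 0.95 = 80.75 FP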
3.4 Project Estimation Techniques
1. Empirical Estimation Techniques
o Based on past project experience.
o Example: “Project X took 6 months with 10 people, so a similar project Y should take
roughly the same time and effort.”
2. COCOMO (Constructive Cost Model)
o Developed by Barry Boehm.
o Formula:
Effort = a × (KLOC)^b
where:
Effort = person-months
KLOC = thousand lines of code
a, b = constants based on project type (organic, semi-detached,
embedded).
Types of COCOMO:
o Basic – Quick estimation.
o Intermediate – Includes cost drivers (complexity, reliability, experience).
o Detailed – Phase-wise estimation.
3. Halstead’s Software Science
o Based on operators and operands in code.
o Key Measures:
n1 = number of distinct operators
n2 = number of distinct operands
N1 = total operators
N2 = total operands
o Program Length (N) = N1 + N2
o Vocabulary (n) = n1 + n2
o Volume (V) = N × log₂(n)
Example: For a given program, more distinct operators and operands → larger vocabulary and volume → higher complexity.
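The two estimation formulas above can be computed directly. The Python sketches below use the standard Basic COCOMO constants (organic 2.4/1.05, semi-detached 3.0/1.12, embedded 3.6/1.20) and the Halstead measures as defined above; the sample counts are illustrative.

    import math

    # Basic COCOMO: Effort = a * (KLOC)^b, effort in person-months.
    COCOMO = {
        "organic":       (2.4, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (3.6, 1.20),
    }

    def cocomo_effort(kloc: float, project_type: str) -> float:
        a, b = COCOMO[project_type]
        return a * (kloc ** b)

    print(round(cocomo_effort(10, "organic"), 1))   # 10 KLOC organic project ≈ 26.9 person-months

    # Halstead's measures: length N, vocabulary n, volume V = N * log2(n).
    def halstead(n1: int, n2: int, N1: int, N2: int) -> dict:
        length = N1 + N2
        vocabulary = n1 + n2
        volume = length * math.log2(vocabulary)
        return {"N": length, "n": vocabulary, "V": round(volume, 2)}

    # A tiny program with 5 distinct operators, 4 distinct operands,
    # 12 operator occurrences and 9 operand occurrences.
    print(halstead(n1=5, n2=4, N1=12, N2=9))        # N=21, n=9, V ≈ 66.57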
3.5 Staffing Level Estimation
Predicting how many people are needed at each phase.
Uses Putnam’s Model (based on Rayleigh curve).
Manpower demand is high in the middle phases (coding & testing), lower at start
(planning) and end (maintenance).
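Putnam's model is usually presented via the Norden–Rayleigh manpower curve. As a rough sketch (the total effort and peak month below are invented), the function shows how staffing rises to a mid-project peak and then tapers off, matching the note above.

    import math

    def rayleigh_staffing(t: float, total_effort: float, t_peak: float) -> float:
        # Norden–Rayleigh curve: staffing rate m(t) for total effort K (person-months),
        # peaking at time t_peak: m(t) = (K / t_peak^2) * t * exp(-t^2 / (2 * t_peak^2))
        return (total_effort / t_peak ** 2) * t * math.exp(-(t ** 2) / (2 * t_peak ** 2))

    # Example: 100 person-months of total effort, manpower peaking around month 6.
    for month in (1, 3, 6, 9, 12):
        print(month, round(rayleigh_staffing(month, total_effort=100, t_peak=6), 2))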
3.6 Scheduling
Definition: Arranging tasks in a timeline.
Tools:
1. Gantt Chart – Bar chart showing tasks vs. time.
o Simple, easy visualization.
o Example: Design phase → Jan, Coding → Feb–Mar.
2. PERT (Program Evaluation Review Technique)
o Uses optimistic (O), pessimistic (P), and most likely (M) times.
o Expected Time (TE): TE = (O + 4M + P) / 6 (a small numeric sketch follows this list).
3. CPM (Critical Path Method)
o Identifies the longest path in task network = minimum project duration.
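The PERT expected-time formula from item 2 can be applied as follows; the task durations are invented for illustration.

    def pert_expected_time(o: float, m: float, p: float) -> float:
        # TE = (O + 4M + P) / 6
        return (o + 4 * m + p) / 6

    # Example: a coding task with optimistic 4 days, most likely 6 days, pessimistic 14 days.
    print(pert_expected_time(4, 6, 14))   # TE = 7.0 days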
3.7 Organization and Team Structures
Types of Team Structures:
1. Hierarchical (Chief Programmer Team)
o One leader + subordinates.
o Pros: Clear authority.
o Cons: Dependency on leader.
2. Democratic Team
o All members are equal, decisions made collectively.
o Pros: Creativity, participation.
o Cons: Slow decision-making.
3. Matrix Team
o Combines functional managers + project managers.
o Pros: Resource sharing.
o Cons: Conflicts between managers.
3.8 Risk Management
Definition: Identifying, analyzing, and controlling potential risks.
Types of Risks:
o Project risks (schedule slippage, cost overrun).
o Product risks (low performance, defects).
o Business risks (market failure).
Steps in Risk Management:
1. Risk Identification
2. Risk Analysis (probability & impact)
3. Risk Planning (mitigation, contingency)
4. Risk Monitoring
Example: “Key developer may leave” → mitigation = knowledge transfer.
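One common way to compare risks during risk analysis is risk exposure = probability × impact. The sketch below is illustrative (the probabilities and costs are invented); it ranks risks so that mitigation effort goes to the largest exposures first.

    def risk_exposure(probability: float, impact_cost: float) -> float:
        # Expected loss if nothing is done about the risk.
        return probability * impact_cost

    risks = [
        ("Key developer may leave",   0.30, 40000),
        ("Requirements change late",  0.50, 25000),
        ("Hardware delivery delayed", 0.10, 60000),
    ]

    # Rank risks by exposure so the biggest ones get mitigation plans first.
    for name, p, cost in sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True):
        print(f"{name}: exposure = {risk_exposure(p, cost):,.0f}")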
3.9 Software Configuration Management (SCM)
Definition: The discipline of managing software changes systematically.
Activities:
o Version Control – maintain versions (e.g., Git, SVN).
o Change Control – approve/reject modifications.
o Build Management – automated builds & deployment.
o Release Management – distributing final product.
Benefits: Avoids confusion, enables team collaboration, ensures traceability.
3.10 Miscellaneous Plans in Project Management
Besides the main project plan, PM also prepares:
Quality Plan – ensures standards are followed.
Testing Plan – strategies for unit, integration, system testing.
Training Plan – training for developers and users.
Maintenance Plan – post-release support.
UNIT IV – SOFTWARE TESTING & MAINTENANCE
4.1 Software Testing – Introduction
Definition: The process of executing a program with the intention of finding errors.
Objective:
o Ensure the software meets requirements.
o Identify defects before deployment.
o Improve quality and reliability.
Testing Principles (by Glenford Myers):
1. Testing shows the presence of defects, not their absence.
2. Exhaustive testing is impossible.
3. Early testing saves time & cost.
4. Defects cluster in few modules.
5. Pesticide paradox – repeating same tests finds fewer bugs → need new test cases.
6. Testing depends on context.
7. Absence of errors ≠ useful system (must meet user needs).
4.2 Levels of Testing
1. Unit Testing
o Testing individual modules or components.
o Done by developers.
o Tools: JUnit (Java), NUnit (.NET).
2. Integration Testing
o Testing combined modules to check interaction.
o Approaches:
Top-down → test top modules first, then integrate down.
Bottom-up → test lower modules first.
Big Bang → integrate all at once (risky).
3. System Testing
o Entire system tested as a whole.
o Focus on verifying functional + non-functional requirements.
4. Acceptance Testing
o Done by client/end-users.
o Alpha testing – at developer’s site.
o Beta testing – at user’s site.
4.3 Testing Techniques
(a) Black-Box Testing
Focuses on input-output behavior.
Test cases derived from requirements & specifications.
Techniques:
o Equivalence Partitioning
o Boundary Value Analysis
o Cause-Effect Graphing
Example: ATM → input PIN → check valid/invalid case.
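To make the ATM example concrete, the sketch below derives black-box test cases for a hypothetical validate_pin function (a 4-digit numeric PIN) purely from its specification, using equivalence partitioning and boundary value analysis. The function itself is only a stand-in.

    def validate_pin(pin: str) -> bool:
        # Hypothetical specification: a PIN is valid iff it is exactly 4 digits.
        return pin.isdigit() and len(pin) == 4

    # Equivalence partitioning: one representative value per class.
    assert validate_pin("1234") is True     # valid class: exactly 4 digits
    assert validate_pin("12a4") is False    # invalid class: contains a non-digit
    assert validate_pin("") is False        # invalid class: empty input

    # Boundary value analysis: lengths just below, at, and just above the limit.
    assert validate_pin("123") is False     # 3 digits
    assert validate_pin("1234") is True     # 4 digits (boundary)
    assert validate_pin("12345") is False   # 5 digits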
(b) White-Box Testing
Focuses on internal structure/code.
Requires programming knowledge.
Techniques:
o Statement Coverage
o Branch Coverage
o Path Coverage
Example: Test all paths in a “login” function.
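As a concrete counterpart, the sketch below shows white-box tests chosen to cover every branch of a small, hypothetical login function (the hard-coded password is purely illustrative).

    def login(username: str, password: str) -> str:
        if not username:               # branch 1
            return "username required"
        if password == "secret":       # branch 2 (illustrative hard-coded check)
            return "success"
        return "invalid password"

    # Branch coverage: every outcome of every decision is exercised at least once.
    assert login("", "secret") == "username required"    # branch 1 taken
    assert login("asha", "secret") == "success"           # branch 1 not taken, branch 2 taken
    assert login("asha", "wrong") == "invalid password"   # branch 2 not taken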
(c) Gray-Box Testing
Mix of black-box + white-box approaches.
4.4 Regression Testing
Testing performed after modifications to ensure no new defects introduced.
Tools: Selenium, QTP, JUnit.
4.5 Software Maintenance
Definition: Modification of software after delivery to correct faults, improve
performance, or adapt to new environment.
Types of Maintenance:
1. Corrective Maintenance – Fixing defects found by users.
o Example: Bug fix in billing software.
2. Adaptive Maintenance – Modifying software to adapt to environment changes.
o Example: Updating system for new OS version.
3. Perfective Maintenance – Enhancing features & performance.
o Example: Adding search functionality.
4. Preventive Maintenance – Improving maintainability & preventing future issues.
o Example: Refactoring code.
4.6 Software Re-engineering
Definition: The process of analyzing and modifying an existing system to improve its
quality, maintainability, or extend its life.
Steps in Re-engineering:
1. Reverse Engineering – analyzing system to extract design and requirements.
2. Restructuring – improving internal structure (code, database).
3. Forward Engineering – building improved system using extracted knowledge.
Benefits:
Improves maintainability.
Extends software lifetime.
Reduces risk compared to building from scratch.
4.7 Differences – Testing vs. Debugging
Testing: Process of finding errors.
Debugging: Process of locating and fixing the cause of errors.
UNIT V – SOFTWARE QUALITY & RE-ENGINEERING
5.1 Software Quality – Introduction
Definition:
Software quality means conformance to requirements and fitness for use.
Goal: Deliver software that satisfies customer needs and adheres to standards.
Dimensions of Quality (McCall’s Quality Model)
1. Product Operation – Correctness, Reliability, Efficiency, Integrity, Usability.
2. Product Revision – Maintainability, Flexibility, Testability.
3. Product Transition – Portability, Reusability, Interoperability.
5.2 Quality Attributes
Reliability – Software performs consistently without failure.
Efficiency – Optimal use of resources (CPU, memory).
Usability – Easy to learn and use.
Maintainability – Easy to correct, adapt, enhance.
Portability – Can run on different platforms with little modification.
5.3 Software Quality Assurance (SQA)
Definition: A set of activities that ensures software processes and products conform to
requirements, standards, and procedures.
SQA Activities:
1. Process definition and implementation.
2. Audits and Reviews.
3. Testing and Verification.
4. Metrics collection and analysis.
5. Adherence to standards (ISO, CMM).
5.4 Quality Metrics
Metrics are used to measure and improve software quality.
(i) Product Metrics
Measure characteristics of software product.
o Defect density = (Number of defects / KLOC).
o Cyclomatic complexity = measures the logical complexity of a program (number of linearly independent paths through the code).
o Code coverage = % of code executed in testing.
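The product metrics just listed are simple ratios; a minimal sketch (with invented numbers) is shown below.

    def defect_density(defects: int, kloc: float) -> float:
        # Defects per thousand lines of code.
        return defects / kloc

    def code_coverage(executed_lines: int, total_lines: int) -> float:
        # Percentage of code exercised by the test suite.
        return 100.0 * executed_lines / total_lines

    print(defect_density(45, 15))        # 3.0 defects/KLOC
    print(code_coverage(3600, 4000))     # 90.0 %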
(ii) Process Metrics
Evaluate efficiency of software development process.
o Effort (person-months).
o Productivity = LOC or FP per person-month.
o Defect removal efficiency.
(iii) Project Metrics
Monitor project progress and control.
o Schedule variance (SV).
o Cost variance (CV).
o Number of open change requests.
5.5 Quality Standards
(i) ISO 9000
International set of standards for quality management systems.
Focuses on process quality, not just product quality.
Ensures documentation, reviews, and audits.
(ii) SEI-CMM (Capability Maturity Model)
A framework for process improvement developed by SEI (Software Engineering
Institute).
Levels of Maturity:
1. Initial – Ad hoc, chaotic, no defined processes.
2. Repeatable – Basic project management, success depends on individuals.
3. Defined – Organization-wide standard processes established.
4. Managed – Quantitative metrics and quality measures used.
5. Optimizing – Continuous process improvement.
5.6 Software Re-engineering
Definition: Process of improving existing software by analyzing, restructuring, and
rebuilding.
Why needed?
o Old software difficult to maintain.
o Technology evolves (new OS, new DBMS).
o Cost of re-engineering < cost of redevelopment.
Activities in Re-engineering:
1. Reverse Engineering – Analyze code/design to recover requirements.
2. Restructuring – Improve structure (e.g., refactor code, redesign DB).
3. Forward Engineering – Develop new system using extracted knowledge.
Benefits:
Reduces maintenance cost.
Improves system performance and quality.
Extends life cycle of legacy systems.
5.7 CASE Tools (Computer-Aided Software Engineering)
Definition: Automated tools that assist in different stages of software development.
Types of CASE Tools:
1. Upper CASE tools – support analysis & design (e.g., Rational Rose for UML).
2. Lower CASE tools – support implementation, testing, and maintenance (e.g., JUnit,
Selenium).
3. Integrated CASE tools – support entire SDLC (e.g., Enterprise Architect).
Benefits:
Faster development.
Higher consistency and accuracy.
Easy maintenance and documentation.