
TADIPATRI ENGINEERING COLLEGE

Lecture Notes

on

SOFTWARE ENGINEERING
20A05403T
By

B. JAVEED BASHA

ASSISTANT PROFESSOR

DEPARTMENT OF CSE

2022-2023
Contents

S.No. Topics

1. Syllabus

2. Unit-I

3. Unit-II

4. Unit-III

5. Unit-IV

6. Unit-V

7. 2 M Questions

8. 5 M Questions

9. 10 M Questions
R20 Regulations
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, ANANTAPUR
(Established by Govt. of A.P., ACT No.30 of 2008)
ANANTHAPURAMU – 515 002 (A.P) INDIA
Computer Science & Engineering
Course Code 20A05403T
SOFTWARE ENGINEERING
(Common to CSE, IT, CSE (DS), CSE (IoT))

L T P C
3 0 0 3
Semester IV

Pre-requisite: NIL

Course Objectives:
• To learn the basic concepts of software engineering and life cycle models
• To explore the issues in software requirements specification and enable to write SRS documents for
software development problems
• To elucidate the basic concepts of software design and enable to carry out procedural and object
oriented design of software development problems
• To understand the basic concepts of black box and white box software testing and enable to design
test cases for unit, integration, and system testing
• To reveal the basic concepts in software project management

Course Outcomes (CO):


After completion of the course, students will be able to
• Obtain basic software life cycle activity skills.
• Design software requirements specifications for given problems.
• Implement structured and object-oriented analysis and design for given problems.
• Design test cases for given problems.
• Apply quality management concepts at the application level.

UNIT - I: Basic concepts in software engineering and software project management


Lecture: 8 Hrs
Basic concepts: abstraction versus decomposition, evolution of software engineering techniques, Software
development life cycle (SDLC) models: Iterative waterfall model, Prototype model, Evolutionary model,
Spiral model, RAD model, Agile models, software project management: project planning, project estimation,
COCOMO, Halstead’s Software Science, project scheduling, staffing, Organization and team structure, risk
management, configuration management.

UNIT – II: Requirements analysis and specification


Lecture: 8 Hrs
The nature of software, The Unique nature of Webapps, Software Myths, Requirements gathering and
analysis, software requirements specification, Traceability, Characteristics of a Good SRS Document, IEEE
830 guidelines, representing complex requirements using decision tables and decision trees, overview of
formal system development techniques, axiomatic specification, algebraic specification
UNIT – III: Software Design Lecture 9Hrs
Good Software Design, Cohesion and coupling, Control Hierarchy: Layering, Control Abstraction, Depth
and width, Fan-out, Fan-in, Software design approaches, object oriented vs. function oriented design.
Overview of SA/SD methodology, structured analysis, Data flow diagram, Extending DFD technique to
real life systems, Basic Object oriented concepts, UML Diagrams, Structured design, Detailed design,
Design review, Characteristics of a good user interface, User Guidance and Online Help, Mode -based vs
Mode-less Interface, Types of user interfaces, Component-based GUI development, User interface design
methodology: GUI design methodology.
UNIT - IV Coding and Testing Lecture 9Hrs
Coding standards and guidelines, code review, software documentation, Testing, Black Box Testing,
White Box Testing, debugging, integration testing, Program Analysis Tools, system testing, performance
testing, regression testing, Testing Object Oriented Programs.
UNIT - V Software quality, reliability, and other issues Lecture 9Hrs
Software reliability, Statistical testing, Software quality and management, ISO 9000, SEI capability
maturity model (CMM), Personal software process (PSP), Six sigma, Software quality metrics, CASE and
its scope, CASE environment, CASE support in software life cycle, Characteristics of software
maintenance, Software reverse engineering, Software maintenance processes model, Estimation
maintenance cost. Basic issues in any reuse program, Reuse approach, Reuse at organization level.

Textbooks:
1. Rajib Mall, “Fundamentals of Software Engineering”, 5th Edition, PHI, 2018.
2. Pressman R, “Software Engineering: A Practitioner’s Approach”, McGraw Hill.

Reference Books:
1. Sommerville, “Software Engineering”, Pearson.
2. Richard Fairley, “Software Engineering Concepts”, Tata McGraw Hill.
3. JalotePankaj, “An integrated approach to Software Engineering”, Narosa


UNIT – 1
Basic concepts in software engineering and software project management
Basic concepts: abstraction versus decomposition, evolution of software engineering
techniques, Software development life cycle (SDLC) models: Iterative waterfall model,
Prototype model, Evolutionary model, Spiral model, RAD model, Agile models, software
project management: project planning, project estimation, COCOMO, Halstead’s Software
Science, project scheduling, staffing, Organization and team structure, risk management,
configuration management.

What is Software Engineering?


The term software engineering is the product of two words, software, and engineering

The software is a collection of integrated programs.

Software consists of carefully organized instructions and code, written by developers in any of various programming languages.

Computer programs and related documentation such as requirements, design models and user manuals.

Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks,
processes, etc.

Software Engineering is an engineering branch concerned with the development of software products using well-defined scientific principles,
techniques, and procedures. The result of software engineering is an effective and reliable software product.

Why is Software Engineering required?
Software Engineering is required due to the following reasons:

o To manage Large software


o For more Scalability
o Cost Management
o To manage the dynamic nature of software
o For better quality Management

Need of Software Engineering


The necessity of software engineering appears because of a higher rate of progress in user requirements and the environment on which
the program is working.

o Huge Programming: It is simpler to build a wall than a house or building; similarly, as the size of programs becomes large,
engineering has to step in to give the work a scientific process.
o Adaptability: If the software process were not based on scientific and engineering ideas, it would be simpler to re-create new
software than to scale up an existing one.
o Cost: The hardware industry has demonstrated its skills, and mass manufacturing has brought down the cost of computer and
electronic hardware. But the cost of software remains high if a proper process is not adopted.
o Dynamic Nature: The continually growing and adapting nature of software depends hugely upon the environment in
which the client works. If the nature of the software keeps changing, new upgrades need to be made to the existing one.
o Quality Management: A better process of software development provides a better quality software product.

Characteristics of a good software engineer


The features that good software engineers should possess are as follows:

Exposure to systematic methods, i.e., familiarity with software engineering principles.

Good technical knowledge of the project range (Domain knowledge).

Good programming abilities.

Good communication skills. These comprise oral, written, and interpersonal skills.

High motivation.

Sound knowledge of fundamentals of computer science.

Intelligence.

Ability to work in a team

Importance of Software Engineering

The importance of Software engineering is as follows:

1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering offers a good
solution to reduce the complexity of any project. Software engineering divides big problems into various small issues and
then solves each small issue one by one. All these small problems are solved independently of each other.
2. To minimize software cost: Software needs a lot of hard work, and software engineers are highly paid experts. A lot of
manpower is required to develop software with a large amount of code. But in software engineering, programmers plan
everything and remove the things that are not needed. In turn, the cost of software production becomes lower as
compared to software that does not use the software engineering method.
3. To decrease time: Anything that is not built according to a plan always wastes time. And if you are making large
software, you may need to write and run a lot of code before arriving at the definitive running code. This is a very time-consuming procedure,
and if it is not well handled, it can take a lot of time. So if you are making your software according to the software
engineering method, it will save a lot of time.
4. Handling big projects: Big projects are not done in a couple of days; they need lots of patience, planning, and
management. Investing six or seven months of a company's effort requires a great deal of planning, direction, testing, and
maintenance. No one can say that four months of a company's time have been given to a task while the project is still in its first stage,
because the company has committed many resources to the plan and it should be completed. So, to handle a big project without
problems, the company has to adopt the software engineering method.
5. Reliable software: Software should be reliable, meaning that once delivered, it should work for at least its
given time or subscription period, and if any bugs appear in the software, the company is responsible for fixing them.
Because testing and maintenance are part of software engineering, there is no worry about its reliability.
6. Effectiveness: Effectiveness comes when something is made according to standards. Software standards are a big target for
companies, to make their products more effective. So software becomes more effective with the help of software engineering.


Abstraction versus Decomposition in Software Engineering


Abstraction:
Abstraction refers to the construction of a simpler version of a problem by ignoring the details. The principle of constructing an abstraction is
popularly known as modeling.

It is the simplification of a problem by focusing on only one aspect of the problem while omitting all other aspects. When using the
principle of abstraction to understand a complex problem, we focus our attention on only one or two specific aspects of the problem and
ignore the rest.

Whenever we omit some details of a problem to construct an abstraction, we construct a model of the problem. In everyday life, we use
the principle of abstraction frequently to understand a problem or to assess a situation.

Decomposition:
Decomposition is a process of breaking down functions into smaller parts. It is another important
principle of software engineering for handling problem complexity. This principle is made use of profusely by several
software engineering techniques to contain the exponential growth of perceived problem complexity. The
decomposition principle is popularly known as the divide-and-conquer principle.

Functional Decomposition:
It is a term that engineers use to describe a set of steps in which they break down the overall function of a device, system, or process into
its smaller parts.

Steps for the Functional Decomposition:


1. Find the most general function

2. Find the closest sub-functions

3. Find the next levels of sub-functions
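
As a rough illustration of these steps, here is a small Python sketch; the function names and the order-processing scenario are invented purely for illustration and are not taken from any particular system. The most general function sits at the top, and each level is broken into its closest sub-functions and then a further level of sub-functions:

# Level 0: the most general function (hypothetical example).
def process_order(order):
    validate_order(order)                 # Level 1 sub-function
    total = compute_total(order)          # Level 1 sub-function
    arrange_delivery(order)               # Level 1 sub-function
    return total

# Level 1: the closest sub-functions.
def validate_order(order):
    check_stock(order)                    # Level 2 sub-function
    check_payment_details(order)          # Level 2 sub-function

def compute_total(order):
    return sum(item["price"] * item["qty"] for item in order["items"])

def arrange_delivery(order):
    print("Scheduling delivery to", order["address"])

# Level 2: the next level of sub-functions.
def check_stock(order):
    for item in order["items"]:
        assert item["qty"] > 0, "quantity must be positive"

def check_payment_details(order):
    assert "payment" in order, "payment details missing"

sample = {"items": [{"price": 100, "qty": 2}], "address": "Tadipatri", "payment": "card"}
print(process_order(sample))              # prints the delivery line, then 200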

In software engineering there are two main concepts in the design phase, abstraction and decomposition; what are the differences
between them?

Both concepts are basic analysis & design techniques. They are interrelated and normally used together during software development. We use
them even though we are not always aware of it. A deeper understanding of these concepts helps us be more accurate and effective in analysis & design.

Abstraction in general is the process of consciously ignoring some aspects of a subject under analysis in order to better understand other aspects of
it. In other words, it is a kind of simplification of a subject. In software in particular, analysis & design are all about abstraction.

 When you model your DB, you ignore UI and behavior of your system and concentrate only on DB structure.
 When you model your architecture, you concentrate on high-level modules and their relationships and ignore their internal structure
 Each UML diagram, for example, gives a special, limited view of the system, therefore focusing on a single aspect and ignoring all others
(sequence diagrams abstract objects and messages, deployment diagrams abstract the network and servers, use case diagrams abstract system users and their interactions
with the system, etc.)
 Writing source code in any programming language requires a lot of abstraction - programmers abstract an app's functionality using a limited set
of language constructs
Decomposition is an application of the good old "divide and conquer" principle to software development. It is a technique of classifying,
structuring and grouping complex elements in order to end up with more atomic ones, organized in a certain fashion and easier to manage. In all
phases there are lots of examples:

 functional decomposition of a complex process to hierarchical structure of smaller sub-processes and activities
 high-level structure of a complex application into 3 tiers - UI, logic and data.
 Class structure of a complex domain
 namespaces as a common concept of breaking a global scope into several local ones
 UML packages are a direct use of decomposition on the model level - use packages to organize your model
Abstraction is a somewhat more generic principle than decomposition, a kind of "father of all principles".

Abstraction is one of the fundamental principles of object oriented programming. Abstraction allows us to name objects that are not directly
instantiated but serve as a basis for creating objects with some common attributes or properties. For example: in the context of computer
accessories Data Storage Device is an abstract term because it can either be a USB pen drive, hard disk, or RAM. But a USB pen drive or hard
disks are concrete objects because their attributes and behaviors are easily identifiable, which is not the case for Data Storage Device, being an
abstract object for computer accessories. So, abstraction is used to generalize objects into one category in the design phase. For example in a
travel management system you can use Vehicle as an abstract object or entity that generalizes how you travel from one place to another .
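
The following minimal Python sketch illustrates this idea of abstraction using the Data Storage Device example above; the class and method names are invented for illustration and are not part of any standard library:

from abc import ABC, abstractmethod

class DataStorageDevice(ABC):
    # Abstract term: never instantiated directly, it only generalizes concrete devices.
    @abstractmethod
    def capacity_gb(self) -> int:
        ...

class USBPenDrive(DataStorageDevice):
    def capacity_gb(self) -> int:
        return 64            # concrete, easily identifiable attribute

class HardDisk(DataStorageDevice):
    def capacity_gb(self) -> int:
        return 1024

def total_capacity(devices):
    # Client code works with the abstraction and ignores the concrete details.
    return sum(device.capacity_gb() for device in devices)

print(total_capacity([USBPenDrive(), HardDisk()]))   # 1088
# DataStorageDevice() would raise a TypeError, since the abstract class cannot be instantiated.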

Decomposition is a way to break down your systems into modules in such a way that each module provides different functionality, but may
affect other modules also. To understand decomposition quite clearly, you should first understand the concepts of association, composition, and
aggregation.
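
As a small, hypothetical sketch of these ideas (all class names below are invented for illustration), the travel example above could be decomposed into modules related by composition, aggregation, and association:

class Engine:
    # Composition: an Engine exists only as a part of the Vehicle that owns it.
    def start(self):
        return "engine started"

class Vehicle:
    def __init__(self):
        self.engine = Engine()            # the Vehicle creates and owns its Engine

class Trip:
    def __init__(self, vehicle, travellers):
        self.vehicle = vehicle            # aggregation: the Trip uses a Vehicle it does not own
        self.travellers = travellers      # association with the travellers on this trip

    def begin(self):
        return f"{len(self.travellers)} traveller(s): {self.vehicle.engine.start()}"

bus = Vehicle()
print(Trip(bus, ["Asha", "Ravi"]).begin())    # 2 traveller(s): engine started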


Evolution of software engineering techniques:-


Software Evolution is a term which refers to the process of developing software initially, then timely updating it for various
reasons, i.e., to add new features or to remove obsolete functionalities etc. The evolution process includes fundamental
activities of change analysis, release planning, system implementation and releasing a system to customers.
The cost and impact of these changes are assessed to see how much of the system is affected by the change and how much it might
cost to implement the change. If the proposed changes are accepted, a new release of the software system is planned. During
release planning, all the proposed changes (fault repair, adaptation, and new functionality) are considered.
A decision is then made on which changes to implement in the next version of the system. The process of change implementation
is an iteration of the development process where the revisions to the system are designed, implemented and tested.
The necessity of Software evolution: Software evolution is necessary because of the following reasons:
a) Change in requirement with time: With the passage of time, the organization’s needs and modus operandi of working may
change substantially, so in these frequently changing times the tools (software) that the organization is using need to change to
maximize performance.
b) Environment change: As the working environment changes, the things (tools) that enable us to work in that environment
also change proportionally. The same happens in the software world: as the working environment changes, organizations
need the reintroduction of old software with updated features and functionality to adapt to the new environment.
c) Errors and bugs: As the age of deployed software within an organization increases, its preciseness or impeccability
decreases, and its efficiency in bearing an increasingly complex workload also continually degrades. So, in that case, it becomes
necessary to avoid the use of obsolete and aged software. All such obsolete software needs to undergo the evolution process in
order to become robust as per the workload complexity of the current environment.
d) Security risks: Using outdated software within an organization may put you at the verge of various software-based
cyberattacks and could illegally expose the confidential data associated with the software that is in use. So, it becomes
necessary to avoid such security breaches through regular assessment of the security patches/modules used within the
software. If the software isn’t robust enough to withstand currently occurring cyberattacks, it must be changed (updated).
e) For having new functionality and features: In order to increase performance, speed up data processing and add other
functionality, an organization needs to continuously evolve the software throughout its life cycle so that stakeholders &
clients of the product can work efficiently.

Laws used for Software Evolution:

1. Law of continuing change:


This law states that any software system that represents some real-world reality undergoes continuous change or becomes
progressively less useful in that environment.
2. Law of increasing complexity:
As an evolving program changes, its structure becomes more complex unless effective efforts are made to avoid this
phenomenon.
3. Law of conservation of organization stability:
Over the lifetime of a program, the rate of development of that program is approximately constant and independent of the
resources devoted to system development.
4. Law of conservation of familiarity:
This law states that during the active lifetime of the program, changes made in the successive release are almost constant.

Software Development Life Cycle (SDLC) Models:


Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software.
The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and
cost estimates.
 SDLC is the acronym of Software Development Life Cycle.
 It is also called as Software Development Process.
 SDLC is a framework defining tasks performed at each step in the software development process.
 ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks
required for developing and maintaining software.

What is SDLC?
SDLC is a process followed for a software project, within a software organization. It consists of a detailed plan describing how to
develop, maintain, replace and alter or enhance specific software. The life cycle defines a methodology for improving the quality of
software and the overall development process.
The following figure is a graphical representation of the various stages of a typical SDLC.

A typical Software Development Life Cycle consists of the following stages −



Stage 1: Planning and Requirement Analysis


Requirement analysis is the most important and fundamental stage in SDLC. It is performed by the senior members of the team with
inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to
plan the basic project approach and to conduct product feasibility study in the economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with the project is also done in the planning
stage. The outcome of the technical feasibility study is to define the various technical approaches that can be followed to implement the
project successfully with minimum risks.

Stage 2: Defining Requirements


Once the requirement analysis is done the next step is to clearly define and document the product requirements and get them approved
from the customer or the market analysts. This is done through an SRS (Software Requirement Specification) document which
consists of all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture


SRS is the reference for product architects to come out with the best architecture for the product to be developed. Based on the
requirements specified in SRS, usually more than one design approach for the product architecture is proposed and documented in a
DDS - Design Document Specification.
This DDS is reviewed by all the important stakeholders and based on various parameters as risk assessment, product robustness, design
modularity, budget and time constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with its communication and data flow
representation with the external and third party modules (if any). The internal design of all the modules of the proposed architecture
should be clearly defined with the minutest of the details in DDS.

Stage 4: Building or Developing the Product


In this stage of SDLC the actual development starts and the product is built. The programming code is generated as per DDS during this
stage. If the design is performed in a detailed and organized manner, code generation can be accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and programming tools like compilers, interpreters,
debuggers, etc. are used to generate the code. Different high level programming languages such as C, C++, Pascal, Java and PHP are
used for coding. The programming language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product


This stage is usually a subset of all the stages as in the modern SDLC models, the testing activities are mostly involved in all the stages
of SDLC. However, this stage refers to the testing only stage of the product where product defects are reported, tracked, fixed and
retested, until the product reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance


Once the product is tested and ready to be deployed it is released formally in the appropriate market. Sometimes product deployment
happens in stages as per the business strategy of that organization. The product may first be released in a limited segment and tested in
the real business environment (UAT- User acceptance testing).
Then based on the feedback, the product may be released as it is or with suggested enhancements in the targeting market segment. After
the product is released in the market, its maintenance is done for the existing customer base.

SDLC Models
There are various software development life cycle models defined and designed which are followed during the software development
process. These models are also referred to as "Software Development Process Models". Each process model follows a series of steps
unique to its type to ensure success in the process of software development.
Following are the most important and popular SDLC models followed in the industry −

 Waterfall Model
 Iterative Model
 Spiral Model
 V-Model
 Big Bang Model
Other related methodologies are the Agile Model, RAD (Rapid Application Development) Model, and Prototyping Models.

Iterative waterfall model:


The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-sequential life cycle model. It is
very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin and there is no
overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software development.
The waterfall Model illustrates the software development process in a linear sequential flow. This means that any phase in the
development process begins only if the previous phase is complete. In this waterfall model, the phases do not overlap.

Waterfall Model - Design


Waterfall approach was first SDLC Model to be used widely in Software Engineering to ensure success of the project. In "The
Waterfall" approach, the whole process of software development is divided into separate phases. In this Waterfall model, typically, the
outcome of one phase acts as the input for the next phase sequentially.
The following illustration is a representation of the different phases of the Waterfall Model.

The sequential phases in Waterfall model are −


 Requirement Gathering and analysis − All possible requirements of the system to be developed are captured in this phase
and documented in a requirement specification document.
 System Design − The requirement specifications from first phase are studied in this phase and the system design is prepared.
This system design helps in specifying hardware and system requirements and helps in defining the overall system
architecture.
 Implementation − With inputs from the system design, the system is first developed in small programs called units, which are
integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as Unit Testing.
 Integration and Testing − All the units developed in the implementation phase are integrated into a system after testing of
each unit. Post integration the entire system is tested for any faults and failures.
 Deployment of system − Once the functional and non-functional testing is done; the product is deployed in the customer
environment or released into the market.
 Maintenance − There are some issues which come up in the client environment. To fix those issues, patches are released. Also
to enhance the product some better versions are released. Maintenance is done to deliver these changes in the customer
environment.
All these phases are cascaded to each other in which progress is seen as flowing steadily downwards (like a waterfall) through the
phases. The next phase is started only after the defined set of goals are achieved for previous phase and it is signed off, so the name
"Waterfall Model". In this model, phases do not overlap.

Waterfall Model - Application


Every software developed is different and requires a suitable SDLC approach to be followed based on the internal and external factors.
Some situations where the use of Waterfall model is most appropriate are −
 Requirements are very well documented, clear and fixed.
 Product definition is stable.
 Technology is understood and is not dynamic.
 There are no ambiguous requirements.
 Ample resources with required expertise are available to support the product.
 The project is short.

Waterfall Model - Advantages


The advantages of waterfall development are that it allows for departmentalization and control. A schedule can be set with deadlines for
each stage of development and a product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation, troubleshooting, and ends up at operation and
maintenance. Each phase of development proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
 Simple and easy to understand and use
 Easy to manage due to the rigidity of the model. Each phase has specific deliverables and a review process.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.
 Clearly defined stages.
 Well understood milestones.
 Easy to arrange tasks.
 Process and results are well documented.

Waterfall Model - Disadvantages


The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application is in the testing
stage, it is very difficult to go back and change something that was not well-documented or thought upon in the concept stage.
The major disadvantages of the Waterfall Model are as follows −
 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for the projects where requirements are at a moderate to high risk of changing. So, risk and uncertainty is high
with this process model.
 It is difficult to measure progress within stages.
 Cannot accommodate changing requirements.
 Adjusting scope during the life cycle can end a project.
 Integration is done as a "big bang" at the very end, which doesn't allow identifying any technological or business bottlenecks or
challenges early.

Prototype model:
The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall
objectives for the software, identify whatever requirements are known, and outline areas where further definition is
mandatory. A "quick design" then occurs. The quick design focuses on a representation of those aspects of the software that
will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of
a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration
occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better
understand what needs to be done.

Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype
is built, the developer attempts to use existing program fragments or applies tools (e.g., report generators,
window managers) that enable working programs to be generated quickly.

The prototype can serve as "the first system." The one that Brooks recommends we throw away. But this
may be an idealized view. It is true that both customers and developers like the prototyping paradigm. Users
get a feel for the actual system and developers get to build something immediately. Yet, prototyping can also
be problematic for the following reasons:

 The customer sees what appears to be a working version of the software, unaware that the prototype is held
together "with chewing gum and baling wire," unaware that in the rush to get it working no one has considered
overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high
levels of quality can be maintained, the customer cries foul and demands that "a few fixes" be applied to make the
prototype a working product. Too often, software development management relents.
 The developer often makes implementation compromises in order to get a prototype working quickly. An
inappropriate operating system or programming language may be used simply because it is available and known;
an inefficient algorithm may be implemented simply to demonstrate capability. After a time, the developer may
become familiar with these choices and forget all the reasons why they were inappropriate. The less-than-ideal
choice has now become an integral part of the system.

 Although problems can occur, prototyping can be an effective paradigm for software engineering.
The key is to define the rules of the game at the beginning; that is, the customer and developer must
both agree that the prototype is built to serve as a mechanism for defining requirements. It is
then discarded (at least in part) and the actual software is engineered with an eye toward quality and
maintainability.

Evolutionary model:
Evolutionary model is also referred to as the successive versions model and sometimes as
the incremental model. In Evolutionary model, the software requirement is first broken down into
several modules (or functional units) that can be incrementally constructed and delivered (see Figure
5).
The development team first develops the core modules of the system. The core modules are those that do
not need services from the other modules. The initial product skeleton is refined into increasing
levels of capability by adding new functionalities in successive versions. Each evolutionary model
may be developed using an iterative waterfall model of development.

Evolutionary Development of a Software Product


The evolutionary model is shown in Figure 6. Each successive version/model of the product is a
fully functioning software capable of performing more work than the previous versions/model.
The evolutionary model is normally useful for very large products, where it is easier to find modules
for incremental implementation.


Evolutionary Model of Software Development
Often, the evolutionary model is used when the customer prefers to receive the product in increments so
that he can start using the different features as and when they are developed, rather than waiting
for the full product to be developed and delivered.

Advantages of Evolutionary Model


 Large project: Evolutionary model is normally useful for very large products.

 User gets a chance to experiment with a partially developed software much before the
complete version of the system is released.

 Evolutionary model helps to accurately elicit user requirements during the delivery of different
versions of the software.

 The core modules get tested thoroughly, thereby reducing the chances of errors in the core
modules of the final products.

 Evolutionary model avoids the need to commit large resources in one go for development of the
system.

Disadvantages of Evolutionary Model


 Difficult to divide the problem into several versions that would be acceptable to the customer
and which can be incrementally implemented and delivered.


Spiral model:-

Spiral model is one of the most important Software Development Life Cycle models, which
provides support for Risk Handling. In its diagrammatic representation, it looks like a spiral
with many loops. The exact number of loops of the spiral is unknown and can vary from
project to project. Each loop of the spiral is called a Phase of the software development
process. The exact number of phases needed to develop the product can be varied by the
project manager depending upon the project risks. As the project manager dynamically
determines the number of phases, so the project manager has an important role to develop a
product using the spiral model.
The Radius of the spiral at any point represents the expenses(cost) of the project so far, and
the angular dimension represents the progress made so far in the current phase.
The diagram below shows the different phases of the Spiral Model:

Each phase of the Spiral Model is divided into four quadrants as shown in the above figure.
The functions of these four quadrants are discussed below-
1. Objectives determination and identify alternative solutions: Requirements are
gathered from the customers and the objectives are identified, elaborated, and analyzed at
the start of every phase. Then alternative solutions possible for the phase are proposed in
this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution
are identified and the risks are resolved using the best possible strategy. At the end of this
quadrant, the Prototype is built for the best possible solution.
3. Develop next version of the Product: During the third quadrant, the identified features
are developed and verified through testing. At the end of the third quadrant, the next
version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the
so far developed version of the software. In the end, planning for the next phase is started.
Risk Handling in Spiral Model
A risk is any adverse situation that might affect the successful completion of a software
project. The most important feature of the spiral model is handling these unknown risks after
the project has started. Such risk resolutions are easier done by developing a prototype. The
spiral model supports coping with risks by providing the scope to build a prototype at every
phase of the software development.
The Prototyping Model also supports risk handling, but the risks must be identified
completely before the start of the development work of the project. In real life, however, project risks
may occur after the development work starts; in that case, we cannot use the Prototyping
Model. In each phase of the Spiral Model, the features of the product are updated and analyzed,
and the risks at that point in time are identified and resolved through prototyping. Thus,
this model is much more flexible compared to other SDLC models.
Why Spiral Model is called Meta Model?
The Spiral model is called a Meta-Model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model. The spiral
model incorporates the stepwise approach of the Classical Waterfall Model. The spiral model
uses the approach of the Prototyping Model by building a prototype at the start of each phase
as a risk-handling technique. Also, the spiral model can be considered as supporting
the Evolutionary model – the iterations along the spiral can be considered as evolutionary
levels through which the complete system is built.
Advantages of Spiral Model:
Below are some advantages of the Spiral Model.
1. Risk Handling: The projects with many unknown risks that occur as the development
proceeds, in that case, Spiral Model is the best development model to follow due to the
risk analysis and risk handling at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex
projects.
3. Flexibility in Requirements: Change requests in the requirements at a later phase can be
incorporated accurately by using this model.
4. Customer Satisfaction: Customers can see the development of the product at the early
phases of the software development and thus become habituated to the system by using it
before completion of the total product.
Disadvantages of Spiral Model:
Below are some main disadvantages of the spiral model.
1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: Spiral Model is not suitable for small projects as it is expensive.
3. Too much dependence on Risk Analysis: The successful completion of the project is
very much dependent on risk analysis. Without highly experienced experts, developing a
project using this model is likely to fail.
4. Difficulty in time management: As the number of phases is unknown at the start of the
project, time estimation is very difficult.


THE RAD MODEL

Rapid application development (RAD) is an incremental software development process model that
emphasizes an extremely short development cycle. The RAD model is a "high speed" adaptation of
the linear sequential model in which rapid development is achieved by using component-based
construction. If requirements are well understood and project scope is constrained, the RAD process
enables a development team to create a "fully functional system" within very short time periods
(e.g., 60 to 90 days). Used primarily for information systems applications, the RAD approach
encompasses the following phases:

Business modeling The information flow among business functions is modeled in a way that
answers the following questions: What information drives the business process? What
information is generated? Who generates it? Where does the information go? Who processes it?

Data modeling The information flow defined as part of the business modeling phase is refined into
a set of data objects that are needed to support the business. The characteristics (called attributes)
of each object are identified and the relationships between these objects are defined.

Process modeling The data objects defined in the data modeling phase are transformed to achieve
the information flow necessary to implement a business function. Processing descriptions are
created for adding, modifying, deleting, or retrieving a data object.

Application generation RAD assumes the use of fourth generation techniques. Rather than creating
software using conventional third generation programming languages, the RAD process works to
reuse existing program components (when possible) or create reusable components (when
necessary). In all cases, automated tools are used to facilitate construction of the software.

Testing and turnover Since the RAD process emphasizes reuse, many of the program
components have already been tested. This reduces overall testing time. However, new
components must be tested and all interfaces must be fully exercised.

If a business application can be modularized in a way that enables each major function to be
completed in less than three months (using the approach described previously), it is a candidate for
RAD. Each major function can be addressed by a separate RAD team and then integrated to form a
whole

Like all process models, the RAD approach has drawbacks:

 For large but scalable projects, RAD requires sufficient human resources to create the right number of
RAD teams.
 RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a
system complete in a much abbreviated time frame. If commitment is lacking from either constituency,
RAD projects will fail.
 Not all types of applications are appropriate for RAD. If a system cannot be properly modularized,
building the components necessary for RAD will be problematic. If high performance is an issue and
performance is to be achieved through tuning the interfaces to system components, the RAD approach
may not work.
 RAD is not appropriate when technical risks are high. This occurs when a new application makes heavy
use of new technology or when the new software requires a high degree of interoperability with existing
computer programs.


Agile Modeling (AM)


Agile Modeling (AM) is a practice-based methodology for effective modeling and
documentation of software- based systems. Simply put, Agile Modeling (AM) is a collection of
values, principles, and practices for modeling software that can be applied on a software
development project in an effective and light-weight manner. Agile models are more effective
than traditional models because they are just barely good enough; they don’t have to be perfect.
Agile modeling adopts all of the values that are consistent with the agile manifesto. The
agile modeling philosophy recognizes that an agile team must have the courage to make decisions
that may cause it to reject a design and refactor. The team must also have the humility to
recognize that technologists do not have all the answers and that business experts and other
stakeholders should be respected and embraced.

Agile Modeling suggests a wide array of "core" and "supplementary" modeling principles; those
that make AM unique are:
• Model with a purpose. A developer who uses AM should have a specific goal in mind before
creating the model. Once the goal for the model is identified, the type of notation to be used and
level of detail required will be more obvious.
• Use multiple models. There are many different models and notations that can be used to describe
software. Only a small subset is essential for most projects. AM suggests that to provide needed
insight, each model should present a different aspect of the system and only those models that
provide value to their intended audience should be used.
• Travel light. As software engineering work proceeds, keep only those models that will provide
long-term value and jettison the rest. Every work product that is kept must be maintained as
changes occur. This represents work that slows the team down. Ambler notes that "every time you
decide to keep a model you trade off agility for the convenience of having that information".


Software Project Management:

Software Project Management (SPM) is a proper way of planning and leading software
projects. It is a part of project management in which software projects are planned,
implemented, monitored and controlled.
Need of Software Project Management:
Software is a non-physical product. Software development is a relatively new stream in business,
and there is very little experience in building software products. Most software
products are made to fit clients’ requirements. Most importantly, the underlying
technology changes and advances so frequently and rapidly that experience with one
product may not apply to the other one. Such business and environmental
constraints increase risk in software development; hence it is essential to manage
software projects efficiently.
It is necessary for an organization to deliver a quality product, keeping the cost within the
client’s budget constraints, and to deliver the project as per schedule. Hence,
software project management is necessary to incorporate user requirements along with
budget and time constraints.
Software Project Management consists of several different type of managements:
1. Conflict Management:
Conflict management is the process to restrict the negative features of conflict while
increasing the positive features of conflict. The goal of conflict management is to
improve learning and group results including efficacy or performance in an
organizational setting. Properly managed conflict can enhance group results.

2. Risk Management:
Risk management is the identification and analysis of risks, followed by
coordinated and economical application of resources to minimize, monitor and
control the probability or impact of unfortunate events, or to maximize the realization of
opportunities.

3. Requirement Management:
It is the process of analyzing, prioritizing, tracing and documenting on requirements
and then supervising change and communicating to pertinent stakeholders. It is a
continuous process during a project.

4. Change Management:
Change management is a systematic approach for dealing with the transition or
transformation of an organization’s goals, processes or technologies. The purpose of
change management is to execute strategies for effecting change, controlling change
and helping people to adapt to change.

5. Software Configuration Management:


Software configuration management is the process of controlling and tracking changes
in the software; it is part of the larger cross-disciplinary field of configuration
management. Software configuration management includes revision control and the
establishment of baselines.

6. Release Management:
Release Management is the task of planning, controlling and scheduling the building and
deployment of releases. Release management ensures that the organization delivers new
and enhanced services required by the customer, while protecting the integrity of
existing services.
Aspects of Software Project Management:

Advantages of Software Project Management:


 It helps in planning of software development.
 Implementation of software development is made easy.
 Monitoring and controlling are aspects of software project management.
 It overall manages to save time and cost for software development.


Project Planning:

Once a project is found to be feasible, software project managers undertake
project planning. Project planning is undertaken and completed even before any
development activity starts. Project planning consists of the following essential activities:
Estimating the following attributes of the project:
 Project size:
What is going to be the problem complexity in terms of the effort and time needed to
develop the product?
 Cost:
How much is it going to cost to develop the project?
 Duration:
How long is it going to take to complete development?
 Effort:
How much effort would be required?
The effectiveness of the subsequent planning activities depends on the accuracy of these
estimations.
 Scheduling manpower and other resources
 Staff organization and staffing plans
 Risk identification, analysis, and abatement planning
 Miscellaneous plans such as quality assurance plan, configuration management
plan, etc.
Precedence ordering among project planning activities:
The different project-related estimates made by a project manager have already been
mentioned. The diagram below shows the order in which the important project planning
activities may be undertaken. It can easily be observed that size estimation is the
first activity. It is also the most basic parameter based on which all other
planning activities are carried out; other estimations such as the
estimation of effort, cost, resources, and project duration are also important elements of
project planning.
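
Since size estimation drives the other estimates, a small worked sketch is given below using the basic COCOMO relations for an organic-mode project (Effort = 2.4 x (KLOC)^1.05 person-months, Tdev = 2.5 x (Effort)^0.38 months). The assumed size of 32 KLOC is purely illustrative and not taken from any real project:

# Illustrative only: basic COCOMO, organic mode.
kloc = 32.0                             # assumed estimated size in thousands of lines of code

effort = 2.4 * kloc ** 1.05             # estimated effort in person-months (about 91 PM)
duration = 2.5 * effort ** 0.38         # nominal development time in months (about 14 months)
avg_staff = effort / duration           # average staffing level (about 7 persons)

print(f"Effort   : {effort:.1f} person-months")
print(f"Duration : {duration:.1f} months")
print(f"Staffing : {avg_staff:.1f} persons on average")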


Sliding Window Planning:


Project planning needs utmost care and attention since commitment to unrealistic
time and resource estimates results in schedule slippage. Schedule delays can cause customer
dissatisfaction and adversely affect team morale. They can even cause project
failure.
However, project planning is a very challenging activity. Especially for large
projects, it is quite difficult to make accurate plans. A part of this difficulty is
due to the fact that the proper parameters, the scope of the project, project
staff, etc. may change during the span of the project. In order to overcome this
problem, project managers sometimes undertake project planning a little at a time.
Planning a project over a number of stages protects managers from making big
commitments too early. This technique of staggered planning is known as sliding window
planning. In the sliding window technique, starting with an initial plan, the
project is planned more accurately over successive development stages.
At the start of a project, project managers have incomplete knowledge about
the details of the project. Their information base gradually improves as the project
progresses through different phases. After the completion of every phase,
the project managers can plan each subsequent phase more accurately and with
increasing levels of confidence.


Project Estimation:

Estimations of all kinds are entrenched in our day-to-day life. When we plan a trip, we usually
estimate expenses for accommodation, meals, and transportation. We also calculate how much
time we need to get to the hotel and the airport, figuring out the shortest route. We also set our
priorities and do our best to stick to them.
In software development, accurate estimating is far more vital since the success of your effort is at
stake. As a project manager or business owner, your top priority is to meet deliverables' time-
frames and optimize the budget.
The biggest challenge with project estimation is that there's a great deal of ambiguity when it
comes to software development. It can be hard to gauge how much it'll cost on the first try, as a
lot of factors are at play. It can even take hours of preliminary research and coming up with
unconventional methods to tackle it. However, you can try to use the standard approaches we
have described below and see if they work.
First, what is project estimation?
Generally speaking, it's the process of analyzing available data to predict the time, cost, and
resources needed to complete a project. Typically, project estimation includes scope, time-frames,
budget, and risks.
Key Components of Project Estimation
Scope
CIO defines project scope as a detailed outline of all aspects of a project, including all related
activities, resources, timelines, and deliverables, as well as the project's boundaries. The project
scope also outlines key stakeholders, processes, assumptions, and constraints, as well as what the
project is about, what is included, and what isn't. All of this essential information is documented in
a scope statement.
The project statement of work (SoW) details all aspects of the project, from the software
development life cycle to key meetings and status updates. Accurately estimating the project's
scope means you can have a more precise understanding of the cost, time-frames, and potential
bottlenecks.
Time-frame
With a scope of work on hand, it's easier to estimate how long it’s going to take to achieve each
milestone. Make sure the time-frame allows time for management and administration. Also, make
sure you prioritize tasks, identifying those that need to be completed before others. Some factors
might also slow things down, like meetings, holidays, interruptions of all sorts, and rejections from
Quality Assurance.
A proper timeline estimation covering all elements of the project will show you how much time
will go into completing different parts, the interdependent deliverables, and when each major
milestone will be achieved.
Resources
Defining the scope of work and timeline makes it easier to understand what resources the project
needs. Resources are staff, vendors, contractors as well as equipment. You should allocate
resources for the tasks in the scope of work. Before you do that, you need to know their availability
and schedule in advance. This way, you increase the reliability of a project.
Cost
Cost is an essential part of a project. Everyone wants to know how much it will cost to develop a
project before delving into it. To predict the project cost, you need to consider the scope, timeline,
and resources. With these aspects mapped out, you can have a rough estimate of your project.

You can base your estimation on past project costs. If you don't have your own historical data, it's
best to ask someone who has already developed a similar project to advise budget-wise. The more
accurate information you have, the closer project estimates will be.
Risks
Every project comes with risks. However, it's possible to identify them and devise strategies to
handle them. An ideal project estimation document includes potential risks as a sort of insurance
against threats in the project. After the risks are identified, you need to prioritize them and
recognize their probability and impact.
How to Estimate a Project?
Here're common techniques you can use to estimate your project.
Top-Down estimating
It's a technique whereby the overall project is estimated as a whole, and then it's broken down into
individual phases. You can use this approach based on your historical data, adding estimation for
each project element. Top-down assessment isn't detailed, so it's only suitable for rough budget
estimation to see if it's viable.
Bottom-Up estimating
This method uses a detailed scope of work and suits projects you've decided to go with. The
bottom-up approach suggests estimating each task and adding the estimates to get a high-level
assessment. This way, you make a big picture from small pieces, which is more time-consuming
but guarantees more accurate results than the Top-Down Estimate.
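For illustration (hypothetical figures, not from any particular project): if the scope of work breaks the
project into four tasks estimated at 40, 25, 60 and 15 hours, the bottom-up estimate for the project is
simply their sum, 40 + 25 + 60 + 15 = 140 hours, to which an allowance for management,
administration and risk is then added.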
Expert estimation
This kind of estimation is the quickest way to estimate a project. It involves an expert with relevant
experience who applies their knowledge and historical data to estimate a project. So, if you've
already executed a similar project, you can use a top-down or bottom-up estimation by an expert.
If it's not your typical project, you need to gather a tech and domain experts team to do the
assessment. This team of experts isn't necessarily a part of the project team. However, it's highly
recommended to engage an architect, tech lead, or systems analyst.
Analogous estimation
You can use it when the information about a project is limited and you've executed a similar one in
the past. Comparing a project with the previous one and leaning on historical data can give you
insight into a project you have on the anvil. For more precise estimation, you should choose a
project that's a likeness of your current one. Then, you'll be able to compare the two projects to get
an estimation. The more similarities the projects have, the more precise estimation will be.
Parametric estimation
If you're looking for accurate estimation, a parametric approach is a good solution. It relies on
historical data to calculate estimation coupled with statistical data. Namely, it uses variables from
similar projects and applies them to a current one. The variables can be human resources,
materials, equipment, and more.
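As a simple illustration (hypothetical numbers): if historical data shows that one developer completes a
typical data-entry screen in about 6 person-hours and the new system needs 20 similar screens, the
parametric estimate for that part of the work is 20 x 6 = 120 person-hours; the same calculation can be
repeated for other variables such as reports or interfaces.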


COCOMO Model:
Cocomo (Constructive Cost Model) is a regression model based on LOC, i.e., the number of
Lines of Code. It is a procedural cost estimate model for software projects and is often
used as a process of reliably predicting the various parameters associated with making
a project such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in
1981 and is based on the study of 63 projects, which makes it one of the best-
documented models.
The key parameters which define the quality of any software products, which are also an
outcome of the Cocomo are primarily Effort & Schedule:
 Effort: Amount of labor that will be required to complete a task. It is measured in
person-months units.
 Schedule: Simply means the amount of time required for the completion of the job,
which is, of course, proportional to the effort put in. It is measured in the units of time
such as weeks, months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of these
models can be applied to a variety of projects, whose characteristics determine the
value of constant to be used in subsequent calculations. These characteristics
pertaining to different system types are mentioned below.
Boehm’s definition of organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the team size required is
adequately small, the problem is well understood and has been solved in the past
and also the team members have a nominal experience regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if the vital
characteristics such as team size, experience, knowledge of the various
programming environment lie in between that of organic and Embedded. The projects
classified as Semi-Detached are comparatively less familiar and difficult to develop
compared to the organic ones and require more experience and better guidance and
creativity. Eg: Compilers or different Embedded Systems can be considered of Semi-
Detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity,
and experience requirement fall under this category. Such software requires a larger
team size than the other two models and also the developers need to be sufficiently
experienced and creative to develop such complex models.
All the above system types utilize different values of the constants used in Effort
Calculations.
Types of Models: COCOMO consists of a hierarchy of three increasingly detailed
and accurate forms. Any of the three forms can be adopted according to our
requirements. These are types of COCOMO model:
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
The first level, Basic COCOMO, can be used for quick and slightly rough calculations of
software costs. Its accuracy is somewhat restricted due to the absence of
sufficient factor considerations.
Intermediate COCOMO takes these Cost Drivers into account, and Detailed
COCOMO additionally accounts for the influence of individual project phases, i.e., in the
case of the Detailed model it accounts for both these cost drivers and the calculations are
performed phase-wise, henceforth producing a more accurate result. These two
models are further discussed below.
Estimation of Effort: Calculations –
1. Basic Model –
Effort (E) = a*(KLOC)^b Person-Months
Development Time (Tdev) = c*(Effort)^d Months
Average Staff Size = Effort / Tdev Persons

The above formulae are used for the cost estimation of the basic COCOMO
model, and are also used in the subsequent models. The constant values a, b, c and
d for the Basic Model for the different categories of system:

Software Projects a b c d

Organic 2.4 1.05 2.5 0.38

Semi Detached 3.0 1.12 2.5 0.35

Embedded 3.6 1.20 2.5 0.32

The effort is measured in Person-Months and as evident from the formula is


dependent on Kilo-Lines of code.
The development time is measured in months.
These formulas are used as such in the Basic Model calculations, as not much
consideration of different factors such as reliability, expertise is taken into
account, henceforth the estimate is rough.
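As a worked example (assumed size, not from the notes): for an organic project of 32 KLOC,
Effort = 2.4 * (32)^1.05 ≈ 91 Person-Months
Tdev = 2.5 * (91)^0.38 ≈ 14 Months
Average Staff = 91 / 14 ≈ 6 to 7 Persons
These are the same calculations performed by the program given below.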
Below is the C++ program for Basic COCOMO

// C++ program to implement basic COCOMO
#include<bits/stdc++.h>
using namespace std;

// Function for rounding off float to int
int fround(float x)
{
    int a;
    x = x + 0.5;
    a = x;
    return (a);
}

// Function to calculate parameters of Basic COCOMO
void calculate(float table[][4], int n, char mode[][15], int size)
{
    float effort, time, staff;
    int model;

    // Check the mode according to size (in KLOC)
    if (size >= 2 && size <= 50)
        model = 0;        // organic
    else if (size > 50 && size <= 300)
        model = 1;        // semi-detached
    else if (size > 300)
        model = 2;        // embedded

    cout << "The mode is " << mode[model];

    // Calculate Effort = a * (size)^b
    effort = table[model][0] * pow(size, table[model][1]);

    // Calculate Development Time = c * (Effort)^d
    time = table[model][2] * pow(effort, table[model][3]);

    // Calculate Average Staff Required
    staff = effort / time;

    // Output the values calculated
    cout << "\nEffort = " << effort << " Person-Month";
    cout << "\nDevelopment Time = " << time << " Months";
    cout << "\nAverage Staff Required = " << fround(staff) << " Persons";
}

int main()
{
    float table[3][4] = {2.4, 1.05, 2.5, 0.38,
                         3.0, 1.12, 2.5, 0.35,
                         3.6, 1.20, 2.5, 0.32};
    char mode[][15] = {"Organic", "Semi-Detached", "Embedded"};
    int size = 4;

    calculate(table, 3, mode, size);
    return 0;
}

Output:
The mode is Organic
Effort = 10.289 Person-Month
Development Time = 6.06237 Months
Average Staff Required = 2 Persons
2. Intermediate Model –
The basic COCOMO model assumes that the effort is only a function of the number
of lines of code and some constants evaluated according to the different software
systems. However, in reality, no system's effort and schedule can be solely
calculated on the basis of Lines of Code. For that, various other factors such as
reliability, experience and capability have to be considered. These factors are known as
Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation.
Classification of Cost Drivers and their attributes:
(i) Product attributes –
 Required software reliability extent
 Size of the application database
 The complexity of the product
(ii) Hardware attributes –
 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnabout time
(iii) Personnel attributes –
 Analyst capability
 Software engineering capability


 Applications experience
 Virtual machine experience
 Programming language experience
(iv) Project attributes –
 Use of software tools
 Application of software engineering methods
 Required development schedule

Cost Drivers                                    Very Low    Low    Nominal    High    Very High

Product Attributes
Required Software Reliability                     0.75     0.88     1.00      1.15      1.40
Size of Application Database                       -       0.94     1.00      1.08      1.16
Complexity of The Product                         0.70     0.85     1.00      1.15      1.30

Hardware Attributes
Runtime Performance Constraints                    -        -       1.00      1.11      1.30
Memory Constraints                                 -        -       1.00      1.06      1.21
Volatility of the Virtual Machine Environment      -       0.87     1.00      1.15      1.30
Required Turnabout Time                            -       0.94     1.00      1.07      1.15

Personnel Attributes
Analyst Capability                                1.46     1.19     1.00      0.86      0.71
Applications Experience                           1.29     1.13     1.00      0.91      0.82
Software Engineer Capability                      1.42     1.17     1.00      0.86      0.70
Virtual Machine Experience                        1.21     1.10     1.00      0.90       -
Programming Language Experience                   1.14     1.07     1.00      0.95       -

Project Attributes
Application of Software Engineering Methods       1.24     1.10     1.00      0.91      0.82
Use of Software Tools                             1.24     1.10     1.00      0.91      0.83
Required Development Schedule                     1.23     1.08     1.00      1.04      1.10

The project manager is to rate these 15 different parameters for a particular


project on a scale of one to three. Then, depending on these ratings, appropriate
cost driver values are taken from the above table. These 15 values are then
multiplied to calculate the EAF (Effort Adjustment Factor). The Intermediate
COCOMO formula now takes the form:

E = a*(KLOC)^b * EAF (in Person-Months)


Software Projects a b

Organic 3.2 1.05

Semi Detached 3.0 1.12


Embedded 2.8 1.20
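As a worked example (assumed figures): for the same 32 KLOC organic project, suppose the only
non-nominal cost driver is Required Software Reliability rated High (1.15), so EAF = 1.15. Then
E = 3.2 * (32)^1.05 * 1.15 ≈ 140 Person-Months,
compared with about 122 Person-Months when all fifteen drivers are nominal (EAF = 1.00).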



3. Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version with
an assessment of the cost driver’s impact on each step of the software
engineering process. The detailed model uses different effort multipliers for each
cost driver attribute. In detailed cocomo, the whole software is divided into
different modules and then we apply COCOMO in different modules to estimate
effort and then sum the effort.
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
The effort is calculated as a function of program size and a set of cost drivers are
given according to each phase of the software lifecycle.
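As a simple illustration of the module-wise idea (hypothetical modules and ratings): if the software is
split into module A of 10 KLOC treated as organic with EAF = 1.0 and module B of 20 KLOC treated as
semi-detached with EAF = 1.1, then effort(A) = 3.2 * 10^1.05 * 1.0 ≈ 36 PM and
effort(B) = 3.0 * 20^1.12 * 1.1 ≈ 95 PM, giving a total estimated effort of about 130 PM. In the full
detailed model, phase-wise effort multipliers would further refine each module's estimate.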


Halstead’s Software Science:

A computer program is an implementation of an algorithm considered to be a collection


of tokens which can be classified as either operators or operands. Halstead’s
metrics are included in a number of current commercial tools that count software lines
of code. By counting the tokens and determining which are operators and which are
operands, the following base measures can be collected :
n1 = Number of distinct operators.
n2 = Number of distinct operands.
N1 = Total number of occurrences of operators.
N2 = Total number of occurrences of operands.
In addition to the above, Halstead defines the following :
n1* = Number of potential operators.
n2* = Number of potential operands.
Halstead refers to n1* and n2* as the minimum possible number of operators and
operands for a module and a program respectively. This minimum number would be
embodied in the programming language itself, in which the required operation would
already exist (for example, in C language, any program must contain at least the
definition of the function main()), possibly as a function or as a procedure: n1* = 2, since
at least 2 operators must appear for any function or procedure : 1 for the name of the
function and 1 to serve as an assignment or grouping symbol, and n2* represents the
number of parameters, without repetition, which would need to be passed on to the
function or the procedure.

Halstead metrics –

Halstead metrics are :

 Halstead Program Length – The total number of operator occurrences and the total
number of operand occurrences.
N = N1 + N2
And estimated program length is, N^ = n1*log2(n1) + n2*log2(n2)
The following alternate expressions have been published to estimate program
length:
 NJ = log2(n1!) + log2(n2!)
 NB = n1 * log2n2 + n2 * log2n1
 NC = n1 * sqrt(n1) + n2 * sqrt(n2)
 NS = (n * log2n) / 2
 Halstead Vocabulary – The total number of unique operator and unique operand
occurrences.
n= n1 + n2
 Program Volume – Proportional to program size, represents the size, in bits, of
space necessary for storing the program. This parameter is dependent on specific
algorithm implementation. The properties V, N, and the number of lines in the code
are shown to be linearly connected and equally valid for measuring relative program
size.
V = Size * (log2 vocabulary) = N * log2(n)
The unit of measurement of volume is the common unit for size “bits”. It is the actual
size of a program if a uniform binary encoding for the vocabulary is used. And error =
Volume / 3000
 Potential Minimum Volume – The potential minimum volume V* is defined as the
volume of the most succinct program in which a problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
 Program Level – To rank the programming languages, the level of abstraction
provided by the programming language, Program Level (L) is considered. The higher
the level of a language, the less effort it takes to develop a program using that
language.
L = V* / V
The value of L ranges between zero and one, with L=1 representing a program
written at the highest possible level (i.e., with minimum size).
And estimated program level is L^ =2 * (n2) / (n1)(N2)
 Program Difficulty – This parameter shows how difficult to handle the program is.
D = (n1 / 2) * (N2 / n2)
D=1/L
As the volume of the implementation of a program increases, the program level
decreases and the difficulty increases. Thus, programming practices such as
redundant usage of operands, or the failure to use higher-level control constructs will
tend to increase the volume as well as the difficulty.
 Programming Effort – Measures the amount of mental activity needed to translate
the existing algorithm into implementation in the specified program language.
E = V / L = D * V = Difficulty * Volume

 Language Level – Shows the algorithm implementation program language level. The
same algorithm demands additional effort if it is written in a low-level program
language. For example, it is easier to program in Pascal than in Assembler.
L’ = V / D / D
lambda = L * V* = L^2 * V

 Intelligence Content – Determines the amount of intelligence presented (stated) in


the program This parameter provides a measurement of program complexity,
independently of the program language in which it was implemented.
I=V/D
 Programming Time – Shows time (in minutes) needed to translate the existing
algorithm into implementation in the specified program language.
T = E / (f * S)
The concept of the processing rate of the human brain, developed by the
psychologist John Stroud, is also used. Stroud defined a moment as the time required
by the human brain to carry out the most elementary decision. The Stroud
number S is therefore Stroud's moments per second with:
5 <= S <= 20. Halstead uses 18. The value of S has been empirically developed from
psychological reasoning, and its recommended value for programming applications is
18.
Stroud number S = 18 moments / second
seconds-to-minutes factor f = 60
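The small C++ sketch below (not part of the original notes) simply plugs the four base counts from the
sort() example that follows into the formulas above; it uses the D = (n1/2)*(N2/n2) form of difficulty,
so the figures come out close to the worked values listed later.

#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    // Base counts taken from the sort() example below
    double n1 = 14, N1 = 53;      // distinct / total operators
    double n2 = 10, N2 = 38;      // distinct / total operands
    double n2star = 3;            // potential operands (input/output parameters)

    double N     = N1 + N2;                           // program length
    double n     = n1 + n2;                           // vocabulary
    double Nhat  = n1 * log2(n1) + n2 * log2(n2);     // estimated length
    double V     = N * log2(n);                       // volume in bits
    double Vstar = (2 + n2star) * log2(2 + n2star);   // potential minimum volume
    double L     = Vstar / V;                         // program level
    double D     = (n1 / 2.0) * (N2 / n2);            // difficulty
    double E     = D * V;                             // effort
    double T     = E / 18.0;                          // time in seconds (Stroud number S = 18)

    cout << "N = " << N << ", n = " << n << ", N^ = " << Nhat << endl;
    cout << "V = " << V << " bits, V* = " << Vstar << ", L = " << L << endl;
    cout << "D = " << D << ", E = " << E << ", T = " << T << " seconds" << endl;
    return 0;
}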

Counting rules for C language –

1. Comments are not considered.


2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as
multiple occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique
operands.
6. Functions calls are considered as operators.
7. All looping statements e.g., do {…} while ( ), while ( ) {…}, for ( ) {…}, all control
statements e.g., if ( ) {…}, if ( ) {…} else {…}, etc. are considered as operators.
8. In control construct switch ( ) {case:…}, switch as well as all the case statements are
considered as operators.
9. The reserve words like return, default, continue, break, sizeof, etc., are considered as
operators.
10. All the brackets, commas, and terminators are considered as operators.
11. GOTO is counted as an operator and the label is counted as an operand.
12. The unary and binary occurrence of “+” and “-” are dealt separately. Similarly “*”
(multiplication operator) are dealt separately.
13. In the array variables such as “array-name [index]” “array-name” and “index” are
considered as operands and [ ] is considered as operator.
14. In the structure variables such as "struct-name.member-name" or "struct-name ->
member-name", struct-name and member-name are taken as operands and '.', '->' are
taken as operators. Member element names in different structure variables
are counted as unique operands.
15. All the hash directive are ignored.
Example – List out the operators and operands and also calculate the values of the
software science measures for the following function:

int sort(int x[ ], int n)
{
    int i, j, save, im1;
    /* This function sorts array x in ascending order */
    if (n < 2) return 1;
    for (i = 2; i <= n; i++)
    {
        im1 = i - 1;
        for (j = 1; j <= im1; j++)
            if (x[i] < x[j])
            {
                save = x[i];
                x[i] = x[j];
                x[j] = save;
            }
    }
    return 0;
}

Explanation –

operators      occurrences      operands      occurrences

int                 4            sort               1
()                  5            x                  7
,                   4            n                  3
[]                  7            i                  8
if                  2            j                  7
<                   2            save               3
;                  11            im1                3
for                 2            2                  2
=                   6            1                  3
-                   1            0                  1
<=                  2            -                  -
++                  2            -                  -
return              2            -                  -
{}                  3            -                  -

n1 = 14        N1 = 53           n2 = 10       N2 = 38

Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x:array holding integer
to be sorted. This is used both
as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds

Advantages of Halstead Metrics:


 It is simple to calculate.
 It measures overall quality of the programs.
 It predicts the rate of error.
 It predicts maintenance effort.
 It does not require the full analysis of programming structure.
 It is useful in scheduling and reporting projects.
 It can be used for any programming language.

Disadvantages of Halstead Metrics:

 It depends on the complete code.
 It has no use as a predictive estimating model.

Project scheduling:

Project-task scheduling is a significant project planning activity. It comprises deciding which functions
would be taken up when. To schedule the project plan, a software project manager wants to do the
following:
1. Identify all the functions required to complete the project.
2. Break down large functions into small activities.
3. Determine the dependency among various activities.
4. Establish the most likely estimates for the time duration required to complete the activities.
5. Allocate resources to activities.
6. Plan the beginning and ending dates for different activities.
7. Determine the critical path. A critical path is the chain of activities that determines the duration of the
project.

The first step in scheduling a software project involves identifying all the tasks required to
complete the project. A good knowledge of the intricacies of the project and of the development process
helps the manager to effectively identify the important tasks of the project. Next, the large tasks are
broken down into a logical set of small activities which can be assigned to different engineers. The work
breakdown structure formalism helps the manager to break down the tasks systematically. After
the project manager has broken down the tasks and constructed the work breakdown structure, he
has to find the dependency among the activities. Dependency among the various activities determines
the order in which the various activities would be carried out. If an activity A requires the results of
another activity B, then activity A must be scheduled after activity B. In general, the task
dependencies define a partial ordering among tasks, i.e., each task may precede a subset of
other tasks, but some tasks might not have any precedence ordering defined between them
(called concurrent tasks). The dependency among the activities is represented in the form of an activity
network.

Once the activity network representation has been worked out, resources are allocated to every
activity. Resource allocation is usually done using a Gantt chart. After resource allocation is completed, a
PERT chart representation is developed. The PERT chart representation is useful for program monitoring
and control. For task scheduling, the project manager needs to decompose the project tasks into a set
of activities. The time frame when each activity is to be performed is to be determined. The end of
each activity is called a milestone. The project manager tracks the progress of the project by monitoring the
timely completion of the milestones. If he observes that the milestones start getting delayed, then he
has to re-plan the activities carefully so that the overall deadline can still be met.
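As a small illustration (hypothetical activities): suppose activity A (5 days) must precede B (3 days) and
C (6 days), and both B and C must finish before D (2 days). The two paths are A-B-D (10 days) and
A-C-D (13 days); A-C-D is the critical path, so the project cannot finish in less than 13 days, and any
delay in A, C or D delays the whole project.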


Staffing:

Personnel Planning deals with staffing. Staffing deals with the appointment of personnel for the positions that are identified
by the organizational structure.

It involves:

o Defining requirement for personnel


o Recruiting (identifying, interviewing, and selecting candidates)
o Compensating
o Developing and promoting personnel

For personnel planning and scheduling, it is helpful to have effort and schedule estimates for the subsystems
and basic components in the system.

At planning time, when the system design has not been completed, the planner can only expect to know
about the large subsystems in the system and possibly the major modules in these subsystems.

Once the project plan is estimated, and the effort and schedule of various phases and tasks are known, staff
requirements can be determined.

From the cost and overall duration of the projects, the average staff size for the projects can be
determined by dividing the total efforts (in person-months) by the whole project duration (in months).
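For example, if the total estimated effort is 60 person-months and the planned project duration is
12 months, the average staff size works out to 60 / 12 = 5 persons.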

Typically the staff required for the project is small during requirement and design, the maximum during
implementation and testing, and drops again during the last stage of integration and testing.

Using the COCOMO model, average staff requirements for the various phases can be calculated as the effort
and schedule for each phase are known.

When the schedule and average staff level for every activity are known, the overall personnel
allocation for the project can be planned.

This plan will indicate how many people will be required for different activities at different times for the
duration of the project.

The total effort for each month and the total effort for each step can easily be calculated from this plan.

Team Structure
Team structure addresses the issue of arrangement of the individual project teams. There are some
possible ways in which the different project teams can be organized. There are primarily three
formal team structures: chief programmer, ego-less or democratic, and the mixed team
organization, even though several other variations to these structures are possible. Problems of various
complexities and sizes often need different team structures for their optimal solution.

Ego-Less or Democratic Teams


Ego-less teams consist of a small group of programmers. The objective of the group is set by consensus,
and input from each member is taken for significant decisions. Group leadership revolves among the
group members. Due to its nature, egoless teams are consistently known as democratic teams.

The structure allows input from all representatives, which can lead to better decisions in various
problems. This suggests that this method is well suited for long-term research-type projects that do not
have time constraints.

Chief Programmer Team


A chief-programmer team, in contrast to the ego-less team, has a hierarchy. It consists of a chief-
programmer, who has a backup programmer, a program librarian, and some programmers.

The chief programmer is essential for all major technical decisions of the project.

He does most of the design, and he assigns coding of the different parts of the design to the
programmers.

The backup programmer helps the chief programmer make technical decisions, and takes over as the chief
programmer if the chief programmer falls sick or leaves.

The program librarian is vital for maintaining the documentation and other communication-related
work.

This structure considerably reduces interpersonal communication. The communication paths, as shown
in fig:

Controlled Decentralized Team


(Hierarchical Team Structure)
A third team structure known as the controlled decentralized team tries to combine the strength of the
democratic and chief programmer teams.

It consists of a project leader who has a group of senior programmers under him, while under every
senior programmer is a group of junior programmers.

The group of a senior programmer and his junior programmers behave like an ego-less team, but
communication among different groups occurs only through the senior programmers of the group.

The senior programmer also communicates with the project leader.

Such a team has fewer communication paths than a democratic team but more paths compared to a
chief programmer team.

This structure works best for large projects that are reasonably straightforward. It is not well suited for
simple projects or research-type projects.


Organization and team structure:


There are many ways to organize the project team. Some important ways are as follows :

1. Hierarchical team organization


2. Chief-programmer team organization
3. Matrix team organization
4. Egoless team organization
5. Democratic team organization
Hierarchical team organization :
In this, the people of the organization are arranged at different levels following a tree structure. People at the
bottom level generally possess the most detailed knowledge about the system. People at higher
levels have a broader appreciation of the whole project.

Benefits of hierarchical team organization :

 It limits the number of communication paths and stills allows for the needed
communication.
 It can be expanded over multiple levels.
 It is well suited for the development of the hierarchical software products.
 Large software projects may have several levels.
Limitations of hierarchical team organization :

 As information has to travel up the levels, it may get distorted.

 Levels in the hierarchy often judge people socially and financially.
 The most technically competent programmers tend to be promoted to management positions,
which may result in the loss of a good programmer and the gain of a bad manager.
Chief-programmer team organization :
This team organization is composed of a small team consisting the following team members :

 The Chief programmer : It is the person who is actively involved in the planning,
specification and design process and ideally in the implementation process as well.
 The project assistant : It is the closest technical co-worker of the chief programmer.
 The project secretary : It relieves the chief programmer and all other programmers of
administrative tasks.
 Specialists : These people select the implementation language, implement individual
system components and employ software tools and carry out tasks.

Advantages of Chief-programmer team organization :

 Centralized decision-making
 Reduced communication paths
 Small teams are more productive than large teams
 The chief programmer is directly involved in system development and can exercise the
better control function.
Disadvantages of Chief-programmer team organization :

 Project survival depends on one person only.


 Can cause the psychological problems as the “chief programmer” is like the “king” who
takes all the credit and other members are resentful.
 Team organization is limited to only small team and small team cannot handle every
project.
 Effectiveness of team is very sensitive to Chief programmer’s technical and managerial
activities.
Matrix Team Organization :
In matrix team organization, people are divided into specialist groups. Each group has a
manager. An example of matrix team organization is as follows :
Egoless Team Organization :
Egoless programming is a state of mind in which programmers are supposed to separate
themselves from their product. In this team organization, goals are set and decisions are made
by group consensus. Here, group 'leadership' rotates based on the tasks to be performed and
the differing abilities of members.
In this organization, work products are discussed openly and are freely examined by all team
members. There is a major risk with such an organization if teams are composed of
inexperienced or incompetent members.
Democratic Team Organization :
It is quite similar to the egoless team organization, but one member is the team leader with
some responsibilities :

 Coordination
 Final decisions, when consensus cannot be reached.
Advantages of Democratic Team Organization :

 Each member can contribute to decisions.


 Members can learn from each other.
 Improved job satisfaction.
Disadvantages of Democratic Team Organization :

 Communication overhead increased.


 Need for compatibility of members.
 Less individual responsibility and authority.


Risk management:
What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem
that could cause some loss or threaten the progress of the project, but which has not
happened yet.
These potential issues might harm the cost, schedule or technical success of the project,
the quality of our software product, or project team morale.
Risk Management is the system of identifying addressing and eliminating these
problems before they can damage the project.
We need to differentiate risks, as potential issues, from the current problems of the
project.
Different methods are required to address these two kinds of issues.
For example, a staff shortage, because we have not been able to select people with the
right technical skills, is a current problem, but the threat of our technical persons being
hired away by the competition is a risk.

Risk Management
A software project can be affected by a large variety of risks. In order to be able to
systematically identify the significant risks which might affect a software project, it is
essential to classify risks into different classes.
which risks from each class are relevant to the project.

There are three main classifications of risks which can affect a software project:

1. Project risks
2. Technical risks
3. Business risks

1. Project risks: Project risks concern different forms of budgetary, schedule, personnel,
resource, and customer-related problems. A vital project risk is schedule slippage. Since
software is intangible, it is very tough to monitor and control a software project. It is
very tough to control something which cannot be seen. For any manufacturing
project, such as the manufacturing of cars, the project executive can see the
product taking shape.

2. Technical risks: Technical risks concern potential design, implementation,
interfacing, testing, and maintenance issues. They also include ambiguous
specification, incomplete specification, changing specification, technical uncertainty, and


technical obsolescence. Most technical risks appear due to the development team's
insufficient knowledge about the project.

3. Business risks: This type of risk includes the risk of building an excellent product that no
one needs, losing budgetary or personnel commitments, etc.

Other risk categories

1. Known risks: Those risks that can be uncovered after careful assessment of the
project program, the business and technical environment in which the plan is being
developed, and more reliable data sources (e.g., unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience
(e.g., past turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to
identify in advance.

Principle of Risk Management


1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and
create future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the
client and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of
project management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the
risk management paradigm.


Risk Management Activities


Risk management consists of three main activities, as shown in fig:

Risk Assessment
The objective of risk assessment is to rank the risks in terms of their loss-causing
potential. For risk assessment, first, every risk should be rated in two ways:

o The possibility of a risk coming true (denoted as r).


o The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be estimated:

p = r * s

Where p is the priority with which the risk must be controlled, r is the probability of the
risk becoming true, and s is the severity of loss caused due to the risk becoming true. If
all identified risks are set up, then the most likely and damaging risks can be controlled
first, and more comprehensive risk abatement methods can be designed for these risks.
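For example (hypothetical ratings): if risk R1 has r = 0.6 and a severity rating s = 5, its priority is
p = 0.6 * 5 = 3.0, while risk R2 with r = 0.2 and s = 9 has p = 1.8. R1 would therefore be taken up for
containment before R2, even though R2 is individually more severe.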


1. Risk Identification: The project manager needs to anticipate the risks in the project
as early as possible so that the impact of the risks can be reduced by making effective risk
management plans.

A project can be affected by a large variety of risks. In order to identify the significant
risks that might affect a project, it is necessary to categorize risks into different classes.

There are different types of risks which can affect a software project:

1. Technology risks: Risks that arise from the software or hardware technologies that
are used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used
to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements
and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.

2. Risk Analysis: During the risk analysis process, you have to consider every identified
risk and make a judgment about the probability and seriousness of that risk.

There is no simple way to do this. You have to rely on your perception and experience of
previous projects and the problems that arise in them.

It is not possible to make an exact numerical estimate of the probability and
seriousness of each risk. Instead, you should assign the risk to one of several bands:

1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the
plan), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.


Risk Control
It is the process of managing risks to achieve desired outcomes. After all the identified
risks of a plan are determined, plans must be made to contain the most harmful
and the most likely risks first. Different risks need different containment methods. In fact,
most risks need ingenuity on the part of the project manager in tackling them.

There are three main methods to plan for risk management:

1. Avoid the risk: This may take several forms, such as discussing with the client to change
the requirements to decrease the scope of the work, giving incentives to the engineers
to avoid the risk of human resources turnover, etc.
2. Transfer the risk: This method involves getting the risky component developed by a third
party, buying insurance cover, etc.
3. Risk reduction: This involves planning ways to contain the loss due to a risk. For
instance, if there is a risk that some key personnel might leave, new recruitment can be
planned.

Risk Leverage: To choose between the various methods of handling a risk, the project
manager must consider the cost of controlling the risk and the corresponding reduction
of risk. For this, the risk leverage of the various risks can be computed.

Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) /
(cost of reduction)
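For example (hypothetical figures): if the risk exposure before reduction is Rs. 10,00,000, the exposure
after reduction is Rs. 2,00,000, and the reduction itself costs Rs. 1,00,000, then the risk leverage is
(10,00,000 - 2,00,000) / 1,00,000 = 8. A handling method with a higher leverage value gives a better
return on the money spent on controlling the risk.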

1. Risk planning: The risk planning process considers each of the key risks that have
been identified and develops ways to manage these risks.

For each of the risks, you have to think of the actions that you may take to minimize
the disruption to the plan if the problem identified in the risk occurs.

You should also think about the information that you might need to collect while monitoring the
plan so that problems can be anticipated.

Again, there is no easy process that can be followed for contingency planning. It relies on
the judgment and experience of the project manager.


Configuration management:

Whenever software is built, there is always scope for improvement, and those
improvements bring changes into the picture. Changes may be required to modify or
update any existing solution or to create a new solution for a problem.
Requirements keep changing on a daily basis, and so we need to keep
upgrading our systems based on the current requirements and needs to meet the
desired outputs. Changes should be analyzed before they are made to the
existing system, recorded before they are implemented, reported to have details
of before and after, and controlled in a manner that will improve quality and
reduce error. This is where the need for System Configuration Management
comes in.
System Configuration Management (SCM) is a set of activities
which controls change by identifying the items that are likely to change, establishing
relationships among those items, defining mechanisms for
managing different versions, controlling the changes being implemented in the
current system, and auditing and reporting on the changes made. It
is essential to control the changes because, if the changes are not
checked properly, they may end up undermining well-run
software. In this way, SCM is a fundamental part of all project
management activities.
Processes involved in SCM –
Configuration management provides a disciplined environment for smooth
control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from
products that compose baselines at given points in time (a baseline is a set
of mutually consistent Configuration Items, which has been formally
reviewed and agreed upon, and serves as the basis of further development).
Establishing relationship among items, creating a mechanism to manage
multiple level of control and procedure for change management system.
2. Version control – Creating versions/specifications of the existing product to
build new products from the help of SCM system. A description of version is
given below:


Suppose after some changes, the version of configuration object changes


from 1.0 to 1.1. Minor corrections and changes result in versions 1.1.1 and
1.1.2, which is followed by a major update that is object 1.2. The
development of object 1.0 continues through 1.3 and 1.4, but finally, a
noteworthy change to the object results in a new evolutionary path, version
2.0. Both versions are currently supported.
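The evolution described above can be pictured as a simple version tree (an illustrative sketch, not a
figure from the original notes):

1.0 --> 1.1 --> 1.2 --> 1.3 --> 1.4 --> 2.0
         |
         +--> 1.1.1
         +--> 1.1.2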
3. Change control – Controlling changes to Configuration items (CI). The
change control process is explained in Figure below:


A change request (CR) is submitted and evaluated to assess technical merit,


potential side effects, overall impact on other configuration objects and
system functions, and the projected cost of the change. The results of the
evaluation are presented as a change report, which is used by a change
control board (CCB) —a person or group who makes a final decision on the
status and priority of the change. An engineering change Request (ECR) is
generated for each approved change.
Also CCB notifies the developer in case the change is rejected with proper
reason. The ECR describes the change to be made, the constraints that
must be respected, and the criteria for review and audit. The object to be
changed is “checked out” of the project database, the change is made, and
then the object is tested again. The object is then “checked in” to the
database and appropriate version control mechanisms are used to create
the next version of the software.
4. Configuration auditing – A software configuration audit complements the
formal technical review of the process and product. It focuses on the
technical correctness of the configuration object that has been modified. The
audit confirms the completeness, correctness and consistency of items in the
SCM system and track action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to
developers, tester, end users, customers and stakeholders through admin
guides, user guides, FAQs, Release notes, Memos, Installation Guide,
Configuration guide etc .
SCM Tools –
Different tools are available in market for SCM like: CFEngine, Bcfg2 server,
Vagrant, SmartFrog, CLEAR CASETOOL (CC), SaltStack, CLEAR QUEST
TOOL, Puppet, SVN- Subversion, Perforce, TortoiseSVN, IBM Rational team
concert, IBM Configuration management version management, Razor,
Ansible, etc. There are many more in the list.
It is recommended that before selecting any configuration management tool,
have a proper understanding of the features and select the tool which best
suits your project needs and be clear with the benefits and drawbacks of
each before you choose one to use.



UNIT – II
Requirements analysis and specification
The nature of software, The Unique nature of Webapps, Software Myths,
Requirements gathering and analysis, software requirements specification,
Traceability, Characteristics of a Good SRS Document, IEEE 830 guidelines,
representing complex requirements using decision tables and decision trees, overview
of formal system development techniques, axiomatic specification, algebraic
specification

The nature of software:


The software is instruction or computer program that when executed provide
desired features, function, and performance. A data structure that enables the
program to adequately manipulate information and document that describe the
operation and use of the program.
Characteristic of software:
There is some characteristic of software which is given below:
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Changing Nature of Software:
Nowadays, seven broad categories of computer software present continuing
challenges for software engineers, which are described below:
1. System Software:
System software is a collection of programs which are written to service
other programs. Some system software processes complex but determinate
information structures, while other system applications process largely
indeterminate data. In either case, the system software area is
characterized by heavy interaction with computer hardware that requires
scheduling, resource sharing, and sophisticated process management.
2. Application Software:
Application software is defined as programs that solve a specific business
need. Application in this area process business or technical data in a way
that facilitates business operation or management technical decision making.
In addition to convention data processing application, application software is
used to control business function in real time.
3. Engineering and Scientific Software:
This software is used to facilitate the engineering function and task. however
modern application within the engineering and scientific area are moving
away from the conventional numerical algorithms. Computer-aided design,
system simulation, and other interactive applications have begun to take a
real-time and even system software characteristic.
4. Embedded Software:
Embedded software resides within the system or product and is used to
implement and control feature and function for the end-user and for the
system itself. Embedded software can perform the limited and esoteric
function or provided significant function and control capability.
5. Product-line Software:
Designed to provide a specific capability for use by many different
customers, product line software can focus on the limited and esoteric
marketplace or address the mass consumer market.
6. Web Application:
It is a client-server computer program which the client runs on the web
browser. In their simplest form, Web apps can be little more than a set of
linked hypertext files that present information using text and limited graphics.
However, as e-commerce and B2B applications grow in importance, Web
apps are evolving into sophisticated computing environments that not only
provide standalone features, computing functions, and content to the end
user, but are also integrated with corporate databases and business applications.
7. Artificial Intelligence Software:
Artificial intelligence software makes use of a nonnumerical algorithm to
solve a complex problem that is not amenable to computation or
straightforward analysis. Application within this area includes robotics, expert
system, pattern recognition, artificial neural network, theorem proving and
game playing.
The Unique nature of Webapps:
Introduction:
In the early days of the World Wide Web (1990 to 1995), websites consisted of
little more than a set of linked hypertext files that presented information using text
and limited graphics.
Today, WebApps have evolved into sophisticated computing tools that not only
provide stand-alone function to the end user, but also have been integrated with
corporate databases and business applications due to the development of HTML,
JAVA, xml etc.

Attributes of WebApps :
Network Intensiveness
Concurrency
Unpredictable load
Performance
Availability
Data driven
Content Sensitive
Continuous evolution
Immediacy
Security
Aesthetic
Network intensiveness.
A WebApp resides on a network and must serve the needs of a diverse community
of clients.
The network may enable worldwide access and communication (i.e., the Internet)
or more limited access and communication
(e.g., a corporate Intranet).

Concurrency : [ Operation at the same time]


A large number of users may access the WebApp at one time. In many cases, the
patterns of usage among end users will vary greatly.

Unpredictable load :
The number of users of the WebApp may vary by orders of magnitude from day to
day. One hundred users may show up on Monday; 10,000 may use the system on
Thursday.

Performance :
If a WebApp user must wait too long (for access, for server side processing, for
client-side formatting and display), he or she may decide to go elsewhere.

Availability :
Although expectation of 100 percent availability is unreasonable, users of popular
WebApps often demand access on a 24/7/365 basis.

Data driven :
The primary function of many WebApps is to use hypermedia to present text,
graphics, audio, and video content to the end user.
In addition, WebApps are commonly used to access information that exists on
databases that are not an integral part of the Web-based environment (e.g., e-
commerce or financial applications).

Content sensitive:
The quality and artistic nature of content remains an important
determinant of the quality of a WebApp.

Continuous evolution:
Unlike conventional application software that evolves over a series of planned,
chronologically spaced releases, Web applications evolve continuously.
It is not unusual for some WebApps (specifically, their content) to be updated on a
minute-by-minute schedule or for content to be independently computed for each
request.

Immediacy:
Although immediacy—the compelling (forceful) need to get software to market
quickly—is a characteristic of many application domains,
WebApps often exhibit a time-to-market that can be a matter of a few days or
weeks.

Security:
Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In
order to protect sensitive content and provide secure mode of data transmission,
strong security measures must be implemented.

Aesthetics : [Artistic / Visual]


An undeniable part of the appeal of a WebApp is its look and feel. When an
application has been designed to market or sell products or ideas, aesthetic may
have as much to do with success as technical design.
Software Myths:
Most experienced experts have seen myths or superstitions (false beliefs or
interpretations) and misleading attitudes which create
major problems for management and technical people. The different types of
software-related myths are listed below.

Types of Software Myths

(i) Management Myths :


Myth 1:
We have all the standards and procedures available for software development,
i.e., the software developer already has everything required.
Fact :
 Software practitioners are often not aware that all such standards exist.
 The existing standards may or may not reflect present-day / modern software
engineering methods.
 Moreover, the existing standards are often incomplete.

Myth 2 :
The addition of the latest hardware will improve software
development.
Fact:
 The role of the latest hardware is not very significant in standard software
development; instead, Computer-Aided Software Engineering (CASE) tools are
more important than hardware for producing quality and productivity.
 Otherwise, the hardware resources are simply misused.
Myth 3 :
 Managers think that the addition of more people and programmers
to software development can help meet project deadlines (if the project is lagging behind).
Fact :
 Software development is not a production-line process;
the addition of people at later stages can actually reduce the time available
for productive development, as the newcomers take up the time of the
existing developers for explanations and for understanding the project.
However, planned and well-organized additions of people can help
complete the project.

Different Stages of Myths

(ii) Customer Myths :
The customer can be the direct users of the software, the technical team,
the marketing / sales department, or another company. Customers have myths
leading to false expectations, and these false expectations create
dissatisfaction with the developer.
Myth 1 :
A general statement of objectives is enough to start writing programs (software
development); the details of the objectives can be filled in over time.
Fact:
 A formal and detailed description of the functions, behavior, performance,
interfaces, design constraints and validation criteria
is essential.
 This means that complete communication between the customer and
the developer is required.
Myth 2 :
 Project requirements continue to change, but change can be easily accommodated
due to the flexible nature of software.
Fact :
 Changes can be made even in the final stages of software development, but the cost of
making those changes grows rapidly through the later stages of
development. A detailed analysis of user needs should be done to
minimize change requests. The figure shows the cost of change with
respect to the stages of development.
(iii) Practitioner's Myths :
Myth 1 :
Practitioners believe that their work is complete once the program is written and
made to work.
Fact:
 In reality, some 60-80% of the effort goes into the maintenance phase (after the
first software release). Considerable effort is required after the product is
first delivered to customers.
Myth 2 :
There is no way to assess software quality until the program is running.
Fact:
 Systematic technical reviews are an effective software quality
verification method. These reviews act as quality filters and are often more effective
than testing.
Myth 3 :
A working program is the only work product of a successful project.
Fact:
 A working system alone is not enough; correct documents, user guides, and
manuals are also required to provide guidance and software support.
Myth 4 :
Software engineering will force us to produce voluminous and unnecessary
documentation and will always slow us down.
Fact :
 Software engineering is not about producing documents for their own sake; it is
about creating better quality, which leads to reduced rework and, in turn, faster
product delivery.

Requirements gathering and analysis:

Requirement analysis is a significant and essential activity after elicitation. We analyze,
refine, and scrutinize the gathered requirements to make them consistent and unambiguous.
This activity reviews all requirements and may provide a graphical view of
the entire system. After the completion of the analysis, the understandability of the
project is expected to improve significantly. Here, we may also use the
interaction with the customer to clarify points of confusion and to understand which
requirements are more important than others.

The various steps of requirement analysis are shown in fig:


(i) Draw the context diagram: The context diagram is a simple model that defines the
boundaries and interfaces of the proposed systems with the external world. It identifies
the entities outside the proposed system that interact with the system. The context
diagram of student result management system is given below:
(ii) Development of a Prototype (optional): One effective way to find out what the
customer wants is to construct a prototype, something that looks and preferably acts as
part of the system they say they want.

We can use their feedback to modify the prototype continuously until the customer is
satisfied. Hence, the prototype helps the client to visualize the proposed system
and increases the understanding of the requirements. When developers and users are
not sure about some of the elements, a prototype may help both parties to take a
final decision.

Some projects are developed for the general market. In such cases, the prototype
should be shown to some representative sample of the population of potential
purchasers. Even though a person who tries out a prototype may not buy the final
system, their feedback may allow us to make the product more attractive to others.

The prototype should be built quickly and at a relatively low cost. Hence it will always
have limitations and would not be acceptable in the final system. This is an optional
activity.

(iii) Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the relationships
between them. The graphical view may help to find incorrect, inconsistent, missing, and
superfluous requirements. Such models include the Data Flow diagram, Entity-
Relationship diagram, Data Dictionaries, State-transition diagrams, etc.

(iv) Finalise the requirements: After modeling the requirements, we will have a better
understanding of the system behavior. The inconsistencies and ambiguities have been
identified and corrected. The flow of data amongst various modules has been analyzed.
Elicitation and analysis activities have provided better insight into the system. Now we
finalize the analyzed requirements, and the next step is to document these requirements
in a prescribed format.

Software requirements specification:

The output of the requirements stage of the software development process
is the Software Requirements Specification (SRS) (also called the requirements
document). This report lays a foundation for software engineering activities and is
constructed once the entire set of requirements has been elicited and analyzed. The SRS is a
formal report which acts as a representation of the software, enabling the customers to
review whether it (the SRS) is according to their requirements. It also comprises user
requirements for a system as well as detailed specifications of the system requirements.

The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who is writing it. First, the SRS could be written by the client of a system.
Second, the SRS could be written by a developer of the system. The two cases create
entirely different situations and establish different purposes for the document altogether.
In the first case, the SRS is used to define the needs and expectations of the users. In the
second case, the SRS is written for a different purpose and serves as a contract document
between customer and developer.

Traceability:
Traceability comprises two words, trace and ability. Trace means to find someone
or something, and ability means a skill or capability to do something.
Therefore, traceability simply means the ability to trace a requirement: to provide better
quality, to find any risk, and to keep and verify the record of the history and production of an
item or product by means of documented identification. Because of this, it is easy for
suppliers to reduce any risk or issue that is found and to improve the quality of the item or
product. So it is important to have traceability rather than no traceability. Using
traceability, finding requirements and any risks, and improving the quality of the product
becomes much easier.
There are various types of traceability given below:
1. Source traceability –
These are basically the links from requirement to stakeholders who propose these
requirements.
2. Requirements traceability –
These are the links between dependent requirements.
3. Design traceability –
These are the links from requirement to design.
A traceability matrix is generally used to represent traceability information. For
small systems, the traceability matrix is usually maintained by hand.
If one requirement is dependent upon another requirement, then 'D' is entered in that
row-column cell, and if there is a weak relationship between the requirements, the
corresponding entry is denoted by 'R'. For example:
Requirement ID A B C D E F
A D R

B D

C R

D D R

F R D

A simple traceability matrix structure is shown below:

Characteristics of good SRS Document:-


Following are the features of a good SRS document:

1. Correctness: User review is used to ensure the accuracy of the requirements stated in
the SRS. The SRS is said to be correct if it covers all the needs that are truly expected from
the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1). All essential requirements, whether relating to functionality, performance, design,


constraints, attributes, or external interfaces.

(2). Definition of their responses of the software to all realizable classes of input data in
all available categories of situations.

Note: It is essential to specify the responses to both valid and invalid values.

(3). Full labels and references to all figures, tables, and diagrams in the SRS and
definitions of all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual
requirements described in it conflict. There are three types of possible conflict in the
SRS:

(1). The specified characteristics of real-world objects may conflict. For example,

(a) The format of an output report may be described in one requirement as tabular but
in another as textual.
(b) One condition may state that all lights shall be green while another states that all
lights shall be blue.

(2). There may be a reasonable or temporal conflict between the two specified actions.
For example,

(a) One requirement may determine that the program will add two inputs, and another
may determine that the program will multiply them.

(b) One requirement may state that "A" must always follow "B," while another requires
that "A" and "B" occur simultaneously.

(3). Two or more requirements may define the same real-world object but use different
terms for that object. For example, a program's request for user input may be called a
"prompt" in one requirement's and a "cue" in another. The use of standard terminology
and descriptions promotes consistency.

4. Unambiguousness: SRS is unambiguous when every fixed requirement has only one
interpretation. This suggests that each element is uniquely interpreted. In case there is a
method used with multiple definitions, the requirements report should determine the
implications in the SRS so that it is clear and simple to understand.

5. Ranking for importance and stability: The SRS is ranked for importance and
stability if each requirement in it has an identifier to indicate either the significance or
stability of that particular requirement.

Typically, all requirements are not equally important. Some prerequisites may be
essential, especially for life-critical applications, while others may be desirable. Each
element should be identified to make these differences clear and explicit. Another way
to rank requirements is to distinguish classes of items as essential, conditional, and
optional.

6. Modifiability: The SRS should be made as modifiable as possible and should be
capable of easily accepting changes to the system to some extent. Modifications should
be properly indexed and cross-referenced.

7. Verifiability: The SRS is verifiable when the specified requirements can be checked by a
cost-effective process to determine whether the final software meets those requirements.
Requirements are typically verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear
and if it facilitates the referencing of each condition in future development or
enhancement documentation.

There are two types of Traceability:

1. Backward Traceability: This depends upon each requirement explicitly referencing


its source in earlier documents.

2. Forward Traceability: This depends upon each element in the SRS having a unique
name or reference number.

The forward traceability of the SRS is especially crucial when the software product enters
the operation and maintenance phase. As code and design document is modified, it is
necessary to be able to ascertain the complete set of requirements that may be
concerned by those modifications.

9. Design Independence: There should be an option to select from multiple design


alternatives for the final system. More specifically, the SRS should not contain any
implementation details.

10. Testability: An SRS should be written in such a method that it is simple to generate
test cases and test plans from the report.

11. Understandable by the customer: An end user may be an expert in his/her own
domain but might not be trained in computer science. Hence, the use of formal
notations and symbols should be avoided as far as possible. The language
should be kept simple and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the
details should be explained explicitly, whereas for a feasibility study less detail is needed.
Hence, the level of abstraction varies according to the purpose of the SRS.

Properties of a good SRS document


The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and at the same time, unambiguous,
consistent, and complete. Verbose and irrelevant descriptions decrease readability and
also increase error possibilities.
Structured: It should be well-structured. A well-structured document is simple to
understand and modify. In practice, the SRS document undergoes several revisions to
cope with the user requirements. Often, user requirements evolve over a period of
time. Therefore, to make modifications to the SRS document easy, it is vital to make
the report well-structured.

Black-box view: It should only define what the system should do and refrain from
stating how to do these. This means that the SRS document should define the external
behavior of the system and not discuss the implementation issues. The SRS report
should view the system to be developed as a black box and should define the externally
visible behavior of the system. For this reason, the SRS report is also known as the black-
box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.
Response to undesired events: It should characterize acceptable responses to
unwanted events. These are called system responses to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should
be correct. This means that it should be possible to decide whether or not requirements
have been met in an implementation.

IEEE 830 Guidelines:


Organization: IEEE
Publication Date: 25 June 1998
Status: Inactive
Page Count: 38
Scope:
This is a recommended practice for writing software requirements specifications. It describes the content and
qualities of a good software requirements specification (SRS) and presents several sample SRS outlines.

This recommended practice is aimed at specifying requirements of software to be developed but also can be
applied to assist in the selection of in-house and commercial software products. However, application to
already-developed software could be counterproductive.

When software is embedded in some larger system, such as medical equipment, then issues beyond those
identified in this recommended practice may have to be addressed.
This recommended practice describes the process of creating a product and the content of the product. The
product is an SRS. This recommended practice can be used to create such an SRS directly or can be used as a
model for a more specific standard.

This recommended practice does not identify any specific method, nomenclature, or tool for preparing an SRS.
Document History

IEEE 830
June 25, 1998
Recommended Practice for Software Requirements Specifications
This is a recommended practice for writing software requirements specifications. It describes the content and qualities of a good
software requirements specification (SRS) and presents several sample...

IEEE 830
January 1, 1993
Recommended Practice for Software Requirements Specifications
This is a recommended practice for writing software requirements specifications. It describes the content and qualities of a good
software requirements specification (SRS) and presents several sample...

IEEE 830
January 1, 1984
GUIDE TO SOFTWARE REQUIREMENTS SPECIFICATIONS; (IEEE COMPUTER SOCIETY DOCUMENT)
A description is not available for this item.

Representing Complex Requirement Using Decision table And Decision Trees:


A decision table is a brief visual representation for specifying which
actions to perform depending on given conditions. The information represented
in decision tables can also be represented as decision trees or in a
programming language using if-then-else and switch-case statements.
A decision table is a good way to deal with different combinations of inputs and
their corresponding outputs, and it is also called a cause-effect table. It is called a
cause-effect table because of a related logical diagramming technique, cause-effect
graphing, which is basically used to derive the decision table.
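To make this concrete, the rules of a small decision table can be written directly as if-else statements, as the paragraph above notes. The following is a minimal hypothetical Java sketch (the login rule and its conditions are illustrative, not taken from these notes); each branch corresponds to one column of the table:

// Hypothetical decision table, expressed as if-else statements:
//   validUser  validPassword   Action
//   Y          Y               Show home page
//   Y          N               Show error: wrong password
//   N          Y or N          Show error: unknown user
public class LoginRuleTable {

    static String decide(boolean validUser, boolean validPassword) {
        if (validUser && validPassword) {
            return "Show home page";                 // column 1
        } else if (validUser) {
            return "Show error: wrong password";     // column 2
        } else {
            return "Show error: unknown user";       // columns 3 and 4 collapsed
        }
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true));   // Show home page
        System.out.println(decide(true, false));  // Show error: wrong password
        System.out.println(decide(false, true));  // Show error: unknown user
    }
}

Reading the code branch by branch recovers the decision table, which is why the two representations are considered interchangeable.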
Importance of Decision Table:
 Decision tables are very much helpful in test design techniques.
 It helps testers to search the effects of combinations of different inputs and
other software states that must correctly implement business rules.
 It provides a regular way of stating complex business rules, which is helpful
for developers as well as for testers.
 It assists in the development process with the developer to do a better job.
Testing with all combinations might be impractical.
 A decision table is basically an outstanding technique used in both testing
and requirements management.
 It is a structured exercise to prepare requirements when dealing with
complex business rules.
 It is also used in model complicated logic.
Decision Table in test designing:

Blank Decision Table:
CONDITIONS  | STEP 1 | STEP 2 | STEP 3 | STEP 4
Condition 1 |        |        |        |
Condition 2 |        |        |        |
Condition 3 |        |        |        |
Condition 4 |        |        |        |

Decision Table: Combinations
CONDITIONS  | STEP 1 | STEP 2 | STEP 3 | STEP 4
Condition 1 |   Y    |   Y    |   N    |   N
Condition 2 |   Y    |   N    |   Y    |   N
Condition 3 |   Y    |   N    |   N    |   Y
Condition 4 |   N    |   Y    |   Y    |   N
Advantages of Decision Table:
 Any complex business flow can be easily converted into test scenarios & test
cases using this technique.
 Decision tables work iteratively which means the table created at the first
iteration is used as input tables for the next tables. The iteration is done only
if the initial table is not satisfactory.
 Simple to understand and everyone can use this method to design the test
scenarios & test cases.
 It provides complete coverage of test cases which helps to reduce the
rework on writing test scenarios & test cases.
 These tables guarantee that we consider every possible combination of
condition values. This is known as its completeness property.

A Decision Tree offers a graphical view of the processing logic involved in decision
making and of the corresponding actions to be taken. The edges of a decision tree
represent conditions, and the leaf nodes represent the actions to be performed
depending on the result of testing the condition.
For example, consider Library Membership Automation Software (LMS) which
should support the following three options: New member, Renewal, and
Cancel membership. These are explained below.
1. New Member Option:

 Decision:
When the 'new member' option is chosen, the software asks for details about the
member, such as the member's name, address, phone number, etc.
 Action:
If correct information is entered, a membership record for the member is created,
and a bill is printed for the annual membership charge plus the security deposit
payable.

2. Renewal Option:

 Decision:
If the 'renewal' option is chosen, the LMS asks for the member's name and
membership number to check whether he is a valid member or not.

 Action:
If the membership is valid, the membership expiry date is updated and the
annual membership bill is printed; otherwise, an error message is displayed.

3. Cancel Membership Option:

 Decision:
If the 'cancel membership' option is chosen, the software asks for the member's
name and membership number.

 Action:
The membership is cancelled, a cheque for the balance amount due to the
member is printed, and finally the membership record is deleted from the
database.

Decision tree representation of the above example:


The following tree shows the graphical illustration of the above example: after
obtaining data from the user, the system makes a decision and then performs the
corresponding actions.
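The same decision logic can also be sketched in code. Below is a hedged, hypothetical Java outline of the LMS options described above (the option names and printed actions are placeholders, not the actual LMS design); each case corresponds to one branch of the decision tree:

// Hypothetical sketch of the LMS decision tree as a switch on the chosen option.
public class LmsMenu {

    static void handle(String option) {
        switch (option) {
            case "NEW_MEMBER":
                // Decision: collect member details; Action: create record, print bill
                System.out.println("Collect details -> create membership record -> print bill");
                break;
            case "RENEWAL":
                // Decision: check membership validity; Action: update expiry or show error
                System.out.println("Check membership -> update expiry date and print bill, or show error");
                break;
            case "CANCEL":
                // Decision: ask for name and number; Action: refund balance, delete record
                System.out.println("Cancel membership -> print cheque for balance -> delete record");
                break;
            default:
                System.out.println("Unknown option");
        }
    }

    public static void main(String[] args) {
        handle("NEW_MEMBER");
        handle("RENEWAL");
        handle("CANCEL");
    }
}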

Difference between Decision Table and Decision Tree:


1. Decision Table :
Decision Table is just a tabular representation of all conditions and actions.
Decision Trees are always used whenever the process logic is very complicated
and involves multiple conditions. The main components used for the formation
of Data Table are Conditions Stubs, Action Stubs, and rules.
2. Decision Tree :
Decision Tree is a graph which always uses a branching method in order to
demonstrate all the possible outcomes of any decision. Decision Trees are
graphical and shows better representation of decision outcomes. It consists of
three nodes namely Decision Nodes, Chance Nodes and Terminal Nodes.

Difference between Decision Table and Decision Tree :


S.No. | Decision Table | Decision Tree
1. | Decision Tables are a tabular representation of conditions and actions. | Decision Trees are a graphical representation of every possible outcome of a decision.
2. | We can derive a decision table from a decision tree. | We cannot derive a decision tree from a decision table.
3. | It helps to clarify the criteria. | It helps to take into account the possible relevant outcomes of a decision.
4. | In decision tables, we can include more than one 'or' condition. | In decision trees, we cannot include more than one 'or' condition.
5. | It is used when there are a small number of properties. | It is used when there are a larger number of properties.
6. | It is used for simple logic only. | It can be used for complex logic as well.
7. | It is constructed of rows and columns. | It is constructed of branches and nodes.
Overview of formal system development techniques:

Formal methods are techniques used by software engineers to design


safety-critical systems and their components. In software engineering,
they are techniques that involve mathematical expressions to model
“abstract representation” of the system.
Long story short – it uses mathematical rigour to describe/specify
systems before they get implemented.
Such models are subject to proof-checking (formal specification) with
regard to stability, cohesion and reliability. Proving their validity is a
core process for evaluating models, using automatic theorem provers.
This is based on a set of mathematical formulas to be proven, called
proof obligations (formal verification). This allows identification of
potential flaws early in the design stage, preventing expensive systems
from being "bricked" later when placed into operation. Standard
development techniques revolve around the following phases:
1. Requirements engineering
2. Architecture design
3. Implementation
4. Testing
5. Maintenance
6. Evolution

Some may argue that all these steps usually take place anyway; in fact, they
must, at least to some extent, for any usable software that is meant to be used
over a longer period. Some of the earlier steps – particularly the
design stages – may bring a sense of uncertainty in terms of
unforeseen problems later in the process. The reasons could be:
1. Lack of grasp of the problem as a whole
2. Dispersed engineering teams have different perceptions of the end-product
3. Lack of domain knowledge
4. Inconsistent requirements
5. Yet-to-be discovered areas of expertise

These are just some avoidable factors in the completion of complex


projects. Safety-critical systems, in particular, have a significant need
for earlier fault detection. It is crucial to validate software faultlessness
where agile incremental analysis and development bring about quality
assurance concerns. Thus, that is where the implementation of such
techniques finds its highest demand.
There are notable differences between standard and formal software
development methods. Formal methods are somewhat supporting
tools. Here, the reliability of mathematics improves software
production quality at any stage. They are not necessarily there to
implement data processing. Choice of programming language is
irrelevant. Instead, it creates a ‘bridge’ between modelled concepts
and the environment towards final software implementation: “What
shall we do?” over “How shall we do this?”.

Examples of Formal Method Techniques


B METHOD
B is an example of a formal method technique that covers the whole
development life-cycle. It divides software into separate components
that are further represented as Abstract Machines.
The B method represents system models in the form of mathematical
expressions written in Abstract Machine Notation (AMN). These are
further subject to stepwise refinement and proof obligation evaluation,
which consists of verification of invariant preservation and refinement
correctness.
The B method is a widely-cited technique in scientific publications
concerning formal method implementation. Notably, it is used in the
specification for transport automation systems in Paris and Sao Paulo,
by Siemens Transportation Systems.
B METHOD CODE EXAMPLE: LESS SAFETY-
CRITICAL
This model represents the CRM software (Customer Relationship
Management) to keep track of the current state of relationships. Its
task is to improve user enrolment, user satisfaction rate and member
retention.

//Data structures in use
SETS PERSON; REGISTER; ACCEPTANCE = {true, false}
CONSTANTS max_member
PROPERTIES max_member : NAT1
VARIABLES casual, prospect, member, pers_data, membership, assessment

//System state - must always be true during proof-check execution
INVARIANT
    casual <: PERSON & pers_data <: REGISTER &
    membership <: PRODUCT & prospect : casual >+> pers_data &
    !dd.(dd : dom(prospect) => dd : casual) &
    member : dom(prospect) <-> membership &
    assessment : ran(prospect) --> ACCEPTANCE &
    !aa.(aa : dom(assessment) => aa : ran(prospect)) &
    !cc.(cc : dom(member) => cc : dom(prospect) & prospect(cc) |-> true : assessment) &
    card(dom(member)) <= max_member

//Data needed for proof-checking
INITIALISATION
    casual, prospect, member, pers_data, membership, assessment := {}, {}, {}, {}, {}, {}
END

//Methods
OPERATIONS
    add_casual(cc) =
        PRE cc : PERSON & cc /: ran(casualr)
        THEN casualr := casualr ^ [cc]
        END;
    add_persdata(pd) =
        PRE pd : REGISTER & pd /: ran(pers_datar)
        THEN pers_datar := pers_datar ^ [pd]
        END;

Z NOTATION
Z notation is a model-based, abstract formal specification technique
most compatible with object-oriented programming. Z defines system
models in the form of states where each state consists of variables,
values and operations that change from one state to another.
As opposed to the usability of B, which is involved in full development
life-cycle, Z formalises a specification of the system at the design
level.
EVENT-B
Event-B is an advanced implementation of the B method. Using this
approach, formal software specification is the process of creating a
discrete model that represents a specific state of the system. The
state is an abstract representation of constants, variables and
transitions (events). Part of an event is the guard, which determines the
condition under which the transition to another state takes place.
Constructed models (blueprints) are further subject to refinement,
proof obligations and decomposition for verification of correctness.

Evaluation
Before deciding on the use of formal methods, each architect must list
the pros and cons against resources available, as well as the system’s
needs.
BENEFITS
1. Significantly improves reliability at the design level decreasing the cost of testing
2. Improves system cohesion, reliability, and safety-critical components by detecting faults in early
phases of the development cycle
3. Validated models present deterministic system behaviour
CRITICISMS
1. Requires qualified professionals competent in either mathematics (mathematical expressions, set
theory and predicate logic) or software engineering. Systems once modelled may be difficult to
implement by unaccustomed programmers. “People are quite reluctant to use such methods
mostly because it necessitates modifying the development process in a significant fashion.”,
author of B-Methods, Abrial, once said.
2. Design proof-validation may introduce additional effort/cost to overall project estimation.

axiomatic specification:

In the Axiomatic Specification of a system, first-order logic is used to write the pre- and post-
conditions that specify the operations of the system in the form of axioms. The pre-conditions
capture the conditions that must be satisfied before an operation can successfully be invoked.
In essence, the pre-conditions capture the requirements on the input parameters of a function.
The post-conditions are the conditions that must be satisfied when a function completes
execution; post-conditions are essentially constraints on the results produced, for the function
execution to be considered successful.
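As a small illustration (hypothetical Java, not part of these notes), the pre- and post-conditions of an integer square root operation can be written as assertions; run the program with the -ea flag so the assertions are checked:

// Hypothetical axiomatic (pre/post-condition) specification of isqrt(n).
// Pre-condition:  n >= 0
// Post-condition: r >= 0  and  r*r <= n  and  (r+1)*(r+1) > n
public class IsqrtSpec {

    static int isqrt(int n) {
        assert n >= 0 : "pre-condition violated: n must be non-negative";
        int r = 0;
        while ((r + 1) * (r + 1) <= n) {
            r++;                                  // keep increasing while still within the root
        }
        assert r >= 0 && r * r <= n && (r + 1) * (r + 1) > n
                : "post-condition violated";
        return r;
    }

    public static void main(String[] args) {
        System.out.println(isqrt(10)); // 3
        System.out.println(isqrt(16)); // 4
    }
}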

Algebraic Specification:
In the Algebraic Specification technique, an object class or type is specified in terms of
relationships existing between the operations defined on that type. It was first brought into
prominence by Guttag (1980-1985) in the specification of abstract data types. Various
notations of algebraic specifications have evolved, including those based on OBJ and Larch
languages.
Essentially, algebraic specifications define a system as a heterogeneous algebra. A
heterogeneous algebra is a collection of different sets on which several operations are
defined. Traditional algebras are homogeneous: a homogeneous algebra consists of a single
set and several operations defined on this set, such as { +, -, *, / }.
UNIT – III

Good Software Design, Cohesion and coupling, Control Hierarchy: Layering, Control
Abstraction, Depth and width, Fan-out, Fan-in, Software design approaches, object
oriented vs. function oriented design. Overview of SA/SD methodology, structured
analysis, Data flow diagram, Extending DFD technique to real life systems, Basic Object
oriented concepts, UML Diagrams, Structured design, Detailed design, Design review,
Characteristics of a good user interface, User Guidance and Online Help, Mode-based vs
Mode-less Interface, Types of user interfaces, Component-based GUI development, User
interface design methodology: GUI design methodology.

Software Design
Good Software Design :

Software design is a mechanism to transform user requirements into some suitable form, which helps
the programmer in software coding and implementation. It deals with representing the client's
requirement, as described in SRS (Software Requirement Specification) document, into a form, i.e., easily
implementable using programming language.

The software design phase is the first step of the SDLC that moves the
concentration from the problem domain to the solution domain. In software design, we consider the
system to be a set of components or modules with clearly defined behaviors and boundaries.

Objectives of Software Design


Following are the purposes of Software design:
1. Correctness:Software design should be correct as per requirement.
2. Completeness:The design should have all components like data structures, modules, and external
interfaces, etc.
3. Efficiency:Resources should be used efficiently by the program.
4. Flexibility:Able to modify on changing needs.
5. Consistency:There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be easily maintained by other designers.

Cohesion and coupling


Introduction: The purpose of Design phase in the Software Development Life Cycle is to
produce a solution to a problem given in the SRS(Software Requirement Specification)
document. The output of the design phase is Software Design Document (SDD).
Basically, design is a two-part iterative process. First part is Conceptual Design that tells the
customer what the system will do. Second is Technical Design that allows the system builders
to understand the actual hardware and software needed to solve customer’s problem.

Conceptual design of the system:


 Written in simple language i.e. customer understandable language.
 Detailed explanation about system characteristics.
 Describes the functionality of the system.
 It is independent of implementation.
 Linked with requirement document.
Technical Design of the system:
 Hardware component and design.
 Functionality and hierarchy of software components.
 Software architecture
 Network architecture
 Data structure and flow of data.
 I/O component of the system.
 Shows interface.
Modularization: Modularization is the process of dividing a software system into multiple
independent modules where each module works independently. There are many advantages
of Modularization in software engineering. Some of these are given below:
 Easy to understand the system.
 System maintenance is easy.
 A module can be used many times as their requirements. No need to write it again and
again.
Coupling: Coupling is the measure of the degree of interdependence between the modules.
A good software will have low coupling.

Types of Coupling:
 Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. In data
coupling, the components are independent of each other and communicate through data.
Module communications don’t contain tramp data. Example-customer billing system.
 Stamp Coupling In stamp coupling, the complete data structure is passed from one
module to another module. Therefore, it involves tramp data. It may be necessary due to
efficiency factors- this choice was made by the insightful designer, not a lazy programmer.
 Control Coupling: If the modules communicate by passing control information, then they
are said to be control coupled. It can be bad if parameters indicate completely different
behavior and good if parameters allow factoring and reuse of functionality. Example- sort
function that takes comparison function as an argument.
 External Coupling: In external coupling, the modules depend on other modules, external
to the software being developed or to a particular type of hardware. Ex- protocol, external
file, device format, etc.
 Common Coupling: The modules have shared data such as global data structures. The
changes in global data mean tracing back to all modules which access that data to
evaluate the effect of the change. So it has got disadvantages like difficulty in reusing
modules, reduced ability to control data accesses, and reduced maintainability.
 Content Coupling: In a content coupling, one module can modify the data of another
module, or control flow is passed from one module to the other module. This is the worst
form of coupling and should be avoided.
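A minimal Java sketch contrasting two of the coupling types above is given next (the billing example and its names are hypothetical): the first method is data coupled because it receives only the data it needs, while the second is control coupled because a flag from the caller steers its internal behaviour.

// Hypothetical sketch contrasting data coupling and control coupling.
public class CouplingDemo {

    // Data coupling: the callee receives only the data it needs.
    static double computeBill(double units, double ratePerUnit) {
        return units * ratePerUnit;
    }

    // Control coupling: a flag passed by the caller decides which behaviour runs,
    // so the caller must know about the callee's internal logic.
    static double computeBill(double units, double ratePerUnit, boolean applyDiscount) {
        double amount = units * ratePerUnit;
        return applyDiscount ? amount * 0.9 : amount;
    }

    public static void main(String[] args) {
        System.out.println(computeBill(120, 5.0));        // data-coupled call
        System.out.println(computeBill(120, 5.0, true));  // control-coupled call
    }
}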
Cohesion: Cohesion is a measure of the degree to which the elements of the module are
functionally related. It is the degree to which all elements directed towards performing a single
task are contained in the component. Basically, cohesion is the internal glue that keeps the
module together. A good software design will have high cohesion.

Types of Cohesion:
 Functional Cohesion: Every essential element for a single computation is contained in
the component. A functional cohesion performs the task and functions. It is an ideal
situation.
 Sequential Cohesion: An element outputs some data that becomes the input for other
element, i.e., data flow between the parts. It occurs naturally in functional programming
languages.
 Communicational Cohesion: Two elements operate on the same input data or contribute
towards the same output data. Example- update record in the database and send it to the
printer.
 Procedural Cohesion: Elements of procedural cohesion ensure the order of execution.
Actions are still weakly connected and unlikely to be reusable. Ex- calculate student GPA,
print student record, calculate cumulative GPA, print cumulative GPA.
 Temporal Cohesion: The elements are related by the timing involved. In a module
with temporal cohesion, all the tasks must be executed in the same time span.
Such a module typically contains the code for initializing all the parts of the system:
many different activities occur, all at the same time.
 Logical Cohesion: The elements are logically related and not functionally. Ex- A
component reads inputs from tape, disk, and network. All the code for these functions is in
the same component. Operations are related, but the functions are significantly different.
 Coincidental Cohesion: The elements are not related(unrelated). The elements have no
conceptual relationship other than location in source code. It is accidental and the worst
form of cohesion. Ex- print next line and reverse the characters of a string in a single
component.
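The difference between the strongest and the weakest forms of cohesion listed above can also be sketched in Java. The GPA and string examples below are hypothetical, chosen only to mirror the examples in the list:

// Hypothetical sketch contrasting functional and coincidental cohesion.
public class CohesionDemo {

    // Functional cohesion: every statement contributes to one task (computing a GPA).
    static double computeGpa(int[] gradePoints) {
        int sum = 0;
        for (int g : gradePoints) {
            sum += g;
        }
        return (double) sum / gradePoints.length;
    }

    // Coincidental cohesion: unrelated actions bundled into one module
    // only because they happen to live in the same source file.
    static void miscellaneous(String line) {
        System.out.println(line);                               // print a line
        System.out.println(new StringBuilder(line).reverse());  // reverse a string
    }

    public static void main(String[] args) {
        System.out.println(computeGpa(new int[]{8, 9, 7, 10})); // 8.5
        miscellaneous("hello");
    }
}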

Control Hierarchy:

Control hierarchy, also called program structure, represents the organization of program components (modules) and
implies a hierarchy of control. It does not represent procedural aspects of software such as sequence of processes,
occurrence or order of decisions, or repetition of operations; nor is it necessarily applicable to all architectural styles.

Different notations are used to represent control hierarchy for those architectural styles that are amenable to this
representation. The most common is the treelike diagram that represents hierarchical control for call and return
architectures. However, other notations, such as Warnier-Orr and Jackson diagrams may also be used with equal
effectiveness. In order to facilitate later discussions of structure, we define a few simple measures and terms. Referring
to figure , depth and width provide an indication of the number of levels of control and overall span of control,
respectively. Fan-out is a measure of the number of modules that are directly controlled by another module. Fan-in
indicates how many modules directly control a given module.

The control relationship among modules is expressed in the following way: A module that controls another module is
said to be superordinate to it, and conversely, a module controlled by another is said to be subordinate to the controller
. For example, referring to figure, module M is superordinate to modules a, b, and c. Module h is subordinate to
module e and is ultimately subordinate to module M. Width-oriented relationships (e.g., between modules d and e)
although possible to express in practice, need not be defined with explicit terminology.

The control hierarchy also represents two subtly different characteristics of the software architecture: visibility and
connectivity. Visibility indicates the set of program components that may be invoked or used as data by a given
component, even when this is accomplished indirectly. For example, a module in an object-oriented system may have
access to a wide array of data objects that it has inherited, but makes use of only a small number of these data objects.
All of the objects are visible to the module. Connectivity indicates the set of components that are directly invoked or
used as data by a given component. For example, a module that directly causes another module to begin execution is
connected to it.
Structural Partitioning
If the architectural style of a system is hierarchical, the program structure can be partitioned both horizontally and
vertically. Referring to figure (a), horizontal partitioning defines separate branches of the modular hierarchy for each
major program function. Control modules, represented in a darker shade, are used to coordinate communication
between and execution of the functions. The simplest approach to horizontal partitioning defines three partitions—
input, data transformation (often called processing) and output. Partitioning the architecture horizontally provides a
number of distinct benefits:
• software that is easier to test
• software that is easier to maintain
• propagation of fewer side effects
• software that is easier to extend

Because major functions are decoupled from one another, change tends to be less complex and extensions to the
system (a common occurrence) tend to be easier to accomplish without side effects. On the negative side, horizontal
partitioning often causes more data to be passed across module interfaces and can complicate the overall control of
program flow (if processing requires rapid movement from one function to another).

Vertical partitioning (Figure (b)), often called factoring, suggests that control (decision making) and work should be
distributed top-down in the program structure. Toplevel modules should perform control functions and do little actual
processing work. Modules that reside low in the structure should be the workers, performing all input, computation,
and output tasks.

The nature of change in program structures justifies the need for vertical partitioning. Referring to figure(b), it can be
seen that a change in a control module (high in the structure) will have a higher probability of propagating side effects
to modules that are subordinate to it. A change to a worker module, given its low level in the structure, is less likely to
cause the propagation of side effects. In general, changes to computer programs revolve around changes to input,
computation or transformation, and output. The overall control structure of the program (i.e., its basic behavior) is far
less likely to change. For this reason vertically partitioned structures are less likely to be susceptible to side effects
when changes are made and will therefore be more maintainable—a key quality factor.
Layering:
Software engineering is fully a layered technology, to develop software we need to go from
one layer to another. All the layers are connected and each layer demands the fulfillment of
the previous layer.

Fig: The diagram shows the layers of software development

Layered technology is divided into four parts:


1. A quality focus: It defines the continuous process improvement principles of software. It
provides integrity that means providing security to the software so that data can be accessed
by only an authorized person, no outsider can access the data. It also focuses on
maintainability and usability.
2. Process: It is the foundation or base layer of software engineering. It is the key that binds all
the layers together, enabling the development of software on time or before the deadline.
The process defines a framework that must be established for the effective delivery of
software engineering technology. The software process covers all the activities, actions, and
tasks required to be carried out for software development.

Process activities are listed below:-


 Communication: It is the first and foremost thing for the development of software.
Communication is necessary to know the actual demand of the client.
 Planning: It basically means drawing a map to reduce the complications of development.
 Modeling: In this process, a model is created according to the client for better
understanding.
 Construction: It includes the coding and testing of the problem.
 Deployment:- It includes the delivery of software to the client for evaluation and feedback.
3. Method: During the process of software development the answers to all “how-to-do”
questions are given by method. It has the information of all the tasks which includes
communication, requirement analysis, design modeling, program construction, testing, and
support.
4. Tools: Software engineering tools provide a self-operating system for processes and
methods. Tools are integrated which means information created by one tool can be used by
another.
Control Abstraction:-

Our aim is to understand and implement Control Abstraction in Java. Before jumping right into
control abstraction, let us understand what is abstraction.
Abstraction: To put it in simple terms, abstraction is anything but displaying only the
essential features of a system to a user without getting into its details. For example, a car and
its functions are described to the buyer and the driver also learns how to drive using the
steering wheel and the accelerators but the inside mechanisms of the engine are not
displayed to the buyer. To read more about Abstraction, refer here.
In abstraction, there are two types: data abstraction and control abstraction.
Data abstraction, in short, means creating complex data types but exposing only the
essential operations.
Control Abstraction: This refers to the software side of abstraction, wherein the program is
simplified and unnecessary execution details are removed.
Here are the main points about control abstraction:
 Control Abstraction follows the basic rule of DRY code which means Don’t Repeat Yourself
and using functions in a program is the best example of control abstraction.
 Control Abstraction can be used to build new functionalities and combines control
statements into a single unit.
 It is a fundamental feature of all higher-level languages and not just java.
 Higher-order functions, closures, and lambdas are a few preconditions for control
abstraction.
 Highlights more on how a particular functionality can be achieved rather than describing
each detail.
 Forms the main unit of structured programming.
A simple algorithm of control flow:
 The resource is obtained first
 Then, the block is executed.
 As soon as control leaves the block, the resource is closed
Example:
 Java

// Abstract class
abstract class Vehicle {

    // Abstract method (does not have a body)
    public abstract void VehicleSound();

    // Regular method
    public void honk() { System.out.println("honk honk"); }
}

// Subclass (inherits from Vehicle)
class Car extends Vehicle {

    // The body of VehicleSound() is provided here
    public void VehicleSound() {
        System.out.println("kon kon");
    }
}

class Main {

    public static void main(String[] args) {
        // Create a Car object
        Car myCar = new Car();

        myCar.VehicleSound();
        myCar.honk();
    }
}

Output
kon kon
honk honk
The greatest advantage of control abstraction is that it makes code a lot cleaner and also
more secure.
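The resource-oriented control flow listed earlier (obtain the resource, execute the block, close the resource when control leaves) is another common control abstraction. A minimal sketch using Java's try-with-resources is shown below; the file name notes.txt is only a placeholder, not a file referenced by these notes:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical sketch of the control flow above: the resource is obtained, the
// block executes, and the resource is closed automatically when control leaves.
public class ResourceBlock {
    public static void main(String[] args) {
        try (BufferedReader reader = new BufferedReader(new FileReader("notes.txt"))) {
            System.out.println(reader.readLine());   // block executes while resource is open
        } catch (IOException e) {
            System.out.println("Could not read file: " + e.getMessage());
        }
        // reader is already closed here, whether or not an exception occurred
    }
}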

Depth and Width:


we take a look at Breadth testing and its complementary Depth testing.

In Breadth Testing the full functionality of a product(all the features) are tested but the features are not
tested in detail.

In Depth Testing, the feature of a product is tested in full detail.

There are some scenarios where Breadth testing takes precedence and others where Depth testing takes
precedence. In practice though, a combination of both is used. These techniques help prioritize tests and in
times of schedule crunches, decide on the optimal use of time to pick the best areas to concentrate on.

Breadth and Depth testing are used in many different contexts:

1. Integration Testing: during Top-Down integration, developers can decide whether to use a breadth first or
depth first strategy.

2. Sanity testing: We use breadth first usually to ensure the full functionality is working.

3. Functional/System Test: A combination of both – breadth and depth is used i.e. the full functionality and in
depth testing of features is used.

4. Automation: to decide whether we want to automate the end to end or a particular feature in depth or detail.

5. Test coverage metrics: How many features have been covered vs how much in depth have they been
covered.

6. Regression: Breadth testing first followed by Depth testing of the changed functionality.

7. Test Data: Breadth refers to the variation in test data values and categories whereas Depth refers to the
volume or size of databases.
Fan-out,Fan-in:-
The fan-out of a module is the number of its immediately subordinate modules. As a rule
of thumb, the optimum fan-out is seven, plus or minus 2. This rule of thumb is based on
the psychological study conducted by George Miller during which he determined that the
human mind has difficulty dealing with more than seven things at once.

The fan-in of a module is the number of its immediately superordinate (i.e., parent or boss)
modules. The designer should strive for high fan-in at the lower levels of the
hierarchy. This simply means that normally there are common low-level functions that
exist that should be identified and made into common modules to reduce redundant code
and increase maintainability. High fan-in can also increase portability if, for example, all
I/O handling is done in common modules.

Object-Oriented Considerations

In object-oriented systems, fan-in and fan-out relate to interactions between objects. In


object-oriented design, high fan-in generally contributes to a better design of the overall
system. High fan-in shows that an object is being used extensively by other objects, and is
indicative of re-use.
High fan-out in object-oriented design is indicated when an object must deal directly with a
large number of other objects. This is indicative of a high degree of class
interdependency. In general, the higher the fan-out of an object, the poorer is the overall
system design.

Strengths of Fan-In

High fan-in reduces redundancy in coding. It also makes maintenance easier. Modules
developed for fan-in must have good cohesion, preferably functional. Each interface to a
fan-in module must have the same number and types of parameters.

Designing Modules That Consider Fan-In/Fan-Out

The designer should strive for software structure with moderate fan-out in the upper levels
of the hierarchy and high fan-in in the lower levels of the hierarchy. Some examples of
common modules which result in high fan-in are: I/O modules, edit modules, modules
simulating a high level command (such as calculating the number of days between two
dates).

Use factoring to solve the problem of excessive fan-out. Create an intermediate module to
factor out modules with strong cohesion and loose coupling.

In the example, fan-out is reduced by creating a module X to reduce the number of


modules invoked directly by Z.
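A hypothetical Java sketch of this factoring idea is given below: module Z would originally have invoked five worker modules directly; the intermediate module X groups three of them, so the fan-out of Z drops from five to three. All names here are illustrative, not from the figure.

// Hypothetical sketch of factoring to reduce excessive fan-out.
public class FactoringDemo {

    static void z() {          // fan-out of Z is now 3 (a, b, x) instead of 5 (a, b, c, d, e)
        a(); b(); x();
    }

    static void x() {          // intermediate module created by factoring
        c(); d(); e();
    }

    static void a() { System.out.println("a"); }
    static void b() { System.out.println("b"); }
    static void c() { System.out.println("c"); }
    static void d() { System.out.println("d"); }
    static void e() { System.out.println("e"); }

    public static void main(String[] args) {
        z();
    }
}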
Software Design Approaches:-

There are two fundamentally different approaches to software design that are in use today—
function-oriented design, and object-oriented design. Though these two design approaches are
radically different, they are complementary rather than competing techniques. The objectoriented
approach is a relatively newer technology and is still evolving. For development of large
programs, the object- oriented approach is becoming increasingly popular due to certain
advantages that it offers. On the other hand, function-oriented designing is a mature technology
and has a large following. Salient features of these two approaches.

Function-oriented Design:-

The following are the salient features of the function-oriented design approach:

Top-down decomposition: A system, to start with, is viewed as a black box that provides certain services (also known as high-level functions) to the users of the system. In top-down decomposition, starting at a high-level view of the system, each high-level function is successively refined into more detailed functions. For example, consider a function create-new-library-member which essentially creates the record for a new member, assigns a unique membership number to him, and prints a bill towards his membership charge. This high-level function may be refined into the following subfunctions:

• assign-membership-number

• create-member-record

• print-bill
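A rough Java rendering of this decomposition (hypothetical code, not part of the original design) shows the high-level function simply delegating to its refined subfunctions:

// Hypothetical top-down decomposition of create-new-library-member.
public class MembershipModule {

    void createNewLibraryMember(String name, String address) {
        int memberNo = assignMembershipNumber();
        createMemberRecord(memberNo, name, address);
        printBill(memberNo);
    }

    int assignMembershipNumber() { return 1001; }                  // stub subfunction

    void createMemberRecord(int no, String name, String address) { // stub subfunction
        System.out.println("Record created for member " + no);
    }

    void printBill(int no) {                                       // stub subfunction
        System.out.println("Bill printed for member " + no);
    }

    public static void main(String[] args) {
        new MembershipModule().createNewLibraryMember("A. Student", "Tadipatri");
    }
}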

Each of these subfunctions may be split into more detailed subfunctions and so on.

Centralised system state: The system state can be defined as the values of certain data items that determine the response of the system to a user action or external event. For example, the set of books (i.e. whether borrowed by different users or available for issue) determines the state of a library automation system. Such data in procedural programs usually have global scope and are shared by many modules. The system state is centralised and shared among different functions. For example, in the library management system, several functions such as the following share data such as member-records for reference and updation:

• create-new-member
• delete-member
• update-member-record

A large number of function-oriented design approaches have been proposed in the past. A few of the well-established function-oriented design approaches are the following:
• Structured design by Constantine and Yourdon [1979]
• Jackson's structured design by Jackson [1975]
• Warnier-Orr methodology [1977, 1981]
• Step-wise refinement by Wirth [1971]
• Hatley and Pirbhai's methodology [1987]
Object-oriented Design: In the object-oriented design (OOD) approach, a system is viewed as being made up
of a collection of objects (i.e. entities). Each object is associated with a set of functions that are called its methods.
Each object contains its own data and is responsible for managing it. The data internal to an object cannot be
accessed directly by other objects and only through invocation of the methods of the object. The system state is
decentralised since there is no globally shared data in the system and data is stored in each object. For example, in
a library automation software, each library member may be a separate object with its own data and functions to
operate on the stored data. The methods defined for one object cannot directly refer to or change the data of other
objects. The object-oriented design paradigm makes extensive use of the principles of abstraction and
decomposition as explained below. Objects decompose a system into functionally independent modules. Objects
can also be considered as instances of abstract data types (ADTs). The ADT concept did not originate from the
object-oriented approach. In fact, ADT concept was extensively used in the ADA programming language
introduced in the 1970s. ADT is an important concept that forms an important pillar of object-orientation. Let us
now discuss the important concepts behind an ADT. There are, in fact, three important concepts associated with
an ADT—data abstraction, data structure, data type. We discuss these in the following subsection: Data
abstraction: The principle of data abstraction implies that how data is exactly stored is abstracted away. This
means that any entity external to the object (that is, an instance of an ADT) would have no knowledge about how
data is exactly stored, organised, and manipulated inside the object. The entities external to the object can access
the data internal to an object only by calling certain well-defined methods supported by the object. Consider an
ADT such as a stack. The data of a stack object may internally be stored in an array, a linearly linked list, or a
bidirectional linked list. The external entities have no knowledge of this and can access data of a stack object only
through the supported operations such as push and pop.
Data structure: A data structure is constructed from a collection of primitive data items. Just as a civil engineer
builds a large civil engineering structure using primitive building materials such as bricks, iron rods, and cement;
a programmer can construct a data structure as an organised collection of primitive data items such as integer,
floating point numbers, characters, etc.
Data type: A type is a programming language terminology that refers to anything that can be instantiated. For
example, int, float, char etc., are the basic data types supported by C programming language. Thus, we can say
that ADTs are user defined data types. In object-orientation, classes are ADTs. But, what is the advantage of
developing an application using ADTs? Let us examine the three main advantages of using ADTs in programs:
The data of objects are encapsulated within the methods. The encapsulation principle is also known as data
hiding. The encapsulation principle requires that data can be accessed and manipulated only through the methods
supported by the object and not directly. This localises the errors. The reason for this is as follows. No program
element is allowed to change a data, except through invocation of one of the methods. So, any error can easily be
traced to the code segment changing the value. That is, the method that changes a data item, making it erroneous
can be easily identified.
An ADT-based design displays high cohesion and low coupling. Therefore, object- oriented designs are
highly modular.
Since the principle of abstraction is used, it makes the design solution easily understandable and helps to
manage complexity. Similar objects constitute a class. In other words, each object is a member of some class.
Classes may inherit features from a super class. Conceptually, objects communicate by message passing. Objects
have their own internal data. Thus an object may exist in different states depending the values of the internal data.
In different states, an object may behave differently
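To illustrate the data hiding (encapsulation) principle described above, here is a minimal hypothetical Java class whose internal data can be read or changed only through its methods, so any erroneous change to the balance can be traced to one of those methods:

// Hypothetical ADT with data hiding: the balance is private and can only be
// read or changed through the methods the object supports.
public class Account {
    private int balance = 0;            // internal data, not directly accessible

    public void deposit(int amount) {
        if (amount > 0) {
            balance += amount;          // one of the only places that change balance
        }
    }

    public void withdraw(int amount) {
        if (amount > 0 && amount <= balance) {
            balance -= amount;
        }
    }

    public int getBalance() { return balance; }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(500);
        a.withdraw(200);
        System.out.println(a.getBalance()); // 300
    }
}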

Object-oriented versus function-oriented design approaches:


Unlike function-oriented design methods in OOD, the basic abstraction is not the services available to the users of
the system such as issuebook, display-book-details, find-issued-books, etc., but real-world entities such as
member, book, book-register, etc. For example in OOD, an employee pay-roll software is not developed by
designing functions such as update-employee-record, get-employee-address, etc., but by designing objects such as
employees, departments, etc.
In OOD, state information exists in the form of data distributed among several objects of the system. In contrast, in a procedural design the state information is available in a centralised shared data store. For example, while developing an employee pay-roll system, the employee data such as the names of the employees, their code numbers, basic salaries, etc., are usually implemented as global data in a traditional programming system; whereas in an object-oriented design these data are distributed among different employee objects of the system. Objects communicate by message passing; therefore, one object may discover the state information of another object by sending a message to it. Of course, somewhere or other the real-world functions must still be implemented.
Function-oriented techniques group functions together if, as a group, they constitute a higher-level function. On the other hand, object-oriented techniques group functions together on the basis of the data they operate on. To illustrate the differences between the object-oriented and the function-oriented design approaches, let us consider an example: an automated fire-alarm system for a large building.

Automated fire-alarm system (customer requirements): The owner of a large multi-storied building wants to have a computerised fire alarm system designed, developed, and installed in his building. Smoke detectors and fire alarms would be placed in each room of the building. The fire alarm system would monitor the status of these smoke detectors. Whenever a fire condition is reported by any of the smoke detectors, the fire alarm system should determine the location at which the fire has been sensed and then sound the alarms only in the neighbouring locations. The fire alarm system should also flash an alarm message on the computer console. Fire fighting personnel would man the console round the clock. After a fire condition has been successfully handled, the fire alarm system should support resetting the alarms by the fire fighting personnel.
Function-oriented approach: In this approach, the different high-level functions are first identified, and then the data structures are designed. The functions which operate on the system state are:

interrogate_detectors();
get_detector_location();
determine_neighbour_alarm();
determine_neighbour_sprinkler();
ring_alarm();
activate_sprinkler();
reset_alarm();
reset_sprinkler();
report_fire_location();

Object-oriented approach: In the object-oriented approach, the different classes of objects are identified. Subsequently, the methods and data for each object are identified. Finally, an appropriate number of instances of each class is created.

class detector
attributes: status, location, neighbours
operations: create, sense-status, get-location, find-neighbours
class alarm
attributes: location, status
operations: create, ring-alarm, get-location, reset-alarm
class sprinkler
attributes: location, status
operations: create, activate-sprinkler, get-location, reset-sprinkler

We can now compare the function-oriented and the object-oriented approaches based on the two examples discussed above, and easily observe the following main differences:
 In a function-oriented program, the system state (data) is centralised and several functions access and modify this central data. In the case of an object-oriented program, the state information (data) is distributed among various objects. In an object-oriented design, data is private within the different objects and is not available to the other objects for direct access and modification.
 The basic unit of design in an object-oriented program is the object, whereas it is functions and modules in procedural design.
 Objects appear as nouns in the problem description, whereas functions appear as verbs.

At this point, we must emphasise that it is not necessary that an object-oriented design be implemented by using an object-oriented language only. However, object-oriented languages such as C++ and Java support the definition of all the basic mechanisms of class, inheritance, objects, methods, etc., and also support all the key object-oriented concepts that we have just discussed. Thus, an object-oriented language facilitates the implementation of an OOD. However, an OOD can as well be implemented using a conventional procedural language, though it may require more effort to implement an OOD using a procedural language as compared to the effort required for implementing the same design using an object-oriented language. In fact, the older C++ compilers were essentially pre-processors that translated C++ code into C code.

Even though object-oriented and function-oriented techniques are remarkably different approaches to software design, one does not replace the other; they complement each other in some sense. For example, one usually applies the top-down function-oriented techniques to design the internal methods of a class, once the classes are identified. In this case, though outwardly the system appears to have been developed in an object-oriented fashion, inside each class there may be a small hierarchy of functions designed in a top-down manner.
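To make the class outline above more concrete, the following is a small, hypothetical Java sketch of two of the identified classes. The attribute types, constructors, and method bodies are illustrative assumptions and are not part of the original design listing.

// Hypothetical Java skeletons of the detector and alarm classes from the
// object-oriented fire-alarm design; types and method bodies are assumptions.
import java.util.ArrayList;
import java.util.List;

class Detector
{
    private String status = "normal";               // e.g. "normal" or "fire-sensed"
    private String location;
    private List<Detector> neighbours = new ArrayList<>();

    Detector(String location) { this.location = location; }

    String senseStatus()            { return status; }
    String getLocation()            { return location; }
    List<Detector> findNeighbours() { return neighbours; }
}

class Alarm
{
    private String location;
    private String status = "off";

    Alarm(String location) { this.location = location; }

    void ringAlarm()     { status = "ringing"; }
    void resetAlarm()    { status = "off"; }
    String getLocation() { return location; }
}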
Overview of SA/SD Methodology:
Structured Analysis and Structured Design (SA/SD) is a diagrammatic notation that is
designed to help people understand the system. The basic goal of SA/SD is to improve quality
and reduce the risk of system failure. It establishes concrete management specifications and
documentation. It focuses on the solidity, pliability, and maintainability of the system.
Basically, the approach of SA/SD is based on the Data Flow Diagram. SA/SD is easy to understand and it focuses on a well-defined system boundary, whereas the JSD approach is too complex and does not have any graphical representation.
SA/SD combined is known as SAD, and it mainly focuses on the following 3 points:

1. System
2. Process
3. Technology

SA/SD involves 2 phases:


1. Analysis Phase: It uses Data Flow Diagram, Data Dictionary, State Transition diagram
and ER diagram.
2. Design Phase: It uses Structure Chart and Pseudo Code.

1. Analysis Phase:
Analysis Phase involves data flow diagram, data dictionary, state transition diagram, and
entity-relationship diagram.
1. Data Flow Diagram:
In the data flow diagram, the model describes how the data flows through the system. We can incorporate the Boolean operators AND and OR to link data flows when more than one data flow may be input to or output from a process.
For example, if we have to choose between two paths of a process we can add an OR operator, and if two data flows are both necessary for a process we can add an AND operator. The input of the process "check-order" needs the credit information AND the order information, whereas the output of the process would be a cash-order OR a good-credit-order.

2. Data Dictionary:
The content that is not described in the DFD is described in the data dictionary. It defines
the data store and relevant meaning. A physical data dictionary for data elements that flow
between processes, between entities, and between processes and entities may be
included. This would also include descriptions of data elements that flow external to the
data stores.
A logical data dictionary may also be included for each such data element. All system
names, whether they are names of entities, types, relations, attributes, or services, should
be entered in the dictionary.

3. State Transition Diagram:


A state transition diagram is similar to the dynamic model. It specifies how much time the function will take to execute and the data access triggered by events. It also describes all of the states that an object can have, the events under which an object changes state, the conditions that must be fulfilled before the transition will occur, and the activities undertaken during the life of an object.

4. ER Diagram:
The ER diagram specifies the relationships between data stores. It is basically used in database design and describes the relationships between different entities.

2. Design Phase:
Design Phase involves structure chart and pseudocode.
1. Structure Chart:
It is derived from the data flow diagram. A structure chart specifies how the DFD's processes are grouped into tasks and allocated to the CPU. The structure chart does not show the working and internal structure of the processes or modules and does not show the relationship between data or data flows. Similar to other SA/SD tools, it is time- and cost-independent and there is no error-checking technique associated with this tool.
The modules of a structure chart are arranged arbitrarily and any process from a DFD can be chosen as the central transform depending on the analyst's own perception. The structure chart is difficult to amend, verify, maintain, and check for completeness and consistency.

2. Pseudo Code:
Pseudo code describes the intended implementation of the system. It is an informal way of writing a program that doesn't require any specific programming language or technology.

Data flow diagram:

A Data Flow Diagram (DFD) is a traditional visual representation of the information flows within a
system. A neat and clear DFD can depict the right amount of the system requirement graphically. It can
be manual, automated, or a combination of both.

It shows how data enters and leaves the system, what changes the information, and where data is
stored.

The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a communication tool between a system analyst and any person who plays a part in the system, and it acts as a starting point for redesigning a system. The DFD is also called a data flow graph or bubble chart.

The following observations about DFDs are essential:


1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order of events; arrows in a DFD represent flowing data. A DFD does not involve any order of events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress that urge! A diamond-shaped box is used in flow charts to represent decision points with multiple exit paths of which only one is taken. This implies an ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error handling until the end of the analysis.

Standard symbols for DFDs are derived from electric circuit diagram analysis and are described below:
Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.

Data Flow: A curved line shows the flow of data into or out of a process or data store.

Data Store: A set of parallel lines shows a place for the collection of data items. A data store indicates
that the data is stored which can be used at a later stage or by the other processes in a different order.
The data store can have an element or group of elements.

Source or Sink: Source or Sink is an external entity and acts as a source of system inputs or sink of
system outputs.

Extending DFD technique to real life systems :


Levels in Data Flow Diagrams (DFD)
The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs may be partitioned into levels that represent increasing information flow and functional detail. Levels in a DFD are numbered 0, 1, 2 or beyond. Here, we will see primarily three levels in the data flow diagram, which are: 0-level DFD, 1-level DFD, and 2-level DFD.

0-level DFD

It is also known as the fundamental system model or context diagram. It represents the entire software requirement as a single bubble with input and output data denoted by incoming and outgoing arrows. Then the system is decomposed and described as a DFD with multiple bubbles. Parts of the system represented by each of these bubbles are then decomposed and documented as more and more detailed DFDs. This process may be repeated at as many levels as necessary until the program at hand is well understood. It is essential to preserve the number of inputs and outputs between levels; this concept is called levelling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one output y, then the expanded DFD that represents "A" should have exactly two external inputs and one external output, as shown in fig:

The Level-0 DFD, also called context diagram of the result management system is shown in fig. As the
bubbles are decomposed into less and less abstract bubbles, the corresponding data flow may also be
needed to be decomposed.

1-level DFD

In 1-level DFD, a context diagram is decomposed into multiple bubbles/processes. In this level, we
highlight the main objectives of the system and breakdown the high-level process of 0-level DFD into
subprocesses.
2-Level DFD

2-level DFD goes one process deeper into parts of 1-level DFD. It can be used to project or record the
specific/necessary detail about the system's functioning.
Basic Object oriented concepts:
In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). The state is distributed among the objects, and each object handles its own state data. For example, in a Library Automation Software, each library representative may be a separate object with its own data and functions to operate on that data. The functions defined for one object cannot refer to or change the data of other objects. Objects have their own internal data which represents their state. Similar objects form a class; in other words, each object is a member of some class. Classes may inherit features from the superclass.

The different terms related to object design are:


1. Objects: All entities involved in the solution design are known as objects. For example, person,
banks, company, and users are considered as objects. Every entity has some attributes associated
with it and has some methods to perform on the attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a class. A
class defines all the attributes, which an object can have and methods, which represents the
functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity of the target object, the name of the requested operation, and any other information needed to perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and
operations are linked to a single unit. Encapsulation not only bundles essential information of an
object together but also restricts access to the data and methods from the outside world.
6. Inheritance: OOD allows similar classes to be stacked up in a hierarchical manner, where the lower or sub-classes can import, implement, and re-use allowed variables and functions from their immediate superclasses. This property of OOD is called inheritance. This makes it easier to define a specific class and to create generalized classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism by which methods performing similar tasks but varying in arguments can be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for different types. Depending upon how the service is invoked, the respective portion of the code gets executed.
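The following small Java example is an illustrative sketch (not part of the original notes) that touches several of the terms above: class, object, encapsulation, inheritance, polymorphism, and message passing. The class and method names are hypothetical.

class Account                                        // class: a generalized description of an object
{
    private double balance;                          // encapsulation: data is private

    void deposit(double amount) { balance += amount; }        // data changed only through methods
    double getBalance()         { return balance; }

    double interest()           { return getBalance() * 0.04; }   // default behaviour
}

class SavingsAccount extends Account                 // inheritance: reuses Account's features
{
    @Override
    double interest()           { return getBalance() * 0.06; }   // polymorphism: same message, different behaviour
}

public class OoDemo
{
    public static void main(String[] args)
    {
        Account acc = new SavingsAccount();          // object: an instance of a class
        acc.deposit(1000.0);                         // message passing: invoking a method
        System.out.println(acc.interest());          // prints 60.0 because the subclass method runs
    }
}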

UML Diagrams:
Unified Modeling Language (UML) is a general purpose modelling language. The main aim
of UML is to define a standard way to visualize the way a system has been designed. It is
quite similar to blueprints used in other fields of engineering.
UML is not a programming language, it is rather a visual language. We use UML diagrams
to portray the behavior and structure of a system. UML helps software engineers,
businessmen and system architects with modelling, design and analysis. The Object Management Group (OMG) adopted the Unified Modeling Language as a standard in 1997, and it has been managed by OMG ever since. The International Organization for Standardization (ISO) published UML as an approved standard in 2005. UML has been revised over the years and is reviewed periodically.
Do we really need UML?
 Complex applications need collaboration and planning from multiple teams and hence
require a clear and concise way to communicate amongst them.
 Businessmen do not understand code. So UML becomes essential for communicating the essential requirements, functionalities and processes of the system to non-programmers.
 A lot of time is saved down the line when teams are able to visualize processes, user
interactions and static structure of the system.
UML is linked with object oriented design and analysis. UML makes the use of elements and
forms associations between them to form diagrams. Diagrams in UML can be broadly
classified as:
1. Structural Diagrams – Capture static aspects or structure of a system. Structural
Diagrams include: Component Diagrams, Object Diagrams, Class Diagrams and
Deployment Diagrams.
2. Behavior Diagrams – Capture dynamic aspects or behavior of the system. Behavior
diagrams include: Use Case Diagrams, State Diagrams, Activity Diagrams and Interaction
Diagrams.
The image below shows the hierarchy of diagrams according to UML 2.2

Object Oriented Concepts Used in UML –

1. Class – A class defines the blue print i.e. structure and functions of an object.
2. Objects – Objects help us to decompose large systems and help us to modularize our
system. Modularity helps to divide our system into understandable components so that we
can build our system piece by piece. An object is the fundamental unit (building block) of a
system which is used to depict an entity.
3. Inheritance – Inheritance is a mechanism by which child classes inherit the properties of
their parent classes.
4. Abstraction – Mechanism by which implementation details are hidden from user.
5. Encapsulation – Binding data together and protecting it from the outer world is referred to
as encapsulation.
6. Polymorphism – Mechanism by which functions or entities are able to exist in different
forms.
Additions in UML 2.0 –
 Software development methodologies like agile have been incorporated and scope of
original UML specification has been broadened.
 Originally UML specified 9 diagrams. UML 2.x has increased the number of diagrams from
9 to 13. The four diagrams that were added are : timing diagram, communication diagram,
interaction overview diagram and composite structure diagram. UML 2.x renamed
statechart diagrams to state machine diagrams.
 UML 2.x added the ability to decompose software system into components and sub-
components.

Structural UML Diagrams –

1. Class Diagram – The most widely used UML diagram is the class diagram. It is the building block of all object oriented software systems. We use class diagrams to depict the static structure of a system by showing the system's classes, their methods and attributes. Class diagrams also help us identify relationships between different classes or objects.
2. Composite Structure Diagram – We use composite structure diagrams to represent the
internal structure of a class and its interaction points with other parts of the system. A
composite structure diagram represents relationship between parts and their configuration
which determine how the classifier (class, a component, or a deployment node) behaves.
They represent internal structure of a structured classifier making the use of parts, ports,
and connectors. We can also model collaborations using composite structure diagrams.
They are similar to class diagrams except they represent individual parts in detail as
compared to the entire class.
3. Object Diagram – An Object Diagram can be referred to as a screenshot of the instances
in a system and the relationship that exists between them. Since object diagrams depict
behaviour when objects have been instantiated, we are able to study the behaviour of the
system at a particular instant. An object diagram is similar to a class diagram except it
shows the instances of classes in the system. We depict actual classifiers and their
relationships making the use of class diagrams. On the other hand, an Object Diagram
represents specific instances of classes and relationships between them at a point of time.
4. Component Diagram – Component diagrams are used to represent how the physical components in a system have been organized. We use them for modelling implementation
details. Component Diagrams depict the structural relationship between software system
elements and help us in understanding if functional requirements have been covered by
planned development. Component Diagrams become essential to use when we design
and build complex systems. Interfaces are used by components of the system to
communicate with each other.
5. Deployment Diagram – Deployment Diagrams are used to represent the system hardware and its software. They tell us what hardware components exist and what software components run on them. We illustrate system architecture as the distribution of software artifacts over distributed targets. An artifact is the information that is generated by system software.
They are primarily used when a software is being used, distributed or deployed over
multiple machines with different configurations.
6. Package Diagram – We use Package Diagrams to depict how packages and their
elements have been organized. A package diagram simply shows us the dependencies
between different packages and internal composition of packages. Packages help us to
organise UML diagrams into meaningful groups and make the diagram easy to
understand. They are primarily used to organise class and use case diagrams.

Behavior Diagrams –

1. State Machine Diagrams – A state diagram is used to represent the condition of the
system or part of the system at finite instances of time. It’s a behavioral diagram and it
represents the behavior using finite state transitions. State diagrams are also referred to as state machines and state-chart diagrams. These terms are often used interchangeably. So, simply put, a state diagram is used to model the dynamic behavior of a class in response to time and changing external stimuli.
2. Activity Diagrams – We use Activity Diagrams to illustrate the flow of control in a system.
We can also use an activity diagram to refer to the steps involved in the execution of a use
case. We model sequential and concurrent activities using activity diagrams. So, we
basically depict workflows visually using an activity diagram. An activity diagram focuses on the condition of flow and the sequence in which it happens. We describe or depict what causes a particular event using an activity diagram.
3. Use Case Diagrams – Use Case Diagrams are used to depict the functionality of a system
or a part of a system. They are widely used to illustrate the functional requirements of the
system and its interaction with external agents(actors). A use case is basically a diagram
representing different scenarios where the system can be used. A use case diagram gives
us a high level view of what the system or a part of the system does without going into
implementation details.
4. Sequence Diagram – A sequence diagram simply depicts interaction between objects in a
sequential order, i.e. the order in which these interactions take place. We can also use the
terms event diagrams or event scenarios to refer to a sequence diagram. Sequence
diagrams describe how and in what order the objects in a system function. These diagrams
are widely used by businessmen and software developers to document and understand
requirements for new and existing systems.
5. Communication Diagram – A Communication Diagram(known as Collaboration Diagram
in UML 1.x) is used to show sequenced messages exchanged between objects. A
communication diagram focuses primarily on objects and their relationships. We can
represent similar information using sequence diagrams; however, communication diagrams
represent objects and links in a free form.
6. Timing Diagram – Timing diagrams are a special form of sequence diagrams which are
used to depict the behavior of objects over a time frame. We use them to show time and
duration constraints which govern changes in states and behavior of objects.
7. Interaction Overview Diagram – An Interaction Overview Diagram models a sequence of
actions and helps us simplify complex interactions into simpler occurrences. It is a mixture
of activity and sequence diagrams.
Structured design:

Detailed design, Design review, Characteristics of a good user interface, User Guidance and Online Help,
Mode-based vs Mode-less Interface, Types of user interfaces, Component-based GUI development, User
interface design methodology: GUI design methodology.
Detailed design:
The design phase of software development deals with transforming the customer requirements as
described in the SRS documents into a form implementable using a programming language.
The software design process can be divided into the following three levels of phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design

Interface Design:
Interface design is the specification of the interaction between a system and its environment. This phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e., during interface design the internals of the system are completely ignored and the system is treated as a black box. Attention is focused on the dialogue between the target system and the users, devices, and other systems with which it interacts. The design problem statement produced during the problem analysis step should identify the people, other systems, and devices which are collectively called agents.
Interface design should include the following details:
 Precise description of events in the environment, or messages from agents to which the system must
respond.
 Precise description of the events or messages that the system must produce.
 Specification on the data, and the formats of the data coming into and going out of the system.
 Specification of the ordering and timing relationships between incoming events or messages, and
outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their responsibilities,
properties, interfaces, and the relationships and interactions between them. In architectural design, the
overall structure of the system is chosen, but the internal details of major components are ignored.
Issues in architectural design includes:
 Gross decomposition of the systems into major components.
 Allocation of functional responsibilities to components.
 Component Interfaces
 Component scaling and performance properties, resource consumption properties, reliability
properties, and so forth.
 Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the
internals of the major components is ignored until the last phase of the design.
Detailed Design:
Design is the specification of the internal elements of all major system components, their properties,
relationships, processing, and often their algorithms and the data structures.
The detailed design may include:
 Decomposition of major system components into program units.
 Allocation of functional responsibilities to units.
 User interfaces
 Unit states and state changes
 Data and control interaction between units
 Data packaging and implementation, including issues of scope and visibility of program elements
 Algorithms and data structures
Design Review:
Software design reviews are a systematic, comprehensive, and well-documented inspection of
design that aims to check whether the specified design requirements are adequate and the design
meets all the specified requirements. In addition, they also help in identifying the problems (if any) in
the design process. IEEE defines software design review as ‘a formal meeting at which a system’s
preliminary or detailed design is presented to the user, customer, or other interested parties for
comment and approval.’ These reviews are held at the end of the design phase to resolve issues (if
any) related to software design decisions, that is, architectural design and detailed design (component-
level and interface design) of the entire software or a part of it (such as a database).
The elements that should be examined while reviewing the software design include requirements and design specifications, verification and validation results produced from each phase of the SDLC, testing and development plans, and all other project-related documents and activities. Note that design reviews are considered the best mechanism to ensure product quality and to reduce the potential risk of not meeting schedules and requirements.
The following aspects of software design reviews are covered below:

 Types of Software Design Reviews


 Software Design Review Process
 Evaluating Software Design Reviews
Types of Software Design Reviews

Generally, the review process is carried out in three steps, which correspond to the steps involved in the software design process. First, a preliminary design review is conducted with the customers and
users to ensure that the conceptual design (which gives an idea to the user of what the system will
look like) satisfies their requirements. Next, a critical design review is conducted with analysts and
other developers to check the technical design (which is used by the developers to specify how the
system will work) in order to critically evaluate technical merits of the design. Next, a program design
review is conducted with the programmers in order to get feedback before the design is implemented.

Preliminary Design Review


During preliminary design review, the high-level architectural design is reviewed to determine whether
the design meets all the stated requirements as well as the non-functional requirements. This review is
conducted to serve the following purposes.

 To ensure that the software requirements are reflected in the software architecture
 To specify whether effective modularity is achieved
 To define interfaces for modules and external system elements
 To ensure that the data structure is consistent with the information domain
 To ensure that maintainability has been considered
 To assess the quality factors.

In this review, it is verified that the proposed design includes the required hardware and interfaces with
the other parts of the computer-based system. To conduct a preliminary design review, a review team
is formed where each member acts as an independent person authorized to make necessary
comments and decisions. This review team comprises the following individuals.

 Customers: Responsible for defining the software’s requirements.


 Moderator: Presides over the review. The moderator encourages discussions, maintains the main
objective throughout the review, settles disputes and gives unbiased observations. In short, he is
responsible for the smooth functioning of the review.
 Secretary: A silent observer who does not take part in the review process but records the main points of
the review.
 System designers: Includes people involved in designing not only the software but also the
entire computer-based system.
 Other stakeholders (developers) not involved in the project: Provide an outsider’s idea on the
proposed design. This is beneficial as they can infuse ‘fresh ideas’, address issues of correctness,
consistency, and good design practice.

If errors are noted in the review process then the faults are assessed on the basis of their severity.
That is, if there is a minor fault it is resolved by the review team. However, if there is a major fault, the
review team may agree to revise the proposed conceptual design. Note that preliminary design review
is again conducted to assess the effectiveness of the revised (new) design.
Critical Design Review
Once the preliminary design review is successfully completed and the customer(s) is satisfied with the
proposed design, a critical design review is conducted. This review is conducted to serve the following
purposes.

 To assure that there are no defects in the technical and conceptual designs
 To verify that the design being reviewed satisfies the design requirements established in the architectural
design specifications
 To assess the functionality and maturity of the design critically
 To justify the design to the outsiders so that the technical design is more clear, effective and easy to
understand

In this review, diagrams and data (sometimes both) are used to evaluate alternative design strategies
and how and why the major design decisions have been taken. Just like for the preliminary design
review, a review team is formed to carry out a critical design review. In addition to the team members
involved in the preliminary design review, this team comprises the following individuals.

 System tester: Understands the technical issues of design and compare them with the design created for
similar projects.
 Analyst: Responsible for writing system documentation.
 Program designer for this project: Understands the design in order to derive detailed program designs.

Similar to a preliminary design review, if discrepancies are noted in the critical design review process
the faults are assessed on the basis of their severity. A minor fault is resolved by the review team. If
there is a major fault, the review team may agree to revise the proposed technical design. Note that a
critical design review is conducted again to assess the effectiveness of the revised (new) design.

Note: Critical design review team does not involve customers.

Program Design Review


Once the critical design review is successfully completed, a program design review is conducted to
obtain feedback before the implementation (coding) of the design. This review is conducted to serve
the following purposes.

 To assure the feasibility of the detailed design


 To assure that the interface is consistent with the architectural design
 To specify whether the design is compatible to implementation language
 To ensure that structured programming constructs are used throughout
 To ensure that the implementation team is able to understand the proposed design.
A review team comprising system designers, a system tester, moderator, secretary and analyst is
formed to conduct the program design review. The review team also includes program designers and
developers. The program designers, after completing the program designs, present their plans to a
team of other designers, analysts and programmers for comments and suggestions. Note that a
successful program design review presents considerations relating to coding plans before coding
begins.
Software Design Review Process

Design reviews are considered important as in these reviews the product is logically viewed as the
collection of various entities/components and use-cases. These reviews are conducted at all software
design levels and cover all parts of the software units. Generally, the review process comprises three
criteria, as listed below.

 Entry criteria: Software design is ready for review.


 Activities: This criterion involves the following steps.
 Select the members for the software design review team, assign them their roles, and prepare
schedules for the review.
 Distribute the software design review package to all the reviewing participants.
 Participants check the completeness and conformance of the design to the requirements in addition
to the efficiency of the design. They also check the software for defects and, if defects are found,
they discuss those defects with one another. The recorder documents the defects along with the
suggested action items and recommendations.
 The design team rectifies the defects (if any) in design and makes the required changes in the
appropriate design review material.
 The software development manager obtains the approval of the software design from the software
project manager.

 Exit criteria: The software design is approved.

Evaluating Software Design Reviews

The software design review process is beneficial for everyone as the faults can be detected at an early
stage, thereby reducing the cost of detecting errors and reducing the likelihood of missing a critical
issue. Every review team member examines the integrity of the design and not the persons involved in
it (that is, the designers), which in turn emphasizes that the common objective of developing a highly
rated design is achieved. To check the effectiveness of the design, the review team members should
address the following questions.

 Is the solution achieved with the developed design?


 Is the design reusable?
 Is the design well structured and easy to understand?
 Is the design compatible with other platforms?
 Is it easy to modify or enlarge the design?
 Is the design well documented?
 Does the design use suitable techniques in order to handle faults and prevent failures?
 Does the design reuse components from other projects, wherever necessary?

In addition to these questions, if the proposed system is developed using a phased development (like
waterfall and incremental model), then the phases should be interfaced sufficiently so that an easy
transition can take place from one phase to the other.

Characteristics of a good user interface:


User Interface Design :
The interaction of the user with the software takes place through the user interface of the software. There is no software that does not have a user interface. As it deals with the user's interaction with the software, it is a very important part of the development of any software. In many applications, 50% of the overall development effort is spent on the user interface part.

User Interface Design Process

Characteristics of a Good User Interface Design :


Speed of learning :
A good user interface is easy to learn. The speed of learning is hampered by complex syntax and semantics of the command issue procedures. A good user interface should not require its users to memorise commands, nor should it require users to remember information from one screen while performing a task on another screen.
The following two methods are crucial to enhance the speed of learning :
 Use of metaphors and intuitive command names –
A metaphor is an abstraction of a real-life item that is used in the user interface. If the text editor of a user interface uses the same concepts or tools that are used for editing a physical document, such as cutting lines and paragraphs and pasting text at different places, then it can easily be related to by the user.
Another popular metaphor is the shopping cart. A shopping cart is used by a customer inside a supermarket to collect various items while shopping. If a user interface uses the shopping cart metaphor for designing a situation in which a similar kind of selection is to be made by the customer, then the users can easily understand and learn to use the interface. Learning can also be improved by the use of intuitive command names and symbolic command issue procedures.

 Component-based interface –
It is easy for the user to understand a new interactive style of interface if it is very similar to the interfaces of other applications that are already familiar to the user. This is possible only if the different interactive user interfaces are developed using some standard interface components.
Speed of use :
The speed of use of a user interface is determined by the time and effort required to initiate and execute different commands. It is sometimes referred to as productivity support, i.e., how quickly the user can perform his task. To initiate and execute different commands, the user effort and time required must be minimal. This can be achieved only by using a properly designed user interface.
Speed of recall:
After using the interface many times, the speed with which a user can recall any command increases automatically. The speed with which users can recall the command issue procedures should be maximised. There are many ways to improve recall speed, such as using metaphors, symbolic command issue procedures, and intuitive command names.
Error prevention :
As we know, prevention is better than cure; rather than correcting mistakes, it is more useful to prevent them. A good user interface should minimise the scope for committing mistakes while using the various commands. The error rate can be determined by monitoring the errors committed by typical users. This can be done by instrumenting the user interface code with monitoring code which records the frequency and types of errors and later displays the details of the errors committed by users.
Aesthetic and attractive :
As we all know, attractive things gain more attention. Thus, a good user interface should be attractive to use. This is one reason why graphics-based user interfaces are in greater demand than text-based interfaces.
Feedback :
Providing feedback on the user's actions helps the user understand what the system is doing. If any request of the user takes more than a few seconds to process, the user starts to wonder what is happening; if proper feedback is provided, the user knows the status of his actions. Thus, a good user interface must provide feedback about ongoing processing.
Error recovery :
Errors are very common; anyone can commit an error, even experts. Therefore, it is also the responsibility of a good user interface to offer an undo facility so that users can recover from their errors while using the interface. If mistakes cannot be recovered from, users feel irritated, helpless, and frustrated.
User guidance and online help :
A good user interface is one which also offers help to its users when they forget something, such as a command, or when they are unaware of some features of the software. This can be accomplished by providing good user guidance and online help to users when they need it.
User guidance and online Help:
Putting together a user's guide can become a problem that no one wants to deal with. Freelance technical author David
Farbey asks who's going to write the user's guide and considers why it's important to have one.
There comes a time when every software development manager has to find an answer to the question 'who's going
to write the user's guide?' In major corporations, where there is a well-established technical publications
department, the responsibility for the user's guide would have been allocated early on, as soon as the design
specifications were approved for development.

However in many small and medium-sized enterprises the user's guide question often becomes a problem that
everyone tries to ignore.

In the last dozen years as a technical writer in the software industry I have heard a whole range of excuses as to
why the user's guide question is not important. None of the arguments put forward stand up to close scrutiny.

We don't need a user's guide as our GUI is so intuitive


Many software developers truly believe that their product, and particularly their graphical user interface (GUI), is
straightforward and easy to use. They assure me that the GUI adheres completely to all the industry standards for
the operating system and that no one could have a problem with it.

However it is often the case that something obvious to a developer is less obvious to the user. GUI metaphors that
are second nature to an experienced developer may be totally strange for a user for whom this may be one of the
first applications they ever use.

Even an experienced user can have problems when a new version differs significantly from an older version of the
same product, or when a newly-launched application differs from a comparable application from a rival vendor.
Users need some reference information to help them, and a user's guide is the natural place for them to look for it.

It's been through QA testing so it must be easy to understand


In my experience QA testing tends to concentrate on whether code executes properly and without errors. Manual
or automated testing works from a script that specifies, for example, that when the user clicks button A the
application displays dialog B. As long as this happens the software passes the test and is declared bug-free.

QA testing doesn't look at whether a user would understand that in order to display dialog B they needed to click
button A, nor whether the user would understand the importance of completing the fields in dialog A in the first
place. In the absence of usability testing (even rarer, unfortunately, than user documentation) a well-written user's
guide can explain what the user can see on the screen, and map the screen buttons and dialogs to the user's tasks.

All our software engineers know how to write


I don't doubt that many software engineers do know how to write about the applications they have developed.
After all they are clearly the experts when it comes to the code they have written. It is less likely that the software
developers are experts when it comes to understanding the user's expectations from the software product, and
therefore they are unlikely to write what the user wants to read.

The developer's view of software is, naturally, code-centric, while users see a software product as just another
tool, like a calculator, or a pen and paper. The user needs to get their job done quickly and efficiently, and will use
the best tools they can find. They are not interested in the internal logic of the code behind the application nor in
the table structure of the underlying database, any more than DIY enthusiasts are interested in exactly how their
hammers are made.

We leave that sort of thing to our marketing department


Marketing departments are concerned with getting potential customers to part with their money. Everything they
write is written with that in mind. They don't need to explain exactly how a product works. If they are writing
about a technical product, they do make use of technical information.

They are likely to select the most interesting and attractive snippets of technical information, and whatever jargon
they feel is most likely to make the product seem new and innovative. Once a customer has bought and paid for
your product, the marketing department is no longer interested in them. Quite rightly, they are busy going after
the next customer.

Even if the technical marketing literature does describe your product's features well, there is no guarantee that the
people who are going to use the products even saw the marketing brochures. It is very likely that they, the users,
aren't the people who are authorized to make purchasing decisions.

Our users are sophisticated and don't need instructions


When you are selling to early adopters it is possible that your users are technophiles like you. That might sound
like good news, but it might also mean that if they can't understand how something works they will try and 'look
under the hood' themselves, perhaps even trying to reverse engineer your code. If the answers were in your user's
guide they wouldn’t bother with 'fiddling around'.

In any case commercial viability requires that you sell beyond this initial market segment and into a more general
one, where by definition the average user is going to be less sophisticated. These are the people for whom a well-
written user's guide can be really valuable.

If our customers have a problem they can phone the helpdesk


Do you really want your helpdesk to be clogged with calls about simple and straightforward operations? That
makes your response times look very poor. If you are in a company that generates revenue from customer support
lines, do you think that your customers will be happy wasting their precious support minutes asking about file
formats, directory paths or any other basic information they could have read for themselves?

Your support staff themselves would benefit from having a clear and comprehensive User's Guide in front of
them, and from knowing that their customers had copies of the Guide as well. The customer support log would
then be able to show which topics were not yet adequately covered in the user's guide, so that the next edition
would be even better.

We'd love to have a user's guide but we just don't have a budget
It is difficult for companies, particularly start-ups and smaller enterprises, to have the same level of investment in
peripheral activities as major corporations. It is particularly difficult to fund activities that appear to create no
value for the company. But a good User's Guide is really an essential part of a software product, just as much as
the GUI, the installation package or the code itself.

Printing costs can be high, but a User's Guide can just as easily be delivered as a PDF file or even better, as an
online help file. The minimum cost of developing a User's Guide is the cost of the technical writer, who could
easily be a contractor if your company is concerned about headcount.
The costs involved in providing a User's Guide need to be set against the benefits. Your products become easier to
learn and to use, creating the kind of grass-roots customer loyalty that is difficult to buy. Frivolous calls to
customer support are reduced, giving your staff adequate time to deal quickly with more significant problems.

Your company's reputation is also enhanced when you provide good user documentation, leading to better market
recognition and enhanced sales. And hiring a technical writer means that you ensure that your programming staff
can devote all their time to what they do best - writing code.

Mode-based vs Mode-less Interface:


UNIT – IV
Coding Standards and Guidelines

Different modules specified in the design document are coded in the Coding phase according to the
module specification. The main goal of the coding phase is to code from the design document prepared
after the design phase through a high-level language and then to unit test this code.
Good software development organizations want their programmers to adhere to some well-defined and standard style of coding called coding standards. They usually make their own coding standards and guidelines depending on what suits their organization best and based on the types of software they develop. It is very important for the programmers to maintain the coding standards, otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
 A coding standard gives a uniform appearance to the codes written by different engineers.
 It improves readability, and maintainability of the code and it reduces complexity also.
 It helps in code reuse and helps to detect error easily.
 It promotes sound programming practices and increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which cannot.

2. Standard headers for different modules:


For better understanding and maintenance of the code, the header of different modules should follow some standard format and information. The header format must contain the following items, which are used in various companies (a sketch of such a header is given after this list):
 Name of the module
 Date of module creation
 Author of the module
 Modification history
 Synopsis of the module about what the module does
 Different functions supported in the module along with their input output parameters
 Global variables accessed or modified by the module
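As a sketch, a module header following the above format might look like the following Java comment block. All field values (module name, dates, author, function names) are hypothetical.

/*
 * Name of the module   : PayrollCalculator
 * Date of creation     : 12-Jan-2023
 * Author               : A. Developer
 * Modification history : 15-Feb-2023 - added overtime handling
 * Synopsis             : Computes the monthly pay for an employee record.
 * Functions supported  : computeGrossPay(employeeId) returns double
 *                        computeDeductions(employeeId) returns double
 * Global variables     : none accessed or modified
 */
public class PayrollCalculator
{
    // ... module implementation goes here ...
}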

3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
 Meaningful and understandable variable names help anyone to understand the reason for using them.
 Local variables should be named using camel case lettering starting with small letter
(e.g. localData) whereas Global variables names should start with a capital letter
(e.g. GlobalData). Constant names should be formed using capital letters only
(e.g. CONSDATA).
 It is better to avoid the use of digits in variable names.
 The names of the function should be written in camel case starting with small letters.
 The name of the function must describe the reason of using the function clearly and
briefly.

4. Indentation:
Proper indentation is very important to increase the readability of the code. For making the code readable, programmers should use white space properly. Some of the spacing conventions are given below (an illustrative example follows this list):
 There must be a space after giving a comma between two function arguments.
 Each nested block should be properly indented and spaced.
 Proper Indentation should be there at the beginning and at the end of each block in
the program.
 All braces should start from a new line and the code following the end of braces also
start from a new line.
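The short Java sketch below illustrates several of the naming and indentation rules listed above: camel-case local variable and function names, a capitalised global name, an all-capitals constant, a space after commas, and braces starting on new lines. The class and variable names are invented for illustration.

public class BillingExample
{
    static final int MAXITEMS = 100;              // constant name formed with capital letters only
    static double GlobalDiscount = 0.05;          // global (class-level) name starts with a capital letter

    // function name in camel case, starting with a small letter
    static double computeTotal(double unitPrice, int quantity)
    {
        double localTotal = unitPrice * quantity; // local variable named in camel case
        if (quantity > MAXITEMS)
        {                                         // nested block indented, braces on new lines
            localTotal = localTotal * (1.0 - GlobalDiscount);
        }
        return localTotal;
    }
}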

5. Error return values and exception handling conventions:


All functions that encounter an error condition should return either a 0 or a 1 to simplify debugging, as illustrated below.
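A minimal Java sketch of this convention follows; the method name and its logic are hypothetical.

class RecordStore
{
    // Returns 0 on success and 1 on an error condition, following the convention above.
    static int saveRecord(String record)
    {
        if (record == null || record.isEmpty())
        {
            return 1;      // error: nothing to save
        }
        // ... write the record to storage (omitted for brevity) ...
        return 0;          // success
    }
}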
On the other hand, coding guidelines give some general suggestions regarding the coding style to be followed for better understandability and readability of the code. Some of the coding guidelines are given below:

6. Avoid using a coding style that is too difficult to understand:


Code should be easily understandable. The complex code makes maintenance and debugging
difficult and expensive.

7. Avoid using an identifier for multiple purposes:


Each variable should be given a descriptive and meaningful name indicating the reason behind
using it. This is not possible if an identifier is used for multiple purposes and thus it can lead to
confusion to the reader. Moreover, it leads to more difficulty during future enhancements.

8. Code should be well documented:


The code should be properly commented for understanding easily. Comments regarding the
statements increase the understandability of the code.

9. Length of functions should not be very large:


Lengthy functions are very difficult to understand. That’s why functions should be small
enough to carry out small work and lengthy functions should be broken into small ones for
completing small tasks.

10. Try not to use GOTO statement:


GOTO statement makes the program unstructured, thus it reduces the understandability of the
program and also debugging becomes difficult.
Advantages of Coding Guidelines:
 Coding guidelines increase the efficiency of the software and reduces the development time.
 Coding guidelines help in detecting errors in the early phases, so it helps to reduce the extra
cost incurred by the software project.
 If coding guidelines are maintained properly, then the software code increases readability and
understandability thus it reduces the complexity of the code.
 It reduces the hidden cost for developing the software.

5 Practices For Code Review

The software development process refers to implementing the design and operations of software, and ultimately delivers the product. Several questions arise after this process: Is the code secure? Is it well-designed? Is the code free of errors? As per surveys, on average programmers make a mistake once in every five lines of code. To rectify these bugs, code review comes into the picture. Reviewing code typically means checking whether the code
passes the test cases, has bugs, repeated lines, and various possible errors which could reduce the
efficiency and quality of the software. Reviews can be good and bad as well. Good ones lead to more
usage, growth, and popularity of the software whereas bad ones degrade the quality of software.

We will discuss the 5 steps of a complete code review. So let's get started.

1. Split the Code into Sections

For web development, several files and folders are incorporated. All the files contain thousands of lines
of code. When you start reviewing them, this might look dense and confusing. So, the first step of code
review must be splitting the code into sections. This gives a clear understanding of the code flow.
Suppose there are 9 folders and each folder contains 5 files. Divide them into sections. Set a goal to
review at least 5 files of the first folder in n days, and once you complete reviewing it, go for the
next folder. When you assign yourself a task for a set time like this, you'll get sufficient time to review,
and thus you'll not feel bored or disinterested.

2. Ask Fellow Developers to Review

This is the second step of the code review process. You must seek advice or help from fellow developers
as everyone’s contribution is equally important. Experienced ones can identify the mistakes within a
second and rectify them, but young minds come up with simpler ways to implement a task. So,
ask your juniors as well, since they have the curiosity to learn more. To make the code perfect, they find
other approaches, which benefits in two ways –
a) They’ll get deeper knowledge.
b) Solution can be more precise.
The below quote states the best outcome of teamwork. Thus, teamwork improves the performance of
software and fosters a positive environment.
“Alone, we can do so little. Together, we can do so much”
– Helen Keller

3. Basic Principles: Naming Conventions, Usage of libraries, Responsiveness


There are some principles and standards to follow while writing code. These have to be followed to
enhance effectiveness and productivity. Make a note of those principles and check one by one
whether they're followed or not. The points below describe some of the standards every developer should
follow. You can also check for more.
Naming Conventions: Use standard names for variables to assign values. The name should be
meaningful, pronounceable, sound positive. Before naming, always keep in mind that whenever anyone
reads it, it should be understandable.
Usage of Libraries: A library is a generalized collection of code that acts as a resource for programs
under development. To avoid rewriting lines of code, we import (call and use) methods from libraries
and use them in our code to reduce complexity.
Responsiveness: It refers to dynamic changes on the website. Do check the responsiveness of the
website, i.e., whether it works on all devices like mobile phones, tablets, laptops, etc. This also helps
websites rank higher in search engine results.
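A small, hedged illustration of the first two principles in Python (the variable names and the use of the standard statistics module are only examples, not a prescribed standard):

import statistics

# Poor naming: the intent of 'a' and 'b' is unclear to a reviewer.
a = [120, 180, 95]
b = sum(a) / len(a)

# Better: meaningful names plus reuse of a standard library instead of hand-written code.
response_times_ms = [120, 180, 95]
average_response_ms = statistics.mean(response_times_ms)
print(average_response_ms)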

4. Check For the Reusability of Code

A function is a reusable block of code: a piece of code that does a single task and can be called
whenever required. Avoid repetition of code. If you have to repeat code for different tasks again
and again, you can use functions to reduce the repetition. This practice keeps the codebase maintainable.
For example, suppose you're building a website with several components in which basic functionalities
are defined. If a block of code is being repeated many times, move that block of code into a function in a
file that can be invoked (reused) wherever and whenever required, as shown below. This also reduces the
complexity and length of the codebase.
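A minimal sketch of the idea in Python, assuming a hypothetical site that formats prices in several components:

# Without a shared function, the same formatting logic would be copied
# into the cart, the invoice, the order history, and so on.
def format_price(amount, currency="Rs."):
    # Reusable block kept in one shared module and invoked wherever required.
    return f"{currency} {amount:.2f}"

cart_total = format_price(1999.5)
invoice_total = format_price(249.0)
print(cart_total, invoice_total)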
5. Check Test Cases and Re-Build
This is the final step of the code review process. When you have rectified all the possible errors found
while reviewing, check whether all the test cases pass and all the conditions are satisfied. There are
various tests such as functionality, usability, interface, performance, and security testing.
 Functionality: These tests include the working of external and internal links, APIs, and forms.
 Usability: Checking design, menus, buttons, or links to different pages should be easily visible
and consistent on all web pages.
 Interface: It shows how interactive the website is.
 Performance: It shows the load time of a website, tests if there’s a crash in a website due to
peak load.
 Security: Test unauthorized access to the website.
Once all the test cases have passed, re-build the entire code. After this process is done, take a look over
the website. Examine all the working elements like buttons, arrow keys, etc.
Go For a Demo Presentation
When all the steps of the Code Review process stated above are done, go for a demo presentation.
Schedule a flexible meeting and give a presentation to the team demonstrating the working of the
software. Go through the operations of every part of a website. Tell them about the changes made.
Validate your points as to why these changes have been done. See if all requirements are fulfilled and
also the website doesn’t look too bulky. Make sure it is simple and at the complete working stage.
Things to avoid while reviewing code
1. Don’t take too many files at a time to review.
2. Don’t go for continuous reviewing, take breaks.
3. Avoid too many nested loops.
4. Avoid using too many variables.
5. No negative comments to anyone in a team.
6. Don’t make the website look too complex.
So by now you must have got the complete picture of the Code Review process. It is an integral
process in any modern development team's workflow. It provides a fresh set of eyes to identify bugs and
simple coding errors before your product gets to the next step or deployment, making the process of
getting the software to the customer more efficient. Before getting your prototype turned into a product,
do a proper code review or scrutiny to get the best version of it.
Overview Software Documentation

Software documentation is a written piece of text that often accompanies a software program.
It makes the life of all the members associated with the project easier. It may contain anything
from API documentation and build notes to help content. It is a very critical process in software
development and an integral part of any software development method. Moreover, software
practitioners are typically concerned with the value, degree of usage and quality of the actual
documentation during development and its maintenance throughout the whole process.
Motivated by the requirements of Novatel Inc., a world-leading company developing software in
support of global navigation satellite systems, and based on the results of earlier systematic
mapping studies, such efforts aim at a better understanding of the usage and the quality of
various technical documents throughout software development and their maintenance.
For example, before the development of any software product, requirements are documented in what is
called a Software Requirement Specification (SRS). Requirement gathering is considered a stage
of the Software Development Life Cycle (SDLC).
Another example can be a user manual that a user refers to for installing, using, and maintaining
the software application/product.
Types Of Software Documentation :
1. Requirement Documentation :
It is the description of how the software shall perform and which environment setup would be
appropriate to get the best out of it. These are generated while the software is under
development and are supplied to the tester groups too.
2. Architectural Documentation :
Architecture documentation is a special type of documentation that concerns the design. It
contains very little code and is more focused on the components of the system, their roles and
working. It also shows the data flows throughout the system.
3. Technical Documentation :
These contain the technical aspects of the software like APIs, algorithms, etc. It is prepared
mostly for the software developers.
4. End-user Documentation :
As the name suggests these are made for the end user. It contains support resources for the end
user.
Purpose of Documentation :
Due to the growing importance of software requirements, the process of determining them needs to be
effective in order to achieve the desired results. The determination of requirements is usually carried out
under certain regulations and guidelines that are central to attaining a given goal.
All this implies that software requirements are expected to change because of the ever-changing
technology in the world; the facts that software knowledge is obtained through development, that the
needs of users change, and that the environment transforms are inevitable.
Furthermore, software requirements ensure that there is verification and a testing process, and in
conjunction with prototyping and conferences there are focus groups and observations.
For a software engineer, reliable documentation is often a must. The presence of documentation helps
keep track of all aspects of an application and improves the quality of the product. Its main focus areas
are development, maintenance and knowledge transfer to other developers. Productive documentation
makes information easily accessible, provides a limited number of user entry points, helps new users
learn quickly, simplifies the product and helps to cut down the cost.
Importance of software documentation :
For a programmer, reliable documentation is always a must. Its presence keeps track of all aspects of an
application and helps in keeping the software updated.
Advantage of software documentation :
 The presence of documentation helps in keeping track of all aspects of an application and
also improves the quality of the software product.
 The main focus areas are development, maintenance and knowledge transfer to other
developers.
 Helps development teams during development.
 Helps end-users in using the product.
 Improves the overall quality of the software product.
 It cuts down duplicative work.
 Makes the code easier to understand.
 Helps in establishing internal co-ordination in work.
Disadvantage of software documentation :
 Documenting code is time consuming.
 The software development process often takes place under time pressure, due to which many
times the documentation updates don't match the updated code.
 The documentation has no influence on the performance of an application.
 Documenting is not much fun; it is sometimes boring to a certain extent.
The agile methodology encourages engineering teams to always concentrate on delivering value to
their customers. This key principle should be considered in the process of producing software
documentation. Good documentation should be provided, whether it is a software specification
document for programmers and testers or a software manual for end users.

Software Testing | Basics

Software testing can be stated as the process of verifying and validating that software or application is
bug-free, meets the technical requirements as guided by its design and development, and meets the user
requirements effectively and efficiently with handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at finding
measures to improve the software in terms of efficiency, accuracy, and usability. It mainly aims at
measuring the specification, functionality, and performance of a software program or application.
Software testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that software correctly implements a specific
function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”

What are different types of software testing?


Software Testing can be broadly classified into two types:
1. Manual Testing: Manual testing includes testing software manually, i.e., without using any automated
tool or any script. In this type, the tester takes over the role of an end-user and tests the software to
identify any unexpected behavior or bug. There are different stages for manual testing such as unit
testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test software to ensure the completeness of testing.
Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.
2. Automation Testing: Automation testing, which is also known as Test Automation, is when the tester
writes scripts and uses another software to test the product. This process involves the automation of a
manual process. Automation Testing is used to re-run the test scenarios that were performed manually,
quickly, and repeatedly.
Apart from regression testing, automation testing is also used to test the application from a load,
performance, and stress point of view. It increases the test coverage, improves accuracy, and saves time
and money in comparison to manual testing.
What are the different techniques of Software Testing?
Software techniques can be majorly classified into two categories:
1. Black Box Testing: The technique of testing in which the tester doesn’t have access to the source code
of the software and is conducted at the software interface without concern with the internal logical
structure of the software is known as black-box testing.
2. White-Box Testing: The technique of testing in which the tester is aware of the internal workings of
the product, has access to its source code, and is conducted by making sure that all internal operations are
performed according to the specifications is known as white box testing.

Black Box Testing vs. White Box Testing:
 Black box testing: internal workings of an application are not required. White box testing:
knowledge of the internal workings is a must.
 Black box testing is also known as closed box/data-driven testing. White box testing is also
known as clear box/structural testing.
 Black box testing can be performed by end users, testers, and developers. White box testing is
normally done by testers and developers.
 In black box testing, testing can only be done by a trial and error method. In white box testing,
data domains and internal boundaries can be better tested.
What are different levels of software testing?
Software level testing can be majorly classified into 4 levels:
1. Unit Testing: A level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as
designed.
2. Integration Testing: A level of the software testing process where individual units are combined and
tested as a group. The purpose of this level of testing is to expose faults in the interaction between
integrated units.
3. System Testing: A level of the software testing process where a complete, integrated system/software
is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business requirements and assess
whether it is acceptable for delivery.

Note: Software testing is a very broad and vast topic and is considered to be an integral and very
important part of software development and hence should be given its due importance.

Black box testing

Black box testing is a type of software testing in which the internal structure of the software is not known
to the tester. The testing is done without internal knowledge of the product.
Black box testing can be done in following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example compilers and languages that can be represented by a
context-free grammar. In this, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so instead of
giving all of them separately we can group them together and test only one input of each group. The idea
is to partition the input domain of the system into a number of equivalence classes such that each
member of a class works in a similar way, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into a minimum of two sets: valid
values and invalid values. For example, if the valid range is 0 to 100 then select one valid
input like 49 and one invalid input like 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, making sure that no two invalid
inputs mask each other.
To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
 Whole number which is a perfect square- output will be an integer.
 Whole number which is not a perfect square- output will be decimal number.
 Positive decimals
(b) Invalid inputs:
 Negative numbers(integer or decimal).
 Characters other than numbers like “a”, “!”, “;”, etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases
are designed for boundary values of the input domain, then the efficiency of testing improves and the
probability of finding errors also increases. For example, if the valid range is 10 to 100, then test for 10
and 100 in addition to other valid and invalid inputs (a small sketch combining equivalence classes and
boundary values appears after this list).
4. Cause effect Graphing – This technique establishes relationship between logical input called causes
with corresponding actions called effect. The causes and effects are represented using Boolean graphs.
The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop cause effect graph.
3. Transform the graph into decision table.
4. Convert decision table rules to test cases.
For example, in the following cause effect graph:

It can be converted into decision table like:


Each column corresponds to a rule which will become a test case for testing. So there will be 4 test cases.
5. Requirement based testing – It includes validating the requirements given in SRS of software
system.
6. Compatibility testing – The test case result depends not only on the product but also on the
infrastructure delivering the functionality. When the infrastructure parameters are changed, the product
is still expected to work properly. Some parameters that generally affect the compatibility of software are:
1. Processor (Pentium 3,Pentium 4) and number of processors.
2. Architecture and characteristic of machine (32 bit or 64 bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
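As a small illustration of points 2 and 3 above, the Python sketch below turns the square-root equivalence classes and some boundary values into concrete test data; the function under test, my_sqrt, is a hypothetical stand-in, and the 0–100 range is assumed only for the illustration.

import math

def my_sqrt(x):
    # Hypothetical function under test: square root of a non-negative number.
    if not isinstance(x, (int, float)) or x < 0:
        raise ValueError("invalid input")
    return math.sqrt(x)

# One representative input per equivalence class.
valid_inputs = [49, 50, 2.25]      # perfect square, non-perfect square, positive decimal
invalid_inputs = [-4, "a"]         # negative number, non-numeric character

# Boundary values, assuming a valid range of 0 to 100 for the illustration.
boundary_inputs = [0, 100]

for value in valid_inputs + boundary_inputs:
    print(value, "->", my_sqrt(value))

for value in invalid_inputs:
    try:
        my_sqrt(value)
    except ValueError:
        print(value, "-> rejected as expected")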

White box Testing

White box testing techniques analyze the internal structure of the software: the data structures used, the
internal design, the code structure and the working of the software, rather than just the functionality as in
black box testing. It is also called glass box testing or clear box testing or structural testing.
Working process of white box testing:
 Input: Requirements, Functional specifications, design documents, source code.
 Processing: Performing risk analysis for guiding through the entire process.
 Proper test planning: Designing test cases so as to cover the entire code. Execute and repeat
until error-free software is reached. Also, the results are communicated.
 Output: Preparing final report of the entire testing process.
Testing techniques:
 Statement coverage: In this technique, the aim is to traverse all statements at least once.
Hence, each line of code is tested. In the case of a flowchart, every node must be traversed at least
once. Since all lines of code are covered, this helps in pointing out faulty code.
Statement Coverage Example

 Branch Coverage: In this technique, test cases are designed so that each branch from all
decision points are traversed at least once. In a flowchart, all edges must be traversed at least
once.

4 test cases required such that all branches of all decisions are covered, i.e, all edges of flowchart are covered
 Condition Coverage: In this technique, all individual conditions must be covered as shown in
the following example:
0. READ X, Y
1. IF(X == 0 || Y == 0)
2. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Now, design tests so that each of these
conditions takes both TRUE and FALSE values. One possible example would be:
 #TC1 – X = 0, Y = 55
 #TC2 – X = 5, Y = 0
 Multiple Condition Coverage: In this technique, all the possible combinations of the possible
outcomes of conditions are tested at least once. Let’s consider the following example:
0. READ X, Y
1. IF(X == 0 || Y == 0)
2. PRINT ‘0’
 #TC1: X = 0, Y = 0
 #TC2: X = 0, Y = 5
 #TC3: X = 55, Y = 0
 #TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions then 2^n test cases would be required (a runnable sketch of these
test cases appears after this list).
 Basis Path Testing: In this technique, control flow graphs are made from code or flowchart
and then Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent path.
Steps:
0. Make the corresponding control flow graph
1. Calculate the cyclomatic complexity
2. Find the independent paths
3. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one that
represents a decision point that contains a condition after which the graph splits. Regions are
bounded by nodes and edges.

Cyclomatic Complexity: It is a measure of the logical complexity of the software and is used
to define the number of independent paths. For a graph G, V(G) is its cyclomatic complexity.
Calculating V(G):
4. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
5. V(G) = E – N + 2, where E is the number of edges and N is the total number of
nodes
6. V(G) = Number of non-overlapping regions in the graph
Example:

V(G) = 4 (Using any of the above formulae)


No of independent paths = 4 (a small sketch computing V(G) from a flow graph appears at the end of this section)
 #P1: 1 – 2 – 4 – 7 – 8
 #P2: 1 – 2 – 3 – 5 – 7 – 8
 #P3: 1 – 2 – 3 – 6 – 7 – 8
 #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
 Loop Testing: Loops are widely used and these are fundamental to many algorithms hence,
their testing is very important. Errors often occur at the beginnings and ends of loops.
0. Simple loops: For simple loops of size n, test cases are designed that:
 Skip the loop entirely
 Only one pass through the loop
 2 passes
 m passes, where m < n
 n-1 and n+1 passes
1. Nested loops: For nested loops, all the loops are set to their minimum count and we
start from the innermost loop. Simple loop tests are conducted for the innermost
loop and this is worked outwards till all the loops have been tested.
2. Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each.
If they’re not independent, treat them like nesting.
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code removing error and helps in removing extra lines of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language as
opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.
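Before moving on to debugging, here is a small sketch that applies the V(G) = E – N + 2 formula from the basis path testing technique above; the flow graph is hand-built to match the independent paths listed earlier, since the original figure is not reproduced here.

# Control flow graph written as an adjacency list (node -> successor nodes).
flow_graph = {
    1: [2],
    2: [3, 4],
    3: [5, 6],
    4: [7],
    5: [7],
    6: [7],
    7: [8, 1],   # the back edge 7 -> 1 gives independent path #P4
    8: [],
}

def cyclomatic_complexity(graph):
    # V(G) = E - N + 2 for a single connected control flow graph.
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2

print(cyclomatic_complexity(flow_graph))   # prints 4, matching V(G) above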

Debugging

Introduction:
In the context of software engineering, debugging is the process of fixing a bug in the software. In other
words, it refers to identifying, analyzing and removing errors. This activity begins after the software fails
to execute properly and concludes by solving the problem and successfully testing the software. It is
considered to be an extremely complex and tedious task because errors need to be resolved at all stages
of debugging.
Debugging Process: Steps involved in debugging are:
 Problem identification and report preparation.
 Assigning the report to a software engineer to verify that the defect is genuine.
 Defect Analysis using modeling, documentations, finding and testing candidate flaws, etc.
 Defect Resolution by making required changes to the system.
 Validation of corrections.
Debugging Strategies:
1. Study the system for a longer duration in order to understand it. This helps the debugger
construct different representations of the system to be debugged, depending on the need. The
system is also studied actively to find recent changes made to the software.
2. Backward analysis of the problem, which involves tracing the program backward from the
location of the failure message in order to identify the region of faulty code. A detailed study of
the region is conducted to find the cause of the defects.
3. Forward analysis of the program involves tracing the program forwards using breakpoints or
print statements at different points in the program and studying the results. The region where
the wrong outputs are obtained is the region that needs to be focused on to find the defect
(a small illustration follows this list).
4. Using past experience of debugging software with problems similar in nature. The success of
this approach depends on the expertise of the debugger.
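A minimal illustration of forward analysis using print statements and Python's built-in breakpoint(); the buggy function is a made-up example.

def total_price(prices, discount):
    # Made-up bug: the discount should be subtracted, not added.
    subtotal = sum(prices)
    print("DEBUG subtotal:", subtotal)   # checkpoint 1
    total = subtotal + discount          # the wrong output appears from here on
    print("DEBUG total:", total)         # checkpoint 2
    # breakpoint()  # uncomment to drop into the pdb debugger at this point
    return total

print(total_price([100, 250], discount=50))   # expected 300, but prints 400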
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other programs. A lot of public
domain software like gdb and dbx are available for debugging. They offer console-based command line
interfaces. Examples of automated debugging tools include code based tracers, profilers, interpreters, etc.
Some of the widely used debuggers are:
 Radare2
 WinDbg
 Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc whereas debugging starts
after a bug has been identified in the software. Testing is used to ensure that the program does what it
was supposed to do with a certain minimum success rate. Testing can be manual or automated. There are
several different types of testing like unit testing, integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skills, and expertise. It can be supported by some automated
tools available but is more of a manual process as every bug is different and requires a different
technique, unlike a pre-defined testing mechanism.

Integration Testing

Integration testing is the process of testing the interface between two software units or modules. It
focuses on determining the correctness of the interface. The purpose of the integration testing is to expose
faults in the interaction between integrated units. Once all the modules have been unit tested, integration
testing is performed.
Integration test approaches – There are four types of integration testing approaches. Those approaches
are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the modules
are combined and the functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is practicable only
for very small systems. Once an error is found during the integration testing, it is very difficult to
localize the error, as it may potentially belong to any of the modules being integrated. So,
debugging errors reported during big bang integration testing is very expensive to fix.
Advantages:
 It is convenient for small systems.
Disadvantages:
 There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
 High risk critical modules are not isolated and tested on priority since all modules are tested at
once.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at lower levels is tested with
higher modules until all modules are tested. The primary purpose of this integration testing is to test, for
each subsystem, the interfaces among the various modules making up the subsystem. This integration
testing uses test drivers to drive and pass appropriate data to the lower-level modules. Advantages:
 In bottom-up testing, no stubs are required.
 A principle advantage of this integration testing is that several disjoint subsystems can be
tested simultaneously.
Disadvantages:
 Driver modules must be produced.
 In this testing, complexity arises when the system is made up of a large number of
small subsystems.
3. Top-Down Integration Testing – In top-down integration testing, stubs are used to simulate the
behaviour of the lower-level modules that are not yet integrated. Testing takes place from top to bottom:
first high-level modules are tested, then low-level modules, and finally the low-level modules are
integrated with the high-level ones to ensure the system is working as intended (a small sketch of stubs
and drivers appears at the end of this section). Advantages:
 Separately debugged module.
 Few or no drivers needed.
 It is more stable and accurate at the aggregate level.
Disadvantages:
 Needs many Stubs.
 Modules at lower level are tested inadequately.
4. Mixed Integration Testing – A mixed integration testing is also called sandwiched integration
testing. A mixed integration testing follows a combination of top down and bottom-up testing
approaches. In top-down approach, testing can start only after the top-level module have been coded and
unit tested. In bottom-up approach, testing can start only after the bottom level modules are ready. This
sandwich or mixed approach overcomes this shortcoming of the top-down and bottom-up approaches.
Advantages:
 Mixed approach is useful for very large projects having several sub projects.
 This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
Disadvantages:
 For mixed integration testing, require very high cost because one part has Top-down approach
while another part has bottom-up approach.
 This integration testing cannot be used for smaller systems with huge interdependence between
the different modules.
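A minimal sketch of the stubs and drivers mentioned above, with made-up module names; real integration tests would of course exercise the project's actual modules.

# Lower-level module (in reality this might query a service or a database).
def get_exchange_rate(currency):
    rates = {"USD": 82.0, "EUR": 90.0}
    return rates[currency]

# Higher-level module that depends on the lower-level one.
def convert_to_inr(amount, currency, rate_provider=get_exchange_rate):
    return amount * rate_provider(currency)

# Top-down: a stub stands in for the lower-level module until it is ready.
def get_exchange_rate_stub(currency):
    return 80.0                          # canned value, no real lookup

print(convert_to_inr(10, "USD", rate_provider=get_exchange_rate_stub))   # 800.0

# Bottom-up: a driver exercises the lower-level module before the upper layers exist.
def rate_driver():
    assert get_exchange_rate("USD") == 82.0
    print("driver check passed")

rate_driver()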

Program Analysis Tools in Software Engineering

Program Analysis Tool is an automated tool whose input is the source code or the executable code of a
program and the output is the observation of characteristics of the program. It gives various
characteristics of the program such as its size, complexity, adequacy of commenting, adherence to
programming standards and many other characteristics.

Classification of Program Analysis Tools:


Program Analysis Tools are classified into two categories:

1. Static Program Analysis Tools:


Static Program Analysis Tool is such a program analysis tool that evaluates and computes various
characteristics of a software product without executing it. Normally, static program analysis tools analyze
some structural representation of a program to reach a certain analytical conclusion. Basically some
structural properties are analyzed using static program analysis tools.
The structural properties that are usually analyzed are:
1. Whether the coding standards have been fulfilled or not.
2. Some programming errors such as uninitialized variables.
3. Mismatch between actual and formal parameters.
4. Variables that are declared but never used.
Code walkthroughs and code inspections are considered as static analysis methods but static program
analysis tool is used to designate automated analysis tools. Hence, a compiler can be considered as a
static program analysis tool.

2. Dynamic Program Analysis Tools:


Dynamic Program Analysis Tool is a type of program analysis tool that requires the program to be
executed and its actual behavior to be observed. A dynamic program analyzer basically instruments the
code: it adds additional statements to the source code to collect traces of program execution. When
the code is executed, it allows us to observe the behavior of the software for different test cases. Once the
software is tested and its behavior is observed, the dynamic program analysis tool performs a post
execution analysis and produces reports which describe the structural coverage that has been achieved by
the complete testing process for the program.
For example, the post execution dynamic analysis report may provide data on the extent of statement,
branch and path coverage achieved.
The results of dynamic program analysis tools are in the form of a histogram or a pie chart. It describes
the structural coverage obtained for different modules of the program. The output of a dynamic program
analysis tool can be stored and printed easily and provides evidence that complete testing has been done.
The result of dynamic analysis is the extent of testing performed as white box testing. If the testing result
is not satisfactory then more test cases are designed and added to the test scenario. Also dynamic analysis
helps in elimination of redundant test cases.

system testing

System testing, also referred to as system-level tests or system-integration testing, is the


process in which a quality assurance (QA) team evaluates how the various components of
an application interact together in the full, integrated system or application.

System testing verifies that an application performs tasks as designed. This step, a kind
of black box testing, focuses on the functionality of an application. System testing, for
example, might check that every kind of user input produces the intended output across the
application.

With system testing, a QA team gauges if an application meets all of its requirements, which
includes technical, business and functional requirements. To accomplish this, the QA team
might utilize a variety of test types, including performance, usability, load testing and
functional tests.
With system testing, a QA team determines whether a test case corresponds to each of an
application's most crucial requirements and user stories. These individual test cases
determine the overall test coverage for an application, and help the team catch critical
defects that hamper an application's core functionalities before release. A QA team can log
and tabulate each defect per requirement.

Additionally, each individual type of system test reports relevant metrics of a piece of
software, including:

 Performance testing: speed, average and peak response times, stability;

 Load testing: throughput, number of users, latency; and

 Usability testing: user error rates, task success rate, time to complete a task, user
satisfaction.
Phases of system testing

System testing examines every component of an application to make sure that they work as
a complete and unified whole. A QA team typically conducts system testing after it checks
individual modules with functional or user-story testing and then each component
through integration testing.

If a software build achieves the desired results in system testing, it gets a final check
via acceptance testing before it goes to production, where users consume the software. An
app-dev team logs all defects, and establishes what kinds and amount of defects are
tolerable.

System testing tools


Various commercial and open source tools help QA teams perform and review the results of
system testing. These tools can create, manage and automate tests or test cases, and they
might also offer features beyond system testing, such as requirements management
capabilities.

Commercial system testing tools include froglogic's Squish and Inflectra's SpiraTest, while
open source tools include Robotium and SmartBear's SoapUI.
Performance Testing

Performance Testing is a type of software testing that ensures that software applications perform properly
under their expected workload. It is a testing technique carried out to determine system performance in
terms of sensitivity, reactivity and stability under a particular workload.
Performance Testing is the process of analyzing the quality and capability of a product. It is a testing
method performed to determine the system performance in terms of speed, reliability and stability under
varying workload. Performance testing is also known as Perf Testing.

Performance Testing Attributes:


 Speed:
It determines whether the software product responds rapidly.
 Scalability:
It determines amount of load the software product can handle at a time.
 Stability:
It determines whether the software product is stable in case of varying workloads.
 Reliability:
It determines whether the software product can perform failure-free operation consistently.

Objective of Performance Testing:


1. The objective of performance testing is to eliminate performance congestion.
2. It uncovers what is needed to be improved before the product is launched in market.
3. The objective of performance testing is to make software rapid.
4. The objective of performance testing is to make software stable and reliable.

Types of Performance Testing:


1. Load testing:
It checks the product’s ability to perform under anticipated user loads. The objective is to identify
performance congestion before the software product is launched in market.
2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles high traffic or not.
The objective is to identify the breaking point of a software product.
3. Endurance testing:
It is performed to ensure the software can handle the expected load over a long period of time.
4. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.
5. Volume testing:
In volume testing, a large amount of data is saved in a database and the overall software system's
behavior is observed. The objective is to check product’s performance under varying database
volumes.
6. Scalability testing:
In scalability testing, software application’s effectiveness is determined in scaling up to support an
increase in user load. It helps in planning capacity addition to your software system.
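Building on the load and endurance testing types above, here is a minimal sketch of measuring speed and peak response time under a small simulated load; handle_request is a hypothetical operation standing in for real work such as a database query.

import statistics
import time

def handle_request():
    # Hypothetical operation whose response time we want to measure.
    time.sleep(0.01)

response_times = []
for _ in range(100):                     # simulated load of 100 sequential requests
    start = time.perf_counter()
    handle_request()
    response_times.append(time.perf_counter() - start)

print("average response time:", statistics.mean(response_times))
print("peak response time   :", max(response_times))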

Performance Testing Process:


Performance Testing Tools:
1. Jmeter
2. Open STA
3. Load Runner
4. Web Load

Regression Testing

Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software after
the modifications have been made. Regression means return of something and in the software field, it
refers to the return of a bug.

When to do regression testing?


 When a new functionality is added to the system and the code has been modified to absorb and
integrate that functionality with the existing code.
 When some defect has been identified in the software and the code is debugged to fix it.
 When the code is modified to optimize its working.

Process of Regression testing:


Firstly, whenever we make some changes to the source code for any reasons like adding new
functionality, optimization, etc. then our program when executed fails in the previously designed test
suite for obvious reasons. After the failure, the source code is debugged in order to identify the bugs in
the program. After identification of the bugs in the source code, appropriate modifications are made.
Then appropriate test cases are selected from the already existing test suite which covers all the modified
and affected parts of the source code. We can add new test cases if required. In the end regression testing
is performed using the selected test cases.
Techniques for the selection of Test cases for Regression Testing:
 Select all test cases: In this technique, all the test cases are selected from the already existing test
suite. It is the simplest and safest technique but not very efficient.
 Select test cases randomly: In this technique, test cases are selected randomly from the existing test-
suite but it is only useful if all the test cases are equally good in their fault detection capability which
is very rare. Hence, it is not used in most of the cases.
 Select modification traversing test cases: In this technique, only those test cases are selected which
cover and test the modified portions of the source code and the parts which are affected by these
modifications.
 Select higher priority test cases: In this technique, priority codes are assigned to each test case of
the test suite based upon their bug detection capability, customer requirements, etc. After assigning
the priority codes, test cases with the highest priorities are selected for the process of regression testing.
The test case with the highest priority has the highest rank; for example, a test case with priority code 2 is
less important than a test case with priority code 1 (a small selection sketch follows this list).
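A minimal sketch of the priority-based selection technique, using hypothetical test-case names and priority codes.

# Hypothetical regression test suite: (test case name, priority code); 1 is the highest priority.
test_suite = [
    ("login_valid_user", 1),
    ("report_generation", 3),
    ("payment_flow", 1),
    ("profile_picture_upload", 2),
]

def select_for_regression(suite, max_priority):
    # Select every test case whose priority code is at most max_priority.
    return [name for name, priority in suite if priority <= max_priority]

print(select_for_regression(test_suite, max_priority=1))
# ['login_valid_user', 'payment_flow']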
Tools for regression testing: In regression testing, we generally select the test cases from the existing
test suite itself and hence we need not compute their expected output; for this reason it can be easily
automated. Automating the process of regression testing is very effective and time saving.
Most commonly used tools for regression testing are:
 Selenium
 WATIR (Web Application Testing In Ruby)
 QTP (Quick Test Professional)
 RFT (Rational Functional Tester)
 Winrunner
 Silktest

Advantages of Regression Testing:


 It ensures that no new bugs have been introduced after adding new functionalities to the system.
 As most of the test cases used in Regression Testing are selected from the existing test suite, we
already know their expected outputs. Hence, it can be easily automated using automated tools.
 It helps to maintain the quality of the source code.

Disadvantages of Regression Testing:


 It can be time and resource consuming if automated tools are not used.
 It is required even after very small changes in the code.

Object Oriented Testing in Software Testing

Software typically undergoes many levels of testing, from unit testing to system or acceptance testing.
Typically, in unit testing, small “units” or modules of the software are tested separately with a focus on
testing the code of that module. In higher-order testing (e.g., acceptance testing), the entire system (or a
subsystem) is tested with the focus on testing the functionality or external behavior of the system.
As information systems are becoming more complex, the object-oriented paradigm is gaining popularity
because of its benefits in analysis, design, and coding. Conventional testing methods cannot be applied
for testing classes because of problems involved in testing classes, abstract classes, inheritance, dynamic
binding, message passing, polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A function (or a procedure)
has a clearly defined input-output behavior, while a class does not have an input-output behavior
specification. We can test a method of a class using approaches for testing functions, but we cannot test
the class using these
approaches.
According to Davis the dependencies occurring in conventional systems are:
 Data dependencies between variables
 Calling dependencies between modules
 Functional dependencies between a module and the variable it computes
 Definitional dependencies between a variable and its types.

But in Object-Oriented systems there are following additional dependencies:


 Class to class dependencies
 Class to method dependencies
 Class to message dependencies
 Class to variable dependencies
 Method to variable dependencies
 Method to message dependencies
 Method to method dependencies

Issues in Testing Classes:


Additional testing techniques are, therefore, required to test these dependencies. Another issue of interest
is that it is not possible to test the class dynamically; only its instances, i.e., objects, can be tested.
Similarly, the concept of inheritance opens various issues e.g., if changes are made to a parent class or
superclass, in a larger system of a class it will be difficult to test subclasses individually and isolate the
error to one class.
In object-oriented programs, control flow is characterized by message passing among objects, and the
control flow switches from one object to another by inter-object communication. Consequently, there is
no control flow within a class like functions. This lack of sequential control flow within a class requires
different approaches for testing. Furthermore, in a function, arguments passed to the function with global
data determine the path of execution within the procedure. But, in an object, the state associated with the
object also influences the path of execution, and methods of a class can communicate among themselves
through this state because this state is persistent across invocations of methods. Hence, for testing
objects, the state of an object has to play an important role.
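A minimal sketch of why object state matters during testing, using a made-up Account class: the same method call succeeds or fails depending on the sequence of messages the object has received so far, so test cases must exercise method sequences rather than single calls.

class Account:
    # Made-up class: the outcome of withdraw() depends on the object's state.
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account()
acct.deposit(100)
acct.withdraw(40)                # valid in this state
assert acct.balance == 60

empty = Account()
try:
    empty.withdraw(10)           # same call, different state -> error
except ValueError:
    print("state-dependent behaviour exercised")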
Techniques of object-oriented testing are as follows:
1. Fault Based Testing:
This type of testing permits designing test cases based on the client specification or the code or both.
It tries to identify possible faults (areas of design or code that may lead to errors). For each of these
faults, a test case is developed to “flush” the errors out. These tests also force each line of code to be
executed.
This method of testing does not find all types of errors. However, incorrect specification and interface
errors can be missed. These types of errors can be uncovered by function testing in the traditional
testing model. In the object-oriented model, interaction errors can be uncovered by scenario-based
testing. This form of Object oriented-testing can only test against the client’s specifications, so
interface errors are still missed.
2. Class Testing Based on Method Testing:
This approach is the simplest approach to test classes. Each method of the class performs a well
defined cohesive function and can, therefore, be related to unit testing of the traditional testing
techniques. Therefore all the methods of a class can be involved at least once to test the class.
3. Random Testing:
It is based on developing a random test sequence that tries out the minimum number of operations
typical of the behavior of the class.
4. Partition Testing:
This methodology categorizes the inputs and outputs of a class in order to test them separately. This
minimizes the number of test cases that have to be designed.
5. Scenario-based Testing:
It primarily involves capturing the user actions and then simulating them as similar actions during
the test.
These tests tend to uncover interaction types of errors.
UNIT-5
Reliability Testing

Reliability Testing is a testing technique that relates to testing the ability of software to function under
given environmental conditions; it helps in uncovering issues in the software design and functionality.
It is defined as a type of software testing that determines whether the software can perform a failure free
operation for a specific period of time in a specific environment. It ensures that the product is fault free
and is reliable for its intended purpose.
Objective of Reliability Testing:
The objective of reliability testing is:
 To find the structure of perpetually repeating failures.
 To find the number of failures occurring in a specific period of time.
 To discover the main cause of failure.
 To conduct performance testing of various modules of software product after fixing defects.
Types of Reliability Testing:
There are three types of reliability testing:-
1. Feature Testing:
Following three steps are involved in this testing:
 Each function in the software should be executed at least once.
 Interaction between two or more functions should be reduced.
 Each function should be properly executed.
2. Regression Testing:
Regression testing is basically performed whenever any new functionality is added, old functionalities
are removed or the bugs are fixed in an application to make sure with introduction of new
functionality or with the fixing of previous bugs, no new bugs are introduced in the application.
3. Load Testing:
Load testing is carried out to determine whether the application is supporting the required load
without getting breakdown. It is performed to check the performance of the software under maximum
work load.
The study of reliability testing can be divided into three categories:-
1. Modelling
2. Measurement
3. Improvement
Measurement of Reliability Testing:
 Mean Time Between Failures (MTBF):
Measurement of reliability testing is done in terms of the mean time between two consecutive failures (MTBF).
 Mean Time To Failure (MTTF):
The time for which the software operates before a failure occurs is called the mean time to failure (MTTF).
 Mean Time To Repair (MTTR):
The time taken to fix a failure is known as the mean time to repair (MTTR).
MTBF = MTTF + MTTR
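For example, with purely hypothetical figures, if a product runs for an average of MTTF = 20 hours before a failure occurs and each failure takes MTTR = 4 hours to fix, then MTBF = 20 + 4 = 24 hours between successive failures.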
Statistical Testing
Statistical Testing is a testing method whose objective is to determine the reliability of a software
product rather than to discover errors. Test cases are designed for statistical testing with an entirely
different objective from that of conventional testing.

Operation Profile:
Different classes of users may use a software product for different purposes. For instance, a librarian
may use the library automation software to create member records, add books to the library, etc.,
whereas a library member may use the software to query the availability of a book or to issue and
return books. Formally, the operation profile of a software product can be defined as the probability
distribution of the input of an average user. If the input is divided into a number of classes {Ci}, the
probability value of a class represents the probability of an average user selecting his next input from
this class. Thus, the operation profile assigns a probability value Pi to every input class Ci.

Steps in Statistical Testing:


Statistical testing allows one to concentrate on testing those parts of the system that are most likely to be
used. The first step of statistical testing is to determine the operation profile of the software. The next
step is to generate a set of test data corresponding to the determined operation profile. The third step is
to apply the test cases to the software and record the time between each failure. Once a statistically
significant number of failures has been observed, the reliability can be computed.
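A minimal sketch of the second step, generating test inputs according to an operation profile; the input classes and probabilities below are made up for the library example.

import random

# Hypothetical operation profile: input class -> probability of being chosen next.
operation_profile = {
    "issue_book": 0.5,
    "return_book": 0.3,
    "query_availability": 0.15,
    "add_member": 0.05,
}

random.seed(7)   # reproducible sampling for the illustration
classes = list(operation_profile)
weights = list(operation_profile.values())

# Draw 10 test inputs following the profile, so frequently used operations
# are exercised in proportion to how often an average user performs them.
test_sequence = random.choices(classes, weights=weights, k=10)
print(test_sequence)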

Advantages and Disadvantages of Statistical Testing:


Statistical testing allows one to concentrate on testing parts of the system that are most likely to be used.
Therefore, it results in a system that the users find to be more reliable (than it actually is!). Reliability
estimation using statistical testing is more accurate compared to that of other methods such as ROCOF,
POFOD, etc. However, it is difficult to perform statistical testing properly. There is no simple and
repeatable way of defining operation profiles. In addition, it is very cumbersome to generate test cases
for statistical testing because the number of test cases with which the system is to be tested should be
statistically significant.

Software Quality

Traditionally, a high-quality product is defined in terms of its fitness of purpose. That is, a high-quality
product does exactly what the users want it to do. For software products, fitness of purpose is usually
interpreted in terms of satisfaction of the requirements laid down in the SRS document. Although
“fitness of purpose” is a satisfactory definition of quality for many products such as an automobile, a
table fan, a grinding machine, etc., for software products “fitness of purpose” is not a completely
satisfactory definition of quality. To give an example, consider a software product that is functionally
correct.
It performs all functions as laid out in the SRS document, but has an almost unusable user interface.
Even though it may be functionally correct, we cannot consider it to be a high-quality product. Another
example is that of a product which does everything that the users want but has almost incomprehensible
and unmaintainable code. Therefore, the traditional concept of quality as “fitness of purpose” is not fully
satisfactory for software products.
The modern view of quality associates a software product with several quality factors such as the following:
 Portability:
A software product is said to be portable if it can easily be made to work in different operating system
environments, on different machines, and with other software products, etc.
 Usability:
A software product has good usability if different categories of users (i.e. both expert and novice users)
can easily invoke the functions of the product.
 Reusability:
A software product has good reusability if different modules of the product can easily be reused to
develop new products.
 Correctness:
A software product is correct if the different requirements as specified in the SRS document have been
correctly implemented.
 Maintainability:
A software product is maintainable if errors can be easily corrected as and when they show up, new
functions can be easily added to the product, and the functionalities of the product can be easily
modified, etc.

Quality Management
Software Quality Management ensures that the required level of quality is achieved by submitting
improvements to the product development process. SQA aims to develop a culture within the team and it is
seen as everyone's responsibility.
Software Quality management should be independent of project management to ensure independence of cost
and schedule adherences. It directly affects the process quality and indirectly affects the product quality.
Activities of Software Quality Management:
 Quality Assurance - QA aims at developing Organizational procedures and standards for quality at
Organizational level.
 Quality Planning - Select applicable procedures and standards for a particular project and modify as
required to develop a quality plan.
 Quality Control - Ensure that best practices and standards are followed by the software development
team to produce quality products.

ISO 9000 Certification in Software Engineering


The International Organization for Standardization (ISO) is a worldwide federation of national standards
bodies. The ISO 9000 standard serves as a basis for contracts between independent parties. It specifies
guidelines for the development of a quality system.
The quality system of an organization means the various activities related to its products or services.
The ISO standard addresses both operational and organizational aspects, which include responsibilities,
reporting, etc. An ISO 9000 standard contains a set of guidelines for the production process without
considering the product itself.

ISO 9000 Certification

Why ISO Certification required by Software Industry?


There are several reasons why the software industry must get an ISO certification. Some of the reasons
are as follows:
 This certification has become a standard for international bidding.
 It helps in designing high-quality repeatable software products.
 It emphasizes the need for proper documentation.
 It facilitates the development of optimal processes and total quality measurements.

Features of ISO 9001 Requirements :


 Document control –
All documents concerned with the development of a software product should be properly managed
and controlled.
 Planning –
Proper plans should be prepared and monitored.
 Review –
For effectiveness and correctness all important documents across all phases should be independently
checked and reviewed .
 Testing –
The product should be tested against specification.
 Organizational Aspects –
Various organizational aspects should be addressed e.g., management reporting of the quality team.

Advantages of ISO 9000 Certification :


Some of the advantages of the ISO 9000 certification process are following :
 Business ISO-9000 certification forces a corporation to focus on “how they are doing business”.
Each procedure and work instruction must be documented and thus becomes a springboard for
continuous improvement.
 Employee morale is increased as they are asked to take control of their processes and document
their work processes.
 Better products and services result from the continuous improvement process.
 Increased employee participation, involvement, awareness and systematic employee training
reduce problems.

Shortcomings of ISO 9000 Certification :


Some of the shortcoming of the ISO 9000 certification process are following :
 ISO 9000 does not give any guideline for defining an appropriate process and does not give guarantee
for high quality process.
 ISO 9000 certification process have no international accreditation agency exists.

Capability maturity model (CMM)

CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
 It is not a software process model. It is a framework that is used to analyze the approach and
techniques followed by any organization to develop software products.
 It also provides guidelines to further enhance the maturity of the process used to develop those
software products.
 It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
 This model describes a strategy for software process improvement that should be followed by moving
through 5 different levels.
 Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).

Shortcomings of SEI/CMM:
 It encourages the achievement of a higher maturity level in some cases by displacing the true mission,
which is improving the process and overall software quality.
 It only helps if it is put into place early in the software development process.
 It has no formal theoretical basis and in fact is based on the experience of very knowledgeable people.
 It does not have good empirical support and this same empirical support could also be constructed to
support other models.

Key Process Areas (KPA’s):


Each of these KPA’s defines the basic requirements that should be met by a software process in order to
satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project and
establish a context in which technical methods are applied, work products like models, documents, data,
reports, etc. are produced, milestones are established, quality is ensured and change is properly
managed.

The 5 levels of CMM are as follows:

Level-1: Initial –
 No KPA’s defined.
 Processes followed are ad hoc, immature, and not well defined.
 Unstable environment for software development.
 No basis for predicting product quality, time for completion, etc.

Level-2: Repeatable –
 Focuses on establishing basic project management policies.
 Experience with earlier projects is used for managing new similar natured projects.
 Project Planning- It includes defining resources required, goals, constraints, etc. for the project. It
presents a detailed plan to be followed systematically for the successful completion of good quality
software.
 Configuration Management- The focus is on maintaining the performance of the software product,
including all its components, for the entire lifecycle.
 Requirements Management- It includes the management of customer reviews and feedback which
result in some changes in the requirement set. It also consists of accommodation of those modified
requirements.
 Subcontract Management- It focuses on the effective management of qualified software contractors
i.e. it manages the parts of the software which are developed by third parties.
 Software Quality Assurance- It guarantees a good quality software product by following certain
rules and quality standard guidelines while developing.
Level-3: Defined –
 At this level, documentation of the standard guidelines and procedures takes place.
 It is a well-defined integrated set of project-specific software engineering and management processes.
 Peer Reviews- In this method, defects are removed by using a number of review methods like
walkthroughs, inspections, buddy checks, etc.
 Intergroup Coordination- It consists of planned interactions between different development teams
to ensure efficient and proper fulfillment of customer needs.
 Organization Process Definition- Its key focus is on the development and maintenance of the
standard development processes.
 Organization Process Focus- It includes activities and practices that should be followed to improve
the process capabilities of an organization.
 Training Programs- It focuses on the enhancement of knowledge and skills of the team members
including the developers and ensuring an increase in work efficiency.

Level-4: Managed –
 At this stage, quantitative quality goals are set for the organization for software products as well as
software processes.
 The measurements made help the organization to predict the product and process quality within some
limits defined quantitatively.
 Software Quality Management- It includes the establishment of plans and strategies to develop
quantitative analysis and understanding of the product’s quality.
 Quantitative Management- It focuses on controlling the project performance in a quantitative
manner.

Level-5: Optimizing –
 This is the highest level of process maturity in CMM and focuses on continuous process improvement
in the organization using quantitative feedback.
 Use of new tools, techniques, and evaluation of software processes is done to prevent recurrence of
known defects.
 Process Change Management- Its focus is on the continuous improvement of the organization’s
software processes to improve productivity, quality, and cycle time for the software product.
 Technology Change Management- It consists of the identification and use of new technologies to
improve product quality and decrease product development time.
 Defect Prevention- It focuses on the identification of causes of defects and prevents them from
recurring in future projects by improving project-defined processes.

Personal Software Process (PSP)


The SEI CMM, which is a reference model for raising the maturity level of the software process and for predicting the most likely outcome of the next project undertaken by an organization, does not tell software developers how to analyze, design, code, test, and document software products; it simply expects that developers use effective practices. The Personal Software Process recognizes that the process suitable for individual use is quite different from that required by a team.
The Personal Software Process (PSP) is a framework that assists engineers in measuring and improving their way of working to a great extent. It helps them develop their skills at a personal level, plan their work, and track their estimates against actual performance.
Objectives of PSP:
The aim of PSP is to provide software engineers with disciplined methods for improving their personal software development processes.
The PSP helps software engineers to:
 Improve their estimating and planning skills.
 Make commitments that they can fulfill.
 Manage the quality of their projects.
 Reduce the number of defects in their work.
Time measurement:
The Personal Software Process recommends that developers structure the way they spend their time. The developer must measure and record the time spent on different activities during development.
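As an illustration of PSP-style time measurement, the following minimal Python sketch records the minutes spent on each development activity and summarizes them. The activity names and figures are hypothetical; PSP does not prescribe this particular log format.

# Minimal PSP-style time log (illustrative only; activity names and data are hypothetical).
from collections import defaultdict

time_log = defaultdict(int)  # activity -> total minutes

def record(activity: str, minutes: int) -> None:
    """Record time spent on one development activity."""
    time_log[activity] += minutes

# Example entries for one working day (hypothetical data).
record("planning", 30)
record("design", 60)
record("coding", 120)
record("code review", 25)
record("testing", 45)

total = sum(time_log.values())
for activity, minutes in sorted(time_log.items(), key=lambda kv: -kv[1]):
    print(f"{activity:12s} {minutes:4d} min ({100 * minutes / total:.0f}%)")
print(f"{'total':12s} {total:4d} min")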
PSP Planning:
Engineers should plan the project before developing it, because without planning much effort may be wasted on unimportant activities, leading to a poor and unsatisfactory result.
Levels of Personal Software Process :
Personal Software Process (PSP) has four levels-
1. PSP 0 –
The first level of Personal Software Process, PSP 0 includes Personal measurement , basic size
measures, coding standards.
2. PSP 1 –
This level includes the planning of time and scheduling .
3. PSP 2 –
This level introduces the personal quality management ,design and code reviews.
4. PSP 3 –
The last level of the Personal Software Process is for the Personal process evolution.

Six Sigma in Software Engineering


Six Sigma is a process for producing output of high and continuously improved quality. This is done in two phases – identification and elimination: the causes of defects are identified and eliminated, which reduces variation in the overall process. A Six Sigma process is one in which 99.99966% of all products produced are expected to be free from defects (about 3.4 defects per million opportunities).
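To make the 99.99966% figure concrete, the short Python sketch below computes defects per million opportunities (DPMO) for a hypothetical sample and compares it with the Six Sigma target of 3.4 DPMO. The sample numbers are invented purely for illustration.

# Compute defects per million opportunities (DPMO); sample numbers are hypothetical.
defects = 7                 # defects observed
units = 50_000              # units inspected
opportunities_per_unit = 4  # defect opportunities per unit

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
yield_percent = 100 * (1 - defects / (units * opportunities_per_unit))

print(f"DPMO  = {dpmo:.1f}")            # 35.0 for this sample
print(f"Yield = {yield_percent:.4f}%")  # 99.9965%
print("Meets Six Sigma target (3.4 DPMO)?", dpmo <= 3.4)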
Characteristics of Six Sigma:
The Characteristics of Six Sigma are as follows:
1. Statistical Quality Control:
Six Sigma is derived from the Greek letter sigma (σ), which denotes standard deviation in statistics. Standard deviation is used for measuring the quality of output.
2. Methodical Approach:
Six Sigma is a systematic approach applied through DMAIC and DMADV, which can be used to improve the quality of production. DMAIC stands for Define-Measure-Analyze-Improve-Control, while DMADV stands for Define-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach:
The statistical and methodical method shows the scientific basis of the technique.

4. Project and Objective-Based Focus:


The Six Sigma process is implemented to focus on the requirements and conditions.
5. Customer Focus:
The customer focus is fundamental to the Six Sigma approach. The quality improvement and control
standards are based on specific customer requirements.
6. Teamwork Approach to Quality Management:
The Six Sigma process requires organizations to get organized for improving quality.
Six Sigma Methodologies:
Two methodologies used in the Six Sigma projects are DMAIC and DMADV.
 DMAIC is used to enhance an existing business process. The DMAIC project methodology has five
phases:
1. Define
2. Measure
3. Analyze
4. Improve
5. Control
 DMADV is used to create new product designs or process designs. The DMADV project
methodology also has five phases:
1. Define
2. Measure
3. Analyze
4. Design
5. Verify

Measuring Software Quality using Quality Metrics

In Software Engineering, Software Measurement is done based on some Software Metrics where these
software metrics are referred to as the measure of various characteristics of a Software.
In software engineering, Software Quality Assurance (SQA) assures the quality of the software. The set of SQA activities is applied continuously throughout the software process. Software quality is measured based on software quality metrics.
There are a number of metrics based on which software quality is measured, but among them a few are the most useful and essential in software quality measurement. They are –
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let’s understand each quality metric in detail –
1. Code Quality – Code quality metrics measure the quality of code used for the software project
development. Maintaining the software code quality by writing Bug-free and semantically correct code is
very important for good software project development. In code quality both Quantitative metrics like the
number of lines, complexity, functions, rate of bugs generation, etc, and Qualitative metrics like
readability, code clarity, efficiency, maintainability, etc are measured.
2. Reliability – Reliability metrics express the reliability of the software under different conditions. Whether the software is able to provide the expected service at the right time is checked. Reliability can be checked using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR).
3. Performance – Performance metrics are used to measure the performance of the software. Each
software has been developed for some specific purposes. Performance metrics measure the performance
of the software by determining whether the software is fulfilling the user requirements or not, by
analyzing how much time and resource it is utilizing for providing the service.
4. Usability – Usability metrics check whether the program is user-friendly or not. Every software product is used by end users, so it is important to measure whether the end users are satisfied while using the software.
5. Correctness – Correctness is one of the important software quality metrics, as it checks whether the system or software works correctly, without errors, and to the user's satisfaction. Correctness gives the degree to which each function performs as designed.
6. Maintainability – Each software product requires maintenance and upgradation. Maintenance is an expensive and time-consuming process, so if the software product provides easy maintainability we can say its quality is up to the mark. Maintainability metrics include the time required to adapt to new features/functionality, Mean Time To Change (MTTC), performance in changing environments, etc.
7. Integrity – Software integrity concerns how easily the software can be integrated with other required software (which increases functionality), and how well integration by unauthorized software, which increases the chance of cyberattacks, is controlled.
8. Security – Security metrics measure how secure the software is. In the age of cyber terrorism, security is an essential part of every software product. Security assures that there are no unauthorized changes and no fear of cyberattacks while the software product is in use by the end user.
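Several of these metrics are simple ratios that can be computed directly from project data. The following Python sketch, using invented sample figures, computes two quantitative measures related to the metrics above: defect density (defects per thousand lines of code, a common code quality indicator) and MTBF obtained from MTTF and MTTR.

# Illustrative computation of two quantitative quality metrics (sample data is hypothetical).

def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

def mtbf(mttf_hours: float, mttr_hours: float) -> float:
    """Mean Time Between Failures = Mean Time To Failure + Mean Time To Repair."""
    return mttf_hours + mttr_hours

print("Defect density:", defect_density(defects_found=46, lines_of_code=23_000), "defects/KLOC")  # 2.0
print("MTBF          :", mtbf(mttf_hours=400.0, mttr_hours=8.0), "hours")                         # 408.0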

CASE tool and its scope

A CASE (Computer Aided Software Engineering) tool is a generic term used to denote any form of automated support for software engineering. In a more restrictive sense, a CASE tool means any tool used to automate some activity associated with software development.
Several CASE tools are available. Some of these CASE tools assist in phase-related tasks such as specification, structured analysis, design, coding, testing, etc., while others support non-phase activities such as project management and configuration management.
Reasons for using CASE tools:
The primary reasons for using a CASE tool are:
 to increase productivity
 to help produce better quality software at lower cost

CASE environment:
Although individual CASE tools are helpful, the true power of a tool set is realized only when this set of tools is integrated into a common framework or environment. CASE tools are characterized by the stage or stages of the software development life cycle on which they focus. Since different tools covering different stages share common information, they need to be integrated through some central repository so as to have a consistent view of the information associated with the software development artifacts. This central repository is usually a data dictionary containing the definitions of all composite and elementary data items.
Through the central repository, all the CASE tools in a CASE environment share common information among themselves. Thus a CASE environment facilitates the automation of the step-wise methodologies for software development.
Note: A CASE environment is different from a programming environment.
A CASE environment facilitates the automation of step-wise methodologies for software development. In contrast, a programming environment is an integrated collection of tools that supports only the coding phase of software development.

Computer Aided Software Engineering (CASE)

Computer aided software engineering (CASE) is the implementation of computer facilitated tools and
methods in software development. CASE is used to ensure a high-quality and defect-free software.
CASE ensures a check-pointed and disciplined approach and helps designers, developers, testers,
managers and others to see the project milestones during development.
CASE can also serve as a repository for documents related to projects, like business plans, requirements
and design specifications. One of the major advantages of using CASE is the delivery of the final
product, which is more likely to meet real-world requirements as it ensures that customers remain part of
the process.
CASE denotes a wide set of labor-saving tools that are used in software development. It provides a framework for organizing projects and helps in enhancing productivity. There was more interest
in the concept of CASE tools years ago, but less so today, as the tools have morphed into different
functions, often in reaction to software developer needs. The concept of CASE also received a heavy
dose of criticism after its release.
CASE Tools:
The essential idea of CASE tools is that in-built programs can help to analyze developing systems in
order to enhance quality and provide better outcomes. Throughout the 1990s, CASE tools became part of
the software lexicon, and big companies like IBM were using these kinds of tools to help create
software.
Various tools are incorporated in CASE and are called CASE tools, which are used to support different
stages and milestones in a software development life cycle.
Types of CASE Tools:
1. Diagramming Tools:
It helps in diagrammatic and graphical representations of the data and system processes. It represents
system elements, control flow and data flow among different software components and system
structure in a pictorial form.
For example, Flow Chart Maker tool for making state-of-the-art flowcharts.

2. Computer Display and Report Generators:


It helps in understanding the data requirements and the relationships involved.

3. Analysis Tools:
It focuses on inconsistent or incorrect specifications involved in the diagrams and data flow. It helps in collecting requirements and automatically checking for any irregularity or imprecision in the diagrams, data redundancies, or erroneous omissions.


For example,
 (i) Accept 360, Accompa, CaseComplete for requirement analysis.

 (ii) Visible Analyst for total analysis.

4. Central Repository:
It provides the single point of storage for data diagrams, reports and documents related to project
management.

5. Documentation Generators:
It helps in generating user and technical documentation as per standards. It creates documents for
technical users and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.

6. Code Generators:
It aids in the auto generation of code, including definitions, with the help of the designs, documents
and diagrams.

Advantages of the CASE approach:

 As special emphasis is placed on redesign as well as testing, the servicing cost of a product over its
expected lifetime is considerably reduced.
 The overall quality of the product is improved as an organized approach is undertaken during the
process of development.

 Chances to meet real-world requirements are more likely and easier with a computer-aided software
engineering approach.
 CASE indirectly provides an organization with a competitive advantage by helping ensure the
development of high-quality products.

Disadvantages of the CASE approach:


 Cost: Using CASE tools is very costly. Most firms engaged in small-scale software development do not invest in CASE tools because they think that the benefits of CASE are justifiable only in the development of large systems.
 Learning Curve: In most cases, programmer productivity may fall in the initial phase of implementation, because users need time to learn the technology. Many consultants offer training and on-site services that can be important to accelerate the learning curve and the adoption and use of the CASE tools.
 Tool Mix: It is important to select an appropriate tool mix to get a cost advantage; CASE integration and data integration across all platforms are extremely important.

Software Maintenance

Software Maintenance is the process of modifying a software product after it has been delivered to the
customer. The main purpose of software maintenance is to modify and update software applications after
delivery to correct faults and to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
 Correct faults.
 Improve the design.
 Implement enhancements.
 Interface with other systems.
 Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
 Migrate legacy software.
 Retire software.
Challenges in Software Maintenance:
The various challenges in software maintenance are given below:
 The typical lifetime of a software product is considered to be ten to fifteen years. Since software maintenance is open-ended and may continue for decades, it becomes very expensive.
 Older software, which was intended to work on slow machines with less memory and storage capacity, cannot hold its own against newer, more capable software running on modern hardware.
 Changes are often left undocumented, which may cause further conflicts in the future.
 As technology advances, it becomes costly to maintain old software.
 Often the changes made can easily harm the original structure of the software, making subsequent changes difficult.
Categories of Software Maintenance –
Maintenance can be divided into the following:

1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs observed
while the system is in use, or to enhance the performance of the system.

2. Adaptive maintenance:
This includes modifications and updations when the customers need the product to run on new
platforms, on new operating systems, or when they need the product to interface with new hardware
and software.

3. Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to change
different types of functionalities of the system according to the customer demands.

4. Preventive maintenance:
This type of maintenance includes modifications and updations to prevent future problems of the
software. It goals to attend problems, which are not significant at this moment but may cause serious
issues in future.

Reverse Engineering –
Reverse Engineering is the process of extracting knowledge or design information from anything man-made and reproducing it based on the extracted information. It is also called back engineering.
Software Reverse Engineering –
Software Reverse Engineering is the process of recovering the design and the requirements specification of a product from an analysis of its code. Reverse engineering is becoming important, since several existing software products lack proper documentation, are highly unstructured, or have a structure that has degraded through a series of maintenance efforts.
Why Reverse Engineering?
 Providing proper system documentation.
 Recovery of lost information.
 Assisting with maintenance.
 Facility of software reuse.
 Discovering unexpected flaws or faults.
Uses of Software Reverse Engineering –
 Software reverse engineering is used in software design; it enables the developer or programmer to add new features to existing software with or without knowing the source code.
 Reverse engineering is also useful in software testing; it helps testers to study virus and other malware code.
Reverse Engineering

Software Reverse Engineering is a process of recovering the design, requirement specifications and
functions of a product from an analysis of its code. It builds a program database and generates
information from this.
The purpose of reverse engineering is to facilitate the maintenance work by improving the
understandability of a system and to produce the necessary documents for a legacy system.
Reverse Engineering Goals:

 Cope with Complexity.


 Recover lost information.
 Detect side effects.
 Synthesise higher abstraction.
 Facilitate Reuse.

Steps of Software Reverse Engineering:

1. Collecting information:
This step focuses on collecting all possible information (e.g., source code, design documents, etc.) about the software.

2. Examining the information:


The information collected in step-1 is studied so as to get familiar with the system.

3. Extracting the structure:


This step concerns the identification of the program structure in the form of a structure chart, where each node corresponds to some routine.
4. Recording the functionality:
During this step, the processing details of each module of the structure chart are recorded using structured language, decision tables, etc.

5. Recording data flow:


From the information extracted in step-3 and step-4, a set of data flow diagrams is derived to show the flow of data among the processes.

6. Recording control flow:


High level control structure of the software is recorded.

7. Review extracted design:


The extracted design document is reviewed several times to ensure consistency and correctness. It also ensures that the design represents the program.

8. Generate documentation:
Finally, in this step, the complete documentation including SRS, design document, history, overview,
etc. are recorded for future use.
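As an illustration of the "Extracting the structure" step above, the short Python sketch below uses the standard ast module to recover a rough call structure (which routine calls which) from a piece of source code. It is a simplified, hypothetical example of what a structure-chart extraction tool automates, not one of the tools listed below.

# Recover a rough call structure from Python source using the standard ast module.
import ast

source = """
def read_record(f): ...
def validate(rec): ...
def process(f):
    rec = read_record(f)
    if validate(rec):
        report(rec)
def report(rec): ...
"""

tree = ast.parse(source)
structure = {}  # routine name -> set of routines it calls
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        calls = {
            c.func.id
            for c in ast.walk(node)
            if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
        }
        structure[node.name] = calls

for routine, callees in structure.items():
    print(f"{routine} -> {sorted(callees)}")
# process -> ['read_record', 'report', 'validate'], the other routines call nothing.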

Reverse Engineering Tools:


Reverse engineering, if done manually, would consume a lot of time and human labour and hence must be supported by automated tools. Some of the tools are given below:
 CIAO and CIA: A graphical navigator for software and web repositories along with a collection of
Reverse Engineering tools.
 Rigi: A visual software understanding tool.
 Bunch: A software clustering/modularization tool.
 GEN++: An application generator to support development of analysis tools for the C++ language.
 PBS: Software Bookshelf tools for extracting and visualizing the architecture of programs.

Software Maintenance
Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to modify
and update software application after delivery to correct errors and to improve performance. Software
is a model of the real world. When the real world changes, the software requires alteration wherever possible.

Software Maintenance is an inclusive activity that includes error corrections, enhancement of


capabilities, deletion of obsolete capabilities, and optimization.

Need for Maintenance


Software Maintenance is needed for:-

o Correct errors
o Change in user requirement with time
o Changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify the components
o To reduce any unwanted side effects.

Thus the maintenance is required to ensure that the system continues to satisfy user requirements.

Types of Software Maintenance

1. Corrective Maintenance
Corrective maintenance aims to correct any remaining errors, regardless of where they occur – in the specifications, design, coding, testing, or documentation.

2. Adaptive Maintenance
It involves modifying the software to match changes in the ever-changing environment.

3. Preventive Maintenance
It is the process by which we prevent our system from being obsolete. It involves the concept of
reengineering & reverse engineering in which an old system with old technology is re-engineered using
new technology. This maintenance prevents the system from dying out.

4. Perfective Maintenance
It covers improving processing efficiency or performance, or restructuring the software to enhance changeability. This may include enhancement of existing system functionality, improvement in computational efficiency, etc.

Cost and efforts of software maintenance

Software maintenance is a very broad activity that takes place once the software is in operation. It optimizes software performance by reducing errors, eliminating useless lines of code, and applying advanced development techniques. It can take 1-2 years to build a software system, while its maintenance and modification can be an ongoing activity for 15-20 years.
Categories of Software Maintenance:
1. Corrective Maintenance
2. Adaptive Maintenance
3. Perfective Maintenance
4. Preventive Maintenance
The cost of system maintenance represents a large proportion of the budget of most organizations that use software systems. More than 65% of the software lifecycle cost is expended in maintenance activities. The cost of software maintenance can be controlled by postponing maintenance work, but this will cause the following intangible costs:
 Customer dissatisfaction when requests for repair or modification cannot be addressed in a timely manner.
 Reduction in overall software quality as a result of changes that introduce hidden errors in the maintained software.
Software maintenance cost factors:
The key factors that distinguish development and maintenance and which lead to higher maintenance
cost are divided into two subcategories:
1. Non-Technical factors
2. Technical factors
Non-Technical factors:
The Non-Technical factors include:
1. Application Domain
2. Staff stability
3. Program lifetime
4. Dependence on External Environment
5. Hardware stability
Technical factors:
Technical factors include the following:
1. module independence
2. Programming language
3. Programming style
4. Program validation and testing
5. Documentation
6. Configuration management techniques
Effort expended on maintenance may be divided into productive activities (for example analysis and evaluation, design modification, and coding) and unproductive activities (for example trying to understand what the existing code does). The following expression gives a model of maintenance effort:
M = P + K^(C - D)
where,
M: Total effort expended on maintenance.
P: Productive effort.
K: An empirical constant.
C: A measure of complexity that can be attributed to a lack of good design and documentation.
D: A measure of the degree of familiarity with the software.
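As a worked illustration of the model above, the following Python sketch evaluates M = P + K^(C - D) for two hypothetical situations: one where the team is unfamiliar with a poorly documented system (C much larger than D) and one where documentation and familiarity are good. All the numbers are invented for illustration only.

# Worked example of the maintenance effort model M = P + K**(C - D); all values are hypothetical.

def maintenance_effort(P: float, K: float, C: float, D: float) -> float:
    """Total maintenance effort: productive effort plus a penalty that grows
    exponentially as complexity (C) outstrips familiarity (D)."""
    return P + K ** (C - D)

# Unfamiliar team, poorly documented system: C - D is large, so effort explodes.
print(maintenance_effort(P=100, K=2, C=12, D=4))   # 100 + 2**8 = 356
# Familiar team, well documented system: C - D is small, so effort stays near P.
print(maintenance_effort(P=100, K=2, C=6, D=5))    # 100 + 2**1 = 102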

Basic issues in any reuse program


The following are some of the basic issues that must be clearly understood for starting any reuse program:
• Component creation
• Component indexing and storing
• Component search
• Component understanding
• Component adaptation
• Repository maintenance
Component creation– For component creation, the reusable components have to be first identified. Selection of
the right kind of components having potential for reuse is important. Domain analysis is a promising technique
which can be used to create reusable components.
Component indexing and storing– Indexing requires classification of the reusable components so that they can
be easily searched when looking for a component for reuse. The components need to be stored in a Relational
Database Management System (RDBMS) or an Object-Oriented Database System (ODBMS) for efficient access
when the number of components becomes large.
Component searching– The programmers need to search for right components matching their requirements in a
database of components. To be able to search components efficiently, the programmers require a proper method to
describe the components that they are looking for.
Component understanding– The programmers need a precise and sufficiently complete understanding of what
the component does to be able to decide whether they can reuse the component. To facilitate understanding, the
components should be well documented and should do something simple.
Component adaptation– Often, the components may need adaptation before they can be reused, since a selected
component may not exactly fit the problem at hand. However, tinkering with the code is also not a satisfactory
solution because this is very likely to be a source of bugs.
Repository maintenance– A component repository, once created, requires continuous maintenance. New components, as and when created, have to be entered into the repository. Faulty components have to be tracked. Further, when new applications emerge, the older applications become obsolete. In this case, the obsolete components might have to be removed from the repository.
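To make component indexing, storing, and searching concrete, the following minimal Python sketch keeps reusable components in an in-memory keyword index and looks them up by the keywords a programmer supplies. It is only an illustrative toy; a real repository would use an RDBMS or ODBMS as noted above, and all component names here are hypothetical.

# Toy component repository with keyword indexing and search (illustrative only).
from collections import defaultdict

repository = {}                    # component name -> description
keyword_index = defaultdict(set)   # keyword -> set of component names

def store(name: str, description: str, keywords: list) -> None:
    """Index and store a reusable component."""
    repository[name] = description
    for kw in keywords:
        keyword_index[kw.lower()].add(name)

def search(*keywords: str) -> set:
    """Return components matching all the given keywords."""
    sets = [keyword_index[kw.lower()] for kw in keywords]
    return set.intersection(*sets) if sets else set()

store("DateParser", "Parses dates in several formats", ["date", "parsing"])
store("CsvReader", "Reads CSV files into records", ["csv", "parsing", "file"])

print(search("parsing"))           # {'DateParser', 'CsvReader'}
print(search("parsing", "file"))   # {'CsvReader'}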

Reuse Oriented Model


The Reuse Oriented Model (ROM), also known as reuse-oriented development (ROD), is an approach to software development in which software is built by creating a sequence of prototypes called models, each system being derived from the previous one through a constant series of defined rules.
The reuse-oriented model is not always practical in its pure form, because an entire repertoire of reusable components may not be available. In such cases, several new system components need to be designed. If this is not done, ROM has to compromise on the perceived requirements, leading to a product that does not meet the exact requirements of the user. This model is based on the view that maintenance can be seen as an activity involving the reuse of existing system components.
The reuse model has 4 fundamental steps:
1. Identify the components of the old system that are most suitable for reuse.
2. Understand all the system components.
3. Modify the old system components to meet the new requirements.
4. Integrate all the modified parts into the new system.
A specific framework is required for categorizing components and determining the required modifications. Unlike other models, the complete reuse model may begin from any phase of the life cycle – requirements, planning, coding, design, or analysis.
Advantages :
 It can reduce total cost of software development.
 The risk factor is very low.
 It can save lots of time and effort.
 It is very efficient in nature.
Disadvantages :
 The reuse-oriented model does not always work in practice in its pure form.
 Compromises in requirements may lead to a system that does not fulfill the requirements of the user.
 Sometimes old system components that are not compatible with newer versions of other components are used; this may have an impact on system evolution.

Organisational considerations for software reuse


Reuse is widely promoted as one of the most promising methods for increasing productivity and quality within software development. Until recently, most research into strategies for systematic reuse has focused on solving the technical issues. Now, as companies (mostly IT focused) implement the strategies developed, they find there are other issues which hold back their success, largely unrelated to the technical solutions offered. Reuse processes are not simple technologies and methods slotted into a development, like the transition in design notation from traditional approaches to an object-oriented method. Whereas technology changes involve retraining developers, reuse requires the whole organisation and the funding of development to be revised. If the magnitude of change involved in transitioning an IT organisation is so encompassing, where does this leave the rest of industry, which is increasingly reliant on software to support its business processes? This section looks at the organisational and management issues raised by the introduction of software reuse to the development process. We identify inhibitors of reuse adoption, look at their causes, and suggest possible solutions. The aim is to concisely present all those non-technical issues that should be considered when introducing a reuse program. Also considered is how these issues affect companies which have IT in only a business support capacity, making this discussion relevant throughout industry.
UNIT-5

Reliability Testing

Reliability testing is a testing technique that tests the ability of the software to function under given environmental conditions; it helps in uncovering issues in the software design and functionality.
It is defined as a type of software testing that determines whether the software can perform failure-free operation for a specific period of time in a specific environment. It ensures that the product is fault free and is reliable for its intended purpose.
Objectives of Reliability Testing:
The objectives of reliability testing are:
 To find the structure of repeating failures.
 To find the number of failures occurring in a specific period of time.
 To discover the main causes of failure.
 To conduct performance testing of various modules of the software product after fixing defects.
Types of Reliability Testing:
There are three types of reliability testing:-
1. Feature Testing:
Following three steps are involved in this testing:
 Each function in the software should be executed at least once.
 Interaction between two or more functions should be reduced.
 Each function should be properly executed.
2. Regression Testing:
Regression testing is basically performed whenever any new functionality is added, old functionalities
are removed or the bugs are fixed in an application to make sure with introduction of new
functionality or with the fixing of previous bugs, no new bugs are introduced in the application.
3. Load Testing:
Load testing is carried out to determine whether the application is supporting the required load
without getting breakdown. It is performed to check the performance of the software under maximum
work load.
The study of reliability testing can be divided into three categories:-
1. Modelling
2. Measurement
3. Improvement
Measurement of Reliability Testing:
 Mean Time Between Failures (MTBF):
Measurement of reliability testing is done in terms of the mean time between failures (MTBF), i.e., the average time between two consecutive failures.
 Mean Time To Failure (MTTF):
The average time the software operates before a failure occurs is called the mean time to failure (MTTF).
 Mean Time To Repair (MTTR):
The average time taken to fix a failure is known as the mean time to repair (MTTR).
MTBF = MTTF + MTTR
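The following Python sketch shows how MTTF, MTTR, and MTBF could be estimated from a simple failure log. The log entries (operating hours before each failure and hours spent repairing it) are hypothetical data used only to illustrate the relationship MTBF = MTTF + MTTR.

# Estimate MTTF, MTTR and MTBF from a hypothetical failure log.
# Each entry: (hours of operation before the failure, hours needed to repair it).
failure_log = [
    (120.0, 2.0),
    (200.0, 4.0),
    (160.0, 3.0),
]

mttf = sum(up for up, _ in failure_log) / len(failure_log)          # mean time to failure
mttr = sum(repair for _, repair in failure_log) / len(failure_log)  # mean time to repair
mtbf = mttf + mttr                                                  # mean time between failures

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")
# MTTF = 160.0 h, MTTR = 3.0 h, MTBF = 163.0 h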
Statistical Testing

Statistical testing is a testing method whose objective is to determine the reliability of the software product rather than to discover errors. Test cases are designed for statistical testing with an entirely different objective from that of conventional testing.

Operation Profile:
Different classes of users may use a software product for different purposes. For example, a librarian may use the library automation software to create member records, add books to the library, etc., whereas a library member may use the software to query the availability of a book or to issue and return books. Formally, the operation profile of a software product may be defined as the probability distribution of the input of an average user. If the input is divided into a number of categories {Ci}, the probability value of a category represents the probability of an average user choosing his next input from that category. Thus, the operation profile assigns a probability value Pi to every input category Ci.

Steps in Statistical Testing:


Statistical testing allows one to concentrate on testing those parts of the system that are most likely to be used. The first step of statistical testing is to determine the operation profile of the software. The next step is to generate a set of test data corresponding to the determined operation profile. The third step is to apply the test cases to the software and record the time between each failure. Once a statistically significant number of failures has been observed, the reliability can be computed.
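The sketch below illustrates these steps in Python: a hypothetical operation profile assigns probabilities to input categories, test inputs are drawn according to that distribution, and the reliability is estimated from the observed failures. Both the profile and the failure behaviour are invented purely for illustration.

# Statistical-testing sketch: draw test inputs per an operation profile and estimate reliability.
import random

random.seed(1)

# Step 1: hypothetical operation profile - probability Pi for each input category Ci.
operation_profile = {"query_book": 0.6, "issue_book": 0.25, "return_book": 0.1, "add_member": 0.05}

def run_test(category: str) -> bool:
    """Stand-in for executing one test case; returns True if the run fails (hypothetical rates)."""
    failure_rate = {"query_book": 0.01, "issue_book": 0.02, "return_book": 0.02, "add_member": 0.10}
    return random.random() < failure_rate[category]

# Step 2: generate test cases according to the profile; Step 3: run them and count failures.
categories = list(operation_profile)
weights = list(operation_profile.values())
runs = 10_000
failures = sum(run_test(random.choices(categories, weights=weights)[0]) for _ in range(runs))

print(f"Observed failures: {failures} in {runs} runs")
print(f"Estimated reliability (probability of a failure-free run): {1 - failures / runs:.4f}")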

Advantages and Disadvantages of Statistical Testing:


Statistical testing allows one to concentrate on testing those parts of the system that are most likely to be used. Therefore, it leads to a system that the users perceive to be more reliable (than it actually is!). Reliability estimation using statistical testing is more accurate compared with other methods such as ROCOF, POFOD, etc. However, it is difficult to perform statistical testing properly: there is no easy and repeatable way of defining operation profiles. In addition, it is very cumbersome to generate test cases for statistical testing, because the number of test cases with which the system is to be tested should be statistically significant.

Software Quality

Traditionally, a high-quality product is defined in terms of its fitness of purpose. That is, a high-quality product does exactly what the users want it to do. For software products, fitness of purpose is usually interpreted in terms of satisfaction of the requirements laid down in the SRS document. Although “fitness of purpose” is a satisfactory definition of quality for many products, such as a car, a table fan, or a grinding machine, for software products “fitness of purpose” is not a wholly satisfactory definition of quality. To give an example, consider a software product that is functionally correct.
It performs all the functions specified in the SRS document, but has an almost unusable user interface. Even though it may be functionally correct, we cannot consider it to be a high-quality product. Another example is that of a product which does everything that the users want but has almost incomprehensible and unmaintainable code. Therefore, the traditional concept of quality as “fitness of purpose” is not wholly satisfactory for software products.
The modern view of quality associates a software product with several quality factors such as the following:
 Portability:
A software product is said to be portable if it can easily be made to work in different operating system environments, on different machines, and with other software products, etc.
 Usability:
A software product has good usability if different categories of users (i.e. both expert and novice users) can easily invoke the functions of the product.
 Reusability:
A software product has good reusability if different modules of the product can easily be reused to develop new products.
 Correctness:
A software product is correct if the different requirements specified in the SRS document have been correctly implemented.
 Maintainability:
A software product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and the functionalities of the product can be easily modified, etc.

Quality Management
Software Quality Management ensures that the required level of quality is achieved by submitting
improvements to the product development process. SQA aims to develop a culture within the team and it is
seen as everyone's responsibility.
Software Quality management should be independent of project management to ensure independence of cost
and schedule adherences. It directly affects the process quality and indirectly affects the product quality.
Activities of Software Quality Management:
 Quality Assurance - QA aims at developing Organizational procedures and standards for quality at
Organizational level.
 Quality Planning - Select applicable procedures and standards for a particular project and modify as
required to develop a quality plan.
 Quality Control - Ensure that best practices and standards are followed by the software development
team to produce quality products.

ISO 9000 Certification in Software Engineering


The International organization for Standardization is a world wide federation of national standard bodies.
The International standards organization (ISO) is a standard which serves as a for contract between
independent parties. It specifies guidelines for development of quality system.
Quality system of an organization means the various activities related to its products or services.
Standard of ISO addresses to both aspects i.e. operational and organizational aspects which includes
responsibilities, reporting etc. An ISO 9000 standard contains set of guidelines of production process
without considering product itself.

ISO 9000 Certification

Why ISO Certification required by Software Industry?


There are several reasons why software industry must get an ISO certification. Some of reasons are as
follows :
 This certification has become a standards for international bidding.
 It helps in designing high-quality repeatable software products.
 It emphasis need for proper documentation.
 It facilitates development of optimal processes and totally quality measurements.

Features of ISO 9001 Requirements :


 Document control –
All documents concerned with the development of a software product should be properly managed
and controlled.
 Planning –
Proper plans should be prepared and monitored.
 Review –
For effectiveness and correctness all important documents across all phases should be independently
checked and reviewed .
 Testing –
The product should be tested against specification.
 Organizational Aspects –
Various organizational aspects should be addressed e.g., management reporting of the quality team.

Advantages of ISO 9000 Certification :


Some of the advantages of the ISO 9000 certification process are following :
 Business ISO-9000 certification forces a corporation to specialize in “how they are doing business”.
Each procedure and work instruction must be documented and thus becomes a springboard for
continuous improvement.
 Employees morale is increased as they’re asked to require control of their processes and document
their work processes
 Better products and services result from continuous improvement process.
 Increased employee participation, involvement, awareness and systematic employee training are
reduced problems.

Shortcomings of ISO 9000 Certification :


Some of the shortcoming of the ISO 9000 certification process are following :
 ISO 9000 does not give any guideline for defining an appropriate process and does not give guarantee
for high quality process.
 ISO 9000 certification process have no international accreditation agency exists.

Capability maturity model (CMM)

CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
 It is not a software process model. It is a framework that is used to analyze the approach and
techniques followed by any organization to develop software products.
 It also provides guidelines to further enhance the maturity of the process used to develop those
software products.
 It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
 This model describes a strategy for software process improvement that should be followed by moving
through 5 different levels.
 Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).

Shortcomings of SEI/CMM:
 It encourages the achievement of a higher maturity level in some cases by displacing the true mission,
which is improving the process and overall software quality.
 It only helps if it is put into place early in the software development process.
 It has no formal theoretical basis and in fact is based on the experience of very knowledgeable people.
 It does not have good empirical support and this same empirical support could also be constructed to
support other models.

Key Process Areas (KPA’s):


Each of these KPA’s defines the basic requirements that should be met by a software process in order to
satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project and
establish a context in which technical methods are applied, work products like models, documents, data,
reports, etc. are produced, milestones are established, quality is ensured and change is properly
managed.

The 5 levels of CMM are as follows:

Level-1: Initial –
 No KPA’s defined.
 Processes followed are Adhoc and immature and are not well defined.
 Unstable environment for software development.
 No basis for predicting product quality, time for completion, etc.

Level-2: Repeatable –
 Focuses on establishing basic project management policies.
 Experience with earlier projects is used for managing new similar natured projects.
 Project Planning- It includes defining resources required, goals, constraints, etc. for the project. It
presents a detailed plan to be followed systematically for the successful completion of good quality
software.
 Configuration Management- The focus is on maintaining the performance of the software product,
including all its components, for the entire lifecycle.
 Requirements Management- It includes the management of customer reviews and feedback which
result in some changes in the requirement set. It also consists of accommodation of those modified
requirements.
 Subcontract Management- It focuses on the effective management of qualified software contractors
i.e. it manages the parts of the software which are developed by third parties.
 Software Quality Assurance- It guarantees a good quality software product by following certain
rules and quality standard guidelines while developing.
Level-3: Defined –
 At this level, documentation of the standard guidelines and procedures takes place.
 It is a well-defined integrated set of project-specific software engineering and management processes.
 Peer Reviews- In this method, defects are removed by using a number of review methods like
walkthroughs, inspections, buddy checks, etc.
 Intergroup Coordination- It consists of planned interactions between different development teams
to ensure efficient and proper fulfillment of customer needs.
 Organization Process Definition- Its key focus is on the development and maintenance of the
standard development processes.
 Organization Process Focus- It includes activities and practices that should be followed to improve
the process capabilities of an organization.
 Training Programs- It focuses on the enhancement of knowledge and skills of the team members
including the developers and ensuring an increase in work efficiency.

Level-4: Managed –
 At this stage, quantitative quality goals are set for the organization for software products as well as
software processes.
 The measurements made help the organization to predict the product and process quality within some
limits defined quantitatively.
 Software Quality Management- It includes the establishment of plans and strategies to develop
quantitative analysis and understanding of the product’s quality.
 Quantitative Management- It focuses on controlling the project performance in a quantitative
manner.

Level-5: Optimizing –
 This is the highest level of process maturity in CMM and focuses on continuous process improvement
in the organization using quantitative feedback.
 Use of new tools, techniques, and evaluation of software processes is done to prevent recurrence of
known defects.
 Process Change Management- Its focus is on the continuous improvement of the organization’s
software processes to improve productivity, quality, and cycle time for the software product.
 Technology Change Management- It consists of the identification and use of new technologies to
improve product quality and decrease product development time.
 Defect Prevention- It focuses on the identification of causes of defects and prevents them from
recurring in future projects by improving project-defined processes.

Personal Software Process (PSP)


The SEI CMM which is reference model for raising the maturity levels of software and predicts the most
expected outcome from the next project undertaken by the organizations does not tell software
developers about how to analyze, design, code, test and document the software products, but expects that
the developers use effectual practices. The Personal Software Process realized that the process of
individual use is completely different from that required by the team.
Personal Software Process (PSP) is the skeleton or the structure that assist the engineers in finding a way
to measure and improve the way of working to a great extend. It helps them in developing their
respective skills at a personal level and the way of doing planning, estimations against the plans.
Objectives of PSP :
The aim of PSP is to give software engineers with the regulated methods for the betterment of personal
software development processes.
The PSP helps software engineers to:
 Improve their approximating and planning skills.
 Make promises that can be fulfilled.
 Manage the standards of their projects.
 Reduce the number of faults and imperfections in their work.
Time measurement:
Personal Software Process recommend that the developers should structure the way to spend the
time.The developer must measure and count the time they spend on different activities during the
development.
PSP Planning :
The engineers should plan the project before developing because without planning a high effort may be
wasted on unimportant activities which may lead to a poor and unsatisfactory quality of the result.
Levels of Personal Software Process :
Personal Software Process (PSP) has four levels-
1. PSP 0 –
The first level of Personal Software Process, PSP 0 includes Personal measurement , basic size
measures, coding standards.
2. PSP 1 –
This level includes the planning of time and scheduling .
3. PSP 2 –
This level introduces the personal quality management ,design and code reviews.
4. PSP 3 –
The last level of the Personal Software Process is for the Personal process evolution.

Six Sigma in Software Engineering


Six Sigma is the process of producing high and improved quality output. This can be done in two phases
– identification and elimination. The cause of defects is identified and appropriate elimination is done
which reduces variation in whole processes. A six sigma method is one in which 99.99966% of all the
products to be produced have the same features and are of free from defects.
Characteristics of Six Sigma:
The Characteristics of Six Sigma are as follows:
1. Statistical Quality Control:
Six Sigma is derived from the Greek Letter ? which denote Standard Deviation in statistics. Standard
Deviation is used for measuring the quality of output.
2. Methodical Approach:
The Six Sigma is a systematic approach of application in DMAIC and DMADV which can be used to
improve the quality of production. DMAIC means for Design-Measure- Analyze-Improve-Control.
While DMADV stands for Design-Measure-Analyze-Design-Verify.
3. Fact and Data-Based Approach:
The statistical and methodical method shows the scientific basis of the technique.

4. Project and Objective-Based Focus:


The Six Sigma process is implemented to focus on the requirements and conditions.
5. Customer Focus:
The customer focus is fundamental to the Six Sigma approach. The quality improvement and control
standards are based on specific customer requirements.
6. Teamwork Approach to Quality Management:
The Six Sigma process requires organizations to get organized for improving quality.
Six Sigma Methodologies:
Two methodologies used in the Six Sigma projects are DMAIC and DMADV.
 DMAIC is used to enhance an existing business process. The DMAIC project methodology has five
phases:
1. Define
2. Measure
3. Analyze
4. Improve
5. Control
 DMADV is used to create new product designs or process designs. The DMADV project
methodology also has five phases:
1. Define
2. Measure
3. Analyze
4. Design
5. Verify

Measuring Software Quality using Quality Metrics

In Software Engineering, Software Measurement is done based on some Software Metrics where these
software metrics are referred to as the measure of various characteristics of a Software.
In Software engineering Software Quality Assurance (SAQ) assures the quality of the software. Set of
activities in SAQ are continuously applied throughout the software process. Software Quality is
measured based on some software quality metrics.
There is a number of metrics available based on which software quality is measured. But among them,
there are few most useful metrics which are most essential in software quality measurement. They are –
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let’s understand each quality metric in detail –
1. Code Quality – Code quality metrics measure the quality of the code used for the software project
development. Maintaining code quality by writing bug-free and semantically correct code is very
important for good software project development. Code quality covers both quantitative metrics, such as the
number of lines, complexity, number of functions and rate of bug generation, and qualitative metrics, such as
readability, code clarity, efficiency and maintainability.
2. Reliability – Reliability metrics express the reliability of the software under different conditions: whether the
software is able to provide the exact service at the right time. Reliability can be checked
using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR); a small computation sketch follows this list.
3. Performance – Performance metrics are used to measure the performance of the software. Each
software product is developed for some specific purpose. Performance metrics measure whether the software
is fulfilling the user requirements, by analyzing how much time and how many resources it uses to provide
the service.
4. Usability – Usability metrics check whether the program is user-friendly or not. Every software product is used
by an end user, so it is important to measure whether the end user is satisfied when using the software.
5. Correctness – Correctness is one of the important software quality metrics, as it checks whether the
system or software works correctly, without any error, and satisfies the user. Correctness gives the
degree to which each function provides its service as designed.
6. Maintainability – Each software product requires maintenance and upgrading. Maintenance is an
expensive and time-consuming process, so if the software product is easy to maintain we can say its
quality is up to the mark. Maintainability metrics include the time required to adapt to new
features/functionality, Mean Time To Change (MTTC), performance in changing environments, etc.
7. Integrity – Software integrity is important in terms of how easily the software integrates with other
required software, which increases its functionality, and how well integration with unauthorized software is
controlled, since such integration increases the chances of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of cyber terrorism,
security is the most essential part of every software product. Security assures that there are no unauthorized
changes and no fear of cyberattacks when the software product is in use by the end user.
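
As mentioned in the reliability item above, the sketch below shows how MTBF, MTTR, availability and a simple defect-density figure could be computed from raw measurements. It is only an illustration; the function names and the sample numbers are assumptions made for this example.

# Illustrative computation of a few of the quality metrics listed above.
# All sample numbers are assumptions used purely for demonstration.

def mtbf(total_uptime_hours: float, failures: int) -> float:
    """Mean Time Between Failures."""
    return total_uptime_hours / failures

def mttr(total_repair_hours: float, failures: int) -> float:
    """Mean Time To Repair."""
    return total_repair_hours / failures

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code (a code quality metric)."""
    return defects / kloc

if __name__ == "__main__":
    up_hours, repair_hours, failures = 720.0, 6.0, 4   # one month of operation
    m_tbf = mtbf(up_hours, failures)
    m_ttr = mttr(repair_hours, failures)
    print(f"MTBF = {m_tbf:.1f} h")
    print(f"MTTR = {m_ttr:.1f} h")
    print(f"Availability = {availability(m_tbf, m_ttr):.3%}")
    print(f"Defect density = {defect_density(defects=30, kloc=12.5):.2f} defects/KLOC")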

CASE tool and its scope

A CASE (Computer Aided Software Engineering) tool is a generic term used to denote any form of
automated support for software engineering. In a more restrictive sense, a CASE tool means any tool
used to automate some activity associated with software development.
Several CASE tools are available. Some of these CASE tools assist in phase-related
tasks like specification, structured analysis, design, coding, testing, etc.; others relate to non-phase activities
like project management and configuration management.
Reasons for using CASE tools:
The primary reasons for employing a CASE tool are:
 to increase productivity
 to help produce better quality software at a lower cost

CASE environment:
Although individual CASE tools are helpful, the true power of a tool set can be realized
only when this set of tools is integrated into a common framework or environment. CASE tools are
characterized by the stage or stages of the software development life cycle on which they focus. Since
different tools covering different stages share common information, it is required that they integrate through
some central repository to have a consistent view of the information related to the software development artifacts. This
central repository is usually a data dictionary containing the definitions of all composite and
elementary data items.
Through the central repository, all the CASE tools in a CASE environment share common information among
themselves. Therefore a CASE environment facilitates the automation of the step-wise methodologies for
software development. A schematic illustration of a CASE environment is shown in the diagram below.
Note: A CASE environment is different from a programming environment.
A CASE environment facilitates the automation of the step-wise methodologies for software
development. In contrast to a CASE environment, a programming environment is an integrated
collection of tools that supports only the coding phase of software development.

Computer Aided Software Engineering (CASE)

Computer Aided Software Engineering (CASE) is the implementation of computer-facilitated tools and
methods in software development. CASE is used to ensure high-quality, defect-free software.
CASE ensures a check-pointed and disciplined approach and helps designers, developers, testers,
managers and others to see the project milestones during development.
CASE can also help as a warehouse for documents related to projects, like business plans, requirements
and design specifications. One of the major advantages of using CASE is the delivery of the final
product, which is more likely to meet real-world requirements as it ensures that customers remain part of
the process.
CASE covers a wide set of labor-saving tools that are used in software development. It provides a
framework for organizing projects and helps in enhancing productivity. There was more interest
in the concept of CASE tools years ago than there is today, as the tools have morphed into different
functions, often in reaction to software developer needs. The concept of CASE also received a heavy
dose of criticism after its release.
CASE Tools:
The essential idea of CASE tools is that built-in programs can help to analyze developing systems in
order to enhance quality and provide better outcomes. Throughout the 1990s, CASE tools became part of
the software lexicon, and big companies like IBM were using these kinds of tools to help create
software.
Various tools are incorporated in CASE and are called CASE tools, which are used to support different
stages and milestones in a software development life cycle.
Types of CASE Tools:
1. Diagramming Tools:
It helps in diagrammatic and graphical representations of the data and system processes. It represents
system elements, control flow and data flow among different software components and system
structure in a pictorial form.
For example, Flow Chart Maker tool for making state-of-the-art flowcharts.

2. Computer Display and Report Generators:
It helps in understanding the data requirements and the relationships involved.

3. Analysis Tools:
It focuses on inconsistent or incorrect specifications involved in the diagram and data flow. It helps in
collecting requirements and automatically checks for any irregularity or imprecision in the diagrams, data
redundancies, or erroneous omissions.
For example,
 (i) Accept 360, Accompa, CaseComplete for requirement analysis.
 (ii) Visible Analyst for total analysis.
4. Central Repository:
It provides the single point of storage for data diagrams, reports and documents related to project
management.

5. Documentation Generators:
It helps in generating user and technical documentation as per standards. It creates documents for
technical users and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.

6. Code Generators:
It aids in the auto generation of code, including definitions, with the help of the designs, documents
and diagrams.
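
To make the idea of a code generator concrete, the toy sketch below turns a small design-level description of an entity into a Python class skeleton. The specification format and names are assumptions made only for this illustration; real CASE code generators work from much richer design models.

# Toy illustration of what a CASE code generator does: producing a
# code skeleton from a design-level specification. The spec format
# below is an assumption made only for this example.

ENTITY_SPEC = {
    "name": "Customer",
    "attributes": [("customer_id", "int"), ("name", "str"), ("email", "str")],
}

def generate_class(spec: dict) -> str:
    """Generate a Python class definition from a simple entity spec."""
    lines = [f"class {spec['name']}:"]
    params = ", ".join(f"{attr}: {typ}" for attr, typ in spec["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for attr, _ in spec["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_class(ENTITY_SPEC))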

Advantages of the CASE approach:

 As special emphasis is placed on redesign as well as testing, the servicing cost of a product over its
expected lifetime is considerably reduced.
 The overall quality of the product is improved as an organized approach is undertaken during the
process of development.

 Chances to meet real-world requirements are more likely and easier with a computer-aided software
engineering approach.
 CASE indirectly provides an organization with a competitive advantage by helping ensure the
development of high-quality products.

Disadvantages of the CASE approach:


 Cost: Using a CASE tool is very costly. Most firms engaged in software development on a small scale
do not invest in CASE tools because they think that the benefits of CASE are justifiable only in the
development of large systems.
 Learning Curve: In most cases, programmers' productivity may fall in the initial phase of
implementation, because users need time to learn the technology. Many consultants offer training and
on-site services that can be important for accelerating the learning curve and for the development and use
of the CASE tools.
 Tool Mix: It is important to build an appropriate tool mix to get the cost advantage. CASE
integration and data integration across all platforms are extremely important.

Software Maintenance

Software Maintenance is the process of modifying a software product after it has been delivered to the
customer. The main purpose of software maintenance is to modify and update software applications after
delivery to correct faults and to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
 Correct faults.
 Improve the design.
 Implement enhancements.
 Interface with other systems.
 Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
 Migrate legacy software.
 Retire software.
Challenges in Software Maintenance:
The various challenges in software maintenance are given below:
 The typical lifetime of a software product is considered to be ten to fifteen years. Since software
maintenance is open ended and may continue for decades, it becomes very expensive.
 Older software, which was intended to work on slow machines with less memory and storage
capacity, cannot hold its own against newly arriving, enhanced software running on
modern hardware.
 Changes are often left undocumented, which may cause more conflicts in the
future.
 As technology advances, it becomes costly to maintain old software.
 Often the changes made can easily harm the original structure of the software,
making it hard to make any subsequent changes.
Categories of Software Maintenance –
Maintenance can be divided into the following:

1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs observed
while the system is in use, or to enhance the performance of the system.

2. Adaptive maintenance:
This includes modifications and updations when the customers need the product to run on new
platforms, on new operating systems, or when they need the product to interface with new hardware
and software.

3. Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to change
different types of functionalities of the system according to the customer demands.

4. Preventive maintenance:
This type of maintenance includes modifications and updates to prevent future problems of the
software. It aims to attend to problems which are not significant at this moment but may cause serious
issues in the future.

Reverse Engineering –
Reverse Engineering is the process of extracting knowledge or design information from anything man-
made and reproducing it based on the extracted information. It is also called back engineering.
Software Reverse Engineering –
Software Reverse Engineering is the process of recovering the design and the requirements specification
of a product from an analysis of its code. Reverse engineering is becoming important, since several
existing software products lack proper documentation, are highly unstructured, or have a structure that has
degraded through a series of maintenance efforts.
Why Reverse Engineering?
 Providing proper system documentation.
 Recovery of lost information.
 Assisting with maintenance.
 Facility of software reuse.
 Discovering unexpected flaws or faults.
Uses of Software Reverse Engineering –
 Software Reverse Engineering is used in software design; reverse engineering enables the developer
or programmer to add new features to the existing software with or without knowing the source code.
 Reverse engineering is also useful in software testing; it helps the testers to study virus and other
malware code.
Reverse Engineering

Software Reverse Engineering is a process of recovering the design, requirement specifications and
functions of a product from an analysis of its code. It builds a program database and generates
information from this.
The purpose of reverse engineering is to facilitate the maintenance work by improving the
understandability of a system and to produce the necessary documents for a legacy system.
Reverse Engineering Goals:

 Cope with Complexity.


 Recover lost information.
 Detect side effects.
 Synthesise higher abstraction.
 Facilitate Reuse.

Steps of Software Reverse Engineering:

1. Collecting information:
This step focuses on collecting all possible information (i.e., source code, design documents, etc.) about the
software.

2. Examining the information:
The information collected in step 1 is studied so as to get familiar with the system.

3. Extracting the structure:
This step is concerned with identification of the program structure in the form of a structure chart, where each
node corresponds to some routine.

4. Recording the functionality:
During this step, the processing details of each module of the structure chart are recorded using a
structured language, decision tables, etc.

5. Recording data flow:
From the information extracted in steps 3 and 4, a set of data flow diagrams is derived to show the
flow of data among the processes.

6. Recording control flow:
The high-level control structure of the software is recorded.

7. Review extracted design:
The extracted design document is reviewed several times to ensure consistency and correctness. It also
ensures that the design represents the program.

8. Generate documentation:
Finally, in this step, the complete documentation, including the SRS, design document, history, overview,
etc., is recorded for future use.
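
As a very small, concrete illustration of step 3 ("Extracting the structure"), the sketch below uses Python's standard ast module to list the functions defined in a source file together with the names each one calls, giving a crude call-structure chart. Real reverse engineering tools do this far more thoroughly; the script is only an assumed example.

# Minimal illustration of "extracting the structure": list every function
# defined in a Python source file along with the names it calls.
import ast
import sys

def extract_structure(source: str) -> dict:
    """Map each function name to the list of names it calls."""
    tree = ast.parse(source)
    structure = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                child.func.id
                for child in ast.walk(node)
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name)
            ]
            structure[node.name] = calls
    return structure

if __name__ == "__main__":
    with open(sys.argv[1], "r", encoding="utf-8") as f:
        for func, calls in extract_structure(f.read()).items():
            print(f"{func} -> {', '.join(calls) or '(no calls)'}")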

Reverse Engineering Tools:


Reverse engineering, if done manually, would consume a lot of time and human labour and hence must be
supported by automated tools. Some of the tools are given below:
 CIAO and CIA: A graphical navigator for software and web repositories along with a collection of
Reverse Engineering tools.
 Rigi: A visual software understanding tool.
 Bunch: A software clustering/modularization tool.
 GEN++: An application generator to support development of analysis tools for the C++ language.
 PBS: Software Bookshelf tools for extracting and visualizing the architecture of programs.

Software Maintenance
Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to modify
and update a software application after delivery to correct errors and to improve performance. Software
is a model of the real world. When the real world changes, the software requires alteration wherever
possible.

Software Maintenance is an inclusive activity that includes error corrections, enhancement of


capabilities, deletion of obsolete capabilities, and optimization.

Need for Maintenance


Software Maintenance is needed for:-

o Correct errors
o Change in user requirement with time
o Changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify the components
o To reduce any unwanted side effects.

Thus the maintenance is required to ensure that the system continues to satisfy user requirements.

Types of Software Maintenance

1. Corrective Maintenance
Corrective maintenance aims to correct any remaining errors, regardless of whether they occur in the
specifications, design, coding, testing, or documentation.

2. Adaptive Maintenance
It contains modifying the software to match changes in the ever-changing environment.

3. Preventive Maintenance
It is the process by which we prevent our system from being obsolete. It involves the concept of
reengineering & reverse engineering in which an old system with old technology is re-engineered using
new technology. This maintenance prevents the system from dying out.

4. Perfective Maintenance
It means improving processing efficiency or performance, or restructuring the software to enhance
changeability. This may include enhancement of existing system functionality, improvement in
computational efficiency, etc.

Cost and efforts of software maintenance

Software Maintenance is a very broad activity that takes place once the software becomes operational. It optimizes
the software performance by reducing errors, eliminating useless lines of code and applying advanced
development techniques. It can take 1-2 years to build a software system, while its maintenance and
modification can be an ongoing activity for 15-20 years.
Categories of Software Maintenance:
1. Corrective Maintenance
2. Adaptive Maintenance
3. Perfective Maintenance
4. Preventive Maintenance
The cost of system maintenance represents a large proportion of the budget of most organizations that
use software systems. More than 65% of the software life cycle cost is expended on maintenance activities.
The cost of software maintenance can be controlled by postponing the development work needed for
maintenance, but this will cause the following intangible costs:
 Customer dissatisfaction when requests for repair or modification cannot be addressed in a timely
manner.
 Reduction in overall software quality as a result of changes that introduce hidden errors into the maintained
software.
Software maintenance cost factors:
The key factors that distinguish development and maintenance and which lead to higher maintenance
cost are divided into two subcategories:
1. Non-Technical factors
2. Technical factors
Non-Technical factors:
The Non-Technical factors include:
1. Application Domain
2. Staff stability
3. Program lifetime
4. Dependence on External Environment
5. Hardware stability
Technical factors:
Technical factors include the following:
1. module independence
2. Programming language
3. Programming style
4. Program validation and testing
5. Documentation
6. Configuration management techniques
Effort expended on maintenance may be divided into productive activities (for example, analysis and
evaluation, design modification, and coding). The following expression provides a model of
maintenance effort:
M = P + K^(C - D)
where,
M: Total effort expended on maintenance.
P: Productive effort.
K: An empirical constant.
C: A measure of complexity that can be attributed to a lack of good design and documentation.
D: A measure of the degree of familiarity with the software.
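
A small worked example of this effort model is sketched below. The numeric values of P, K, C and D are assumptions chosen only to show how a lack of familiarity with a complex, poorly documented system inflates maintenance effort.

# Worked illustration of the maintenance effort model M = P + K^(C - D).
# All numeric values are assumed for demonstration only.

def maintenance_effort(p: float, k: float, c: float, d: float) -> float:
    """Total maintenance effort M = P + K**(C - D)."""
    return p + k ** (c - d)

if __name__ == "__main__":
    # Same productive effort and complexity, different familiarity:
    familiar   = maintenance_effort(p=100.0, k=2.0, c=8.0, d=6.0)  # 100 + 2**2 = 104
    unfamiliar = maintenance_effort(p=100.0, k=2.0, c=8.0, d=1.0)  # 100 + 2**7 = 228
    print(f"Familiar, well documented system : {familiar:.0f} person-days")
    print(f"Unfamiliar, poorly documented    : {unfamiliar:.0f} person-days")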

Basic issues in any reuse program


The following are some of the basic issues that must be clearly understood for starting any reuse program.
• Component creation
• Component indexing and storing
• Component search
• Component understanding
• Component adaptation
• Repository maintenance
Component creation– For component creation, the reusable components have to be first identified. Selection of
the right kind of components having potential for reuse is important. Domain analysis is a promising technique
which can be used to create reusable components.
Component indexing and storing– Indexing requires classification of the reusable components so that they can
be easily searched when looking for a component for reuse. The components need to be stored in a Relational
Database Management System (RDBMS) or an Object-Oriented Database System (ODBMS) for efficient access
when the number of components becomes large.
Component searching– The programmers need to search for right components matching their requirements in a
database of components. To be able to search components efficiently, the programmers require a proper method to
describe the components that they are looking for.
Component understanding– The programmers need a precise and sufficiently complete understanding of what
the component does to be able to decide whether they can reuse the component. To facilitate understanding, the
components should be well documented and should do something simple.
Component adaptation– Often, the components may need adaptation before they can be reused, since a selected
component may not exactly fit the problem at hand. However, tinkering with the code is also not a satisfactory
solution because this is very likely to be a source of bugs.
Repository maintenance– A component repository, once created, requires continuous maintenance. New
components, as and when created, have to be entered into the repository. The faulty components have to be tracked.
Further, when new applications emerge, the older applications become obsolete. In this case, the obsolete
components might have to be removed from the repository.
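
The toy sketch below shows how component indexing, storing and keyword searching might be modelled in a very small in-memory repository. It is an illustrative assumption only; a real repository would use an RDBMS or ODBMS as noted above.

# Toy in-memory component repository illustrating the indexing, storing
# and searching issues discussed above. Names and structure are
# assumptions made only for this illustration.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    description: str
    keywords: set = field(default_factory=set)

class ComponentRepository:
    def __init__(self):
        self._components = {}   # storing: component name -> Component
        self._index = {}        # indexing: keyword -> set of component names

    def add(self, component):
        """Store a component and index it under each of its keywords."""
        self._components[component.name] = component
        for kw in component.keywords:
            self._index.setdefault(kw.lower(), set()).add(component.name)

    def search(self, keyword):
        """Return all components indexed under the given keyword."""
        names = self._index.get(keyword.lower(), set())
        return [self._components[n] for n in sorted(names)]

if __name__ == "__main__":
    repo = ComponentRepository()
    repo.add(Component("DateUtils", "Date parsing and formatting", {"date", "parsing"}))
    repo.add(Component("CsvReader", "CSV file reading", {"csv", "parsing"}))
    for comp in repo.search("parsing"):
        print(comp.name, "-", comp.description)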

Reuse Oriented Model


Reuse Oriented Model (ROM), also known as reuse-oriented development (ROD), is an approach to
software development in which the software is built by creating a sequence
of prototypes known as models, every system being derived from the previous one according to a defined
set of rules.
The reuse-oriented model is not always practical in its pure form, because an entire repertoire of
reusable components may not be available. In such cases, several new system components need to be
designed. If this is not done, ROM has to compromise on the perceived requirements, leading to a product that
does not meet the exact requirements of the user. This model depends on the perception that maintenance may
be viewed as an activity involving reuse of existing system components.
The reuse model has 4 fundamental steps which are followed :
1. To identify components of old system that are most suitable for reuse.
2. To understand all system components.
3. To modify old system components to achieve new requirements.
4. To integrate all of modified parts into new system.
A specific framework is required for categorization of the components and the modifications consequently
required. Unlike other models, the complete reuse model may begin from any phase of the life cycle –
requirements, planning, code, design, or data analysis.
Advantages :
 It can reduce total cost of software development.
 The risk factor is very low.
 It can save lots of time and effort.
 It is very efficient in nature.
Disadvantages :
 The reuse-oriented model does not always work as a practice in its true form.
 Compromises in requirements may lead to a system that does not fulfil the requirements of the user.
 Sometimes an old system component is not compatible with the new version of the component; this
may have an impact on system evolution.

Organisational considerations for software reuse


Reuse is widely promoted as one of the most promising methods for increasing
productivity and quality within software development. Until recently most research into
strategies for systematic reuse has focused on solution of the technical issues. Now as
companies (mostly IT focused) implement the strategies developed, they find there are
other issues which hold back their success, somewhat unrelated to the technical solutions
offered. Reuse processes are not simple technologies and methods slotted into a
development, like the transition in design notation from traditional approaches to an
object-;orientated method. Whereas technology changes involve retraining developers.
Reuse requires the whole organisation and funding of development to be revised. If the
magnitude of change involved in transitioning an IT organisation is so encompassing,
where does this leave the rest of industry which is increasingly reliant on software to
support their business process? This paper looks at organisational and management issues
raised by the introduction of software reuse to the development process. We identify
inhibitors of reuse adoption, look at causes of these and suggest possible solutions. We aim
to concisely present all those non-technical issues that should be considered when
introducing a reuse program. Considered also is how these issues affect companies which
have IT in only a business support capacity, making this paper relevant throughout
industry.
