Quality Management
Sub. Code 743
Developed By
Urvish Acharya
Printed and Published on behalf of Prin. L.N. Welingkar Institute of Management Development & Research
L.N. Road, Matunga (CR), Mumbai - 400 019.
ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means – graphic,
electronic or mechanical, including photocopying, recording, taping, web distribution or information storage and retrieval systems – without the written
permission of the publisher.
Mr. Urvish Acharya holds an MBA (Finance) and a BE (Computers) from Mumbai University and Pune University respectively. He also holds internationally recognized certifications such as CISSP, CISA, and DCPLA.
Mr. Acharya has 15 years of industry exposure, having worked with various companies in the fields of manufacturing, defence, pharma, and banking. His experience spans core technical knowledge in network and security, risk-based auditing, and driving governance practices for large conglomerates. Mr. Acharya is also associated with an online learning platform as a Grade A trainer.
Currently he heads the IT Risk and Governance practice for a global company with a presence in 13 countries.
CONTENTS
1. Introduction to Software Engineering
2. System Modeling
3. Architecture Designing
4. Security Engineering
6. Software Evolution
7. Software Management
Chapter 1
INTRODUCTION TO SOFTWARE ENGINEERING
Objectives:
After completing this chapter you will understand:
• Software engineering and its importance
• Types of Software Systems and engineering techniques
• Ethical and professional issues important for Software Engineers
• Software engineering Myths
Structure:
1.1 Introduction – Context setting
1.2 Software development – A career choice
1.3 Myths and Mistakes in Software Development
1.4 Software failures and its impact
1.5 Software Engineering - Ethics
1.6 Summary
1.7 Self-Assessment Questions
1.8 Multiple Choice Questions
Let us ask ourselves a question: can we run modern infrastructure without software? Today, almost every technology or solution we rely on, from online meetings to children's schooling, runs on software in some form.
While software engineers can be proud of their achievements, one thing to keep in mind, and I would emphasise it, is that without software engineering we would not have explored space, built the Internet and telecommunications, or solved many other complicated problems. Software engineering has made an immense contribution, and in the 21st century and beyond this contribution will only grow, and at a greater pace.
Every second person in the IT field writes some piece of code, in one way or another, to make his or her life easier. For instance, many of us use spreadsheet formulas or write macros to produce a desired output, and some of us write such programs purely for interest and enjoyment. However, the vast majority of software development is a professional activity in which software is developed to solve a specific business problem; Microsoft products, Oracle Database, and CAD tools are a few such examples. Professional software is generally intended to be used by people other than its developers, is built by teams rather than by one individual, and is changed and updated throughout its lifetime. Anyone making a career choice in this field has to clearly understand that software and software engineering are two different things. While software may be a set of lines of code that carries out a desired function, software engineering covers not only the programs but also all the associated documentation: documentation describing the structure of the system, user documentation explaining how to use the system, and so on.
A key difference between professional and amateur software development is documentation. If you are writing software only for yourself, no one else will use it and you may not need to worry about a program guide, documented program design, and so on. However,
when software is written for others to use, and when it is expected to be changed by other engineers, you must provide additional information beyond the program code itself. In general, software engineers adopt a systematic and organized approach to their work, as this is often the most effective way to produce high-quality software. However, engineering is all about selecting the most appropriate method for a set of circumstances, so a more creative, less formal approach to development may be effective in some circumstances. Less formal development is particularly appropriate for the development of web-based systems, which require a blend of software and graphical design skills.
The differences between traditional engineering disciplines and software engineering can be articulated as follows:
• Traditional engineers construct real artifacts; software engineers construct abstract artifacts.
• The foundations of traditional engineering disciplines are mature physical sciences and continuous mathematics; the foundations of software engineering are the more immature fields of abstract computer science and discrete mathematics.
• In physical engineering the two main concerns for a product are the cost of production and reliability, measured by time to failure; in software engineering the two main concerns are the cost of development and reliability, measured by the number of errors per thousand lines of source code.
• Both disciplines require maintenance, but in different ways; the cost of maintenance in software engineering can be much higher than the original cost of producing the product.
One has often heard the phrase "Change is the only constant". Nowhere is it more
applicable than in the software development world.
Software change is inevitable. The change per se is not the difficult part; a key problem for organizations is implementing and managing change to their existing software systems. Even today, human and animal sacrifices take place in parts of the world because of false myths and beliefs about prosperity, health, or success. Similarly, software myths propagate misinformation and confusion, causing problems for managers and practitioners alike. In order to address the challenge of software change, it is important to understand and overcome some of the myths that abound in the field of software development. Dr. Roger Pressman highlights these myths quite aptly in his book "Software Engineering: A Practitioner's Approach".
Management Myths
• "We already have a book of standards and procedures for building software. It
provides my people with everything the programmers need to know". If books and
documents were sufficient to imbibe knowledge and skill, projects would never fail.
Let alone practitioners even design analysts, project leaders or project managers
8 Copyright © Welingkar
Introduction to Software Engineering
hardly read any books or standards and apply them. Tools introduced for projects are
rarely properly understood and remain ineffective.
• "If a project is behind schedule, one always can add more programmers to it and
catch up."
• Software development is not a civil engineering project where adding labour might
get a bridge completed earlier than scheduled. According to Fred Brooks "adding
people to a late software project makes it later" When new people are added, people
who were working must spend time educating the newcomers, thereby reducing the
amount of time spent on productive development effort. Delays and failures are not
necessarily due to shortage of labour.
Customer Myths
• "A general statement of objectives is sufficient to begin writing programs. Details can
be added later." This is one of the major causes of software project failures and
delays. In today's environment hardware and memory sizes are large and associated
costs are relatively cheaper. Ask any programmer the number of times he recompiles
a program and the reasons for recompiling his programs. The underlying reason
would be that the program writing had begun much before a formal and detailed
description of the domain, function, behavior, or validation criteria were understood or
available. Changes keep occurring as the programming progresses with neither
customer nor the providers realizing the importance of a reasonably detailed
requirement document.
Practitioner's myths
• "Let's start coding ASAP, because once we write the program and get it to work, our
job is done”
• Until I get the program running, I have no way of assessing its quality".
• "Software engineering involves a load of paper only to slow us down"
These are common beliefs among fresh programmers, and they are obviously false. Playing the piano is not just tapping the keys; one must know the music and appreciate each sound that emanates from the piano. The keyboard on a PC may be a great tool for typing, but unless a programmer "hears the music" the quality of his work will be in short supply.
There are also strong beliefs that software engineering is all about writing loads of documentation, while many believe software engineering is just about writing running code. It is neither of these alone. Software engineering is about understanding business problems, inventing solutions, evaluating alternatives, and making design trade-offs and choices. One must document the process to know what alternatives were considered and why particular choices were made. In addition to code and documentation, software engineering is about delivering value to the customer; both the code and the documentation are valuable in that light.
Historically, it has been found that as many as 85% of software projects "fail". The reasons are several and are represented in Table 1.1 below.
There are still many reports of software projects going wrong and ‘software failures’.
Software engineering is criticized as inadequate for modern software development.
However, in my view, many of these so-called software failures are a consequence of two
factors:
1. Increasing demands: As new software engineering techniques help us to build larger,
more complex systems, the demands change. Systems have to be built and delivered
more quickly; larger, even more complex systems are required; systems have to have
new capabilities that were previously thought to be impossible. Existing software
engineering methods cannot cope and new software engineering techniques have to be
developed to meet these new demands.
2. Low expectations: It is relatively easy to write computer programs without using
software engineering methods and techniques. Many companies have drifted into
software development as their products and services have evolved. They do not use
software engineering methods in their everyday work. Consequently, their software is
often more expensive and less reliable than it should be. We need better software
engineering education and training to address this problem.
Like other engineering disciplines, software engineering is carried out within a social and
legal framework that limits the freedom of people working in that area. As a software
engineer, you must accept that your job involves wider responsibilities than simply the
application of technical skills. You must also behave in an ethical and morally responsible
way if you are to be respected as a professional engineer.
It goes without saying that you should uphold normal standards of honesty and integrity.
You should not use your skills and abilities to behave in a dishonest way or in a way that
will bring disrepute to the software engineering profession. However, there are areas
where standards of acceptable behavior are not bound by laws but by the more tenuous
notion of professional responsibility. Some of these are:
1. Confidentiality: You should normally respect the confidentiality of your employers or
clients irrespective of whether or not a formal confidentiality agreement has been signed.
2. Competence: You should not misrepresent your level of competence. You should not
knowingly accept work that is outside your competence.
3. Intellectual property rights: You should be aware of local laws governing the use of
intellectual property such as patents and copyright. You should be careful to ensure that
the intellectual property of employers and clients is protected.
4. Computer misuse: You should not use your technical skills to misuse other people’s
computers. Computer misuse ranges from relatively trivial (game playing on an
employer’s machine, say) to extremely serious (dissemination of viruses or other
malware).
In any situation where different people have different views and objectives you are likely
to be faced with ethical dilemmas. For example, if you disagree, in principle, with the
policies of more senior management in the company, how should you react? Clearly, this
depends on the particular individuals and the nature of the disagreement.
Is it best to argue a case for your position from within the organization, or to resign on
principle? If you feel that there are problems with a software project, when do you reveal
these to management? If you discuss these while they are just a suspicion, you may be
overreacting to a situation; if you leave it too late, it may be impossible to resolve the
difficulties. Such ethical dilemmas face all of us in our professional lives and, fortunately,
in most cases they are either relatively minor or can be resolved without too much
difficulty.
Where they cannot be resolved, the engineer is faced with, perhaps, another problem.
The principled action may be to resign from their job but this may well affect others such
as their partner or their children.
A particularly difficult situation for professional engineers arises when their employer acts
in an unethical way. Say a company is responsible for developing a safety-critical system
and, because of time pressure, falsifies the safety validation records. Is the engineer’s
responsibility to maintain confidentiality or to alert the customer or publicize, in some way,
that the delivered system may be unsafe?
The problem here is that there are no absolutes when it comes to safety. Although the
system may not have been validated according to predefined criteria, these criteria may
be too strict. The system may actually operate safely throughout its lifetime. It is also the
case that, even when properly validated, the system may fail and cause an accident.
Early disclosure of problems may result in damage to the employer and other employees;
failure to disclose problems may result in damage to others. You must make up your own
mind in these matters. The appropriate ethical position here depends entirely on the
views of the individuals who are involved. In this case, the potential for damage, the
extent of the damage, and the people affected by the damage should influence the
decision. If the situation is very dangerous, it may be justified to publicize it using the
national press (say). However, you should always try to resolve the situation while
respecting the rights of your employer. Another ethical issue is participation in the
development of military and nuclear systems. Some people feel strongly about these
issues and do not wish to participate in any systems development associated with military
systems. Others will work on military systems but not on weapons systems. Yet others
feel that national security is an overriding principle and have no ethical objections to
working on weapons systems.
In this situation, it is important that both employers and employees should make their
views known to each other in advance. Where an organization is involved in military or
nuclear work, they should be able to specify that employees must be willing to accept any
work assignment. Equally, if an employee is taken on and makes clear that they do not
wish to work on such systems, employers should not put pressure on them to do so at
some later date. The general area of ethics and professional responsibility is becoming
more important as software-intensive systems pervade every aspect of work and
everyday life.
1.6 SUMMARY
Chapter 2
SYSTEM MODELING
Objectives:
After completing this chapter you will understand:
• How graphical models can be used to represent software systems
• Types of models like context, interaction, behaviour
• Ideas underlying model-driven engineering, in which a system is generated automatically from its structural and behavioural models
Structure:
2.1 Introduction – Context setting
2.2 Context Model
2.3 Interaction Model
2.4 Structural Model
2.5 Behavioural Model
2.6 Event Driven Model
2.7 Summary
2.8 Self-Assessment Questions
2.9 Multiple Choice Questions
System modeling is the process of developing abstract models of a system, with each
model presenting a different view or perspective of that system. System modeling has
generally come to mean representing the system using some kind of graphical notation,
which is now almost always based on notations in the Unified Modeling Language (UML).
However, it is also possible to develop formal (mathematical) models of a system, usually
as a detailed system specification.
Models are used during the requirements engineering process to help derive the
requirements for a system, during the design process to describe the system to
engineers implementing the system and after implementation to document the system’s
structure and operation.
One may develop different models to represent the system from different perspectives.
For example:
1. An external perspective, where you model the context or environment of the system.
2. An interaction perspective where you model the interactions between a system and
its environment or between the components of a system.
3. A structural perspective, where you model the organization of a system or the
structure of the data that is processed by the system.
4. A behavioural perspective, where you model the dynamic behaviour of the system
and how it responds to events.
2.2 CONTEXT MODEL
In the early stages of a system project or development, one has to decide and agree on the system boundaries, that is, a clear understanding of the in-scope and out-of-scope deliverables. This involves working with system stakeholders to decide what functionality should be included in the system and what is provided by the system’s environment.
One may decide that automated support for some business processes should be implemented, but that others should be manual processes or supported by different systems. You should look at possible overlaps in functionality with existing systems and decide where new functionality should be implemented. These decisions should be made early in the process to limit the system costs and the time needed for understanding the system requirements and design. For instance, if an organization is thinking of collecting customer data from its website, then what level of customer data is to be captured and how it will be protected should be decided as part of this context setting. It is also essential to set the system boundary in the early stages, e.g. clarity on which activities are to be carried out on-site and which remotely. Refer to Fig. 2.1, which depicts a context model for the ATM system of a bank.
Notice in the diagram that context models normally show the environment as including several other automated systems. However, they do not show the types of relationships between
the systems in the environment and the system that is being specified. External systems
might produce data for or consume data from the system. They might share data with the
system, or they might be connected directly, through a network or not connected at all.
They might be physically co-located or located in separate buildings. All of these relations
may affect the requirements and design of the system being defined and must be taken
into account.
Therefore, simple context models are used along with other models, such as business
process models. These describe human and automated processes in which particular
software systems are used.
2.3 INTERACTION MODEL
For any system to work efficiently, some form of interaction is involved. This interaction could be as simple as a user providing inputs or approving certain workflows, or as complex as several modules within the system interacting with each other; it could also involve interaction with other systems. Two kinds of interaction model are commonly used:
1) Use case modeling: Mostly used to model interactions between a system and
external actors (users or other systems).
2) Sequence diagrams: Used to model interactions between system components,
although external agents may also be included.
Sequence Diagram:
Refer to Fig. 2.4, in which each task is done in an exact sequence. The interaction sequence is depicted by numbers in the diagram for better understanding. As explained, the sequence cannot be skipped or altered if the desired output is to be obtained.
2.4 STRUCTURAL MODEL
The objective of a structural model is to discover the key data contained in the problem domain and to
build a structural model of objects. Structural models of software display the organization
of a system in terms of the components that make up that system and their relationships.
Structural models may be static models, which show the structure of the system design
or dynamic models, which show the organization of the system when it is executing.
These are not the same things—the dynamic organization of a system as a set of
interacting threads may be very different from a static model of the system components.
You create structural models of a system when you are discussing and designing the
system architecture. Architectural design is a particularly important topic in software
engineering and UML component, package, and deployment diagrams may all be used
when presenting architectural models. There are two major elements of this:
• Class Diagram:
Class diagrams are used when developing an object-oriented system model to show
the classes in a system and the associations between these classes. Loosely, an
object class can be thought of as a general definition of one kind of system object. An
association is a link between classes that indicates that there is a relationship
between these classes. Consequently, each class may have to have some
knowledge of its associated class.
When you are developing models during the early stages of the software engineering
process, objects represent something in the real world, such as a patient, a
prescription, a doctor, etc. As an implementation is developed, you usually need to
define additional implementation objects that are used to provide the required system
functionality.
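As a minimal sketch of this idea (the class names come from the example above, but the attributes and the add_prescription method are assumptions made purely for illustration), an association between two classes can be expressed in code as one class holding a collection of its associated class:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Prescription:
    """One kind of system object: a prescription issued to a patient."""
    medicine: str
    dose: str


@dataclass
class Patient:
    """A Patient is associated with zero or more Prescriptions."""
    name: str
    prescriptions: List[Prescription] = field(default_factory=list)

    def add_prescription(self, prescription: Prescription) -> None:
        # The association means Patient needs some knowledge of Prescription.
        self.prescriptions.append(prescription)


p = Patient("A. Kumar")
p.add_prescription(Prescription("Paracetamol", "500 mg twice daily"))
```

In a class diagram the same relationship would be drawn as an association line between Patient and Prescription, with a multiplicity showing that one patient may have many prescriptions.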
• Generalization:
2.5 BEHAVIOURAL MODEL
Behavioural models are models of the dynamic behaviour of the system as it is executing. They show what happens, or what is supposed to happen, when a system responds to a stimulus from its environment. You can think of these stimuli as being of two types: data that becomes available and has to be processed by the system, and events that trigger system processing.
Many business systems are data processing systems that are primarily driven by data.
They are controlled by the data input to the system with relatively little external event
processing. Their processing involves a sequence of actions on that data and the
generation of an output. For example, a phone billing system will accept information
about calls made by a customer, calculate the costs of these calls, and generate a bill to
be sent to that customer. By contrast, real-time systems are often event driven with
minimal data processing. For example, a landline phone switching system responds to
events such as ‘receiver off hook’ by generating a dial tone, or the pressing of keys on a
handset by capturing the phone number, etc.
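The phone-billing case can be sketched as a purely data-driven computation: the output (a bill amount) is a function of the input data (call records), with no external event handling. The record format and the flat per-minute rate below are assumptions made only for illustration:

```python
def calculate_bill(call_records, rate_per_minute=1.5):
    """Data-driven processing: input data -> sequence of actions -> output."""
    total = 0.0
    for number_dialled, minutes in call_records:
        total += minutes * rate_per_minute
    return total


# Each record: (number dialled, duration in minutes).
calls = [("022-12345678", 4), ("020-98765432", 11)]
print(f"Amount due: {calculate_bill(calls):.2f}")
```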
Such interaction with the system’s data can happen in five ways. This is often referred to as CRUDE analysis (a minimal code sketch follows the list below):
• Create
• Read
• Update
• Delete
• Execute
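As a rough sketch of the five CRUDE interactions (the in-memory record store and its method names are assumptions made only for illustration), they can be thought of as the operations a system exposes on its data:

```python
class RecordStore:
    """Illustrative CRUDE interface over an in-memory dictionary."""

    def __init__(self):
        self._records = {}

    def create(self, key, value):          # Create
        self._records[key] = value

    def read(self, key):                   # Read
        return self._records.get(key)

    def update(self, key, value):          # Update
        if key in self._records:
            self._records[key] = value

    def delete(self, key):                 # Delete
        self._records.pop(key, None)

    def execute(self, key, action):        # Execute an action on a record
        return action(self._records[key])
```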
2.6 EVENT DRIVEN MODEL
Event-driven modeling shows how a system responds to external and internal events. It
is based on the assumption that a system has a finite number of states and that events
(stimuli) may cause a transition from one state to another. For example, a system
controlling a valve may move from a state ‘Valve open’ to a state ‘Valve closed’ when an
operator command (the stimulus) is received. This view of a system is particularly
appropriate for real-time systems.
Consider, as an example, the control software for a very simple microwave oven to illustrate event-driven modeling. Real microwave ovens are actually much more complex than this system, but the simplified system is easier to understand. This simple microwave has a switch to select full or half power, a numeric keypad to input the cooking time, a start/stop button, and an alphanumeric display.
I have assumed that the sequence of actions in using the microwave is: select the power level (half or full power), input the cooking time using the keypad, and press Start, after which the food is cooked for the given time.
For safety reasons, the oven should not operate when the door is open and, on
completion of cooking, a buzzer is sounded. The oven has a very simple alphanumeric
display that is used to display various alerts and warning messages. In UML state
diagrams, rounded rectangles represent system states. They may include a brief
description (following ‘do’) of the actions taken in that state. The labeled arrows
represent stimuli that force a transition from one state to another. You can indicate start
and end states using filled circles, as in activity diagrams.
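A highly simplified sketch of this event-driven behaviour is given below. The states and events are drawn from the description above, but the transition table itself is an assumption made only to illustrate state/transition thinking, not the full state diagram of the oven:

```python
# (current state, event) -> next state, for the simplified microwave.
TRANSITIONS = {
    ("waiting", "full_power"):    "full_power_set",
    ("waiting", "half_power"):    "half_power_set",
    ("full_power_set", "timer"):  "timer_set",
    ("half_power_set", "timer"):  "timer_set",
    ("timer_set", "door_open"):   "door_open",    # cooking must not start here
    ("door_open", "door_closed"): "timer_set",
    ("timer_set", "start"):       "cooking",
    ("cooking", "timeout"):       "done",         # buzzer sounds in 'done'
}


def next_state(state, event):
    """Return the new state; ignore events that are not valid in this state."""
    return TRANSITIONS.get((state, event), state)


state = "waiting"
for event in ["full_power", "timer", "start", "timeout"]:
    state = next_state(state, event)
print(state)  # -> done
```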
GUI Applications:
Consider a graphical user interface (GUI) application like a text editor. The user interacts
with various elements such as buttons, menus, and text fields. Each user action (like
clicking a button or typing in a text field) triggers an event. The application responds to
these events by executing corresponding functions or methods. For instance, clicking the
"Save" button may trigger an event to save the current document, while typing in a text
field may trigger an event to update the content being displayed.
In this example, events can include mouse clicks, keyboard inputs, window resizing, etc.
The application's behavior is driven by these events, and it transitions between different
states based on user interactions.
In both examples, the systems respond to external and internal events by transitioning
between different states and executing appropriate actions or functions. The event-driven
model allows for asynchronous and reactive behavior, enabling systems to respond
dynamically to changing conditions or user inputs.
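The GUI style described above can be sketched with a small, framework-independent event dispatcher. The event names and handlers below are assumptions made for illustration; a real GUI toolkit provides this wiring for you:

```python
class EventDispatcher:
    """Registers handlers and calls them when an event is raised."""

    def __init__(self):
        self._handlers = {}

    def on(self, event_name, handler):
        self._handlers.setdefault(event_name, []).append(handler)

    def raise_event(self, event_name, **data):
        for handler in self._handlers.get(event_name, []):
            handler(**data)


editor = EventDispatcher()
editor.on("save_clicked", lambda filename: print(f"Saving {filename}"))
editor.on("key_typed", lambda char: print(f"Appending '{char}' to the buffer"))

# Simulated user interactions driving the application's behaviour.
editor.raise_event("key_typed", char="a")
editor.raise_event("save_clicked", filename="notes.txt")
```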
2.7 SUMMARY
System modeling is the process of developing abstract models of a system, with each model presenting a different view or perspective of that system. System modeling has now come to mean representing a system using some kind of graphical notation, which is now almost always based on notations in the Unified Modeling Language (UML).
System modeling helps the analyst to understand the functionality of the system, and models are used to communicate with customers. Models of the existing system are used during requirements engineering; they help clarify what the existing system does and can be used as a basis for discussing its strengths and weaknesses, which then leads to the requirements for the new system. Models of the new system are used during requirements engineering to help explain the proposed requirements to other system stakeholders. Several types of models have been discussed in this chapter, and each of them is useful in its own context.
2.8 SELF-ASSESSMENT QUESTIONS
1. Explain why it is important to model the context of a system that is being developed.
Give two examples of possible errors that could arise if software engineers do not
understand the system context.
2. You have been asked to plan a large-scale event, such as a wedding or a birthday celebration, with the help of software. Use a sequence model and explain the interactions.
3. Explain the behavioural model.
4. List the differences between the event-driven model and the behavioural model.
2.9 MULTIPLE CHOICE QUESTIONS
2. Use case modeling and sequence diagrams are part of the interaction model?
a) True
b) False
3. What are the components of a structural model? (Select all that apply)
a) Use Case
b) Generalization
c) UML
d) Class Diagram
Chapter 3
ARCHITECTURE DESIGNING
Objectives:
After completing this chapter you will understand:
• Why the architectural design of software is important
• The decisions that have to be made about the system architecture during the
architectural design process
• Ways of designing a system architecture
Structure:
3.1 Architectural design decisions
3.2 Design guidelines
3.3 Design for development
3.4 Summary
3.5 Self-Assessment Questions
3.6 Multiple Choice Questions
Just as architecture is fundamental to any building in civil engineering, creating the base and giving visibility of the whole, in the software field too architectural design is a creative process that helps you design a system organization satisfying the functional and non-functional requirements of the system. Because it is a creative process, the activities within
the process depend on the type of system being developed, the background and
experience of the system architect, and the specific requirements for the system. It is
therefore useful to think of architectural design as a series of decisions to be made rather
than a sequence of activities. During the architectural design process, system architects
have to make a number of structural decisions that profoundly affect the system and its
development process. Based on their knowledge and experience, they have to consider
the following fundamental questions about the system:
1. Is there a generic application architecture that can act as a template for the system
that is being designed?
2. How will the system be distributed across a number of cores or processors?
Although each software system is unique, systems in the same application domain often
have similar architectures that reflect the fundamental concepts of the domain. For
example, application product lines are applications that are built around a core
architecture with variants that satisfy specific customer requirements. When designing a
system architecture, you have to decide what your system and broader application
classes have in common, and decide how much knowledge from these application
architectures you can reuse.
In the software engineering context, design focuses on four major areas of concern: data,
architecture, interfaces and components. Each of the elements of the analysis model that
were covered in the previous chapter provides information that is necessary to create the
design models required for a complete specification of design. Information obtained from
the class based models, flow models, and behavioral models serve as the basis for
component design. The classes and relationships defined by CRC index cards and the
detailed data content depicted by class attributes provide the basis for the data design
activity. The interface design describes how the software communicates with systems
that interoperate with it, and with humans who use it. Usage scenarios and behavioral
models provide much of the information required for interface design.
The design activity begins with the set of requirements identified, reviewed and approved
in the requirements gathering and analysis phases. For each requirement, a set of one or
more design elements will be produced as a result of interviews, workshops, and/or
prototype efforts. Design elements describe the desired software features in detail, and
include functional hierarchy diagrams, screen layout diagrams, business rules, business
process diagrams, pseudo-code, and entity-relationship diagram with a full data
dictionary. These design elements describe the software in sufficient detail so that skilled
programmers may develop the software with minimal additional input. Design is
the "technical kernel" of software engineering and is applied regardless of the process
model used.
The main difference between the analysis and design phases is that the output of "analysis" consists of smaller problems to solve; the "analysis" does not differ much even if it is carried out by different team members or groups.
"Design" focuses on capabilities, and there can be multiple designs for the same problem depending on the environment in which the solution will be hosted: operating systems, web pages, mobile, or cloud-computing-based systems. Sometimes the design depends on the environment it was developed for.
It must be emphasized here that design is not coding and coding is not design. The level
of abstraction of the design model is higher than source code. Software design being an
iterative process, initially the design will be at a high level of abstraction. As design
iterations occur, subsequent refinement leads to design representations at much lower
levels of abstraction.
- Design Quality
The importance of software design can be stated with a single word - quality. The design
process is very important. From a practical standpoint, as a builder, one would not
attempt to build a house without an approved blueprint, thereby risking the structural
integrity and customer satisfaction.
Design is the place where quality is embedded in software engineering. This phase is where the customers' requirements are translated into the intended software application or system, and it serves as the foundation for all the software engineering and software support activities downstream. Without a proper design there is a high risk of developing unstable systems, with delays and increased cost following. The quality of the evolving design is assessed through a series of formal technical reviews, guided by certain criteria that one should be mindful of.
- Design Characteristics
The definition of a "good software design" varies depending on the application being designed. For example, when designing embedded applications for a spaceship, the memory size used by a program is an important characteristic. Memory size is critical for embedded applications, and in this case one has to balance the weight of the chips, space, power consumption, cost, and other constraints; design comprehensibility may decrease in order to achieve code compactness. Hence the criteria used to judge a given design solution vary depending on the application. Some desirable characteristics that every good software design for a general application must possess are listed below:
• Correctness: A good design should correctly implement all the functionalities
identified in the SRS document. A design has to be correct to be acceptable. The
design must implement all of the explicit requirements contained in the analysis
model, and it must accommodate all of the implicit requirements desired by the
customer.
• Understandability: A good design is easily understandable. A design that is easy to
understand is also easy to develop, maintain and change. The design must be a
readable, understandable guide for those who generate code and for those who test
and subsequently support the software.
• Efficiency: Design should be efficient. The software should perform its tasks within a
user-acceptable time and not consume too much memory.
• Maintainability: Because of increases in the size and complexity of software
products, software maintenance tasks have become increasingly more difficult.
Maintenance includes enhancing existing functions, modifying for hardware
upgrades, and correcting code errors. Software maintenance cannot be a design
afterthought; it should be possible for software maintainers to enhance the product
without tearing down and rebuilding the majority of code.
• Robustness (reliability): The software must be able to perform a required function
under stated conditions for a specified period of time. The software must be able to
operate under stress or tolerate unpredictable or invalid input.
• Reusability: Design features of a software element (or collection of software
elements) must enhance its suitability for reuse. Reusability is the use of existing
assets in some form within the software product development process. Re-use could
be of code, sub-routines, functions, modules, test suites, designs and documentation.
The ability to reuse relies on the ability to build larger things from smaller parts, and
being able to identify commonalities among those parts.
• Compatibility: The software must be able to operate with the other products it is designed to interoperate with. For example, MS Office 2010 is backward-compatible with older versions of itself.
• Flexibility: New capabilities can be added to the software without major changes to
the underlying architecture. The software design must allow addition of further
features and modification with slight or no modification.
• Modularity: The resulting software must comprise well-defined, independent components, which leads to better maintainability. The components can then be
implemented and tested in isolation before being integrated to form a desired
software system.
• Security: The software must be able to withstand external threats and influences that could affect the organization’s business interests.
• Portability: In an age of ubiquitous computing and fast-moving technology, implementing software in multiple environments over its total lifetime is an emerging need. As technology changes, the design should allow the same software to be used in different environments. The prerequisite for portability is a generalized abstraction between the application logic and the system interfaces. When software with the same functionality is produced for several computing platforms, portability is the key issue for development cost reduction.
• Scalability: As the usage of any application grows, there is an increase in the number of users and in the volume of data. A good design should be able to scale to meet the increasing data or number of users.
There are several significant design concepts which provide a foundation on which more
sophisticated design methods can be applied.
• Abstraction - Procedural and Data
• Modularity - Make it intellectually manageable
• Architecture - Overall structure of the software
• Patterns - Proven solution to a known recurring problem
• Refactoring - A reorganization technique that simplifies the design
• Functional independence - Single-minded function and low coupling
• Information Hiding - Constrain access to data and procedures
• Refinement - Top down design strategy. Complementary to abstraction
➢ Abstraction
Abstraction in simple terms means concentrating on the essentials and ignoring the
details. It is a process of generalization by reducing the information content of a concept
or an observable phenomenon in order to retain only information which is relevant for a
particular purpose. There are multiple levels of abstraction - at the highest level of
abstraction, a solution is stated in broad terms using the language of the problem
environment; at lower levels of abstraction, a more detailed description of the solution is
provided. ERP applications are large complex systems and can be made understandable
by decomposing them into modules. When viewed from the outside, each module should
be simple, with the complexity hidden inside. Abstraction maxim: Simplicity is good;
complexity is bad. There are two kinds of abstraction:
1) Procedural abstraction
2) Data abstraction.
Procedural abstraction is the separation of the logical properties of an action from the
details of how the action is implemented. It refers to a sequence of instructions that have
a specific and limited function. An example would be the word "enter" for a door. The procedural abstraction implies the instructions and functions, but the specific details are suppressed; it is implemented with "knowledge" of the object associated with entering, e.g. walk to the door, reach out, grasp the knob, turn the knob, pull the door open, and so on. The door may be opened by a switch, a card, manually, or automatically by sensors; these details are not relevant for "entering". The door being opened could be the door of a castle, a fridge, a house, or a cupboard.
Data abstraction is the separation of the logical properties of data from the details of
how the data are represented. A data abstraction is a named collection of data that
describes a data object. In the context of the procedural abstraction "open", one can
define a "data" abstraction called door. Data abstraction for door would encompass a set
of attributes that describe the door e.g. manufacturer, door type, door material, swing
direction, weight, lights coming on, pull or push, mechanisms, etc. In data abstraction, the
focus is on the problem's data rather than the tasks to be carried out.
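The door example can be sketched in code: the procedural abstraction enter() exposes what is done while hiding how this particular door opens, and the data abstraction Door names the attributes that describe a door. The specific attribute and function names here are assumptions made only for illustration:

```python
from dataclasses import dataclass


@dataclass
class Door:
    """Data abstraction: the logical properties of a door, not its mechanics."""
    manufacturer: str
    door_type: str        # e.g. house, fridge, castle, cupboard
    swing_direction: str
    weight_kg: float


def enter(door: Door) -> None:
    """Procedural abstraction: 'enter' implies a sequence of steps
    (walk to door, grasp knob, turn, pull ...) whose details are suppressed."""
    print(f"Entering through a {door.door_type} door made by {door.manufacturer}")


enter(Door("Acme", "house", "inward", 25.0))
```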
➢ Architecture
Architecture refers to the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system. In its simplest form, architecture is
the hierarchical structure of program modules, the manner in which these components
interact and the structure of data that are used by the components. Like any civil
engineering structure, software must have a solid foundation. Failing to consider key scenarios, to design for common problems, or to predict the long-term consequences of a key decision can put the development work at risk. Poor architecture can make the software unstable, produce more bugs during the coding phase, and make it hard to support future business requirements.
The architecture design is an important phase of the whole development process; with full consideration of the user requirements, business goals, and system capabilities, it draws a blueprint for the later work. At this stage all the key scenarios are outlined in great detail. Some
questions that need to be answered are:
• How will the user be using the application?
• How will the features of the application benefit the user?
• How can the application be designed to be maintainable to meet the development
schedule?
➢ Modularity
It is the degree to which software can be understood by examining its components
independently of one another. Modularity refers to the extent to which a software or a
Web application may be divided into smaller modules. Monolithic software cannot be
easily understood due to complexity of control paths, span of reference, number of
variables etc. Software is divided into separately named and addressable components
referred as "modules". Basic principle is to "divide and conquer" that is dividing the
problem into manageably small pieces where each piece can be solved and/or modified
separately. The pieces need to be related to the application and cannot be independent;
they must communicate. Modularity provides greater software development
manageability. Modules are divided among teams based on functionality, and
programmers need not get involved with the functionalities of other modules. New
functionalities may be easily programmed in separate modules. Besides reduction in cost
and flexibility in design, modularity offers other benefits such as augmentation, i.e. adding a new solution by merely plugging in a new module. A computer is an example of modular
design with modules like power supply units, processors, mother-boards, graphics cards,
hard drives, optical drives, etc. All of these parts are easily interchangeable as long as
the parts that support the same standard interfaces are used. One good example of
modularization in the software domain is MS Office products like Excel, Word or
PowerPoint. The main menu shows "modules" like Home, Insert, Options, Format,
Review, View, etc. Each module further sub-divides into smaller modules. The design
can be seen to be flexible, interchangeable and modifiable; new functionalities can also
be added in separate modules. The cost of developing individual modules decreases as the number of components increases. However, integrating the modules requires more planning and effort, and the integration cost increases as the number of modules increases. The diagram given below explains the "dilemma" of how much to partition: when to stop partitioning, and how to decide what the right number of modules is.
➢ Information Hiding
It is the hiding of design decisions in a computer program that are most likely to change,
thus protecting other parts of the program from change if the design decision is changed.
Modules are designed so that information
contained within a module is inaccessible to other modules that have no need for such information. "Encapsulation" is often used interchangeably with information hiding. In
simple terms "information hiding" is the principle and "encapsulation" is a technique. A
software module hides information by encapsulating the information into a module or
other construct which presents an interface.
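A minimal sketch of the principle (the class and attribute names are assumptions made for illustration): the representation of a balance is hidden behind a small interface, so a later change to how it is stored does not ripple into the callers.

```python
class Account:
    """Encapsulation: callers use deposit/balance, never the internal field."""

    def __init__(self):
        self._balance_paise = 0     # hidden design decision: store in paise

    def deposit(self, rupees: float) -> None:
        self._balance_paise += int(round(rupees * 100))

    def balance(self) -> float:
        return self._balance_paise / 100


acct = Account()
acct.deposit(199.99)
print(acct.balance())   # 199.99 -- callers never see the paise representation
```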
➢ Functional Independence
Functions are designed with a single-minded purpose and minimum interaction with other functions. When a module has a single function to perform, it is easier to achieve its objective, and ideally no two modules have the same function to achieve. Functional independence is key to any good design because independent modules are easier to develop, test, and maintain, and errors are less likely to propagate from one module to another.
3.4 SUMMARY
Abstraction in design is a technique in which unwanted details are not included and only
the needed information is given. In the highest level of abstraction the solution is stated in
general terms. Procedural abstraction separates the logical properties of an action from
the details of how the action is implemented while data abstraction separates logical
properties of data from the details of how the data are represented. Design Architecture
is the overall hierarchical structure of the software and the ways in which that structure
provides conceptual integrity for a system. Encapsulation and Information hiding are two
concepts that are used in modular designing that helps in making changes in some
modules during testing or later with minimum impact on other modules or code. It is
important to design modules such that each module addresses a specific functional requirement. This functional independence is achieved by high cohesion and low
coupling. Cohesion is a measure that defines the degree of intra-dependability within
elements of a module. Coupling is a measure that defines the level of inter-dependability
among modules of a program. Design is never a one-step process. Stepwise refinement
is a top-down design strategy. In each step, one or several instructions of a given
program are decomposed into more detailed instructions.
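Stepwise refinement can be illustrated in code: a single high-level instruction is decomposed, step by step, into more detailed ones. The report-generation task below is assumed purely for illustration of the technique:

```python
# Step 1: the top-level instruction.
def produce_report(records):
    data = validate(records)        # Step 2: decompose into three sub-steps
    summary = summarise(data)
    return format_report(summary)


# Step 3: each sub-step is refined further, down to executable detail.
def validate(records):
    return [r for r in records if r.get("amount") is not None]


def summarise(data):
    return {"count": len(data), "total": sum(r["amount"] for r in data)}


def format_report(summary):
    return f"{summary['count']} records, total {summary['total']}"


print(produce_report([{"amount": 10}, {"amount": None}, {"amount": 5}]))
```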
Chapter 4
SECURITY ENGINEERING
Objectives:
After completing this chapter you will understand:
• Difference between Application security and Infrastructure security;
• Understand Lifecycle risk assessment and operational risk assessment
• Software architecture and design guidelines for secure system development
Structure:
4.1 Security Risk Management
4.2 Lifecycle Risk Assessment
4.3 Operational Risk Assessment
4.4 Design for Security
4.5 Summary
4.6 Self-Assessment Questions
4.7 Multiple Choice Questions
Current trends such as "Cloud First", being "Agile", and the adoption of "Industry 4.0" have increased the challenge software engineers face in keeping their software secure from external threats. Since more and more systems are connected over the Internet, the variety of external attacks has also increased as the threat landscape has grown. This has driven a complete shift in the mindset with which software is written: it is now essential to design systems to withstand external attacks and to recover from such attacks. Without security
precautions, it is almost inevitable that attackers will compromise a networked system.
They may misuse the system hardware, steal confidential data, or disrupt the services
offered by the system. System security engineering is therefore an increasingly important
aspect of the systems engineering process.
Security engineering is concerned with the development and evolution of systems that
can resist malicious attacks, which are intended to damage the system, its data, or the reputation of an organization. Software security engineering is part of the more general
field of computer security. This has become a priority for businesses and individuals as
more and more criminals try to exploit networked systems for illegal purposes. Software
engineers should be aware of the security threats faced by systems and ways in which
these threats can be neutralized. With this thought in mind, the concept of DevOps has now evolved into DevSecOps, which ensures that security is an inherent part of the entire software development lifecycle.
Refer to Image 4.1, which shows how complexity arises because of the presence of the underlying infrastructure components.
Attackers can probe these systems for weaknesses and share information about
vulnerabilities that they have discovered. As many people use the same software, attacks
have wide applicability. Infrastructure vulnerabilities may lead to attackers gaining
unauthorized access to an application system and its data.
For critical control systems and embedded systems, it is normal practice to select an
appropriate infrastructure to support the application system. For example, embedded
system developers usually choose a real-time operating system that provides the
embedded application with the facilities that it needs. Known vulnerabilities and security
requirements can be taken into account. This means that an holistic approach can be
taken to security engineering. Application security requirements may be implemented
through the infrastructure or the application itself.
Risk Management:
Security risk assessment and management is essential for effective security engineering.
Risk management is concerned with assessing the possible losses that might ensue from
attacks on assets in the system, and balancing these losses against the costs of security
procedures that may reduce these losses. Credit card companies do this all the time. It is
relatively easy to introduce new technology to reduce credit card fraud. However, it is
often cheaper for them to compensate users for their losses due to fraud than to buy and
deploy fraud-reduction technology. As costs drop and attacks increase, this balance may
change. For example, credit card companies are now encoding information on an on-card
chip instead of a magnetic strip. This makes card copying much more difficult. Risk
management is a business issue rather than a technical issue so software engineers
should not decide what controls should be included in a system. It is up to senior
management to decide whether or not to accept the cost of security or the exposure that
results from a lack of security procedures. Rather, the role of software engineers is to
provide informed technical guidance and judgment on security issues. They are,
therefore, essential participants in the risk management process.
A critical input to the risk assessment and management process is the organizational
security policy. The organizational security policy applies to all systems and should set
out what should and should not be allowed. The security policy sets out conditions that
should always be maintained by a security system and so helps to identify risks and
threats that might arise. The security policy therefore defines what is and what is not
allowed. In the security engineering process, you design the mechanisms to implement
this policy. Risk assessment starts before the decision to acquire the system has been
made and should continue throughout the system development process and after the
system has gone into use.
1. Preliminary risk assessment: This initial assessment takes place before the decision to acquire the system has been made, before detailed requirements or design decisions exist. Based on the organizational security policy, it identifies the most important security requirements, the assets to be protected, and the broad approach to be used to protect them.
2. Life-cycle risk assessment: This risk assessment takes place during the system
development life cycle and is informed by the technical system design and
implementation decisions. The results of the assessment may lead to changes to the
security requirements and the addition of new requirements. Known and potential
vulnerabilities are identified and this knowledge is used to inform decision making about
the system functionality and how it is to be implemented, tested, and deployed.
3. Operational risk assessment: After a system has been deployed and put into use,
risk assessment should continue to take account of how the system is used and
proposals for new and changed requirements. Assumptions about the operating
requirement made when the system was specified may be incorrect.
Organizational changes may mean that the system is used in different ways from those
originally planned. Operational risk assessment therefore leads to new security
requirements that have to be implemented as the system evolves.
Based on organizational security policies, preliminary risk assessment should identify the
most important security requirements for a system. These reflect how the security policy
should be implemented in that application, identify the assets to be protected, and decide
what approach should be used to provide that protection.
However, maintaining security is about paying attention to detail. It is impossible for the
initial security requirements to take all details that affect security into account. Life-cycle
risk assessment identifies the design and implementation details that affect security. This
is the important distinction between life-cycle risk assessment and preliminary risk
assessment. Life-cycle risk assessment affects the interpretation of existing security
requirements, generates new requirements, and influences the overall design of the
system.
When assessing risks at this stage, you should have much more detailed information
about what needs to be protected, and you also will know something about the
vulnerabilities in the system. Some of these vulnerabilities will be inherent in the design
choices made. For example, a vulnerability in all password-based systems is that an authorized user may reveal his or her password to an unauthorized person.
The life-cycle risk analysis process follows the same general model as the preliminary risk analysis process. The most important difference between these processes is that you now have information about how information is represented and distributed, and about the database organization for the high-level assets that have to be protected. You are also aware of
important design decisions such as the software to be reused, infrastructure controls and
protection, etc. Based on this information, your analysis identifies changes to the security
requirements and the system design to provide additional protection for the important
system assets.
may not need as extensive protection. If the key is protected, then an attacker will
only be able to access routine information, without being able to link this to an
individual patient.
2. Assume that, at the beginning of a session, a design decision is made to copy patient
records to a local client system. This allows work to continue if the server is
unavailable. It makes it possible for a health-care worker to access patient records
from a laptop, even if no network connection is available.
However, you now have two sets of records to protect and the client copies are subject to
additional risks, such as theft of the laptop computer. You, therefore, have to think about
what controls should be used to reduce risk. For example, client records on the laptop
may have to be encrypted.
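As an illustrative sketch only (it assumes the third-party cryptography package and invented record and file names; real key management would sit with a key store, not in the script), local copies of records could be protected with symmetric encryption before being written to the laptop's disk:

```python
from cryptography.fernet import Fernet  # assumes: pip install cryptography

key = Fernet.generate_key()          # in practice, held by a key-management service
cipher = Fernet(key)

record = b'{"patient_id": 1042, "notes": "..."}'
encrypted = cipher.encrypt(record)   # what actually gets written to the laptop

with open("patient_1042.enc", "wb") as f:
    f.write(encrypted)

# Later, with access to the key, the local copy can be decrypted and used.
print(cipher.decrypt(encrypted))
```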
When you develop an application by reusing an existing system, you have to accept the
design decisions made by the developers of that system. Let us assume that some of
these design decisions are as follows:
1. System users are authenticated using a login name/password combination. No other
authentication method is supported.
2. The system architecture is client-server, with clients accessing data through a
standard web browser on a client PC.
3. Information is presented to users as an editable web form. They can change
information in place and upload the revised information to the server.
Once vulnerabilities have been identified, you then have to decide what steps you can take to
reduce the associated risks. This will often involve making
decisions about additional system security requirements or the operational process of
using the system. I don’t have space here to discuss all the requirements that might be
proposed to address the inherent vulnerabilities, but some examples of requirements
might be the following:
1. A password checker program shall be made available and shall be run daily. User
passwords that appear in the system dictionary shall be identified and users with
weak passwords reported to system administrators.
2. Access to the system shall only be allowed to client computers that have been
approved and registered with the system administrators.
3. All client computers shall have a single web browser installed as approved by system
administrators.
The second and third requirements mean that all users will always access the system
through the same browser. You can decide what the most secure browser is when the
system is deployed and install that on all client computers. Security updates are
simplified because there is no need to update different browsers when security
vulnerabilities are discovered and fixed.
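To make the first requirement above concrete, the sketch below shows one way a daily password checker might work. It is only an illustration and makes assumptions not stated in the text: that candidate passwords are available for checking at the point where users set them, and that the "system dictionary" is a plain-text word list such as /usr/share/dict/words.

# Illustrative sketch of a dictionary-based password checker (Python).
# Assumption: run daily from a scheduler, with its output reported to the
# system administrators, as the requirement above asks.
def load_dictionary(path="/usr/share/dict/words"):   # path is an assumption
    with open(path) as f:
        return {line.strip().lower() for line in f}

def weak_passwords(candidate_passwords, dictionary):
    """Return the users whose password appears in the system dictionary."""
    return [user for user, pw in candidate_passwords.items()
            if pw.lower() in dictionary]

if __name__ == "__main__":
    dictionary = load_dictionary()
    candidates = {"alice": "Tr0ub4dor&3", "bob": "password"}   # illustrative data only
    print("Users with weak passwords:", weak_passwords(candidates, dictionary))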
Security risk assessment should continue throughout the lifetime of the system to identify
emerging risks and system changes that may be required to cope with these risks. This
process is called operational risk assessment. New risks may emerge because of
changing system requirements, changes in the system infrastructure, or changes in the
environment in which the system is used.
The process of operational risk assessment is similar to the life-cycle risk assessment
process, but with the addition of further information about the environment in which the
system is used. The environment is important because characteristics of the environment
can lead to new risks to the system. For example, say a system is being used in an
environment in which users are frequently interrupted. A risk is that the interruption will
mean that the user has to leave their computer unattended. It may then be possible for
an unauthorized person to gain access to the information in the system. This could then
generate a requirement for a password-protected screen saver to be run after a short
period of inactivity.
It is generally true that it is very difficult to add security to a system after it has been
implemented. Therefore, you need to take security issues into account during the
systems design process.
Three design issues, in particular, have to be considered:
1. Architectural design: how do architectural design decisions affect the security of a system?
2. Good practice: what is accepted good practice when designing secure systems?
3. Design for deployment: what support should be designed into systems to avoid the
introduction of vulnerabilities when a system is deployed for use?
Of course, these are not the only design issues that are important for security. Every
application is different and security design also has to take into account the purpose,
criticality, and operational environment of the application. For example, if you are
designing a military system, you need to adopt the military security classification model (secret,
top secret, etc.). If you are designing a system that maintains personal information, you
may have to take into account data protection legislation that places restrictions on how
data is managed.
There is a close relationship between dependability and security. The use of redundancy
and diversity, which is fundamental for achieving dependability, may mean that a system
can resist and recover from attacks that target specific design or implementation
characteristics. Mechanisms to support a high level of availability may help the system to
recover from so-called denial of service attacks, where the aim of an attacker is to bring
down the system and stop it working properly. Designing a system to be secure inevitably
involves compromises. It is certainly possible to design multiple security measures into a
system that will reduce the chances of a successful attack. However, security measures
often require a lot of additional computation and so affect the overall performance of a
system. For example, you can reduce the chances of confidential information being
disclosed by encrypting that information. However, this means that users of the
information have to wait for it to be decrypted and this may slow down their work.
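As a small illustration of this trade-off, the sketch below encrypts a confidential field using the third-party Python cryptography package (an assumption; any symmetric cipher would serve). The protection is real, but every read of the field now pays the cost of a decrypt call, which is exactly the performance penalty described above.

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the key itself must be stored and protected
cipher = Fernet(key)

# Encrypt the confidential part of a record; the identifier stays readable.
record = {"patient_id": "P-1042",
          "notes": cipher.encrypt(b"confidential clinical notes")}

# Any code that needs the notes must decrypt them first, adding latency.
print(cipher.decrypt(record["notes"]).decode())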
There are also tensions between security and usability. Security measures sometimes
require the user to remember and provide additional information (e.g., multiple
passwords). However, sometimes users forget this information, so the additional security
means that they can’t use the system. Designers therefore have to find a balance
between security, performance, and usability. This will depend on the type of system and
where it is being used. For example, in a military system, users are familiar with high-
security systems and so are willing to accept and follow processes that require frequent
checks. In a system for stock trading, however, interruptions of operation for security
checks would be completely unacceptable.
4.5 SUMMARY
Security engineering focuses on how to develop and maintain software systems that can
resist malicious attacks intended to damage a computer-based system or its data.
Security threats can be threats to the confidentiality, integrity, or availability of a system
or its data. Security risk management involves assessing the losses that might ensue
from attacks on a system, and deriving security requirements that are aimed at
eliminating or reducing these losses. Design for security involves designing a secure
system architecture, following good practice for secure systems design, and including functionality that minimizes the introduction of vulnerabilities when the system is deployed for use.
REFERENCE MATERIAL
Click on the links below to view additional reference material for this chapter
Summary
PPT
MCQ
Video1
Video2
Chapter 5
ADVANCED SOFTWARE ENGINEERING
Objectives:
After completing this chapter you will understand:
● The benefits and problems of reusing software when developing new systems
● Concept of an application framework as a set of reusable objects and how
frameworks can be used in application development;
● How systems can be developed by configuring and composing off-the-shelf
application software systems.
Structure:
5.1 Software reuse
5.1.1 Faults in software reuse
5.1.2 Checklist for Software reuse
5.2 Commercial Off-The-Shelf (COTS) solution systems
5.3 Summary
5.4 Self-Assessment Questions
5.5 Multiple Choice Question
Reusable software is now widely available; it may be in the form of program libraries or entire applications. There are many
domain-specific application systems available that can be tailored and adapted to the
needs of a specific company. Some large companies provide a range of reusable
components for their customers. Standards, such as web service standards, have made
it easier to develop general services and reuse them across a range of applications.
Software systems and components are potentially reusable entities, but their specific
nature sometimes means that it is expensive to modify them for a new situation. A
complementary form of reuse is ‘concept reuse’ where, rather than reuse a software
component, you reuse an idea, a way of working, or an algorithm. The concept that you
reuse is represented in an abstract notation (e.g., a system model), which does not
include implementation detail. It can, therefore, be configured and adapted for a range of
situations. Concept reuse can be embodied in approaches such as design patterns,
configurable system products, and program generators. The most obvious advantage of reuse is the
saving of development cost; however, there are other advantages, as listed in Table 5.1:
• Increased dependability: Reused software has been tried and tested in working systems; its design and implementation faults should already have been found and fixed.
• Reduced process risk: The cost of existing software is already known, whereas the costs of development are always a matter of judgment. This is an important factor for project management because it reduces the margin of error in project cost estimation. This is particularly true when relatively large software components such as subsystems are reused.
• Effective use of specialists: Instead of doing the same work over and over again, application specialists can develop reusable software that encapsulates their knowledge.
• Standards compliance: Some standards, such as user interface standards, can be implemented as a set of reusable components. For example, if menus in a user interface are implemented using reusable components, all applications present the same menu formats to users. The use of standard user interfaces improves dependability because users make fewer mistakes when presented with a familiar interface.
• Accelerated development: Bringing a system to market as early as possible is often more important than overall development costs. Reusing software can speed up system production because both development and validation time may be reduced.
Table 5.1: Benefits of reuse of software
An important thing to remember is that the intended benefit is not necessarily achieved every time
we reuse software. Understanding the applicability and dependencies of reuse also comes with
its own cost. A few such issues are listed in Table 5.2.
• Increased maintenance costs: If the source code of a reused software system or component is not available, then maintenance costs may be higher because the reused elements of the system may become increasingly incompatible with system changes.
• Lack of tool support: Some software tools do not support development with reuse. It may be difficult or impossible to integrate these tools with a component library system. The software process assumed by these tools may not take reuse into account. This is particularly true for tools that support embedded systems engineering, less so for object-oriented development tools.
• Not-invented-here syndrome: Some software engineers prefer to rewrite components because they believe they can improve on them. This is partly to do with trust and partly to do with the fact that writing original software is seen as more challenging than reusing other people's software.
• Creating, maintaining, and using a component library: Populating a reusable component library and ensuring the software developers can use this library can be expensive. Development processes have to be adapted to ensure that the library is used.
• Finding, understanding, and adapting reusable components: Software components have to be discovered in a library, understood and, sometimes, adapted to work in a new environment. Engineers must be reasonably confident of finding a component in the library before they include a component search as part of their normal development process.
Table 5.2: Issues of reuse of software
When planning reuse, the key factors you should take into account include:
• The development schedule for the software: If the software has to be developed
quickly, you should try to reuse off-the-shelf systems rather than individual
components. These are large-grain reusable assets. Although the fit to requirements
may be imperfect, this approach minimizes the amount of development required.
• The expected software lifetime: If you are developing a long-lifetime system, you
should focus on the maintainability of the system. You should not just think about the
immediate benefits of reuse but also of the long-term implications. Over its lifetime,
you will have to adapt the system to new requirements, which will mean making
changes to parts of the system. If you do not have access to the source code, you
may prefer to avoid off-the-shelf components and systems from external suppliers;
suppliers may not be able to continue support for the reused software.
• The background, skills, and experience of the development team: All reuse
technologies are fairly complex and you need quite a lot of time to understand and
use them effectively. Therefore, if the development team has skills in a particular
area, this is probably where you should focus.
• The criticality of the software and its non-functional requirements: For a critical
system that has to be certified by an external regulator, you may have to create a
dependability case for the system. This is difficult if you
don’t have access to the source code of the software. If your software has stringent
performance requirements, it may be impossible to use strategies such as generator-
based reuse, where you generate the code from a reusable domain specific
representation of a system. These systems often generate relatively inefficient code.
• The application domain: In some application domains, such as manufacturing and
medical information systems, there are several generic products that may be reused
by configuring them to a local situation. If you are working in such a domain, you
should always consider these as an option.
Interface Conflicts
Interfacing reusable software to a new software system accounts for the majority of the effort in
software reuse. Interfaces not only include other software, but also include hardware,
humans, plant systems, and the environment. A full understanding of the interface is
important in software reuse. Reused software assumes that its interfacing components
provide a particular type of information in a specified amount and pattern. If that
information, or its protocol, is different from what the reused software expects, a software
failure may occur. Conversely, the reused software provides its interfacing components
with information in a particular format. If there are differences between the expected
information/format and the delivered information/format, a software failure may occur.
There are two main reasons for the introduction of interface-related errors during
software reuse. First, programmers try to force the application requirements to fit a
structure for which they know a solution, even if the solution fails to satisfy part of the
original specification. And second, the programmer is unfamiliar with the interface of
other components. To avoid the introduction of interface-related errors, the developers
should first identify the application requirements, and then, compare the capabilities,
format, etc. of the candidate reusable software against those requirements. Any
differences should be noted and evaluated to see if the reusable software can be
modified sufficiently to fit all the requirements. Not only should the software-related
requirements be analyzed, but also those related to the computer hardware, plant
system, and operator. If the reusable software is not sufficiently documented, or it is
difficult to understand, it may be better to develop the solution from the beginning instead
of attempting reuse. Once a reusable software module is placed in the software system,
integration testing of the entire, fully assembled software system is required.
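One common way to contain such interface differences is to wrap the reused component behind an adapter that presents exactly the interface the new system expects. The sketch below is a minimal illustration; the sensor component, its units, and the method names are hypothetical and not taken from the text.

class ReusedTemperatureSensor:
    """Reused component: reports temperature in Fahrenheit, as a string."""
    def read(self) -> str:
        return "98.6"

class SensorAdapter:
    """Presents the interface the new system expects: a float in Celsius."""
    def __init__(self, reused: ReusedTemperatureSensor):
        self._reused = reused

    def read_celsius(self) -> float:
        fahrenheit = float(self._reused.read())    # format mismatch: str -> float
        return (fahrenheit - 32.0) * 5.0 / 9.0     # unit mismatch: Fahrenheit -> Celsius

# The new system is written against read_celsius(); any difference in format,
# units, or protocol is resolved in one place, and integration tests exercise
# the adapter together with the reused component.
print(SensorAdapter(ReusedTemperatureSensor()).read_celsius())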
For many reuse situations, it is necessary to modify the component in order for it to meet
design specifications. This is especially true with traditional software reuse and software
maintenance. Once software has been modified, it is necessary to re-validate the
software to ensure that it meets the new requirements. While previous tests and
operational experience should not be used alone to verify the quality of reused software,
this does not mean that previous testing and operational experience should be discounted. It merely
suggests that one should consider the type of testing and operational experience and
how it applies to the new application. The presence of previous testing and operational
experience should decrease the amount of testing and review needed for a reusable
component.
Reuse of complete, commercial off-the-shelf (COTS) application systems has been very widely adopted
by large companies over the last several years, as it offers significant benefits over customized
software development:
1. As with other types of reuse, more rapid deployment of a reliable system may be
possible.
2. It is possible to see what functionality is provided by the applications and so it is
easier to judge whether or not they are likely to be suitable. Other companies may
already use the applications so experience of the systems is available.
3. Some development risks are avoided by using existing software, although this approach has its
own risks of its own.
4. Businesses can focus on their core activity without having to devote a lot of
resources to IT systems development.
5. As operating platforms evolve, technology updates may be simplified as these are
the responsibility of the COTS product vendor rather than the customer.
There are two types of COTS product reuse, namely COTS-solution systems and COTS-
integrated systems. COTS-solution systems consist of a generic application from a single
vendor that is configured to customer requirements. COTS-integrated systems involve
integrating two or more COTS systems (perhaps from different vendors) to create an
application system.
5.3 SUMMARY
Just as recycling and reuse are widely adopted to protect the environment, reuse is now given
considerable importance in the software field. The objective is to avoid spending effort on what
has already been done before. There are several forms of software reuse, such as component reuse
and object and function reuse, which can be adopted as needed when preparing a new software design.
While software reuse has its own advantages, it also has its own disadvantages or issues, which are
listed in this chapter. This chapter also calls out a specific checklist to be referred to in the
case of software reuse. Commercial Off-The-Shelf (COTS) software is another such example, wherein a
product is developed that is generic in nature but can be used in different environments with
different applications. While COTS products are faster and easier to use, they also have their own
disadvantages, and it is worth reviewing these before putting them to use.
1. A commercial off the shelf product is a software system that can be adapted to the
needs of different customers without changing the source code of the system
a) True
b) False
3. Which of the following are elements of faults in software reuse? (Select all that apply):
a) People conflict
b) Interface conflict
c) Assumption of quality transfer
d) None of the above
REFERENCE MATERIAL
Click on the links below to view additional reference material for this chapter
Summary
PPT
MCQ
Video1
Chapter 6
SOFTWARE EVOLUTION
Objectives:
After completing this chapter you will understand:
• Things to consider when external and internal factors change in software
engineering
• Intricacies in design and development changes
• Why software maintenance is important and its types
• Software reengineering and replacement concepts
Structure:
6.1 Software Evolution – Emerging trends
6.2 Software Testing
6.3 Software Maintenance
6.4 Maintenance prediction
6.5 Software Reengineering
6.6 Summary
6.7 Self-Assessment Questions
6.8 Multiple choice question and answers
The cost and impact of these changes are assessed to see how much of the system is
affected by the change and how much it might cost to implement the change. If the
proposed changes are accepted, a new release of the software system is planned.
During release planning, all the proposed changes (fault repair, adaptation, and new
functionality) are considered.
A decision is then made on which changes to implement in the next version of the
system. The process of change implementation is an iteration of the development
process where the revisions to the system are designed, implemented and tested.
a) Change in requirements over time: With the passage of time, the organization's
needs and ways of working can change substantially, so in these frequently changing times the
tools (software) that the organization uses need to change as well in order to
maximize performance.
c) Errors and bugs: As the age of the software deployed within an organization
increases, its precision decreases and its efficiency in bearing an increasingly complex
workload also continually degrades. In that case, it becomes necessary to avoid the use
of obsolete and aged software. All such obsolete software needs to undergo the evolution
process in order to become robust enough for the workload complexity of the current
environment.
d) Security risks: Using outdated software within an organization may leave you on
the verge of various software-based cyber attacks and could illegally expose the confidential
data associated with the software that is in use. It therefore becomes necessary to
prevent such security breaches through regular assessment of the security
patches/modules used within the software. If the software isn't robust enough to
withstand current cyber attacks, it must be changed (updated).
e) New functionality and features: In order to improve performance, speed up data
processing, and add other functionality, an organization needs to continuously
evolve its software throughout its life cycle so that stakeholders and clients of the
product can work efficiently.
1. Law of continuing change: This law states that any software system that
represents some real-world reality undergoes continuous change or becomes
progressively less useful in that environment.
Software testing is the next phase after completion of design and code construction.
Testing is an investigation conducted to provide stakeholders with information about the
quality of the product or service under test. Software testing also provides an objective,
independent view of the software to allow the business to appreciate and understand the
risks of software implementation. Software testing is necessary because mistakes are
made by humans. Some mistakes are unimportant, but some of them are expensive or
dangerous. Before delving into the realms of testing, one needs to understand its
importance and its need.
Testing Myths
Software testing has its share of myths just as every other field arising due to lack of
authoritative facts, evolving nature of the industry and general flaws in human logic.
Though common sense tells us that testing is part of learning, it is challenged by
these myths. Some of the prevalent myths are:
In order to produce the highest quality application, one should have a strong manual testing
element in place alongside an automated framework.
Verification and validation are not the same things, although they are often confused.
Boehm clarified the difference between them with a play on English words, as follows:
Verification: Are we building the product right? It refers to the set of activities that
ensure that the software correctly implements a specific function. This is a static method
for verifying design and code.
Validation: Are we building the right product? Validation refers to a different set of
activities that ensure that the software that has been built is traceable to customer
requirements. This is a dynamic process for checking and testing the real product.
Types of Testing:
Based on whether the actual execution of software under evaluation is needed or not,
there are two major categories of quality assurance activities - Static and Dynamic
Testing
Static Testing
Static Testing focuses on the range of methods that are used to determine or estimate
software quality without reference to actual executions.
• It implies testing software without execution on a computer.
• Involves just examination/review and evaluation.
• It is a process of reviewing the work product & is done using a checklist.
• Static Testing helps weed out many errors/bugs at an early stage.
• Static Testing lays strict emphasis on conforming to specifications.
• Static Testing can discover dead code, infinite loops, uninitialized and unused
variables, and standards violations.
Techniques in Static testing include code inspection, code review, peer review, desk
checking, program review, etc. One of the simplest forms of static testing is compiling. A
compiler delivers error messages when it finds syntax errors or other invalid operations but
does not execute the code. A "linking loader" that links a set of modules into one
executable program will fail unless it finds every variable and function referred to; here
too the modules are not executed.
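The fragment below is a small, self-contained illustration of this simplest form of static testing: it asks the Python compiler to check a module for syntax errors without ever executing it. The module content and file name are purely illustrative.

import os, tempfile, textwrap, py_compile

source = textwrap.dedent("""
    def f(:           # deliberate syntax error for the static check to catch
        return 1
""")

path = os.path.join(tempfile.gettempdir(), "module_under_review.py")
with open(path, "w") as fh:
    fh.write(source)

try:
    py_compile.compile(path, doraise=True)   # compile only; the code never runs
    print("No syntax errors found")
except py_compile.PyCompileError as err:
    print("Static check failed:", err)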
Code review
Code review for a module is carried out after the module has been successfully compiled and
all the syntax errors have been eliminated. Code reviews are extremely cost-effective
strategies for reduction in coding errors and to produce high quality code. Normally, two
types of reviews are carried out on the code of a module. These two code review
techniques are "code inspection" and "code walk through".
Code Inspection
It is a formal analysis of the program source code done by a team of developers and
subject matter experts to find defects measured against the intended computer system design. In
contrast to a code walk-through, the aim of code inspection is to discover common
types of errors caused by oversight and improper programming. In other words,
during code inspection the code is examined for the presence of certain kinds of errors.
In addition to the commonly made errors, adherence to coding standards is also checked
during code inspection. It is a good practice to collect statistics regarding different types
of errors commonly committed by the developers and identify the type of errors most
frequently committed.
Dynamic Testing
Dynamic testing deals with specific methods to ascertain software quality through actual
execution i.e. with real data and under real or simulated conditions. Techniques in this
area include synthesis of inputs, the use of structurally dictated testing procedures, and
the automation of testing environment generation. The static and dynamic methods are
inseparable but will be discussed separately. Under Dynamic Testing code is executed.
As the name implies it checks for functional behavior of software system, memory usage,
CPU usage and overall performance of the system. This tests the dynamic behavior of
code. Dynamic Testing is performed to confirm that the software product works in
conformance with the business requirements. This testing is also called validation
testing. In dynamic testing the software must actually be compiled and run. It involves
working with the software, giving input values and checking if the output is as expected
by executing specific test cases which can be done manually or with the use of an
automated process. Dynamic testing is performed at all levels of testing i.e. Unit,
Integration, System and Acceptance and it can be either black or white box testing. The
levels of testing, White Box and Black Box testing will be covered in subsequent sections.
Software maintenance is the general process of changing a system after it has been
delivered. The term is usually applied to custom software in which separate development
groups are involved before and after delivery. The changes made to the software may be
simple changes to correct coding errors, more extensive changes to correct design
errors, or significant enhancements to correct specification errors or accommodate
new requirements. Changes are implemented by modifying existing system components
and, where necessary, by adding new components to the system.
1. Team stability: After a system has been delivered, it is normal for the development
team to be broken up and for people to work on new projects. The new team or the
individuals responsible for system maintenance do not understand the system or the
background to system design decisions. They need to spend time understanding the
existing system before implementing changes to it.
3. Staff skills: Maintenance staff are often relatively inexperienced and unfamiliar with the
application domain. Maintenance has a poor image among software engineers. It is seen
as a less-skilled process than system development and is often allocated to the most
junior staff. Furthermore, old systems may be written in obsolete programming
languages. The maintenance staff may not have much experience of development in
these languages and must learn these languages to maintain the system.
Managers hate surprises, especially if these result in unexpectedly high costs. You
should therefore try to predict what system changes might be proposed and what parts of
the system are likely to be the most difficult to maintain. You should also try to estimate
the overall maintenance costs for a system in a given time period. Figure 6.2 shows
these predictions and associated questions. Predicting the number of change requests
for a system requires an understanding of the relationship between the system and its
external environment. Some systems have a very complex relationship with their external
environment and changes to that environment inevitably result in changes to the system.
To evaluate the relationships between a system and its environment, you should assess:
1. The number and complexity of system interfaces: The larger the number of
interfaces and the more complex these interfaces, the more likely it is that interface
changes will be required as new requirements are proposed.
2. The number of inherently volatile system requirements: Requirements that reflect
organizational policies and procedures are likely to be more volatile than requirements
that are based on stable domain characteristics.
3. The business processes in which the system is used: As business processes evolve,
they generate system change requests. The more business processes that use a system,
the more the demands for system change.
The process of system evolution involves understanding the program that has to be
changed and then implementing these changes. However, many systems, especially
older legacy systems, are difficult to understand and change. The programs may have
been optimized for performance or space utilization at the expense of understandability,
or, over time, the initial program structure may have been corrupted by a series of
changes. To make legacy software systems easier to maintain, you can reengineer these
systems to improve their structure and understandability. Reengineering may involve re-
documenting the system, refactoring the system architecture, translating programs to a
modern programming language, and modifying and updating the structure and values of
the system’s data. The functionality of the software is not changed and, normally, you
should try to avoid making major changes to the system architecture. There are two
important benefits from reengineering rather than replacement:
1. Reduced risk: There is a high risk in redeveloping business-critical software. Errors
may be made in the system specification or there may be development problems.
Delays in introducing the new software may mean that business is lost and extra
costs are incurred.
2. Reduced cost: The cost of reengineering may be significantly less than the cost of
developing new software. Ulrich (1990) quotes an example of a commercial system
for which the reimplementation costs were estimated at $50 million. The system was
successfully reengineered for $12 million. It is assumed that, with modern software
technology, the relative cost of reimplementation is probably less than this but will still
considerably exceed the costs of reengineering.
6.6 SUMMARY
Business requirements are ever changing due to changes in external surroundings, customer
preferences, or uncontrolled natural events; the pandemic is one real example. Because of such
factors, software is expected to go through frequent changes and is expected to be adaptable,
flexible, and quick to change. Apart from these factors, changes in technology and cyber risk
also bring pressure on software engineers to refine their software in line with such technological
enhancements. Given this demand for change, there are several laws one should keep in mind while
making such changes. Small or big, technological or business related, any change must undergo
rigorous testing before it is delivered for use. There are several myths about, and types of, such
testing. Mistakes are made when software is delivered before it is mature enough to use, because
of which it never enters the software maintenance phase. This phase is triggered once software is
put to use or delivered to the customer. This phase does not bring significant change to the
software and largely focuses on either functionality addition or adaptation/platform change.
1. Identify the needs for software evolution over a period of time. Explain any two needs.
2. What are the laws applicable to software evolution? Explain in brief.
3. List the differences between software reengineering and software maintenance.
4. List the testing myths and explain each in brief.
3. Which of the following types is applicable when there is a change in the organization or in
the nature of the business requirements?
a) Fault repair
b) Environmental adaption
c) Functionality addition
d) Staff addition
REFERENCE MATERIAL
Click on the links below to view additional reference material for this chapter
Summary
PPT
MCQ
Video1
Video2
Chapter 7
SOFTWARE MANAGEMENT
Objectives:
After completing this chapter you will understand:
• Why quality management is important
• How measurement may be helpful in assessing some software quality attributes
and the current limitations of software measurement.
• How reviews and inspections are used as a mechanism for software quality
assurance.
• Processes around software change management
• Difference between system version and system release
Structure:
7.1 Quality Management
7.1.1 Quality Planning
7.1.2 Quality Assurance
7.1.3 Quality Control
7.1.4 Models of quality
7.2 Configuration Management
7.2.1 Change Management
7.2.2 Version Management
7.2.3 System Building
7.2.4 Release Management
7.3 Process Improvements
7.4 Summary
7.5 Self-Assessment Questions
7.6 Multiple choice question
Quality of a creative or artistic item depends very much on the individual perception of the
person viewing it and the value each person places on the item.
A software product has many stakeholders, each perceiving quality in their own way. Some of these
are direct, tangible views, while others are indirect or derived.
ISO standards define quality as "the totality of features and characteristics of a product or
service that bears its ability to satisfy stated or implied needs." Quality is a manifestation
of some characteristics in a product or a process. Quality could be "non-inferiority or
superiority" of something.
Quality planning includes a process for "identifying which quality standards are relevant to the
project and determining how to satisfy them". Obviously, one shoe does not fit all. Selecting and
modifying applicable quality standards and procedures for a particular project is part of
the planning process. Countries have raised their own standards of quality in order to
meet international standards and customer demands. There are many methods for
quality management and improvement, covering product improvement, process
improvement, and people-based improvement.
Most often the gap arises because the producer simply fails to consider who the customers
are and what they need. To meet such challenges, quality planning is essential: a systematic
process that translates quality policy into measurable objectives and requirements, and lays down
a sequence of steps for realizing them within a specified timeframe.
QA activities are determined before production work begins and these activities are
performed while the product is being developed. QA is generic and does not concern the
specific requirements of the product being developed.
QC refers to quality related activities associated with the creation of product or project
deliverables. QC with a narrower focus is used to verify that deliverables are of
acceptable quality and that they are complete and correct. QC is about adherence
to requirements. QC activities are performed after the product is developed.
2. Check sheet: A check sheet is a simple tool used to systematically collect and
organize data for analysis. It typically consists of a table or form with predefined
categories or criteria relevant to the process or task being observed. Users mark
tally marks or check boxes to indicate occurrences or attributes of interest,
facilitating easy visual interpretation of trends, patterns, or issues. Check sheets
are valuable for quality control, problem-solving, and process improvement
initiatives across various industries.
3. Control chart: The control chart is a graph used to study how a process changes over
time. Data are plotted in time order. A control chart always has a central line for the
average, an upper line for the upper control limit, and a lower line for the lower control
limit. These lines are determined from historical data (a small calculation sketch follows this list).
5. Pareto chart: The Pareto Chart is a very powerful tool for showing the relative
importance of problems. It contains both bars and lines, where individual values are
represented in descending order by bars, and the cumulative total of the sample is
represented by the curved line.
6. Scatter diagram: A scatter diagram (Also known as scatter plot, scatter graph, and
correlation chart) is a tool for analyzing relationships between two variables for
determining how closely the two variables are related. One variable is plotted on the
horizontal axis and the other is plotted on the vertical axis. The pattern of their intersecting
points can graphically show relationship patterns.
7. Run Chart: A run chart is a line graph of data plotted over time. By collecting and
charting data over time, you can find trends or patterns in the process. Because they do
not use control limits, run charts cannot tell you if a process is stable.
8. Six Sigma: Six Sigma is a set of methodologies and tools used to improve business
processes by reducing defects and errors, minimizing variation, and increasing quality and
efficiency.
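As a small numeric illustration of how the control chart lines in item 3 are typically derived, the sketch below computes the centre line and the control limits from a set of historical measurements. The figures are invented for illustration, and the common convention of the mean plus or minus three standard deviations is assumed.

from statistics import mean, stdev

historical = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]   # past process measurements
centre = mean(historical)                  # central line
sigma = stdev(historical)
ucl = centre + 3 * sigma                   # upper control limit
lcl = centre - 3 * sigma                   # lower control limit
print(f"centre={centre:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")

# New observations falling outside [LCL, UCL] suggest the process may be out
# of statistical control and should be investigated.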
"Quality is what the customer defines it to be", is a very old, but, nevertheless, an apt
definition of Quality. If it's perfection that the customer wants, the pursuit for Quality would
mean pursuit for perfection.
Quality becomes the key differentiator that can provide the competitive edge. In the
current globally competitive environment, one has to be nimble and quality conscious to
keep up with a customer base that is demanding.
Software systems always change during development and use. Bugs are discovered and
have to be fixed. System requirements change, and you have to implement these
changes in a new version of the system. New versions of hardware and system platforms
become available, and you have to adapt your systems to work with them. Competitors
introduce new features in their system that you have to match. As changes are made to
the software, a new version of the system is created. Most systems, therefore, can be
thought of as a set of versions, each of which has to be maintained and managed.
Configuration management (CM) is concerned with the policies, processes, and tools for
managing changing software systems. You need to manage evolving systems because it
is easy to lose track of what changes and component versions have been incorporated
into each system version. Versions implement proposals for change, corrections of faults,
and adaptations for different hardware and operating systems. There may be several
versions under development and in use at the same time. If you don’t have effective
configuration management procedures in place, you may waste effort modifying the
wrong version of a system, deliver the wrong version of a system to customers, or forget
where the software source code for a particular version of the system or component is
stored. Configuration management is useful for individual projects as it is easy for one
person to forget what changes have been made. It is essential for team projects where
several developers are working at the same time on a software system. Sometimes these
developers are all working in the same place but, increasingly, development teams are
distributed with members in different locations across the world. The use of a
configuration management system ensures that teams have access to information about
a system that is under development and do not interfere with each other’s work.
The configuration management of a software system product involves four closely related
activities
1. Change management: This involves keeping track of requests for changes to the
software from customers and developers, working out the costs and impact of making
these changes, and deciding if and when the changes should be implemented.
2. Version management: This involves keeping track of the multiple versions of system
components and ensuring that changes made to components by different developers do
not interfere with each other.
3. System building: This is the process of assembling program components, data, and
libraries, and then compiling and linking these to create an executable system.
4. Release management: This involves preparing software for external release and
keeping track of the system versions that have been released for customer use.
Configuration management involves dealing with a large volume of information and many
configuration management tools have been developed to support CM processes. These
range from simple tools that support a single configuration management task, such as
bug tracking, to complex and expensive integrated toolsets that support all configuration
management activities. Configuration management policies and processes define how to
record and process proposed system changes, how to decide what system components
to change, how to manage different versions of the system and its components, and how
to distribute changes to customers. Configuration management tools are used to keep
track of change proposals, store versions of system components, build systems from
these components, and track the releases of system versions to customers.
Configuration management is sometimes considered to be part of software quality
management with the same manager having both quality management and configuration
management responsibilities. When a new version of the software has been
implemented, it is handed over by the development team to the quality assurance (QA)
team. The QA team checks that the system quality is acceptable. If so, it then becomes a
controlled system, which means that all changes to the system have to be agreed on and
recorded before they are implemented.
The definition and use of configuration management standards is essential for quality
certification in both the ISO 9000 and the CMM and CMMI standards.
Some changes are planned and many are unplanned. Errors detected in the software need to be
corrected. New business or market conditions dictate changes in product requirements or
business rules. For example when the Euro was launched in Europe, many business
applications went through significant changes to handle the new currency. Some
countries which were not using decimal points in their existing currency had to implement
systems which could handle the Euro. In one country the "," (comma) was used as a
decimal point and there were no fractions. This too required changes in the systems
handling transactions in the Euro currency.
Changes can also occur because new customer needs demand modifications to the data produced
by information systems, the functionality delivered by products, or the services delivered by a
computer-based system. Sometimes reorganization or business growth/downsizing
causes changes in project priorities or software engineering team structure. Budgetary
constraints can also cause a redefinition of the system or product.
What are the changes? It is often quoted that "The only constant in software
development is change." There is confusion about the terminologies used in this context, i.e.,
configuration management, change control, and change management. Each of these has
its own nuances, but they are inter-related. The acronym SCM is also confusing since it can
imply Software Configuration Management or Software Change Management. Here SCM
is assumed to mean "Software Change Management".
Change control is a formal process used to ensure that changes to a product or system
are introduced in a controlled and coordinated manner. It reduces the possibility that
unnecessary changes will be introduced to a system, introducing faults into the system or
undoing changes made by other users of software. For IT systems change control is a
major aspect of the broader discipline of change management. In software engineering,
software configuration management (CM) is the task of tracking and controlling changes
in the software, part of the larger cross-discipline field of configuration management.
Configuration Management practices include revision control and the establishment of
baselines. If something goes wrong, CM can determine what was changed and who
changed it. If a configuration is working well, CM can determine how to replicate it across
many hosts. The purpose of Software Configuration Management is to establish and
maintain the integrity of the products of the software project throughout the project’s
software life cycle. Software Configuration Management involves identifying configuration
items for the software project, controlling these configuration items and changes to them,
and recording and reporting status and change activity for these configuration items.
A change request is a document containing a call for an adjustment of a system and has a high
significance in the change management process.
A change request is declarative, i.e. it states what needs to be accomplished, but leaves
out how the change should be carried out.
Important elements of a change request are an ID, the project id, program id, deadline (if
applicable), an indication whether the change is required or optional, the change type, a
change abstract, assumptions and constraints. Change requests typically originate from
one of five sources:
• Problem reports that identify bugs that must be fixed, which forms the most common
source
• System enhancement requests from users
• Events in the development of other systems
• Changes in underlying technology and/or standards
• Demands from senior management or stakeholders
All change history is logged with the change request, including all state changes along
with dates and reasons for the change. The Change Management System ensures that
every change request is received, analyzed, and either approved or rejected. If it is
approved, all other project constraints will also be analyzed for any possible impact due
to this implementation of change.
A good change management system ensures that all affected parameters are identified
and analyzed for any impact before the change is implemented to the system, in order to
avoid or minimize the adverse effects.
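A minimal sketch of how the change request elements listed earlier might be represented in code is given below. The field names follow that list; the state values and the logging format are assumptions for illustration only.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ChangeRequest:
    cr_id: str
    project_id: str
    program_id: str
    change_type: str                    # e.g. fault repair, enhancement, adaptation
    abstract: str
    required: bool = True               # required vs. optional change
    deadline: Optional[date] = None     # if applicable
    assumptions: List[str] = field(default_factory=list)
    constraints: List[str] = field(default_factory=list)
    state: str = "received"             # received -> analyzed -> approved / rejected
    history: List[str] = field(default_factory=list)

    def transition(self, new_state: str, reason: str) -> None:
        """Log every state change with its date and reason, as the process requires."""
        self.history.append(f"{date.today()}: {self.state} -> {new_state} ({reason})")
        self.state = new_state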
Baselines
One important aspect of change management is to keep track of the changes and control
them before they control the project. Baselining is a software change management
concept that helps practitioners to control change without seriously impeding justifiable
change. The IEEE standards define a baseline as:
• A specification or product that has been formally reviewed and agreed upon, that
thereafter serves as the basis for further development, and that can be changed only
through formal change control procedures.
• A baseline is a milestone in the development of software that is marked by the
delivery of one or more software configuration items and the approval of these SCIs
that is obtained through a formal technical review.
The configuration items include any artifacts that are created during the project and
controlled. Examples are All Plans, SRS documents, UML Diagrams, Design documents,
Interface Designs, Test Cases, Code, Test Results, Implementation Manuals, User
Manuals and many other items. Even Minutes of meetings are configuration items. Items
are created, formally reviewed, approved and "checked in" through a "toll gate" to a
project database. Once baselining of these artifacts is done, any change to any of the
items needs to go through a change control mechanism. The items are "checked out"
through another toll gate, the required changes and their impact are reviewed, the item is taken
up for changes, and it then follows the same cycle of formal review, approval and
"check-in". The significance of the toll gate is that once an item is "checked out" or in
WIP, it cannot be "checked out" by another person till such time as the previous version is
"checked in". This avoids inadvertent overwriting of changes made in the code by
one programmer.
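The "toll gate" behaviour described above can be illustrated with a very small locking sketch: once an item is checked out, a second check-out of the same item is refused until the first version is checked back in. The class and item names are hypothetical.

class ItemLockedError(Exception):
    pass

class ConfigurationLibrary:
    def __init__(self):
        self._checked_out = {}                 # item id -> user holding the lock

    def check_out(self, item_id: str, user: str) -> None:
        holder = self._checked_out.get(item_id)
        if holder is not None:
            raise ItemLockedError(f"{item_id} is already checked out by {holder}")
        self._checked_out[item_id] = user

    def check_in(self, item_id: str, user: str) -> None:
        if self._checked_out.get(item_id) != user:
            raise ItemLockedError(f"{item_id} is not checked out by {user}")
        del self._checked_out[item_id]         # new version accepted, lock released

lib = ConfigurationLibrary()
lib.check_out("SRS-v1.2", "asha")
# lib.check_out("SRS-v1.2", "ravi")   # would raise ItemLockedError
lib.check_in("SRS-v1.2", "asha")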
The Software Change Management (SCM) process addresses the following questions:
• How does a software team identify the discrete elements of a software configuration?
• How does an organization manage the many existing versions of a program (and its
documentation) in a manner that will enable change to be accommodated efficiently?
• How does an organization control the changes before and after software is released
to a customer?
• Who has responsibility for approving and ranking changes?
• How can we ensure that changes have been made properly?
• What mechanism is used to apprise others of changes that are made?
The British Standards Institute in its Code of Practice for IT Service Management defines
the scope of change management to include the following process steps:
Recording Changes
In practice, the basic details of a change request from the business are recorded to
initiate the change process, including references to documents that describe and
authorize the change. Well-run change management programs use a uniform means to
communicate the request for change, and work to ensure that all constituencies are
trained and empowered to request changes to the infrastructure.
Changes that span the enterprise and affect several distributed groups may need to be approved by
a higher-level change authority than a low-risk, routine change event. In this way, the process is
speeded up for the routine kinds of changes IT departments deal with every day.
Verify change
The implementation of the change in the new system release is verified one final
time, now by the project manager. This verification may also happen before the release;
literature sources differ on the ordering, and it is described here after the release for
simplicity.
As the above process makes clear, true change management differs from change control
in the depth of its overall process scope and in the range of inputs it uses. Where change
control ensures that changes are recorded and approved, change management
considers overall business impact and justification, and focuses not only on the decision
to make or not make a given change, but on the implementation of the change and the
impact of that implementation as well.
Version Control
Version control combines procedures and tools to manage the different versions of
configuration objects that are created during the software process. A version control system
implements, or is directly integrated with, four major capabilities.
When version management systems were first developed, storage management was one
of their most important functions. The storage management features in a version control
system reduce the disk space required to maintain all system versions.
4. Executable system creation: The build system should link the compiled object code
files with each other and with other required files, such as libraries and configuration
files, to create an executable system.
5. Test automation: Some build systems can automatically run automated tests using
test automation tools such as JUnit. These check that the build has not been 'broken'
by changes.
6. Reporting: The build system should provide reports about the success or failure of the
build and the tests that have been run.
7. Documentation generation: The build system may be able to generate release notes
about the build and system help pages.
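A minimal driver for steps 4 to 6 above might look like the sketch below. The commands, directory names, and the use of Python's compileall and unittest modules are assumptions for illustration; a real project would normally rely on a dedicated build tool.

import subprocess, sys

def run(step: str, cmd: list) -> bool:
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"[{step}] {'OK' if result.returncode == 0 else 'FAILED'}")
    return result.returncode == 0

ok = (run("create executable system", ["python", "-m", "compileall", "src"])
      and run("automated tests", ["python", "-m", "unittest", "discover", "-s", "tests"]))
print("Build", "succeeded" if ok else "is broken")   # step 6: report the outcome
sys.exit(0 if ok else 1)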
For custom software or software product lines, managing system releases is a complex
process. Special releases of the system may have to be produced for each customer and
individual customers may be running several different releases of the system at the same
time. This means that a software company selling a specialized software product may
have to manage tens or even hundreds of different releases of that product. Their
configuration management systems and processes have to be designed to provide
information about which customers have which releases of the system and the
relationship between releases and system versions. In the event of a problem, it may be
necessary to reproduce exactly the software that has been delivered to a particular
customer.
To document a release, you have to record the specific versions of the source code
components that were used to create the executable code. You must keep copies of the
source code files, corresponding executables, and all data and configuration files.
You should also record the versions of the operating system, libraries, compilers, and
other tools used to build the software. These may be required to build exactly the same
system at some later date. This may mean that you have to store copies of the platform
software and the tools used to create the system in the version management system
along with the source code of the target system.
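The record described above can be kept as a simple, machine-readable manifest stored under version management alongside the source code. The sketch below is illustrative only; every name and version number in it is an assumption.

import json

release_record = {
    "release": "2.1.0",
    "source_components": {"billing": "1.4.2", "ui": "3.0.1", "reports": "0.9.7"},
    "data_and_config_files": ["error_messages.dat", "install.cfg"],
    "build_tools": {"compiler": "gcc 12.2", "build_system": "make 4.3"},
    "platform": {"os": "Ubuntu 22.04", "libraries": {"openssl": "3.0.2"}},
}
# Stored with the source so that exactly the same system can be rebuilt later
# for any customer who reports a problem with this release.
print(json.dumps(release_record, indent=2))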
A system release is not just the executable code of the system. The release may also
include:
• configuration files defining how the release should be configured for particular
installations;
• data files, such as files of error messages, that are needed for successful system
operation;
• an installation program that is used to help install the system on target hardware;
• electronic and paper documentation describing the system;
• packaging and associated publicity that have been designed for that release.
When planning the installation of new system releases, you cannot assume that
customers will always install new system releases. Some system users may be happy
with an existing system. They may consider that it is not worth the cost of changing to a
new release. New releases of the system cannot, therefore, rely on the installation of
previous releases. To illustrate this problem, consider the following scenario:
1. Release 1 of a system is distributed and put into use.
2. Release 2 requires the installation of new data files, but some customers do not need
the facilities of release 2 so remain with release 1.
3. Release 3 requires the data files installed in release 2 and has no new data files of its
own.
The software distributor cannot assume that the files required for release 3 have already
been installed in all sites. Some sites may go directly from release 1 to release 3,
skipping release 2. Some sites may have modified the data files associated with release
2 to reflect local circumstances. Therefore, the data files must be distributed and installed
with release 3 of the system.
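One simple way to cope with this, sketched below, is for the release 3 installer to bundle all the data files it needs and copy only those that are missing, so the same installer works whether a site is upgrading from release 1 or release 2. The class name, method, and file names are assumptions made for illustration only.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Installer step that copies the data files bundled with release 3 into the
// installation directory only when they are not already present. Data files a
// site has already installed (and possibly modified locally) are left untouched.
public class DataFileInstaller {

    public static void installMissingDataFiles(Path bundleDir, Path installDir,
                                               List<String> requiredFiles) throws IOException {
        for (String fileName : requiredFiles) {
            Path target = installDir.resolve(fileName);
            if (Files.notExists(target)) {
                Files.copy(bundleDir.resolve(fileName), target);
            }
        }
    }
}

// Example call with invented paths and file names:
// DataFileInstaller.installMissingDataFiles(
//         Path.of("/tmp/release3/data"), Path.of("/opt/product/data"),
//         List.of("error_messages.dat", "tax_rates.dat"));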
The marketing and packaging costs associated with new releases of software products are high, so product vendors usually create new releases only for new platforms or to add significant new functionality. They then charge users for this new software. When problems are discovered in an existing release, the vendor makes patches that repair the existing software available on its website for customers to download.
The problem with using downloadable patches is that many customers may never
discover the existence of these problem repairs and may not understand why they should
be installed. They may instead continue using their existing, faulty system with the
consequent risks to their business. In some situations, where the patch is designed to
repair security loopholes, the risks of failing to install the patch can mean that the
business is susceptible to external attacks. To avoid these problems, mass-market
software vendors, such as Adobe, Apple, and Microsoft, usually implement automatic
updating where systems are updated whenever a new minor release becomes available.
However, this does not usually work for custom systems because these systems do not
exist in a standard version for all customers.
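In very simplified form, the core of such an automatic update check is just a version comparison, as the hypothetical sketch below shows. Real updaters also handle downloads, signature verification and rollback, none of which appears here, and the "major.minor.patch" version format is only an assumption.

// Compares an installed "major.minor.patch" version with the latest advertised
// version and reports whether a newer release is available.
public class UpdateCheck {

    static boolean updateAvailable(String installed, String latest) {
        int[] current = parse(installed);
        int[] candidate = parse(latest);
        for (int i = 0; i < 3; i++) {
            if (candidate[i] != current[i]) {
                return candidate[i] > current[i];
            }
        }
        return false; // versions are identical
    }

    private static int[] parse(String version) {
        String[] parts = version.split("\\.");
        return new int[] { Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]),
                           Integer.parseInt(parts[2]) };
    }

    public static void main(String[] args) {
        System.out.println(updateAvailable("4.2.1", "4.3.0")); // prints true
    }
}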
Nowadays, there is a constant demand from industry for cheaper, better software, which
has to be delivered to ever-tighter deadlines. Consequently, many software companies
have turned to software process improvement as a way of enhancing the quality of their
software, reducing costs, or accelerating their development processes. Process
improvement means understanding existing processes and changing these processes to
increase product quality and/or reduce costs and development time.
Two quite different approaches to process improvement and change are used:
1. The process maturity approach, which has focused on improving process and project
management and introducing good software engineering practice into an
organization. The level of process maturity reflects the extent to which good technical
and management practice has been adopted in organizational software development
processes. The primary goals of this approach are improved product quality and
process predictability.
2. The agile approach, which has focused on iterative development and the reduction of
overheads in the software process. The primary characteristics of agile methods are
rapid delivery of functionality and responsiveness to changing customer
requirements.
The major problems with large projects are integration, project management, and
communications. There is usually a mix of abilities and experience in the team
members and, because the development process usually takes place over a number
of years, the development team is volatile. It may change completely over the lifetime
of the project. For small projects, however, where there are only a few team
members, the quality of the development team is more important than the
development process used. Hence, the Agile Manifesto proclaims the importance of
people rather than process. If the team has a high level of ability and experience, the
quality of the product is likely to be high, irrespective of the process used. If the team
is inexperienced and unskilled, a good process may limit the damage but will not, in
itself, lead to high-quality software. Where teams are small, good development
technology is particularly important.
The small team cannot devote a lot of time to tedious administrative procedures. The
team members spend most of their time designing and programming the system, so
good tools significantly affect their productivity. For large projects, a basic level of
development technology is essential for information management. Paradoxically,
however, sophisticated software tools are less important in large projects. Team
members spend a smaller proportion of their time in development activities and more
time communicating and understanding other parts of the system. Development tools
make no difference to this. However, Web 2.0 tools that support communications,
such as wikis and blogs, can significantly improve communications between
members of distributed teams.
All too often, the real cause of software quality problems is not poor management,
inadequate processes, or poor quality training. Rather, it is the fact that organizations
must compete to survive. To gain a contract, a company may underestimate the
effort required or promise rapid delivery of a system. In an attempt to meet these
commitments, an unrealistic development schedule may be agreed upon.
Consequently, the quality of the software is adversely affected.
7.4 SUMMARY
In this chapter we have seen that any activity, project or business has many stakeholders. Some of these hold direct, tangible views, while others are indirect or derived.
Change management involves assessing proposals for changes from system customers
and other stakeholders and deciding if it is cost-effective to implement these in a new
version of a system.
Software should be frequently rebuilt and tested immediately after a new version has
been built. This makes it easier to detect bugs and problems that have been introduced
since the last build.
The goals of process improvement are higher product quality, reduced process costs,
and faster delivery of software. The process improvement cycle involves process
measurement, process analysis and modeling, and process change.