

INSTITUTE OF AERONAUTICAL ENGINEERING


(Autonomous)
Dundigal, Hyderabad - 500 043

Lecture Notes:

Software Project Management (ACIC05)

Drafted by:
D. Koumudi Prasanna (IARE10939), Assistant Professor
Department of CSIT
Institute of Aeronautical Engineering
January 22, 2023
Contents

1 Conventional Software Management
  1.1 Waterfall Model
  1.2 In Theory
  1.3 In Practice
  1.4 Conventional Software Management Performance
  1.5 Evolution of Software Economics
  1.6 Software Economics
  1.7 Pragmatic Software Cost Estimation

2 Improving Software Economics
  Reducing Software Product Size
  Languages
  Object-Oriented Methods and Visual Modeling
  Reuse
  Improving Software Processes
  Improving Team Effectiveness
  Improving Automation through Software Environments
  Achieving Required Quality
  Peer Inspections: A Pragmatic View
  The Old Way and the New
  The Principles of Conventional Software Engineering
  The Principles of Modern Software Management

3 Life-Cycle Phases
  Engineering and Production Stages
  Inception Phase
  Elaboration Phase
  Construction Phase
  Transition Phase
  Artifacts of the Process
  The Artifact Sets
  The Management Set
  The Engineering Sets
  Artifact Evolution over the Life Cycle
  Test Artifacts
  Management Artifacts
  Engineering Artifacts
  Pragmatic Artifacts
  Model-Based Software Architectures
  Architecture: A Management Perspective
  Architecture: A Technical Perspective

4 Project Organizations
  Line-of-Business Organizations
  Project Organizations
  Evolution of Organizations
  Process Automation
  The Project Environment
  Process Control and Process Instrumentation
  The Seven Core Metrics
  Management Indicators
  Quality Indicators
  Life-Cycle Expectations
  Pragmatic Software Metrics
  Metrics Automation

5 Case Studies
  CCPDS-R Case Study
  Future Software Project Management Practices
  Modern Project Profiles
  Next-Generation Software Economics
  Modern Process Transitions

Bibliography

MODULE – I

MODULE–I: CONVENTIONAL SOFTWARE MANAGEMENT (09)


The waterfall model, conventional software management performance. Evolution of software economics: software economics, pragmatic software cost estimation.
Conventional software management
Conventional software management practices are sound in theory, but practice is still tied to archaic (outdated) technology and techniques.
Conventional software economics provides a benchmark of performance for
conventional software management principles.
The best thing about software is its flexibility: It can be programmed to do almost
anything.
The worst thing about software is also its flexibility: the "almost anything" characteristic has made it difficult to plan, monitor, and control software development.
Three important analyses of the state of the software engineering industry are:
1. Software development is still highly unpredictable. Only about 10% of software projects are delivered successfully within initial budget and schedule estimates.
2. Management discipline is more of a discriminator in success or failure than are
technology advances.
3. The level of software scrap and rework is indicative of an immature process.
All three analyses reached the same general conclusion: The success rate for
software projects is very low. The three analyses provide a good introduction to
the magnitude of the software problem and the current norms for conventional
software management performance.

THE WATERFALL MODEL


Most software engineering texts present the waterfall model as the source of the
"conventional" software process.
IN THEORY
It provides an insightful and concise summary of conventional software management. Its three primary points are:
1. There are two essential steps common to the development of computer
programs: analysis and coding.
[Figure: Waterfall model, part 1. The two basic steps to building a program: analysis and coding.]
Analysis and coding both involve creative work that directly contributes to the usefulness of the end product.

2. In order to manage and control all of the intellectual freedom associated with
software development, one must introduce several other "overhead" steps,
including system requirements definition, software requirements definition,

program design, and testing. These steps supplement the analysis and coding steps. The figure below illustrates the resulting project profile and the basic steps in developing a large-scale program.
[Figure: The waterfall model. Basic steps in developing a large-scale program: requirements analysis, design, coding, testing, and operation.]

3. The basic framework described in the waterfall model is risky and invites failure. The testing phase that occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design changes are likely to be so disruptive that the software requirements upon which the design is based are likely violated. Either the requirements must be modified or a substantial design change is warranted.

Five necessary improvements for the waterfall model are:

1. Program design comes first. Insert a preliminary program design phase between the software requirements generation phase and the analysis phase. By
this technique, the program designer assures that the software will not fail
because of storage, timing, and data flux (continuous change). As analysis
proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he
senses the consequences. If the total resources to be applied are insufficient or if
the embryonic(in an early stage of development) operational design is wrong, it
will be recognized at this early stage and the iteration with requirements and
preliminary design can be redone before final design, coding, and test
commences. How is this program design procedure implemented?

The following steps are required:


Begin the design process with program designers, not analysts or programmers.
Design, define, and allocate the data processing modes even at the risk of being
wrong. Allocate processing functions, design the database, allocate execution
time, define interfaces and processing modes with the operating system, describe
input and output processing, and define preliminary operating procedures.
Write an overview document that is understandable, informative, and current so
that every worker on the project can gain an elemental understanding of the
system.

2. Document the design. The amount of documentation required on most software programs is quite a lot, certainly much more than most programmers, analysts, or program designers are willing to do if left to their own devices.


Why do we need so much documentation? (1) Each designer must communicate
with interfacing designers, managers, and possibly customers. (2) During early
phases, the documentation is the design. (3) The real monetary value of
documentation is to support later modifications by a separate test team, a separate
maintenance team, and operations personnel who are not software literate.

3. Do it twice. If a computer program is being developed for the first time, arrange
matters so that the version finally delivered to the customer for operational
deployment is actually the second version insofar as critical design/operations are
concerned. Note that this is simply the entire process done in miniature, to a time
scale that is relatively small with respect to the overall effort. In the first version,
the team must have a special broad competence where they can quickly sense
trouble spots in the design, model them, model alternatives, forget the
straightforward aspects of the design that aren't worth studying at this early point,
and, finally, arrive at an error-free program.

4. Plan, control, and monitor testing. Without question, the biggest user of project resources (manpower, computer time, and/or management judgment) is the test phase. This is the phase of greatest risk in terms of cost and schedule. It occurs at
the latest point in the schedule, when backup alternatives are least available, if at
all. The previous three recommendations were all aimed at uncovering and
solving problems before entering the test phase. However, even after doing these
things, there is still a test phase and there are still important things to be done,
including: (1) employ a team of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like
dropped minus signs, missing factors of two, jumps to wrong addresses (do not
use the computer to detect this kind of thing, it is too expensive); (3) test every
logic path; (4) employ the final checkout on the target computer.

5. Involve the customer. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery.
There are three points following requirements definition where the insight,
judgment, and commitment of the customer can bolster the development effort.
These include a "preliminary software review" following the preliminary program
design step, a sequence of "critical software design reviews" during program
design, and a "final software acceptance review".

IN PRACTICE
Some software projects still practice the conventional software management
approach.
It is useful to summarize the characteristics of the conventional process as it
has typically been applied, which is not necessarily as it was intended. Projects
destined for trouble frequently exhibit the following symptoms:


 Protracted integration and late design breakage.


 Late risk resolution.
 Requirements-driven functional decomposition.
 Adversarial (conflict or opposition) stakeholder relationships.
 Focus on documents and review meetings.

Protracted Integration and Late Design Breakage


For a typical development project that used a waterfall model management process, Figure 1-2 illustrates development progress versus time. Progress is defined as percent coded, that is, demonstrable in its target form.

The following sequence was common:

 Early success via paper designs and thorough (often too thorough) briefings.
 Commitment to code late in the life cycle.
 Integration nightmares (unpleasant experiences) due to unforeseen implementation issues and interface ambiguities.
 Heavy budget and schedule pressure to get the system working.
 Late shoehorning of suboptimal fixes, with no time for redesign.
 A very fragile, unmaintainable product delivered late.

In the conventional model, the entire system was designed on paper, then
implemented all at once, then integrated. Table 1-1 provides a typical profile of
cost expenditures across the spectrum of software activities.


Late risk resolution. A serious issue associated with the waterfall life cycle was the lack of early risk resolution. Figure 1-3 illustrates a typical risk profile for conventional waterfall model projects. It includes four distinct periods of risk exposure, where risk is defined as the probability of missing a cost, schedule, feature, or quality goal. Early in the life cycle, as the requirements were being specified, the actual risk exposure was highly unpredictable.

Requirements-Driven Functional Decomposition: This approach depends on specifying requirements completely and unambiguously before other development activities begin. It naively treats all requirements as equally important, and depends on those requirements remaining constant over the


software development life cycle. These conditions rarely occur in the real world.
Specification of requirements is a difficult and important part of the software
development process.
Another property of the conventional approach is that the requirements were
typically specified in a functional manner. Built into the classic waterfall process
was the fundamental assumption that the software itself was decomposed into
functions; requirements were then allocated to the resulting components. This
decomposition was often very different from a decomposition based on object-
oriented design and the use of existing components. Figure 1-4 illustrates the
result of requirements-driven approaches: a software structure that is organized
around the requirements specification structure.

Adversarial Stakeholder Relationships:


The conventional process tended to result in adversarial stakeholder relationships,
in large part because of the difficulties of requirements specification and the
exchange of information solely through paper documents that captured
engineering information in ad hoc formats.
The following sequence of events was typical for most contractual software
efforts:
1. The contractor prepared a draft contract-deliverable document that captured an
intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically
within 15 to 30 days) a finalversion for approval.
This one-shot review process encouraged high levels of sensitivity on the part of
customers and contractors.


Focus on Documents and Review Meetings:


The conventional process focused on producing various documents that attempted
to describe the software product, with insufficient focus on producing tangible
increments of the products themselves. Contractors were driven to produce
literally tons of paper to meet milestones and demonstrate progress to
stakeholders, rather than spend their energy on tasks that would reduce risk
and produce quality software. Typically, presenters and the audience reviewed
the simple things that they understood rather than the complex and important
issues. Most design reviews therefore resulted in low engineering value and high
cost in terms of the effort and schedule involved in their preparation and conduct.
They presented merely a facade of progress.
Table 1-2 summarizes the results of a typical design review.

CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE


Barry Boehm's "Industrial Software Metrics Top 10 List" is a good, objective characterization of the state of software development.
1. Finding and fixing a software problem after delivery costs 100 times more than finding and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no
more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of source lines of code.
5. Variations among people account for the biggest differences in software
productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was
15:85; in 1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (i.e., systems of systems) cost 9 times as much.


9. Walkthroughs catch 60% of the errors.


10. 80% of the contribution comes from 20% of the contributors.
2. Evolution of Software Economics
SOFTWARE ECONOMICS
Most software cost models can be abstracted into a function of five basic
parameters: size, process, personnel, environment, and required quality.
1. The size of the end product (in human-generated components), which is typically quantified in terms of the number of source instructions or the number of function points required to develop the required functionality.
2. The process used to produce the end product, in particular the ability of the
process to avoid non- value-adding activities (rework, bureaucratic delays,
communications overhead)
3. The capabilities of software engineering personnel, and particularly their
experience with the computer science issues and the applications domain issues of
the project
4. The environment, which is made up of the tools and techniques available to
support efficient software development and to automate the process
5. The required quality of the product, including its features, performance, reliability, and adaptability.

The relationships among these parameters and the estimated cost can be written as follows:

    Effort = (Personnel) × (Environment) × (Quality) × (Size^Process)

One important aspect of software economics (as represented within today's software cost models) is that the relationship between effort and size exhibits a diseconomy of scale. The diseconomy of scale of software development is a result of the process exponent being greater than 1.0. Contrary to most manufacturing processes, the more software you build, the more expensive it is per unit item.
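As a minimal sketch of this diseconomy (the coefficient and exponent below are illustrative assumptions, not values from any calibrated cost model), the following Python snippet shows effort per unit of size rising once the process exponent exceeds 1.0:

    # Illustrative diseconomy of scale: effort grows as size^exponent,
    # so effort per unit of size increases as the product gets larger.
    # The coefficient (3.0) and exponent (1.2) are assumed, not calibrated.

    def effort(size_ksloc, coefficient=3.0, exponent=1.2):
        """Estimated effort in person-months for a size in KSLOC."""
        return coefficient * size_ksloc ** exponent

    for size in (10, 100, 1000):
        pm = effort(size)
        print(f"{size:5d} KSLOC -> {pm:8.0f} person-months "
              f"({pm / size:.1f} PM per KSLOC)")

With an exponent of exactly 1.0, the person-months per KSLOC would stay constant; here it rises from roughly 4.8 at 10 KSLOC to roughly 11.9 at 1,000 KSLOC.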
Figure 2-1 shows three generations of basic technology advancement in
tools, components, and processes. The required levels of quality and personnel are
assumed to be constant. The ordinate of the graph refers to software unit costs
(pick your favorite: per SLOC, per function point, per component) realized by an
organization.
The three generations of software development are defined as follows:

1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom processes, and virtually all custom components built in primitive languages. Project performance was highly predictable in that cost, schedule, and quality objectives were almost always underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes and off-the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some of the components (<30%) were available as commercial products, including the operating system, database management system, networking, and graphical user interface.


3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in the use of managed and measured processes, integrated automation environments, and mostly (70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom built.
Technologies for environment automation, size reduction, and process
improvement are not independent of one another. In each new era, the key is
complementary growth in all technologies. For example, the process advances
could not be used successfully without new component technologies and
increased tool automation.

Organizations are achieving better economies of scale in successive technology eras, with very large projects (systems of systems), long-lived products, and lines of business comprising multiple similar projects. Figure 2-2 provides an overview of how a return on investment (ROI) profile can be achieved in subsequent efforts across life cycles of various domains.


PRAGMATIC SOFTWARE COST ESTIMATION


One critical problem in software cost estimation is a lack of well-documented case studies of projects that used an iterative development approach. The software industry has inconsistently defined metrics and atomic units of measure, so the data from actual projects are highly suspect in terms of consistency and comparability. It is hard enough to collect a homogeneous set of project data within one organization; it is extremely difficult to homogenize data across different organizations with different processes, languages, domains, and so on.
There have been many debates among developers and vendors of software cost
estimation models and tools. Three topics of these debates are of particular
interest here:

1. Which cost estimation model to use?


2. Whether to measure software size in source lines of code or function points.


3. What constitutes a good estimate?


There are several popular cost estimation models (such as COCOMO, CHECKPOINT, ESTIMACS, Knowledge Plan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20). COCOMO is also one of the most open and well-documented cost estimation models. The general accuracy of conventional cost models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time."
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down (estimating the "should" cost). Figure 2-3 illustrates the predominant practice: the software project manager defines the target cost of the software, and then manipulates the parameters and sizing until the target cost can be justified. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain internal corporate funding, or to achieve some other goal.
The process described in Figure 2-3 is not all bad. In fact, it is absolutely
necessary to analyze the cost risks and understand the sensitivities and trade-offs
objectively. It forces the software project manager to examine the risks associated
with achieving the target costs and to discuss this information with other
stakeholders.
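As a hedged sketch of this practice (a COCOMO-style nominal form; the coefficient, exponent, target, and multiplier values below are illustrative assumptions, not published calibrations), the snippet shows a manager varying one effort multiplier until a target cost appears justified:

    # Sketch of bottom-up "target cost" substantiation with a
    # COCOMO-style model: effort = A * size^E * (product of multipliers).
    # A, E, the target, and the multiplier ratings are assumptions.

    def estimate_effort(size_ksloc, multipliers, a=2.94, e=1.15):
        product = 1.0
        for m in multipliers:
            product *= m
        return a * size_ksloc ** e * product

    target_pm = 500  # person-months the manager wants to justify
    for personnel_rating in (1.00, 0.90, 0.80):  # increasingly optimistic
        pm = estimate_effort(100, [personnel_rating])
        verdict = "justifiable" if pm <= target_pm else "over target"
        print(f"personnel multiplier {personnel_rating:.2f}: "
              f"{pm:5.0f} PM ({verdict})")

This mirrors the figure's point: the parameters are tuned until the target is met, which is defensible only if the associated risks are analyzed and discussed openly.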
A good software cost estimate has the following attributes:
 It is conceived and supported by the project manager, architecture team, development team, and test team accountable for performing the work.
 It is accepted by all stakeholders as ambitious but realizable.
 It is based on a well-defined software cost model with a credible basis.
 It is based on a database of relevant project experience that includes similar processes, similar technologies, similar environments, similar quality requirements, and similar people.
 It is defined in enough detail so that its key risk areas are understood and the probability of success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a
mature cost model with an experience base that reflects multiple similar projects
done by the same team with the same mature processes and tools.


MODULE – II

MODULE–II: IMPROVING SOFTWARE ECONOMICS


Reducing software product size, improving software processes, improving team effectiveness, improving automation, achieving required quality, peer inspections. The old way and the new: the principles of conventional software engineering, the principles of modern software management, transitioning to an iterative process.

Improving Software Economics


Five basic parameters of the software cost model are:
1. Reducing the size or complexity of what needs to be developed.
2. Improving the development process.
3. Using more-skilled personnel and better teams (not necessarily the same thing).
4. Using better environments (tools to automate the process).
5. Trading off or backing off on quality thresholds.
These parameters are given in priority order for most software domains. Table
3-1 lists some of the technology developments, process improvement efforts,
and management approaches targeted at improving the economics of software
development and integration.


REDUCING SOFTWARE PRODUCT SIZE


The most significant way to improve affordability and return on investment (ROI)
is usually to produce a product that achieves the design goals with the minimum
amount of human-generated source material. Component-based development is
introduced as the general term for reducing the "source" language size to achieve
a software solution.
Reuse, object-oriented technology, automatic code production, and higher order
programming languages are all focused on achieving a given system with fewer
lines of human-specified source directives (statements).
Size reduction is the primary motivation behind improvements in higher order languages (such as C++, Ada 95, Java, Visual Basic), automatic code generators (CASE tools, visual modeling tools, GUI builders), reuse of commercial components (operating systems, windowing environments, database management systems, middleware, networks), and object-oriented technologies (Unified Modeling Language, visual modeling tools, architecture frameworks).
The reduction is defined in terms of human-generated source material. In general,
when size-reducing technologies are used, they reduce the number of human-
generated source lines.
LANGUAGES


Universal function points (UFPs) are useful estimators for language-independent, early life-cycle estimates. The basic units of function points are external user inputs, external outputs, internal logical data groups, external data interfaces, and external inquiries. SLOC metrics are useful estimators for software after a candidate solution is formulated and an implementation language is known. Substantial data have been documented relating SLOC to function points. Some of these results, showing the expressiveness of some of today's popular languages, appear in Table 3-2.

Table 3-2. Expressiveness of popular languages

LANGUAGE       SLOC per UFP
Assembly       320
C              128
FORTRAN 77     105
COBOL 85        91
Ada 83          71
C++             56
Ada 95          55
Java            55
Visual Basic    35
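As a minimal sketch of how these expressiveness ratios feed early size estimates (the 400-UFP figure below is hypothetical, not from the text):

    # Table 3-2 expressiveness values: SLOC per universal function point.
    SLOC_PER_UFP = {
        "Assembly": 320, "C": 128, "FORTRAN 77": 105, "COBOL 85": 91,
        "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55,
        "Visual Basic": 35,
    }

    ufp_count = 400  # hypothetical early life-cycle function point estimate

    # The same functionality implies far fewer human-generated source
    # lines in a more expressive language.
    for language, ratio in sorted(SLOC_PER_UFP.items(), key=lambda kv: kv[1]):
        print(f"{language:13s}: ~{ufp_count * ratio:7,d} SLOC")

Read this way, delivering the same 400 UFPs in Java (22,000 SLOC) rather than C (51,200 SLOC) is precisely the size reduction this section advocates.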

OBJECT-ORIENTED METHODS AND VISUAL MODELING


Object-oriented technology is not germane to most of the software management topics discussed here, and books on object-oriented technology abound. Object-oriented programming languages appear to benefit both software productivity and software quality. The fundamental impact of object-oriented technology is in reducing the overall size of what needs to be developed.
People like drawing pictures to explain something to others or to themselves. When they do it for software system design, they call these pictures diagrams or diagrammatic models, and the very notation for them a modeling language. The following are interesting examples of the interrelationships among the dimensions of improving software economics:

1. An object-oriented model of the problem and its solution encourages a common vocabulary between the end users of a system and its developers, thus creating a shared understanding of the problem being solved.
2. The use of continuous integration creates opportunities to recognize risk early and
make incremental corrections without destabilizing the entire development effort.
3. An object-oriented architecture provides a clear separation of concerns among
disparate elements of a system, creating firewalls that prevent a change in one part
of the system from rending the fabric of the entire architecture.


Booch also summarized five characteristics of a successful object-oriented project:

1. A ruthless focus on the development of a system that provides a well-understood collection of essential minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not afraid to fail.
3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.

REUSE
Reusing existing components and building reusable components have been natural software engineering activities since the earliest improvements in programming languages. Reuse is pursued in order to minimize development costs while achieving all the other required attributes of performance, feature set, and quality. Try to treat reuse as a mundane part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported by organizations with the following characteristics:

 They have an economic motivation for continued support.
 They take ownership of improving product quality, adding new features, and transitioning to new technologies.
 They have a sufficiently broad customer base to be profitable.

The cost of developing a reusable component is not trivial. Figure 3-1 examines the economic trade-offs. The steep initial curve illustrates the economic obstacle to developing reusable components.
Reuse is an important discipline that has an impact on the efficiency of all workflows and the quality of most artifacts.


COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize
integration of commercial components and off-the-shelf products. While the use
of commercial components is certainly desirable as a means of reducing custom
development, it has not proven to be straightforward in practice. Table 3-3
identifies some of the advantages and disadvantages of using commercial
components.


IMPROVING SOFTWARE PROCESSES


Process is an overloaded term. Three distinct process perspectives are:

 Metaprocess: an organization's policies, procedures, and practices for pursuing a software-intensive line of business. The focus of this process is on organizational economics, long-term strategies, and software ROI.
 Macroprocess: a project's policies, procedures, and practices for producing a complete software product within certain cost, schedule, and quality constraints. The focus of the macroprocess is on creating an adequate instance of the metaprocess for a specific set of constraints.
 Microprocess: a project team's policies, procedures, and practices for achieving an artifact of the software process. The focus of the microprocess is on achieving an intermediate product baseline with adequate quality and adequate functionality as economically and rapidly as practical.
Although these three levels of process overlap somewhat, they have different objectives, audiences, metrics, concerns, and time scales, as shown in Table 3-4.

In a perfect software engineering world with an immaculate problem description, an obvious solution space, a development team of experienced geniuses, adequate resources, and stakeholders with common goals, we could execute a software development process in one iteration with almost no scrap and rework. Because we work in an imperfect world, however, we need to manage engineering activities so that scrap and rework profiles do not have an impact on the win conditions of any stakeholder. This should be the underlying premise for most process improvements.
IMPROVING TEAM EFFECTIVENESS
Teamwork is much more important than the sum of the individuals. With software
teams, a project manager needs to configure a balance of solid talent with highly
skilled people in the leverage positions. Some maxims of team management
include the following:
 A well-managed project can succeed with a nominal engineering team.
 A mismanaged project will almost never succeed, even with an expert team of
engineers.
 A well-architected system can be built by a nominal team of software builders.
 A poorly architected system will flounder even with an expert team of builders.
Boehm's five staffing principles are:
1. The principle of top talent: use better and fewer people.
2. The principle of job matching: fit the tasks to the skills and motivation of the people available.
3. The principle of career progression: an organization does best in the long run by helping its people to self-actualize.
4. The principle of team balance: select people who will complement and harmonize with one another.
5. The principle of phase-out: keeping a misfit on the team doesn't benefit anyone.

Software project managers need many leadership qualities in order to enhance team effectiveness. The following are some crucial attributes of successful software project managers that deserve much more attention:

1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in the right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a prerequisite for success.
3. Decision-making skill. The jillion books written about management have failed to provide a clear definition of this attribute. We all know a good leader when we run into one, and decision-making skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress, exploit eccentric prima donnas, transition average people into top performers, eliminate misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves) on decisions and priorities, sell candidates on job positions, sell changes to the status quo in the face of resistance, and sell achievements against objectives. In practice, selling requires continuous negotiation, compromise, and empathy.


IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS
The tools and environment used in the software process generally have a
linear effect on the productivity of the process. Planning tools, requirements
management tools, visual modeling tools, compilers, editors, debuggers, quality
assurance analysis tools, test tools, and user interfaces provide crucial
automation support for evolving the software engineering artifacts. Above all, configuration management environments provide the foundation for executing and instrumenting the process. At first order, the isolated impact of tools and automation generally allows improvements of 20% to 40% in effort. However, tools and environments must be viewed as the primary delivery vehicle for process automation and improvement, so their impact can be much higher.
Automation of the design process provides payback in quality, the ability
to estimate costs and schedules, and overall productivity using a smaller team.
Round-trip engineering describes the key capability of environments that support
iterative development. As we have moved into maintaining different information
repositories for the engineering artifacts, we need automation support to ensure
efficient and error-free transition of data from one artifact to another. Forward
engineering is the automation of one engineering artifact from another, more
abstract representation. For example, compilers and linkers have provided
automated transition of source code into executable code.
Reverse engineering is the generation or modification of a more abstract
representation from an existing artifact (for example, creating a visual design
model from a source code representation).
Consider the economic improvements associated with tools and environments. It is common for tool vendors to make relatively accurate individual assessments of life-cycle activities to support claims about the potential economic impact of their tools. For example, it is easy to find statements such as the following from companies in a particular tool market:
 Requirements analysis and evolution activities consume 40% of life-cycle costs.
 Software design activities have an impact on more than 50% of the resources.
 Coding and unit testing activities consume about 50% of software development
effort and schedule.
 Test activities can consume as much as 50% of a project's resources.
 Configuration control and change management are critical activities that can consume as much as 25% of resources on a large-scale project.
 Documentation activities can consume more than 30% of project engineering resources.
 Project management, business administration, and progress assessment can consume as much as 30% of project budgets.

ACHIEVING REQUIRED QUALITY


Software best practices are derived from the development process and technologies. Table 3-5 summarizes some dimensions of quality improvement.

Key practices that improve overall software quality include the following:


 Focusing on driving requirements and critical use cases early in the life cycle, focusing on requirements completeness and traceability late in the life cycle, and focusing throughout the life cycle on a balance between requirements evolution, design evolution, and plan evolution
 Using metrics and indicators to measure the progress and quality of an architecture as it evolves from a high-level prototype into a fully compliant product
 Providing integrated life-cycle environments that support early and continuous configuration control, change management, rigorous design methods, document automation, and regression test automation
 Using visual modeling and higher level languages that support architectural control, abstraction, reliable programming, reuse, and self-documentation
 Early and continuous insight into performance issues through demonstration-based evaluations

Conventional development processes stressed early sizing and timing estimates of computer program resource utilization. However, the typical chronology of events in performance assessment was as follows:
Project inception. The proposed design was asserted to be low risk with adequate performance margin.

Initial design review. Optimistic assessments of adequate design margin were based mostly on paper analysis or rough simulation of the critical threads. In most cases, the actual application algorithms and database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the
margin, as early benchmarks and initial tests began exposing the optimism
inherent in earlier estimates.
Integration and test. Serious performance problems were uncovered, necessitating
fundamental changes in the architecture. The underlying infrastructure was
usually the scapegoat, but the real culprit was immature use of the infrastructure,
immature architectural solutions, or poorly understood early design trade-offs.

PEER INSPECTIONS: A PRAGMATIC VIEW


Peer inspections are frequently overhyped as the key aspect of a quality system.
In my experience, peer reviews are valuable as secondary mechanisms, but they
are rarely significant contributors to quality compared with the following primary
quality mechanisms and indicators, which should be emphasized in the
management process:

 Transitioning engineering information from one artifact set to another, thereby assessing the consistency, feasibility, understandability, and technology constraints inherent in the engineering artifacts
 Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in the context of relevant use cases
 Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure representation rigor, consistency, completeness, and change control
 Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements compliance
 Change management metrics for objective insight into multiple-perspective change trends and convergence or divergence from quality and progress goals
Inspections are also a good vehicle for holding authors accountable for quality
products. All authors of software and documentation should have their products
scrutinized as a natural by-product of the process. Therefore, the coverage of
inspections should be across all authors rather than across all components.

THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING

1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its achievement.
2. High-quality software is possible. Techniques that have been demonstrated to increase quality include involving the customer, prototyping, simplifying design, conducting inspections, and hiring the best people.


3. Give products to customers early. No matter how hard you try to learn users' needs during the requirements phase, the most effective way to determine real needs is to give users a product and let them play with it.
4. Determine the problem before writing the requirements. When faced with what they believe is a problem, most engineers rush to offer a solution. Before you try to solve a problem, be sure to explore all the alternatives and don't be blinded by the obvious solution.
5. Evaluate design alternatives. After the requirements are agreed upon, you must examine a variety of architectures and algorithms. You certainly do not want to use an "architecture" simply because it was used in the requirements specification.
6. Use an appropriate process model. Each project must select a process
that makes ·the most sense for that project on the basis of corporate culture,
willingness to take risks, application area, volatility of requirements, and the
extent to which requirements are well understood.
7. Use different languages for different phases. Our industry's eternal thirst for simple solutions to complex problems has driven many to declare that the best development method is one that uses the same notation throughout the life cycle.
8. Minimize intellectual distance. To minimize intellectual distance, the software's structure should be as close as possible to the real-world structure.
9. Put techniques before tools. An undisciplined software engineer with a tool becomes a dangerous, undisciplined software engineer.
10. Get it right before you make it faster. It is far easier to make a working program run faster than it is to make a fast program work. Don't worry about optimization during initial coding.
11. Inspect code. Inspecting the detailed design and code is a much better way to find errors than testing.
12. Good management is more important than good technology. Good
management motivates people to do their best, but there are no universal "right"
styles of management.


13. People are the key to success. Highly skilled people with appropriate experience, talent, and training are key.
14. Follow with care. Just because everybody is doing something does not make it right for you. It may be right, but you must carefully assess its applicability to your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the
engineers do wrong?" Even when software fails, we rarely ask this. The fact is
that in any engineering discipline, the best methods can be used to produce awful
designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities. It is possible the customer would
tolerate 90% of the functionality delivered late if they could have 10% of it on
time.
17. The more they see, the more they need. The more functionality (or
performance) you provide a user, the more functionality (or performance) the
user wants.
18. Plan to throw one away. One of the most important critical success factors is whether or not a product is entirely new. Such brand-new applications, architectures, interfaces, or algorithms rarely work the first time.
19. Design for change. The architectures, components, and specification techniques you use must accommodate change.
20. Design without documentation is not design. I have often heard software engineers say, "I have finished the design. All that is left is the documentation."
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricks, constructs that perform a function correctly but in an obscure way. Show the world how smart you are by avoiding tricky code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software that is easier to test and much easier to maintain.
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure software's inherent maintainability and adaptability.
25. Use the McCabe complexity measure. Although there are many metrics available to report the inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's (see the sketch after this list).
26. Don't test your own software. Software developers should never be the primary testers of their own software.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by preventing it than it is to find and fix it. One way to do this is to analyze the causes of errors as they are detected.
28. Realize that software's entropy increases. Any software system that undergoes continuous change will grow in complexity and will become more and more disorganized.
29. People and time are not interchangeable. Measuring a project solely by person-months makes little sense.
30. Expect excellence. Your employees will do much better if you have high expectations for them.
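As a small illustration of principle 25 (a sketch; the control-flow graph counts below are hypothetical), McCabe's cyclomatic complexity can be computed directly from a program's control-flow graph:

    # McCabe cyclomatic complexity: V(G) = E - N + 2P, where E is the
    # number of edges, N the number of nodes, and P the number of
    # connected components in the control-flow graph.

    def cyclomatic_complexity(edges, nodes, components=1):
        return edges - nodes + 2 * components

    # Hypothetical graph for a function with one if/else and one loop:
    # 8 edges, 7 nodes, 1 connected component -> V(G) = 3, i.e. three
    # independent paths that a test suite should cover.
    print(cyclomatic_complexity(edges=8, nodes=7))  # -> 3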

THE PRINCIPLES OF MODERN SOFTWARE MANAGEMENT

The top 10 principles of modern software management follow. (The first five, which are the main themes of my definition of an iterative process, are summarized in Figure 4-1.)

1. Base the process on an architecture-first approach. This requires that a demonstrable balance be achieved among the driving requirements, the architecturally significant design decisions, and the life-cycle plans before the resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early. With today's
sophisticated software systems, it is not possible to define the entire problem,
design the entire solution, build the software, and then test the end product in
sequence. Instead, an iterative process that refines the problem understanding, an
effective solution, and an effective plan over several iterations encourages a
balanced treatment of all stakeholder objectives. Major risks must be addressed
early to increase predictability and avoid expensive downstream scrap and rework.
3. Transition design methods to emphasize component-based development. Moving from a line-of-code mentality to a component-based mentality is necessary to reduce the amount of human-generated source code and custom development.

4. Establish a change management environment. The dynamics of iterative development, including concurrent workflows by different teams working on shared artifacts, necessitate objectively controlled baselines.


5. Enhance change freedom through tools that support round-trip engineering. Round-trip engineering is the environment support necessary to automate and synchronize engineering information in different formats (such as requirements specifications, design models, source code, executable code, and test cases).
6. Capture design artifacts in rigorous, model-based notation. A model-based approach (such as UML) supports the evolution of semantically rich graphical and textual design notations.
7. Instrument the process for objective quality control and progress
assessment. Life-cycle assessment of the progress and the quality of all
intermediate products must be integrated into the process.
8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels
of detail. It is essential that the software management process drive toward early
and continuous demonstrations within the operational context of the system,
namely its use cases.
10. Establish a configurable process that is economically scalable. No single
process is suitable for all software developments.


Table 4-1 maps the top 10 risks of the conventional process to the key attributes and principles of a modern process.

TRANSITIONING TO AN ITERATIVE PROCESS

Modern software development processes have moved away from the conventional
waterfall model, in which each stage of the development process is dependent on
completion of the previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an iterative development process are significant but difficult to quantify. As one benchmark of the expected economic impact of process improvement, consider the process exponent parameters of the COCOMO II model. (Appendix B provides more detail on the COCOMO model.) This exponent can range from 1.01 (virtually no diseconomy of scale) to 1.26 (significant diseconomy of scale). The parameters that govern the value of the process exponent are application precedentedness, process flexibility, architecture risk resolution, team cohesion, and software process maturity.
The following paragraphs map the process exponent parameters of COCOMO II to my top 10 principles of a modern process.

 Application precedentedness. Domain experience is a critical factor in understanding how to plan and execute a software development project. For
unprecedented systems, one of the key goals is to confront risks and establish
early precedents, even if they are incomplete or experimental. This is one of the
primary reasons that the software industry has moved to an iterative life-cycle
process. Early iterations in the life cycle establish precedents from which the
product, the process, and the plans can be elaborated in evolving levels of detail.
 Process flexibility. Development of modern software is characterized by such a broad solution space and so many interrelated concerns that there is a paramount need for continuous incorporation of changes. These changes may be inherent in the problem understanding, the solution space, or the plans. Project artifacts must be supported by efficient change management commensurate with project needs. A configurable process that allows a common framework to be adapted across a range of projects is necessary to achieve a software return on investment.
 Architecture risk resolution. Architecture-first development is a crucial theme
underlying a successful iterative development process. A project team develops
and stabilizes architecture before developing all the components that make up the
entire suite of applications components. An architecture-first and component-
based development approach forces the infrastructure, common mechanisms, and
control mechanisms to be elaborated early in the life cycle and drives all
component make/buy decisions into the architecture process.
 Team cohesion. Successful teams are cohesive, and cohesive teams are
successful. Successful teams and cohesive teams share common objectives and
priorities. Advances in technology (such as programming languages, UML, and
visual modeling) have enabled more rigorous and understandable notations for
communicating software engineering information, particularly in the requirements
and design artifacts that previously were ad hoc and based completely on paper
exchange. These model-based formats have also enabled the round-trip
engineering support needed to establish change freedom sufficient for evolving
design representations.
 Software process maturity. The Software Engineering Institute's Capability Maturity Model (CMM) is a well-accepted benchmark for software process assessment. One of its key themes is that truly mature processes are enabled through an integrated environment that provides the appropriate level of automation to instrument the process for objective quality control.
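As a hedged sketch of how these five parameters drive the exponent (assuming the commonly cited COCOMO II form E = B + 0.01 x (sum of scale factors); the base value, the 0-to-5 rating scale, and the individual ratings below are illustrative assumptions chosen so that the exponent spans the 1.01-to-1.26 range quoted above):

    # Sketch of the COCOMO II process exponent. Each of the five scale
    # factors contributes to the exponent E = B + 0.01 * sum(SF).
    # Ratings are assumptions: 0 = best case, 5 = worst case.

    B = 1.01  # assumed base exponent: all five factors at their best

    scale_factors = {
        "application_precedentedness": 2.0,
        "process_flexibility": 1.0,
        "architecture_risk_resolution": 3.0,
        "team_cohesion": 1.0,
        "software_process_maturity": 2.0,
    }

    E = B + 0.01 * sum(scale_factors.values())
    print(f"process exponent E = {E:.2f}")  # -> 1.10

    # With a diseconomy of scale, improving any scale factor pays off
    # more on larger projects (the 2.94 coefficient is illustrative).
    for size in (10, 100):
        print(f"{size:4d} KSLOC -> {2.94 * size ** E:6.0f} person-months")

Under these assumed ratings, moving every factor to its best case would drop E to 1.01 and cut the 100 KSLOC estimate from about 466 to about 308 person-months.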


MODULE – III

MODULE–III: LIFE CYCLE PHASES (09)


Engineering and production stages; inception, elaboration, construction, and transition phases. Artifacts of the process: the artifact sets, management artifacts, engineering artifacts, pragmatic artifacts.

ENGINEERING AND PRODUCTION STAGES

To achieve economies of scale and higher returns on investment, we must move toward a software manufacturing process driven by technological improvements in process automation and component-based development. Two stages of the life cycle are:
1. The engineering stage, driven by less predictable but smaller teams doing design and synthesis activities.
2. The production stage, driven by more predictable but larger teams doing construction, test, and deployment activities.

The transition between engineering and production is a crucial event for the various stakeholders. The production plan has been agreed upon, and there is a good enough understanding of the problem and the solution that all stakeholders can make a firm commitment to go ahead with production.
The engineering stage is decomposed into two distinct phases, inception and elaboration, and the production stage into construction and transition. These four phases of the life-cycle process are loosely mapped to the conceptual framework of the spiral model, as shown in Figure 5-1.


INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among
stakeholders on the life-cycleobjectives for the project.

PRIMARY OBJECTIVES
 Establishing the project's software scope and boundary conditions, including an
operational concept, acceptance criteria, and a clear understanding of what is and
is not intended to be in the product
 Discriminating the critical use cases of the system and the primary scenarios of operation that will drive the major design trade-offs
 Demonstrating at least one candidate architecture against some of the primary scenarios
 Estimating the cost and schedule for the entire project (including detailed estimates for the elaboration phase)
 Estimating potential risks (sources of unpredictability)
ESSENTIAL ACTIVITIES
 Formulating the scope of the project. The information repository should be
sufficient to define the problem space and derive the acceptance criteria for the
end product.
 Synthesizing the architecture. An information repository is created that is sufficient to demonstrate the feasibility of at least one candidate architecture and an initial baseline of make/buy decisions so that the cost, schedule, and resource estimates can be derived.
 Planning and preparing a business case. Alternatives for risk management,
staffing, iteration plans, and cost/schedule/profitability trade-offs are evaluated.
PRIMARY EVALUATION CRITERIA
 Do all stakeholders concur on the scope definition and cost and schedule
estimates?
 Are requirements understood, as evidenced by the fidelity of the critical use cases?
 Are the cost and schedule estimates, priorities, risks, and development processes
credible?
 Do the depth and breadth of an architecture prototype demonstrate the preceding
criteria? (The primary value of prototyping candidate architecture is to provide
a vehicle for understanding the scope and assessing the credibility of the
development group in solving the particular technical problem.)
 Are actual resource expenditures versus planned expenditures acceptable?

ELABORATION PHASE

At the end of this phase, the "engineering" is considered complete. The elaboration phase activities must ensure that the architecture, requirements, and
plans are stable enough, and the risks sufficiently mitigated, that the cost and
schedule for the completion of the development can be predicted within an
acceptable range. During the elaboration phase, an executable architecture
prototype is built in one or more iterations, depending on the scope, size, & risk.
PRIMARY OBJECTIVES
 Baselining the architecture as rapidly as practical (establishing a configuration-managed snapshot in which all changes are rationalized, tracked, and maintained)
 Baselining the vision
 Baselining a high-fidelity plan for the construction phase
 Demonstrating that the baseline architecture will support the vision at a reasonable cost in a reasonable time

ESSENTIAL ACTIVITIES
 Elaborating the vision.
 Elaborating the process and infrastructure.
 Elaborating the architecture and selecting components.

PRIMARY EVALUATION CRITERIA


 Is the vision stable?
 Is the architecture stable?
 Does the executable demonstration show that the major risk elements have
been addressed and crediblyresolved?
 Is the construction phase plan of sufficient fidelity, and is it backed up with a
credible basis of estimate?
 Do all stakeholders agree that the current vision can be met if the current plan is executed to develop the complete system in the context of the current architecture?


 Are actual resource expenditures versus planned expenditures acceptable?

CONSTRUCTION PHASE
During the construction phase, all remaining components and application
features are integrated into the application, and all features are thoroughly tested.
Newly developed software is integrated where required. The construction phase
represents a production process, in which emphasis is placed on managing
resources and controlling operations to optimize costs, schedules, and quality.

PRIMARY OBJECTIVES
 Minimizing development costs by optimizing resources and avoiding unnecessary
scrap and rework
 Achieving adequate quality as rapidly as practical
 Achieving useful versions (alpha, beta, and other test releases) as rapidly as
practical

ESSENTIAL ACTIVITIES
 Resource management, control, and process optimization
 Complete component development and testing against evaluation criteria
 Assessment of product releases against acceptance criteria of the vision

PRIMARY EVALUATION CRITERIA


 Is this product baseline mature enough to be deployed in the user community? (Existing defects are not obstacles to achieving the purpose of the next release.)
 Is this product baseline stable enough to be deployed in the user community? (Pending changes are not obstacles to achieving the purpose of the next release.)
 Are the stakeholders ready for transition to the user community?
 Are actual resource expenditures versus planned expenditures acceptable?

TRANSITION PHASE
The transition phase is entered when a baseline is mature enough to be deployed
in the end-user domain. This typically requires that a usable subset of the system
has been achieved with acceptable quality levels and user documentation so that
transition to the user will provide positive results. This phase could include any of
the following activities:

1. Beta testing to validate the new system against user expectations


2. Beta testing and parallel operation relative to a legacy system it is replacing
3. Conversion of operational databases
4. Training of users and maintainers


The transition phase concludes when the deployment baseline has achieved the
complete vision.

PRIMARY OBJECTIVES
 Achieving user self-supportability
 Achieving stakeholder concurrence that deployment baselines are complete and consistent with the evaluation criteria of the vision
 Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
 Synchronization and integration of concurrent construction increments into consistent deployment baselines
 Deployment-specific engineering (cutover, commercial packaging and production, sales rollout kit development, field personnel training)
 Assessment of deployment baselines against the complete vision and acceptance criteria in the requirements set
EVALUATION CRITERIA
 Is the user satisfied?
 Are actual resource expenditures versus planned expenditures acceptable?

ARTIFACTS OF THE PROCESS

THE ARTIFACT SETS


To make the development of a complete software system manageable, distinct
collections of information are organized into artifact sets. An artifact represents
cohesive information that typically is developed and reviewed as a single entity.
Life-cycle software artifacts are organized into five distinct sets that are
roughly partitioned by the underlying language of the set: management (ad hoc
textual formats), requirements (organized text and models of the problem space),
design (models of the solution space), implementation (human-readable
programming language and associated source files), and deployment (machine-
processable languages and associated files). The artifact sets are shown in Figure
6-1.


THE MANAGEMENT SET


The management set captures the artifacts associated with process planning
and execution. These artifacts use ad hoc notations, including text,
graphics, or whatever representation is required to capture the "contracts"
among project personnel (project management, architects, developers, testers,
marketers, administrators), among stakeholders (funding authority, user,
software project manager, organization manager, regulatory agency), and
between project personnel and stakeholders. Specific artifacts included in this
set are the work breakdown structure (activity breakdown and financial
tracking mechanism), the business case (cost, schedule, profit expectations),
the release specifications (scope, plan, objectives for release baselines), the
software development plan (project process instance), the release descriptions
(results of release baselines), the status assessments (periodic snapshots of
project progress), the software change orders (descriptions of discrete baseline
changes), the deployment documents (cutover plan, training course, sales
rollout kit), and the environment (hardware and software tools, process
automation, & documentation).
Management set artifacts are evaluated, assessed, and measured through a
combination of the following:
 Relevant stakeholder review
 Analysis of changes between the current version of the artifact and previous
versions
 Major milestone demonstrations of the balance among all artifacts and, in
particular, the accuracy of the business case and vision artifacts


THE ENGINEERING SETS


The engineering sets consist of the requirements set, the design set, the
implementation set, and the deployment set.
Requirements Set
Requirements artifacts are evaluated, assessed, and measured through a
combination of the following:

 Analysis of consistency with the release specifications of the management set


 Analysis of consistency between the vision and the requirements models
 Mapping against the design, implementation, and deployment sets to evaluate the consistency and completeness and the semantic balance between information in the different sets
 Analysis of changes between the current version of requirements artifacts and previous versions (scrap, rework, and defect elimination trends)
 Subjective review of other dimensions of quality

Design Set
UML notation is used to engineer the design models for the solution. The design set contains varying levels of abstraction that represent the components of the solution space (their identities, attributes, static relationships, dynamic interactions). The design set is evaluated, assessed, and measured through a combination of the following:
 Analysis of the internal consistency and quality of the design model
 Analysis of consistency with the requirements models
 Translation into implementation and deployment sets and notations (for example,
traceability, source code generation, compilation, linking) to evaluate the
consistency and completeness and the semantic balance between information in
the sets
 Analysis of changes between the current version of the design model and previous
versions (scrap, rework, and defect elimination trends)
 Subjective review of other dimensions of quality
Implementation Set
The implementation set includes source code (programming language notations)
that represents the tangible implementations of components (their form, interface,
and dependency relationships)
Implementation sets are human-readable formats that are evaluated, assessed,
and measured through a
combination of the following:
 Analysis of consistency with the design models
 Translation into deployment set notations (for example, compilation and linking)
to evaluate the consistency and completeness among artifact sets
 Assessment of component source or executable files against relevant evaluation criteria through inspection, analysis, demonstration, or testing


 Execution of stand-alone component test cases that automatically compare expected results with actual results
 Analysis of changes between the current version of the implementation set and previous versions (scrap, rework, and defect elimination trends)
 Subjective review of other dimensions of quality
Deployment Set
The deployment set includes user deliverables and machine language notations,
executable software, and the build scripts, installation scripts, and executable target-specific data necessary
to use the product in its target environment.
Deployment sets are evaluated, assessed, and measured through a combination of
the following:

 Testing against the usage scenarios and quality attributes defined in the
requirements set to evaluate the consistency and completeness and the semantic
balance between information in the two sets
 Testing the partitioning, replication, and allocation strategies in mapping
components of the implementation set to physical resources of the deployment
system (platform type, number, network topology)
 Testing against the defined usage scenarios in the user manual such as
installation, user-oriented dynamic reconfiguration, mainstream usage, and
anomaly management
 Analysis of changes between the current version of the deployment set and
previous versions (defect elimination trends, performance changes)
 Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life
cycle; the other sets take on check and balance roles. As illustrated in Figure 6-2,
each phase has a predominant focus: Requirements are the focus of the inception
phase; design, the elaboration phase; implementation, the construction phase; and
deployment, the transition phase. The management artifacts also evolve, but at a
fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact
sets.
1. Management: scheduling, workflow, defect tracking, change management,
documentation, spreadsheet, resource management, and presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage
analysis tools, and test management tools
5. Deployment: test coverage and test automation tools, network management tools, commercial components (operating systems, GUIs, RDBMS, networks, middleware), and installation tools.
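As a rough illustration, this tool-to-set mapping can be captured as a simple data structure. A minimal Python sketch follows; the dictionary name and layout are illustrative, not from the source:

    # Hypothetical mapping of the five artifact sets to the tool categories above.
    ARTIFACT_SET_TOOLS = {
        "management": ["scheduling", "workflow", "defect tracking",
                       "change management", "documentation", "spreadsheet",
                       "resource management", "presentation"],
        "requirements": ["requirements management"],
        "design": ["visual modeling"],
        "implementation": ["compiler/debugger", "code analysis",
                           "test coverage analysis", "test management"],
        "deployment": ["test coverage and test automation", "network management",
                       "commercial components", "installation"],
    }

    def tools_for(artifact_set):
        # Return the tool categories supporting a given artifact set.
        return ARTIFACT_SET_TOOLS.get(artifact_set.lower(), [])

    print(tools_for("design"))  # -> ['visual modeling']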


Implementation Set versus Deployment Set


The separation of the implementation set (source code) from the deployment set
(executable code) is important because there are very different concerns with each
set. The structure of the information delivered to the user (and typically the test
organization) is very different from the structure of the source code information.
Engineering decisions that have an impact on the quality of the deployment set
but are relatively incomprehensible in the design and implementation sets include
the following:
 Dynamically reconfigurable parameters (buffer sizes, color palettes, number of
servers, number of simultaneous clients, data files, run-time parameters)
 Effects of compiler/link optimizations (such as space optimization versus speed
optimization)
 Performance under certain allocation strategies (centralized versus distributed,
primary and shadow threads, dynamic load balancing, hot backup versus
checkpoint/rollback)
 Virtual machine constraints (file descriptors, garbage collection, heap size,
maximum record size, disk file rotations)
 Process-level concurrency issues (deadlock and race conditions)
 Platform-specific differences in performance or behavior
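To make the separation concrete, here is a minimal Python sketch under assumed names: the dynamically reconfigurable parameters live in a deployment-set data file rather than in the implementation-set source, so they can be tuned at deployment time without recompiling. The file name and keys are hypothetical:

    import json

    # Implementation-set defaults; the deployment set can override them
    # at installation or run time.
    DEFAULTS = {"buffer_size": 8192, "num_servers": 2, "max_clients": 50}

    def load_runtime_parameters(path="deploy_params.json"):
        # Merge deployment-time overrides onto the built-in defaults.
        params = dict(DEFAULTS)
        try:
            with open(path) as f:
                params.update(json.load(f))  # deployment-set overrides win
        except FileNotFoundError:
            pass  # no override file: fall back to the defaults
        return params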

ARTIFACT EVOLUTION OVER THE LIFE CYCLE


Each state of development represents a certain amount of precision in the final system description. Early in the life cycle, precision is low and the representation is generally high-level (abstract). Eventually, the precision of the representation is high and everything is specified in full detail.
on a particular artifact set. At the end of each phase, the overall system state will
have progressed on all sets, as illustrated in Figure 6-3.


The inception phase focuses mainly on critical requirements, usually with a secondary focus on an initial deployment view. During the elaboration phase,
there is much greater depth in requirements, much more breadth in the design set,
and further work on implementation and deployment issues. The main focus of the construction phase is design and implementation. The main focus of the
transition phase is on achieving consistency and completeness of the deployment
set in the context of the other sets.
TEST ARTIFACTS
 The test artifacts must be developed concurrently with the product from inception through deployment. Thus, testing is a full-life-cycle activity, not a late-life-cycle activity.
 The test artifacts are communicated, engineered, and developed within the same artifact sets as the developed product.
 The test artifacts are implemented in programmable and repeatable formats (as software programs).
 The test artifacts are documented in the same way that the product is documented.
 Developers of the test artifacts use the same tools, techniques, and training as the software engineers developing the product.
Test artifact subsets are highly project-specific; the following example clarifies the relationship between test artifacts and the other artifact sets. Consider a
project to perform seismic data processing for the purpose of oil exploration. This
system has three fundamental subsystems: (1) a sensor subsystem that captures
raw seismic data in real time and delivers these data to (2) a technical operations


subsystem that converts raw data into an organized database and manages queries
to this database from (3) a display subsystem that allows workstation operators to
examine seismic data in human-readable form. Such a system would result in the
following test artifacts:

 Management set. The release specifications and release descriptions capture the
objectives, evaluation criteria, and results of an intermediate milestone. These
artifacts are the test plans and test results negotiated among internal project teams.
The software change orders capture test results (defects, testability changes,
requirements ambiguities, enhancements) and the closure criteria associated with
making a discrete change to a baseline.
 Requirements set. The system-level use cases capture the operational concept for
the system and the acceptance test case descriptions, including the expected
behavior of the system and its quality attributes. The entire requirement set is a
test artifact because it is the basis of all assessment activities across the life cycle.
 Design set. A test model for nondeliverable components needed to test the
product baselines is captured in the design set. These components include such
design set artifacts as a seismic event simulation for creating realistic sensor data;
a "virtual operator" that can support unattended, after- hours test cases; specific
instrumentation suites for early demonstration of resource usage; transaction rates
or response times; and use case test drivers and component stand-alone test
drivers.
 Implementation set. Self-documenting source code representations for test
components and test drivers provide the equivalent of test procedures and test
scripts. These source files may also include human-readable data files
representing certain statically defined data sets that are explicit test source files.
Output files from test drivers provide the equivalent of test reports.
 Deployment set. Executable versions of test components, test drivers, and data
files are provided.

MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results
and ancillary information necessary to document the product/process legacy,
maintain the product, improve the product, and improve the process.
Business Case
The business case artifact provides all the information necessary to determine
whether the project is worth investing in. It details the expected revenue, expected
cost, technical and management plans, and backup data necessary to demonstrate
the risks and realism of the plans. The main purpose is to transform the vision into
economic terms so that an organization can make an accurate ROI assessment.
The financial forecasts are evolutionary, updated with more accurate forecasts as the life cycle progresses. Figure 6-4 provides a default outline for a business case.

41
42

Software Development Plan


The software development plan (SDP) elaborates the process framework into a
fully detailed plan. Two indications of a useful SDP are periodic updating (it is
not stagnant shelfware) and understanding and acceptance by managers and
practitioners alike. Figure 6-5 provides a default outline for a software
development plan.

42
43

Work Breakdown Structure


Work breakdown structure (WBS) is the vehicle for budgeting and collecting
costs. To monitor and control a project's financial performance, the software
project manager must have insight into project costs and how they are expended.
The structure of cost accountability is a serious project planning constraint.

Software Change Order Database


Managing change is one of the fundamental primitives of an iterative
development process. With greater change freedom, a project can iterate more
productively. This flexibility increases the content, quality, and number of
iterations that a project can achieve within a given schedule. Change freedom has
been achieved in practice through automation, and today's iterative development
environments carry the burden of change management. Organizational processes
that depend on manual change management techniques have encountered major
inefficiencies.

Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are
derived from the vision statement as well as many other sources (make/buy
analyses, risk management concerns, architectural considerations, shots in the
dark, implementation constraints, quality thresholds). These artifacts are intended
to evolve along with the process, achieving greater fidelity as the life cycle
progresses and requirements understanding matures. Figure 6-6 provides a default outline for a release specification.

Release Descriptions
Release description documents describe the results of each release, including
performance against each of the evaluation criteria in the corresponding release
specification. Release baselines should be accompanied by a release description
document that describes the evaluation criteria for that configuration baseline and
provides substantiation (through demonstration, testing, inspection, or analysis)


that each criterion has been addressed in an acceptable manner. Figure 6-7
provides a default outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status,
including the software project manager's risk assessment, quality indicators, and
management indicators. Typical status assessments should include a review of
resources, personnel staffing, financial data (cost and revenue), top 10 risks,
technical progress (metrics snapshots), major milestone plans and results, and total project or product scope and action items.

Environment
An important emphasis of a modern approach is to define the development and
maintenance environment as a first-class artifact of the process. A robust,
integrated development environment must support automation of the development
process. This environment should include requirements management, visual
modeling, document automation, host and target programming tools, automated
regression testing, and continuous and integrated change management, and feature
and defect tracking.

Deployment
A deployment document can take many forms. Depending on the project, it could
include several document subsets for transitioning the product into operational
status. In big contractual efforts in which the system is delivered to a separate
maintenance organization, deployment artifacts may include computer system
operations manuals, software installation manuals, plans and procedures for
cutover (from a legacy system), site surveys, and so forth. For commercial

software products, deployment artifacts may include marketing plans, sales rollout kits, and training courses.
Management Artifact Sequences
In each phase of the life cycle, new artifacts are produced and previously
developed artifacts are updated to incorporate lessons learned and to capture
further depth and breadth of the solution. Figure 6-8 identifies a typical sequence
of artifacts across the life-cycle phases.


ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations
such as UML, programming languages, or executable machine codes. Three
engineering artifacts are explicitly intended for more general review, and they
deserve further elaboration.

Vision Document
The vision document provides a complete vision for the software system under
development and supports the contract between the funding authority and the
development organization. A project vision is meant to be changeable as
understanding evolves of the requirements, architecture, plans, and technology. A
good vision document should change slowly. Figure 6-9 provides a default outline
for a vision document.

Architecture Description

The architecture description provides an organized view of the software architecture under development. It is extracted largely from the design model and
includes views of the design, implementation, and deployment sets sufficient to
understand how the operational concept of the requirements set will be achieved.
The breadth of the architecture description will vary from project to project
depending on many factors. Figure 6-10 provides a default outline for an
architecture description.


Software User Manual


The software user manual provides the user with the reference documentation
necessary to support the delivered software. Although content is highly variable
across application domains, the user manual should include installation
procedures, usage procedures and guidance, operational constraints, and a user
interface description, at a minimum. For software products with a user interface,
this manual should be developed early in the life cycle because it is a necessary
mechanism for communicating and stabilizing an important subset of
requirements. The user manual should be written by members of the test team,
who are more likely to understand the user's perspective than the development
team.

PRAGMATIC ARTIFACTS
 People want to review information but don't understand the language
of the artifact. Many interested reviewers of a particular artifact will resist
having to learn the engineering language in which the artifact is written. It is not
uncommon to find people (such as veteran software managers, veteran quality
assurance specialists, or an auditing authority from a regulatory agency) who react
as follows: "I'm not going to learn UML, but I want to review the design of this
software, so give me a separate description such as some flowcharts and text that I
can understand."
 People want to review the information but don't have access to the
tools. It is not very common for the development organization to be fully tooled;

it is extremely rare that the other stakeholders have any capability to review the
engineering artifacts on-line. Consequently, organizations are forced to exchange
paper documents. Standardized formats (such as UML, spreadsheets, Visual
Basic, C++, and Ada 95), visualization tools, and the Web are rapidly making it
economically feasible for all stakeholders to exchange information
electronically.
 Human-readable engineering artifacts should use rigorous notations
that are complete, consistent, and used in a self-documenting manner.
Properly spelled English words should be used for all identifiers and descriptions.
Acronyms and abbreviations should be used only where they are well accepted
jargon in the context of the component's usage. Readability should be
emphasized and the use of proper English words should be required in all
engineering artifacts. This practice enables understandable representations,
browsable formats (paperless review), more-rigorous notations, and reduced
error rates.
 Useful documentation is self-defining: It is documentation that gets used.
 Paper is tangible; electronic artifacts are too easy to change. On-line
and Web-based artifacts can be changed easily and are viewed with more
skepticism because of their inherent volatility.


MODULE–IV: PROJECT ORGANIZATIONS
Project organizations: line-of-business organizations, project organizations, evolution of organizations, process automation. Project control and process instrumentation: the seven core metrics, management indicators, quality indicators, life-cycle expectations, pragmatic software metrics, metrics automation.

Project Organizations and Responsibilities:

 Organizations engaged in a software line of business need to support projects with the infrastructure necessary to use a common process.
 Project organizations need to allocate artifacts and responsibilities across the project team to ensure a balance of global (architecture) and local (component) concerns.
 The organization must evolve with the WBS and life-cycle concerns.
 Software lines of business and product teams have different motivations.
 Software lines of business are motivated by return on investment (ROI), new business discriminators, market diversification, and profitability.
 Project teams are motivated by the cost, schedule, and quality of specific deliverables.
1) Line-Of-Business Organizations:
The main features of default organization are as follows:
• Responsibility for process definition & maintenance is specific to a cohesive line
of business.
• Responsibility for process automation is an organizational role and is equal in importance to the process definition role.
• An organizational role may be fulfilled by a single individual or several different teams.
Fig: Default roles in a software Line-of-Business Organization.
Software Engineering Process Authority (SEPA)
The SEPA facilitates the exchange of information and process guidance both to and from project practitioners. This role is accountable to the General Manager for maintaining a current assessment of the organization's process maturity and its plan for future improvement.
Project Review Authority (PRA)
The PRA is the single individual responsible for ensuring that a software project complies with all organizational and business unit software policies, practices, and standards. A software project manager is responsible for meeting the requirements of a contract or some other project compliance standard.

Software Engineering Environment Authority (SEEA)


The SEEA is responsible for automating the organization's process, maintaining the organization's standard environment, training projects to use the environment, and maintaining organization-wide reusable assets. The SEEA role is necessary to achieve a significant ROI for a common process.
Infrastructure
An organization's infrastructure provides human resources support, project-independent research and development, and other capital software engineering assets.
2) Project organizations:

Figure 11-2. Default project organization and responsibilities: a software management team (artifacts: business case, software development plan, status assessments; activities: customer interface, PRA interface, planning, monitoring, risk management, software process definition, process improvement, system engineering, administration) over three subordinate teams: software architecture, software development, and software assessment.

• The above figure shows a default project organization and maps project-level roles and responsibilities.
• The main features of the default organization are as follows:
• The project management team is an active participant, responsible for producing as well as managing.


• The architecture team is responsible for real artifacts and for the integration of components, not just for staff functions.
• The development team owns the component construction and maintenance
activities.
• The assessment team is separate from development.
• Quality is everyone's job and is integrated into all activities and checkpoints.
• Each team takes responsibility for a different quality perspective.
3) EVOLUTION OF ORGANIZATIONS:

The allocation of effort among the teams evolves across the four phases roughly as follows:

Team                                  Inception   Elaboration   Construction   Transition
Software management                   50%         10%           10%            10%
Software architecture                 20%         50%           10%            5%
Software development                  20%         20%           50%            35%
Software assessment
(measurement/evaluation)              10%         20%           30%            50%

Process Automation:
The environment must be a first-class artifact of the process. Process automation and change management are critical to an iterative process. If change is expensive, the development organization will resist it. Round-trip engineering and integrated environments promote change freedom and effective evolution of technical artifacts. Metrics automation is crucial to effective project control. External stakeholders need access to environment resources to improve interaction with the development team and add value to the process.
There are three levels of process, each requiring a certain degree of process automation to be carried out efficiently:
 Metaprocess (line of business): the automation support for this level is called an infrastructure.
 Macroprocess (project): the automation support for a project's process is called an environment.
 Microprocess (iteration): the automation support for generating artifacts is generally called a tool.

Tools: Automation Building Blocks:
Many tools are available to automate the software development process. Most of the core software development tools map closely to one of the process workflows:
Management: workflow automation, metrics automation
Environment: change management, document automation
Requirements: requirements management
Design: visual modeling
Implementation: editors, compilers, debuggers, linkers, runtime
Assessment: test automation, defect tracking

The Project Environment:


The project environment artifacts evolve through three discrete states: (1) the prototyping environment, (2) the development environment, and (3) the maintenance environment.
The prototyping environment includes an architecture test bed for prototyping project architectures to evaluate trade-offs during the inception and elaboration phases of the life cycle.
The development environment should include a full suite of development tools needed to support the various process workflows and round-trip engineering to the maximum extent possible.
The maintenance environment should typically coincide with a mature version of the development environment.
There are four important environment disciplines that are critical to the management context and the success of a modern iterative development process:
1. Round-trip engineering
2. Change management: software change orders (SCOs), configuration baselines, configuration control board
3. Infrastructure: organization policy, organization environment
4. Stakeholder environment

Round-Trip Engineering
Tools must be integrated to maintain consistency and traceability. Round-trip engineering is the term used to describe this key requirement for environments that support iterative development. As the software industry moves into maintaining different information sets for the engineering artifacts, more automation support is needed to ensure efficient and error-free transition of data from one artifact to another. Round-trip engineering is the environment support necessary to maintain consistency among the engineering artifacts.


Change Management
Change management must be automated and enforced to manage multiple iterations and to enable change freedom. Change is the fundamental primitive of iterative development.
I. Software Change Orders
The atomic unit of software work that is authorized to create, modify, or obsolesce components within a configuration baseline is called a software change order (SCO). The basic fields of the SCO are title, description, metrics, resolution, assessment, and disposition.


II. Configuration Baseline
A configuration baseline is a named collection of software components and supporting documentation that is subjected to change management and is upgraded, maintained, tested, statused, and obsolesced as a unit. There are generally two classes of baselines: external product releases and internal testing releases.
Three levels of baseline releases are required for most systems:
1. Major release (N)
2. Minor release (M)
3. Interim (temporary) release (X)
A major release represents a new generation of the product or project. A minor release represents the same basic product but with enhanced features, performance, or quality. Major and minor releases are intended to be external product releases that are persistent and supported for a period of time. An interim release corresponds to a developmental configuration that is intended to be transient.
Once software is placed in a controlled baseline, all changes are tracked such that a distinction can be made as to the cause of the change. The change categories are:
Type 0: critical failures (must be fixed before release)
Type 1: a bug or defect that either does not impair the usefulness of the system or can be worked around
Type 2: a change that is an enhancement rather than a response to a defect
Type 3: a change that is necessitated by an update to the environment
Type 4: changes that are not accommodated by the other categories
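These categories map naturally onto a small enumeration; a minimal Python sketch, with illustrative names, follows:

    from enum import IntEnum

    class ChangeType(IntEnum):
        # Hypothetical encoding of the five change categories listed above.
        CRITICAL_FAILURE = 0    # must be fixed before release
        DEFECT = 1              # does not impair usefulness, or can be worked around
        ENHANCEMENT = 2         # improvement rather than a response to a defect
        ENVIRONMENT_UPDATE = 3  # necessitated by an update to the environment
        OTHER = 4               # not accommodated by the other categories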
III. Configuration Control Board (CCB)
A CCB is a team of people that functions as the decision authority on the content of configuration baselines.
A CCB includes:
1. Software managers
2. Software Architecture managers
3. Software Development managers
4. Software Assessment managers
5. Other stakeholders who are integral to the maintenance of the controlled software delivery system
Infrastructure
The organization infrastructure provides the organization's capital assets, including two key artifacts: policy and environment.
I. Organization Policy:
A policy captures the standards for project software development processes. The organization policy is usually packaged as a handbook that defines the life cycles and the process primitives, such as:
 Major milestones
 Intermediate Artifacts


 Engineering repositories
 Metrics
 Roles & Responsibilities

II. Organization Environment
The organization environment captures an inventory of tools, which are the building blocks from which project environments can be configured efficiently and economically.

Stakeholder Environment
Many large-scale projects include people in external organizations that represent other stakeholders participating in the development process. They might include:
 Procurement agency contract monitors
 End-user engineering support personnel
 Third-party maintenance contractors
 Independent verification and validation contractors
 Representatives of regulatory agencies and others
These stakeholder representatives also need access to development resources so that they can contribute value to the overall effort. An on-line environment accessible by the external stakeholders allows them to participate in the process as follows:
 Accept and use executable increments for hands-on evaluation
 Use the same on-line tools, data, and reports that the development organization uses to manage and monitor the project
 Avoid excessive travel, paper interchange delays, format translations, paper shipping costs, and other overhead costs

PROJECT CONTROL & PROCESS INSTRUMENTATION

INTRODUCTION: Software metrics are used to instrument the activities and products of the software development process. Hence, the quality of the software products and the achievements in the development process can be determined using software metrics.

Need for Software Metrics:


 Software metrics are needed for calculating the cost and schedule of a software product with great accuracy.
 Software metrics are required for making an accurate estimation of progress.
 The metrics are also required for understanding the quality of the software product.

INDICATORS:
An indicator is a metric or a group of metrics that provides an understanding of the software process, the software product, or the software project. A software engineer assembles measures and produces metrics from which the indicators can be derived.
Two types of indicators are:
(i) Management indicators
(ii) Quality indicators
Management Indicators
The management indicators (technical progress, financial status, and staffing progress) are used to determine whether a project is on budget and on schedule. The management indicators that indicate financial status are based on an earned value system.
Quality Indicators
The quality indicators are based on the measurement of the changes that occur in the software.

SEVEN CORE METRICS OF SOFTWARE PROJECT


Software metrics instrument the activities and products of the software development/integration process. Metrics values provide an important perspective for managing the process. The most useful metrics are extracted directly from the evolving artifacts. There are seven core metrics that are used in managing a modern process.

Seven core metrics related to project control:

Management indicators              Quality indicators
Work and progress                  Change traffic and stability
Budgeted cost and expenditures     Breakage and modularity
Staffing and team dynamics         Rework and adaptability
                                   Mean time between failures (MTBF) and maturity
MANAGEMENT INDICATORS:
Work and progress
This metric measures the work performed over time. Work is the effort to be accomplished to complete a certain set of tasks. The various activities of an iterative development project can be measured by defining a planned estimate of the work in an objective measure, then tracking progress (work completed over time) against that plan.
The default perspectives of this metric are:
Software architecture team: use cases demonstrated
Software development team: SLOC under baseline change management, SCOs closed
Software assessment team: SCOs opened, test hours executed, evaluation criteria met
Software management team: milestones completed


The below figure shows expected progress for a typical project with three major releases.

Fig: Work and progress

Budgeted cost and expenditures


This metric measures cost incurred over time. Budgeted cost is the planned
expenditure profile over the life cycle of the project. To maintain management
control, measuring cost expenditures over the project life cycle is always
necessary. Tracking financial progress takes on an organization-specific format.
Financial performance can be measured by the use of an earned value system,
which provides highly detailed cost and schedule insight. The basic parameters of
an earned value system, expressed in units of dollars, are as follows:
Expenditure plan: the planned spending profile for a project over its planned schedule.
Actual progress: the technical accomplishment relative to the planned progress underlying the spending profile.
Actual cost: the actual spending profile for a project over its actual schedule.
Earned value: the value that represents the planned cost of the actual progress.
Cost variance: the difference between the actual cost and the earned value.
Schedule variance: the difference between the planned cost and the earned value.
Of all the parameters in an earned value system, actual progress is the most subjective assessment. Because most managers know exactly how much cost they have incurred and how much schedule they have used, the variability in making accurate assessments is centered in the actual progress assessment. The default perspectives of this metric are cost per month, full-time staff per month, and percentage of budget expended.
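A minimal Python sketch of these earned value computations; the function name, and the assumption that earned value equals percent complete times the total budget, are illustrative rather than prescribed:

    def earned_value_metrics(planned_cost, actual_cost, percent_complete, total_budget):
        # Earned value: the planned cost of the actual progress (assumed
        # here to be percent complete times the total budget).
        earned_value = percent_complete * total_budget
        cost_variance = earned_value - actual_cost       # negative: over cost
        schedule_variance = earned_value - planned_cost  # negative: behind schedule
        return earned_value, cost_variance, schedule_variance

    # Example: 40% complete on a 300-unit budget, 150 spent, 120 planned so far.
    print(earned_value_metrics(120.0, 150.0, 0.4, 300.0))
    # -> (120.0, -30.0, 0.0): over cost, but on schedule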
Staffing and team dynamics
This metric measures the personnel changes over time, which involves staffing


additions and reductions over time. An iterative development should start with a
small team until the risks in the requirements and architecture have been suitably
resolved. Depending on the overlap of iterations and other project-specific circumstances, staffing can vary. An increase in staff can slow overall project progress as new people consume the productive time of existing people in coming up to speed. Low attrition of good people is a sign of success. The default perspectives of this metric are people per month added and people per month leaving. These three management indicators together track technical progress, financial status, and staffing progress.

Fig: Staffing and team dynamics

QUALITY INDICATORS:
Change traffic and stability:
This metric measures the change traffic over time. The number of software
change orders opened and closed over the life cycle is called change traffic.
Stability specifies the relationship between opened versus closed software change
orders. This metric can be collected by change type, by release, across all
releases, by term, by components, by subsystems, etc.
The below figure shows the stability expectation over a healthy project's life cycle.


Fig: Change traffic and stability
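A minimal Python sketch of the stability relationship, computed as the cumulative gap between opened and closed SCOs per period; the data and names are illustrative:

    from itertools import accumulate

    def stability_trend(opened_per_period, closed_per_period):
        # Cumulative open-minus-closed SCO gap per period; on a healthy
        # project the gap grows early, then converges toward zero late.
        cum_open = list(accumulate(opened_per_period))
        cum_closed = list(accumulate(closed_per_period))
        return [o - c for o, c in zip(cum_open, cum_closed)]

    # Example: traffic peaks mid-project, then opened and closed converge.
    print(stability_trend([5, 20, 40, 30, 10], [0, 10, 30, 40, 25]))
    # -> [5, 15, 25, 15, 0]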

Breakage and modularity


This metric measures the average breakage per change over time. Breakage is defined as the average extent of change, which is the amount of software baseline that needs rework, measured in source lines of code, function points, components, subsystems, files, or other units. Modularity is the average breakage trend over time. This metric can be collected as reworked SLOC per change, by change type, by release, by components, and by subsystems.
Rework and adaptability:
This metric measures the average rework per change over time. Rework is defined
as the average cost of change which is the effort to analyse, resolve and retest all
changes to software baselines. Adaptability is defined as the rework trend over
time. This metric provides insight into rework measurement. All changes are not
created equal. Some changes can be made in a staff-hour, while others take staff-weeks. This metric can be collected as average hours per change, by change type, by release, by components, and by subsystems.
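A minimal Python sketch computing both averages, breakage and rework, from a set of closed changes; the pair structure is illustrative only:

    def breakage_and_rework(changes):
        # changes: list of (sloc_reworked, rework_hours) pairs per closed SCO.
        n = len(changes)
        avg_breakage = sum(sloc for sloc, _ in changes) / n  # SLOC per change
        avg_rework = sum(hours for _, hours in changes) / n  # hours per change
        return avg_breakage, avg_rework

    # Example: three closed changes of very different sizes.
    print(breakage_and_rework([(120, 6.0), (40, 1.5), (200, 12.5)]))
    # -> (120.0, 6.67): average breakage and average rework per change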
MTBF and Maturity:
This metric measures the defect rate over time. MTBF (mean time between failures) is the average usage time between software faults. It is computed by
dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is
defined as the MTBF trend over time. Software errors can be categorized into two
types deterministic and nondeterministic. Deterministic errors are also known as
Bohr-bugs and nondeterministic errors are also called as Heisen-bugs. Bohr-bugs
are a class of errors caused when the software is stimulated in a certain way such
as coding errors. Heisen-bugs are software faults that are coincidental with a
certain probabilistic occurrence of a given situation, such as design errors. This
metric can be collected by failure counts, test hours until failure, by release, by
components and by subsystems. These four quality indicators are based primarily
on the measurement of software change across evolving baselines of engineering
data.
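The MTBF computation described above fits in a few lines; a minimal Python sketch, with illustrative names:

    def mtbf(test_hours, type0_scos, type1_scos):
        # Mean time between failures: test hours divided by the count
        # of type 0 (critical) plus type 1 (defect) SCOs.
        failures = type0_scos + type1_scos
        return test_hours / failures if failures else float("inf")

    # Example: 500 test hours with 2 critical failures and 8 defects.
    print(mtbf(500.0, 2, 8))  # -> 50.0 hours of testing per failure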


LIFE-CYCLE EXPECTATIONS:


There is no mathematical or formal derivation for using the seven core metrics. However, there were specific reasons for selecting them:
 The quality indicators are derived from the evolving product rather than from the artifacts.
 They provide insight into the waste generated by the process. Scrap and rework metrics are a standard measurement perspective of most manufacturing processes.
 They recognize the inherently dynamic nature of an iterative development process. Rather than focus on the value, they explicitly concentrate on the trends or changes with respect to time.
 The combination of insight from the current value and the current trend provides tangible indicators for management action.
Table 13-3. The default pattern of life-cycle evolution

Metric           Inception    Elaboration  Construction  Transition
Progress         5%           25%          90%           100%
  Architecture   30%          90%          100%          100%
  Applications   <5%          20%          85%           100%
Expenditures     Low          Moderate     High          High
  Effort         5%           25%          90%           100%
  Schedule       10%          40%          90%           100%
Staffing         Small team   Ramp up      Steady        Varying
Stability        Volatile     Moderate     Moderate      Stable
  Architecture   Volatile     Moderate     Stable        Stable
  Applications   Volatile     Volatile     Moderate      Stable
Modularity       50%-100%     25%-50%      <25%          5%-10%
  Architecture   >50%         >50%         <15%          <5%
  Applications   >80%         >80%         <25%          <10%
Adaptability     Varying      Varying      Benign        Benign
  Architecture   Varying      Moderate     Benign        Benign
  Applications   Varying      Varying      Moderate      Benign
Maturity         Prototype    Fragile      Usable        Robust
  Architecture   Prototype    Usable       Robust        Robust
  Applications   Prototype    Fragile      Usable        Robust

METRICS AUTOMATION:
Many opportunities are available to automate the project control activities of a
software project. A Software Project Control Panel (SPCP) is essential for
managing against a plan. This panel integrates data from multiple sources to show
the current status of some aspect of the project. The panel can support standard
features and provide extensive capability for detailed situation analysis. SPCP is
one example of a metrics automation approach that collects, organizes, and reports
values and trends extracted directly from the evolving engineering artifacts.

SPCP:
To implement a complete SPCP, the following are necessary.
 Metrics primitives - trends, comparisons and progressions
 A graphical user interface.
 Metrics collection agents
 Metrics data management server
 Metrics definitions - actual metrics presentations for requirements progress,
implementation progress, assessment progress, design progress and other progress
dimensions.
 Actors - monitor and administrator.

The monitor defines panel layouts, graphical objects, and linkages to project data. Specific monitors (called roles) include software project managers, software development team leads, software architects, and customers. The administrator installs


the system, defines new mechanisms, graphical objects and linkages. The whole
display is called a panel. Within a panel are graphical objects, which are types of
layouts such as dials and bar charts for information. Each graphical object displays
a metric. A panel contains a number of graphical objects positioned in a particular
geometric layout. A metric shown in a graphical object is labeled with the metric type, the summary level, and the instance name (such as lines of code, subsystem, server1).
Metrics can be displayed in two modes: value, referring to a given point in time, and graph, referring to multiple and consecutive points in time. Metrics can be
displayed with or without control values. A control value is an existing
expectation either absolute or relative that is used for comparison with a
dynamically changing metric. Thresholds are examples of control values.

The fundamental metric classes are trend, comparison, and progression.

The format and content of any project panel are configurable to the software
project manager's preference for tracking metrics of top-level interest. The basic
operation of an SPCP can be described by the following top-level use case:
i. Start the SPCP.
ii. Select a panel preference.
iii. Select a value or graph metric.
iv. Select to superimpose controls.
v. Drill down to trend.
vi. Drill down to point in time.
vii. Drill down to lower levels of information.
viii. Drill down to lower levels of indicators.
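As a rough sketch of the control-value comparison a panel's graphical object might perform, the following Python fragment is entirely hypothetical; the SPCP itself is a tool concept, not this code:

    def check_control(metric_value, control_value, baseline=None):
        # Compare a dynamically changing metric against a control value.
        # Absolute control: compare the raw value; relative control:
        # compare the change from a baseline. Hypothetical threshold logic.
        observed = metric_value - baseline if baseline is not None else metric_value
        return "within threshold" if observed <= control_value else "exceeds threshold"

    # Example: rework hours per change against an absolute threshold of 8.
    print(check_control(6.7, 8.0))  # -> within threshold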

MODULE–V: CASE STUDIES (09)
CCPDS-R case study and future software project management practices: modern project profiles, next-generation software economics, modern process transitions.

CCPDS-R Case Study and Future Software Project Management Practices

This appendix presents a detailed case study of a successful software project that
followed many of the techniques presented in this book. Successful here means
on budget, on schedule, and satisfactory to the customer. The Command Center
Processing and Display System-Replacement (CCPDS-R) project was performed
for the U.S. Air Force by TRW Space and Defense in Redondo Beach, California.
The entire project included systems engineering, hardware procurement, and
software development, with each of these three major activities consuming about
one-third of the total cost. The schedule spanned 1987 through 1994.

The software effort included the development of three distinct software systems
totaling more than one million source lines of code. This case study focuses on the
initial software development, called the Common Subsystem, for which about
355,000 source lines were developed. The Common Subsystem effort also
produced a reusable architecture, a mature process, and an integrated environment
for efficient development of the two software subsystems of roughly similar size
that followed. This case study therefore represents about one-sixth of the overall
CCPDS-R project effort.

Although this case study does not coincide exactly with the management process presented in this book, nor with all of today's modern technologies, it used most of the same techniques and was managed in the same spirit and with the same priorities. TRW delivered

Key Points
▲ An objective case study is a true indicator of a mature organization and a mature project process. The software industry needs more case studies like CCPDS-R.
▲ The metrics histories were all derived directly from the artifacts of the project's process. These data were used to manage the project and were embraced by practitioners, managers, and stakeholders.
▲ CCPDS-R was one of the pioneering projects that practiced many modern management approaches.
▲ This appendix provides a practical context that is relevant to the techniques, disciplines, and opinions provided throughout this book.

the system on budget and on schedule, and the users got more than they expected.
TRW was awarded the Space and Missile Warning Systems Award for
Excellence in 1991 for "continued, sustained performance in overall systems
engineering and project execution." A project like CCPDS-R could be developed
far more efficiently today. By incorporating current technologies and improved
processes, environments, and levels of automation, this project could probably be
built today with equal quality in half the time and at a quarter of the cost.
Some of today's popular software cost models are not well matched to an iterative software process focused on an architecture-first approach. Many cost estimators are still using a conventional process experience base to estimate a modern project profile. A next-generation software cost model should explicitly separate architectural engineering from application production, just as an architecture-first process does. Two major improvements in next-generation software cost estimation models are:
 Separation of the engineering stage from the production stage will force estimators to differentiate between architectural scale and implementation size.
 Rigorous design notations such as UML will offer an opportunity to define units of measure for scale that are more standardized and therefore can be automated and tracked.
Modern Software Economics: The following changes provide a good description of what an organizational manager should strive for in making the transition to a modern process:
1. Finding and fixing a software problem after delivery costs 100 times more than fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing; in 1955 it was 15:85, and in 1985 it was 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual software programs.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.

Next-Generation Software Economics
Next-generation software economics is being practiced by some advanced software organizations. Many of the techniques, processes, and methods described in this book's process framework have been practiced for several years. However, a mature, modern process is nowhere near the state of the practice for the average software organization. This module introduces several provocative hypotheses about the future of software economics. A general structure is proposed for a cost estimation model that would be better suited to the process framework. The new approach would improve the accuracy and precision of software cost estimates, and would accommodate dramatic improvements in software economies of scale. Such improvements will be enabled by advances in software development environments. Boehm's benchmarks of conventional software project performance describe, in objective terms, how the process framework should improve the overall software economics achieved by a project or organization.
Key Points

▲ Next-generation software economics should reflect better economies of scale and improved return on investment profiles. These are the real indicators of a mature industry.
▲ Further technology advances in round-trip engineering are critical to making the next quantum leap in software economics.
▲ Future cost estimation models need to be based on better primitive units defined from well-understood software engineering notations such as the Unified Modeling Language.

Modern Process Transitions

Successful software management is hard work. Technical breakthroughs, process breakthroughs, and new tools will make it easier, but management discipline will
continue to be the crux of software project success. New technological advances
will be accompanied by new opportunities for software applications, new
dimensions of complexity, new avenues of automation, and new customers with
different priorities. Accommodating these changes will perturb many of our
ingrained software management values and priorities. However, striking a balance
among requirements, designs, and plans will remain the underlying objective of
future software management endeavors, just as it is today.

Numerous projects have been practicing some of these disciplines for years. However, many of the techniques and disciplines suggested herein will necessitate a significant paradigm shift. Some of these changes will be resisted by certain stakeholders or by certain factions within a project or organization. It is not always easy to separate cultural resistance from objective resistance. The following points summarize some of the important culture shifts to be prepared for in order to avoid as many sources of friction as possible in transitioning successfully to a modern process.

Key Points
▲ The transition to modern software processes and technologies necessitates several culture shifts that will not always be easy to achieve.
▲ Lessons learned in transitioning organizations to a modern process have exposed several recurring themes of success that represent important culture shifts from conventional practice.
▲ A significant transition should be attempted on a significant project. Pilot projects do not generally attract top talent, and top talent is crucial to the success of any significant transition.
