SPM Lecture Notes

Drafted by:
D. Koumudi Prasanna (IARE10939)
Assistant Professor
Department of CSIT
Institute of Aeronautical Engineering

January 22, 2023
Contents

Contents ............................................................... 2

3 LIFE-CYCLE PHASES
  Engineering and Production Stages
  Inception Phase ...................................................... 31
  Elaboration Phase .................................................... 32
  Construction Phase ................................................... 33
  Transition Phase ..................................................... 34
  Artifacts of the Process ............................................. 35
  The Artifact Sets .................................................... 36
  The Management Set ................................................... 37
  The Engineering Sets ................................................. 38
  Artifact Evolution over the Life Cycle ............................... 39
  Test Artifacts ....................................................... 40
  Management Artifacts ................................................. 41
  Engineering Artifacts ................................................ 42
  Pragmatic Artifacts .................................................. 44
  Model-Based Software Architectures ................................... 47
  Architecture: A Management Perspective ............................... 47
  Architecture: A Technical Perspective ................................ 48

4 PROJECT ORGANIZATIONS ................................................ 50
  Line-of-Business Organizations ....................................... 50
  Project Organizations ................................................ 51
  Evolution of Organizations ........................................... 52
  Process Automation ................................................... 53
  The Project Environment .............................................. 55
  Process Control and Process Instrumentation .......................... 56
  The Seven Core Metrics ............................................... 57
  Management Indicators ................................................ 58
  Quality Indicators ................................................... 60
  Life-Cycle Expectations .............................................. 61
  Pragmatic Software Metrics ........................................... 62
  Metrics Automation ................................................... 64

5 CASE STUDIES ......................................................... 66
  CCPDS-R Case Study ................................................... 67
  Future Software Project Management Practices ......................... 68
  Modern Project Profiles .............................................. 69
  Next-Generation Software Economics ................................... 70
  Modern Process Transitions ........................................... 71

Bibliography ........................................................... 72
MODULE – I
1. Analysis and coding both involve creative work that directly contributes to the
usefulness of the end product.
2. In order to manage and control all of the intellectual freedom associated with
software development, one must introduce several other "overhead" steps,
including system requirements definition, software requirements definition,
program design, and testing. These steps supplement the analysis and coding
steps. The figure below illustrates the resulting project profile and the basic steps in
developing a large-scale program.
[Figure: waterfall development steps (requirements analysis, design, coding, testing, operation)]
3. The basic framework described in the waterfall model is risky and invites failure.
The testing phase that occurs at the end of the development cycle is the first event
for which timing, storage, input/output transfers, etc., are experienced as
distinguished from analyzed. The resulting design changes are likely to be so
disruptive that the software requirements upon which the design is based are
likely violated. Either the requirements must be modified or a substantial
design change is warranted.
3. Do it twice. If a computer program is being developed for the first time, arrange
matters so that the version finally delivered to the customer for operational
deployment is actually the second version insofar as critical design/operations are
concerned. Note that this is simply the entire process done in miniature, to a time
scale that is relatively small with respect to the overall effort. In the first version,
the team must have a special broad competence where they can quickly sense
trouble spots in the design, model them, model alternatives, forget the
straightforward aspects of the design that aren't worth studying at this early point,
and, finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project
resources (manpower, computer time, and/or management judgment) is the test
phase. This is the phase of greatest risk in terms of cost and schedule. It occurs at
the latest point in the schedule, when backup alternatives are least available, if at
all. The previous three recommendations were all aimed at uncovering and
solving problems before entering the test phase. However, even after doing these
things, there is still a test phase and there are still important things to be done,
including: (1) employ a team of test specialists who were not responsible for the
original design; (2) employ visual inspections to spot the obvious errors like
dropped minus signs, missing factors of two, jumps to wrong addresses (do not
use the computer to detect this kind of thing, it is too expensive); (3) test every
logic path; (4) employ the final checkout on the target computer.
IN PRACTICE
Some software projects still practice the conventional software management
approach.
It is useful to summarize the characteristics of the conventional process as it
has typically been applied, which is not necessarily as it was intended. Projects
destined for trouble frequently exhibit the following symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experiences) due to unforeseen implementation
issues and interface ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of non-optimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then
implemented all at once, then integrated. Table 1-1 provides a typical profile of
cost expenditures across the spectrum of software activities.
Late risk resolution. A serious issue associated with the waterfall life cycle was
the lack of early risk resolution. Figure 1.3 illustrates a typical risk profile for
conventional waterfall model projects. It includes four distinct periods of risk
exposure, where risk is defined as the probability of missing a cost, schedule,
feature, or quality goal. Early in the life cycle, as the requirements were being
specified, the actual risk exposure was highly unpredictable.
software development life cycle. These conditions rarely occur in the real world.
Specification of requirements is a difficult and important part of the software
development process.
Another property of the conventional approach is that the requirements were
typically specified in a functional manner. Built into the classic waterfall process
was the fundamental assumption that the software itself was decomposed into
functions; requirements were then allocated to the resulting components. This
decomposition was often very different from a decomposition based on object-
oriented design and the use of existing components. Figure 1-4 illustrates the
result of requirements-driven approaches: a software structure that is organized
around the requirements specification structure.
reliability, and adaptability The relationships among these parameters and the
MODULE – II
Table 3-2. Language expressiveness (SLOC per function point)

Language          SLOC per function point
Assembly                             320
C                                    128
FORTRAN 77                           105
COBOL 85                              91
Ada 83                                71
C++                                   56
Ada 95                                55
Java                                  55
Visual Basic                          35
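The ratios in Table 3-2 support rough size estimation: a function-point count multiplied by a language's SLOC-per-function-point value gives an approximate source-line count. The following is a minimal Python sketch of that arithmetic; only the ratios come from the table above, and the sample function-point count is an invented illustrative value.

# Rough size estimation from function points, using the SLOC-per-function-point
# ratios listed in Table 3-2. The sample project size (120 function points) is
# an arbitrary illustrative value.

SLOC_PER_FP = {
    "Assembly": 320,
    "C": 128,
    "FORTRAN 77": 105,
    "COBOL 85": 91,
    "Ada 83": 71,
    "C++": 56,
    "Ada 95": 55,
    "Java": 55,
    "Visual Basic": 35,
}

def estimate_sloc(function_points: int, language: str) -> int:
    """Estimate source lines of code for a given language."""
    return function_points * SLOC_PER_FP[language]

if __name__ == "__main__":
    fp = 120  # hypothetical function-point count for a small subsystem
    for lang in ("C", "C++", "Java"):
        print(f"{lang}: ~{estimate_sloc(fp, lang)} SLOC")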
REUSE
Reusing existing components and building reusable components have been
natural software engineering activities since the earliest improvements in
programming languages. With reuse in order to minimize development costs
while achieving all the other required attributes of performance, feature set, and
quality. Try to treat reuse as a mundane part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial
products supported by organizations with the following characteristics:
COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize
integration of commercial components and off-the-shelf products. While the use
of commercial components is certainly desirable as a means of reducing custom
development, it has not proven to be straightforward in practice. Table 3-3
identifies some of the advantages and disadvantages of using commercial
components.
conditions of any stakeholder. This should be the underlying premise for most
process improvements.
IMPROVING TEAM EFFECTIVENESS
Teamwork is much more important than the sum of the individuals. With software
teams, a project manager needs to configure a balance of solid talent with highly
skilled people in the leverage positions. Some maxims of team management
include the following:
A well-managed project can succeed with a nominal engineering team.
A mismanaged project will almost never succeed, even with an expert team of
engineers.
A well-architected system can be built by a nominal team of software builders.
A poorly architected system will flounder even with an expert team of builders.
Boehm's five staffing principles are:
1. The principle of top talent: Use better and fewer people
2. The principle of job matching: Fit the tasks to the skills and motivation of the
people available.
3. The principle of career progression: An organization does best in the long run by
helping its people to self-actualize.
4. The principle of team balance: Select people who will complement and harmonize
with one another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.
Attributes of successful software project managers include the following:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right
person in the right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders
is a prerequisite for success.
3. Decision-making skill. The jillion books written about management have failed to
provide a clear definition of this attribute. We all know a good leader when we
run into one, and decision-making skill seems obvious despite its intangible
definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate
progress, exploit eccentric prima donnas, transition average people into top
performers, eliminate misfits, and consolidate diverse opinions into a team
direction.
5. Selling skill. Successful project managers must sell all stakeholders (including
themselves) on decisions and priorities, sell candidates on job positions, sell
changes to the status quo in the face of resistance, and sell achievements against
objectives. In practice, selling requires continuous negotiation, compromise, and
empathy.
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle,
focusing on requirements completeness and traceability late in the life cycle, and
focusing throughout the life cycle on a balance between requirements evolution,
design evolution, and plan evolution
Using metrics and indicators to measure the progress and quality of an
architecture as it evolves from a high-level prototype into a fully compliant
product
Providing integrated life-cycle environments that support early and continuous
configuration control, change management, rigorous design methods, document
automation, and regression test automation
Using visual modeling and higher level languages that support architectural
control, abstraction, reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-
based evaluations
1. Make quality #1. Quality must be quantified and mechanisms put into place to
motivate its achievement.
2. High-quality software is possible. Techniques that have been
demonstrated to increase quality include involving the customer, prototyping,
simplifying design, conducting inspections, and hiring the best people
3. Give products to customers early. No matter how hard you try to learn
users' needs during the requirements phase, the most effective way to determine
real needs is to give users a product and let them play with it.
4. Determine the problem before writing the requirements. When faced with what
they believe is a problem, most engineers rush to offer a solution. Before you try
to solve a problem, be sure to explore all the alternatives and don't be blinded by
the obvious solution.
5. Evaluate design alternatives. After the requirements are agreed upon,
you must examine a variety of architectures and algorithms. You certainly do not
want to use an "architecture" simply because it was used in the requirements
specification.
6. Use an appropriate process model. Each project must select a process
that makes the most sense for that project on the basis of corporate culture,
willingness to take risks, application area, volatility of requirements, and the
extent to which requirements are well understood.
7. Use different languages for different phases. Our industry's eternal
thirst for simple solutions to complex problems has driven many to declare that
the best development method is one that uses the same notation throughout the
life cycle.
8. Minimize intellectual distance. To minimize intellectual distance, the
software's structure should be as close as possible to the real-world structure
9. Put techniques before tools. An undisciplined software engineer with a
tool becomes a dangerous, undisciplined software engineer
10. Get it right before you make it faster. It is far easier to make a working
program run faster than it is to make a fast program work. Don't worry about
optimization during initial coding
11. Inspect code. Inspecting the detailed design and code is a much better way to find
errors than testing
12. Good management is more important than good technology. Good
management motivates people to do their best, but there are no universal "right"
styles of management.
13. People are the key to success. Highly skilled people with appropriate
experience, talent, and training are key.
14. Follow with care. Just because everybody is doing something does not make it
right for you. It may be right, but you must carefully assess its applicability to
your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the
engineers do wrong?" Even when software fails, we rarely ask this. The fact is
that in any engineering discipline, the best methods can be used to produce awful
designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities. It is possible the customer would
tolerate 90% of the functionality delivered late if they could have 10% of it on
time.
17. The more they see, the more they need. The more functionality (or
performance) you provide a user, the more functionality (or performance) the
user wants.
18. Plan to throw one away. One of the most important critical success factors is
whether or not a product is entirely new. Such brand-new applications,
architectures, interfaces, or algorithms rarely work the first time.
19. Design for change. The architectures, components, and specification techniques
you use must accommodate change.
20. Design without documentation is not design. I have often heard software
engineers say, "I have finished the design. All that is left is the documentation."
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricky
constructs that perform a function correctly, but in an obscure way. Show the
world how smart you are by avoiding tricky code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in
software that is easier to test and much easier to maintain.
24. Use coupling and cohesion. Coupling and cohesion are the best ways to
measure software's inherent maintainability and adaptability
25. Use the McCabe complexity measure. Although there are many metrics
available to report the inherent complexity of software, none is as intuitive and
easy to use as McCabe's.
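For reference, McCabe's cyclomatic complexity can be computed as V(G) = E - N + 2P over the control-flow graph, or equivalently as the number of binary decision points plus one. The sketch below applies the decision-point form to a small Python function; it is an illustration only, not a complete metrics tool, and the sample function is invented.

# Minimal illustration of McCabe's cyclomatic complexity, using the
# "decision points + 1" formulation. This is a sketch, not a complete
# metrics tool (it ignores boolean operators, comprehensions, etc.).
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10:
            return "large"
    return "small"
"""
print(cyclomatic_complexity(sample))  # 3 decisions + 1 = 4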
The top 10 principles of modern software management are as follows. (The first five, which
are the main themes of my definition of an iterative process, are summarized in
Figure 4-1.)
Table 4-1 maps top 10 risks of the conventional process to the key attributes
and principles of a modern process.
Modern software development processes have moved away from the conventional
waterfall model, in which each stage of the development process is dependent on
completion of the previous stage.
The economic benefits inherent in transitioning from the conventional
waterfall model to an iterative development process are significant but difficult to
quantify. As one benchmark of the expected economic impact of process
improvement, consider the process exponent parameters of the COCOMO II
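For context on the COCOMO II process exponent mentioned above: effort in COCOMO II grows as A · Size^E, where the exponent E increases with process-related scale factors, so process improvement lowers the exponent and improves economy of scale. The sketch below shows the shape of that relationship, assuming the commonly cited COCOMO II.2000 calibration constants (A ≈ 2.94, B ≈ 0.91); the scale-factor sums are invented illustrative values, not measured project data.

# Sketch of the COCOMO II effort equation: effort grows as Size**E, where the
# process exponent E = B + 0.01 * sum(scale factors). Coefficients use the
# commonly cited COCOMO II.2000 calibration (A ~ 2.94, B ~ 0.91); the scale
# factor sums below are illustrative, not measured values.
A, B = 2.94, 0.91

def effort_person_months(ksloc: float, scale_factor_sum: float) -> float:
    exponent = B + 0.01 * scale_factor_sum
    return A * ksloc ** exponent

size = 100  # 100 KSLOC, an arbitrary example size
for label, sf_sum in (("immature process", 30.0), ("mature process", 10.0)):
    print(f"{label}: E = {B + 0.01 * sf_sum:.2f}, "
          f"effort ~ {effort_person_months(size, sf_sum):.0f} person-months")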
MODULE – III
INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among
stakeholders on the life-cycle objectives for the project.
PRIMARY OBJECTIVES
Establishing the project's software scope and boundary conditions, including an
operational concept, acceptance criteria, and a clear understanding of what is and
is not intended to be in the product
Discriminating the critical use cases of the system and the primary scenarios of
operation that will drive the major design trade-offs
Demonstrating at least one candidate architecture against some of the primary
scenarios
Estimating the cost and schedule for the entire project (including detailed
estimates for the elaboration phase)
Estimating potential risks (sources of unpredictability)
ESSENTIAL ACTIVITIES
Formulating the scope of the project. The information repository should be
sufficient to define the problem space and derive the acceptance criteria for the
end product.
Synthesizing the architecture. An information repository is created that is
sufficient to demonstrate the feasibility of at least one candidate architecture and
an initial baseline of make/buy decisions so that the cost, schedule, and resource
estimates can be derived.
Planning and preparing a business case. Alternatives for risk management,
staffing, iteration plans, and cost/schedule/profitability trade-offs are evaluated.
PRIMARY EVALUATION CRITERIA
Do all stakeholders concur on the scope definition and cost and schedule
estimates?
Are requirements understood, as evidenced by the fidelity of the critical use
cases?
Are the cost and schedule estimates, priorities, risks, and development processes
credible?
Do the depth and breadth of an architecture prototype demonstrate the preceding
criteria? (The primary value of prototyping candidate architecture is to provide
a vehicle for understanding the scope and assessing the credibility of the
development group in solving the particular technical problem.)
Are actual resource expenditures versus planned expenditures acceptable?
ELABORATION PHASE
ESSENTIAL ACTIVITIES
Elaborating the vision.
Elaborating the process and infrastructure.
Elaborating the architecture and selecting components.
CONSTRUCTION PHASE
During the construction phase, all remaining components and application
features are integrated into the application, and all features are thoroughly tested.
Newly developed software is integrated where required. The construction phase
represents a production process, in which emphasis is placed on managing
resources and controlling operations to optimize costs, schedules, and quality.
PRIMARY OBJECTIVES
Minimizing development costs by optimizing resources and avoiding unnecessary
scrap and rework
Achieving adequate quality as rapidly as practical
Achieving useful versions (alpha, beta, and other test releases) as rapidly as
practical
ESSENTIAL ACTIVITIES
Resource management, control, and process optimization
Complete component development and testing against evaluation criteria
Assessment of product releases against acceptance criteria of the vision
TRANSITION PHASE
The transition phase is entered when a baseline is mature enough to be deployed
in the end-user domain. This typically requires that a usable subset of the system
has been achieved with acceptable quality levels and user documentation so that
transition to the user will provide positive results. This phase could include any of
the following activities:
The transition phase concludes when the deployment baseline has achieved the
complete vision.
PRIMARY OBJECTIVES
Achieving user self-supportability
Achieving stakeholder concurrence that deployment baselines are complete and
consistent with the evaluation criteria of the vision
Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
Synchronization and integration of concurrent construction increments into
consistent deployment baselines
Deployment-specific engineering (cutover, commercial packaging and
production, sales rollout kit development, field personnel training)
Assessment of deployment baselines against the complete vision and acceptance
criteria in the requirements set
EVALUATION CRITERIA
Is the user satisfied?
Are actual resource expenditures versus planned expenditures acceptable?
Design Set
UML notation is used to engineer the design models for the solution. The
design set contains varying levels of abstraction that represent the components of
the solution space (their identities, attributes, static relationships, dynamic
interactions). The design set is evaluated, assessed, and measured through a
combination of the following:
Analysis of the internal consistency and quality of the design model
Analysis of consistency with the requirements models
Translation into implementation and deployment sets and notations (for example,
traceability, source code generation, compilation, linking) to evaluate the
consistency and completeness and the semantic balance between information in
the sets
Analysis of changes between the current version of the design model and previous
versions (scrap, rework, and defect elimination trends)
Subjective review of other dimensions of quality
Implementation set
The implementation set includes source code (programming language notations)
that represents the tangible implementations of components (their form, interface,
and dependency relationships)
Implementation sets are human-readable formats that are evaluated, assessed,
and measured through a
combination of the following:
Analysis of consistency with the design models
Translation into deployment set notations (for example, compilation and linking)
to evaluate the consistency and completeness among artifact sets
Assessment of component source or executable files against relevant evaluation criteria
Deployment Set
The deployment set is evaluated, assessed, and measured through a combination of the following:
Testing against the usage scenarios and quality attributes defined in the
requirements set to evaluate the consistency and completeness and the semantic
balance between information in the two sets
Testing the partitioning, replication, and allocation strategies in mapping
components of the implementation set to physical resources of the deployment
system (platform type, number, network topology)
Testing against the defined usage scenarios in the user manual such as
installation, user-oriented dynamic reconfiguration, mainstream usage, and
anomaly management
Analysis of changes between the current version of the deployment set and
previous versions (defect elimination trends, performance changes)
Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life
cycle; the other sets take on check and balance roles. As illustrated in Figure 6-2,
each phase has a predominant focus: Requirements are the focus of the inception
phase; design, the elaboration phase; implementation, the construction phase; and
deployment, the transition phase. The management artifacts also evolve, but at a
fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact
sets.
1. Management: scheduling, workflow, defect tracking, change management,
documentation, spreadsheet, resource management, and presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage
analysis tools, and test management tools
5. Deployment: test coverage and test automation tools, network management tools,
commercial components (operating systems, GUIs, RDBMS, networks,
middleware), and installation tools.
subsystem that converts raw data into an organized database and manages queries
to this database from (3) a display subsystem that allows workstation operators to
examine seismic data in human-readable form. Such a system would result in the
following test artifacts:
Management set. The release specifications and release descriptions capture the
objectives, evaluation criteria, and results of an intermediate milestone. These
artifacts are the test plans and test results negotiated among internal project teams.
The software change orders capture test results (defects, testability changes,
requirements ambiguities, enhancements) and the closure criteria associated with
making a discrete change to a baseline.
Requirements set. The system-level use cases capture the operational concept for
the system and the acceptance test case descriptions, including the expected
behavior of the system and its quality attributes. The entire requirement set is a
test artifact because it is the basis of all assessment activities across the life cycle.
Design set. A test model for nondeliverable components needed to test the
product baselines is captured in the design set. These components include such
design set artifacts as a seismic event simulation for creating realistic sensor data;
a "virtual operator" that can support unattended, after- hours test cases; specific
instrumentation suites for early demonstration of resource usage; transaction rates
or response times; and use case test drivers and component stand-alone test
drivers.
Implementation set. Self-documenting source code representations for test
components and test drivers provide the equivalent of test procedures and test
scripts. These source files may also include human-readable data files
representing certain statically defined data sets that are explicit test source files.
Output files from test drivers provide the equivalent of test reports.
Deployment set. Executable versions of test components, test drivers, and data
files are provided.
MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results
and ancillary information necessary to document the product/process legacy,
maintain the product, improve the product, and improve the process.
Business Case
The business case artifact provides all the information necessary to determine
whether the project is worth investing in. It details the expected revenue, expected
cost, technical and management plans, and backup data necessary to demonstrate
the risks and realism of the plans. The main purpose is to transform the vision into
economic terms so that an organization can make an accurate ROI assessment.
The financial forecasts are evolutionary, updated with more accurate forecasts
as the life cycle progresses. The accompanying figure provides a default outline for a business case.
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are
derived from the vision statement as well as many other sources (make/buy
analyses, risk management concerns, architectural considerations, shots in the
dark, implementation constraints, quality thresholds). These artifacts are intended
to evolve along with the process, achieving greater fidelity as the life cycle
progresses and requirements understanding matures. Figure 6-6 provides a default
outline for a release specification.
Release Descriptions
Release description documents describe the results of each release, including
performance against each of the evaluation criteria in the corresponding release
specification. Release baselines should be accompanied by a release description
document that describes the evaluation criteria for that configuration baseline and
provides substantiation (through demonstration, testing, inspection, or analysis)
that each criterion has been addressed in an acceptable manner. Figure 6-7
provides a default outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status,
including the software project manager's risk assessment, quality indicators, and
management indicators. Typical status assessments should include a review of
resources, personnel staffing, financial data (cost and revenue), top 10 risks,
technical progress (metrics snapshots), major milestone plans and results, total
project or product scope, and action items.
Environment
An important emphasis of a modern approach is to define the development and
maintenance environment as a first-class artifact of the process. A robust,
integrated development environment must support automation of the development
process. This environment should include requirements management, visual
modeling, document automation, host and target programming tools, automated
regression testing, and continuous and integrated change management, and feature
and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could
include several document subsets for transitioning the product into operational
status. In big contractual efforts in which the system is delivered to a separate
maintenance organization, deployment artifacts may include computer system
operations manuals, software installation manuals, plans and procedures for
cutover (from a legacy system), site surveys, and so forth. For commercial
ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations
such as UML, programming languages, or executable machine codes. Three
engineering artifacts are explicitly intended for more general review, and they
deserve further elaboration.
Vision Document
The vision document provides a complete vision for the software system under
development and supports the contract between the funding authority and the
development organization. A project vision is meant to be changeable as
understanding evolves of the requirements, architecture, plans, and technology. A
good vision document should change slowly. Figure 6-9 provides a default outline
for a vision document.
Architecture Description
PRAGMATIC ARTIFACTS
People want to review information but don't understand the language
of the artifact. Many interested reviewers of a particular artifact will resist
having to learn the engineering language in which the artifact is written. It is not
uncommon to find people (such as veteran software managers, veteran quality
assurance specialists, or an auditing authority from a regulatory agency) who react
as follows: "I'm not going to learn UML, but I want to review the design of this
software, so give me a separate description such as some flowcharts and text that I
can understand."
People want to review the information but don't have access to the
tools. It is not very common for the development organization to be fully tooled;
it is extremely rare that the other stakeholders have any capability to review the
engineering artifacts on-line. Consequently, organizations are forced to exchange
paper documents. Standardized formats (such as UML, spreadsheets, Visual
Basic, C++, and Ada 95), visualization tools, and the Web are rapidly making it
economically feasible for all stakeholders to exchange information
electronically.
Human-readable engineering artifacts should use rigorous notations
that are complete, consistent, and used in a self-documenting manner.
Properly spelled English words should be used for all identifiers and descriptions.
Acronyms and abbreviations should be used only where they are well accepted
jargon in the context of the component's usage. Readability should be
emphasized and the use of proper English words should be required in all
engineering artifacts. This practice enables understandable representations,
browsable formats (paperless review), more rigorous notations, and reduced
error rates.
Useful documentation is self-defining: It is documentation that gets used.
Paper is tangible; electronic artifacts are too easy to change. On-line
and Web-based artifacts can be changed easily and are viewed with more
skepticism because of their inherent volatility.
MODULE – IV: PROJECT ORGANIZATIONS
Project organizations: line-of-business organizations, project organizations,
evolution of organizations, process automation. Project control and process
instrumentation: the seven core metrics, management indicators, quality indicators,
life-cycle expectations, pragmatic software metrics, metrics automation.
Infrastructure
An organization’s infrastructure provides human resources support,
project-independent research & development, and other capital software
engineering assets.
2) Project organizations:
[Figure: default project organization, mapping teams to their artifacts and activities]
• The above figure shows a default project organization and maps project-level
roles and responsibilities.
• The main features of the default organization are as follows:
• The project management team is an active participant, responsible for
producing as well as managing.
• The architecture team is responsible for real artifacts and for the integration
of components, not just for staff functions.
• The development team owns the component construction and maintenance
activities.
• The assessment team is separate from development.
• Quality is everyone’s job, integrated into all activities and checkpoints.
• Each team takes responsibility for a different quality perspective.
3) EVOLUTION OF ORGANIZATIONS:
The staffing balance of the default organization evolves across the life-cycle phases roughly as follows:

Team                                            Inception   Elaboration   Construction   Transition
Software management                                 50%          10%           10%           10%
Software architecture                                20%          50%           10%            5%
Software development                                 20%          20%           50%           35%
Software assessment (measurement/evaluation)         10%          20%           30%           50%
Introductory Remarks:
The Process Automation:
The environment must be a first-class artifact of the process.
Process automation & change management is critical to an iterative process. If
the change is expensive thenthe development organization will resist it.
Round-trip engineering & integrated environments promote change freedom &
effective evolution of technical artifacts.
Metric automation is crucial to effective project control.
External stakeholders need access to environment resources to improve
interaction with the development team& add value to the process.
Each of the three levels of process requires a certain degree of process automation
for the corresponding process to be carried out efficiently:
Metaprocess (line of business): The automation support for this level is called an infrastructure.
Macroprocess (project): The automation support for a project's process is called an environment.
Microprocess (iteration): The automation support for generating artifacts is generally called a tool.
The principal workflows map to automation tools roughly as follows:
Requirements: requirements management tools
Design: visual modeling tools
Implementation: editors, compilers, debuggers, linkers, runtime
Assessment: test automation and defect tracking tools
Change Management
Change management must be automated & enforced to manage multiple iterations
& to enable change freedom. Change is the fundamental primitive of iterative
Development.
I. Software Change Orders
The atomic unit of software work that is authorized to create, modify, or
obsolesce components within a configuration baseline is called a software
change order (SCO).
The basic fields of the SCO are title, description, metrics, resolution, assessment
& disposition.
57
58
Change management
II. Configuration Baseline
A configuration baseline is a named collection of software components &
supporting documentation that is subjected to change management & is
upgraded, maintained, tested, statused & obsolesced as a unit. There are generally two
classes of baselines:
external product releases and internal testing releases.
Three levels of baseline releases are required for most Systems
1. Major release (N)
2. Minor Release (M)
3. Interim (temporary) Release (X)
Major release represents a new generation of the product or project
A minor release represents the same basic product but with enhanced features,
performance or quality. Major & Minor releases are intended to be external
product releases that are persistent & supported for a period of time.
An interim release corresponds to a developmental configuration that is intended
to be transient.
Once software is placed in a controlled baseline, all changes are tracked such that
a distinction can be made for the cause of the change. Change categories are:
Type 0: Critical failures (must be fixed before release)
Type 1: A bug or defect that either does not impair the usefulness of the system or can be worked around
Type 2: A change that is an enhancement rather than a response to a defect
Type 3: A change that is necessitated by an update to the environment
Type 4: Changes that are not accommodated by the other categories
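To make the SCO fields and change categories concrete, the sketch below models a software change order as a simple data structure. The field names follow the lists above; the enum wording, class names, and example instance are illustrative and do not come from any particular change-management tool.

# Illustrative model of a software change order (SCO) and the change
# categories described above. This is a sketch of the bookkeeping data,
# not any particular change-management tool's schema.
from dataclasses import dataclass, field
from enum import Enum

class ChangeType(Enum):
    TYPE_0 = "critical failure (must be fixed before release)"
    TYPE_1 = "defect that does not impair usefulness or can be worked around"
    TYPE_2 = "enhancement rather than a response to a defect"
    TYPE_3 = "change necessitated by an update to the environment"
    TYPE_4 = "other (not accommodated by the previous categories)"

@dataclass
class SoftwareChangeOrder:
    title: str
    description: str
    change_type: ChangeType
    baseline: str                                  # configuration baseline affected
    metrics: dict = field(default_factory=dict)    # e.g. estimated rework hours
    resolution: str = ""
    assessment: str = ""
    disposition: str = "open"

# Hypothetical example instance
sco = SoftwareChangeOrder(
    title="Fix display update latency",
    description="Display refresh exceeds the 2-second threshold under peak load.",
    change_type=ChangeType.TYPE_1,
    baseline="Internal test release X2",
    metrics={"estimated_rework_hours": 16},
)
print(sco.disposition, sco.change_type.name)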
Change Management
III Configuration Control Board (CCB)
A CCB is a team of people that functions as the decision Authority on the content
of configuration baselines
A CCB includes:
1. Software managers
2. Software Architecture managers
3. Software Development managers
4. Software Assessment managers
5. Other Stakeholders who are integral to the maintenance of the
controlled software delivery system.
Infrastructure
The organization infrastructure provides the organization’s capital assets
including two key artifacts - Policy & Environment
I Organization Policy:
A Policy captures the standards for project software development processes
The organization policy is usually packaged as a handbook that defines the life
cycles & the processprimitives such as
Major milestones
Intermediate Artifacts
Engineering repositories
Metrics
Roles & Responsibilities
Infrastructure
II Organization Environment
The environment captures an inventory of tools, which are building blocks
from which project environments can be configured efficiently & economically.
Stakeholder Environment
Many large-scale projects include people in external organizations that represent
other stakeholders participating in the development process. They might include:
Procurement agency contract monitors
End-user engineering support personnel
Third party maintenance contractors
Independent verification & validation contractors
Representatives of regulatory agencies & others.
These stakeholder representatives also need access to development resources so
that they can contribute value to the overall effort. These stakeholders are given
access through an on-line environment.
An on-line environment accessible by the external stakeholders allows them to
participate in the process as follows:
Accept & use executable increments for the hands-on evaluation.
Use the same on-line tools, data & reports that the development organization uses
INDICATORS:
An indicator is a metric or a group of metrics that provides an understanding of
the software process or software product or a software project. A software
engineer assembles measures and produces metrics from which the indicators can
be derived.
Two types of indicators are:
(i) Management indicators
(ii) Quality indicators
Management Indicators
The management indicators, i.e., technical progress, financial status, and staffing
progress, are used to determine whether a project is on budget and on schedule.
The management indicators that indicate financial status are based on an earned
value system.
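As a reminder of how an earned value system yields financial-status indicators: budgeted cost of work scheduled (BCWS), budgeted cost of work performed (BCWP, the earned value), and actual cost of work performed (ACWP) combine into schedule and cost variances and performance indices. The sketch below shows that standard arithmetic; the sample figures are invented.

# Standard earned value arithmetic used for financial-status indicators.
# BCWS = budgeted cost of work scheduled, BCWP = budgeted cost of work
# performed (earned value), ACWP = actual cost of work performed.
# The sample figures are invented for illustration.

def earned_value_indicators(bcws: float, bcwp: float, acwp: float) -> dict:
    return {
        "schedule_variance": bcwp - bcws,          # negative => behind schedule
        "cost_variance": bcwp - acwp,              # negative => over budget
        "schedule_performance_index": bcwp / bcws,
        "cost_performance_index": bcwp / acwp,
    }

print(earned_value_indicators(bcws=500_000, bcwp=450_000, acwp=520_000))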
Quality Indicators
The quality indicators are based on the measurement of the changes that occur in the software.
The below figure shows expected progress for a typical project with three major
releases.
additions and reductions over time. An iterative development should start with a
small team until the risks in the requirements and architecture have been suitably
resolved. Depending on the overlap of iterations and other project specific
circumstances, staffing can vary. Increases in staff can slow overall project
progress as new people consume the productive time of existing people in coming
up to speed. Low attrition of good people is a sign of success. The default
perspectives of this metric are people per month added and people per month
leaving. These three management indicators cover technical progress, financial
status, and staffing progress.
QUALITY INDICATORS:
Change traffic and stability:
This metric measures the change traffic over time. The number of software
change orders opened and closed over the life cycle is called change traffic.
Stability specifies the relationship between opened versus closed software change
orders. This metric can be collected by change type, by release, across all
releases, by team, by components, by subsystems, etc.
The below figure shows stability expectation over a healthy project’s life cycle.
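Change traffic and stability can be computed directly from SCO records: traffic is the number of change orders opened and closed per period, and the gap between cumulative opened and cumulative closed is the open backlog, whose trend indicates stability. A minimal sketch follows, using invented monthly counts.

# Change traffic and stability from monthly SCO counts (invented sample data).
# Traffic = SCOs opened and closed per period; stability = cumulative opened
# minus cumulative closed (the backlog of open change orders).
from itertools import accumulate

opened = [4, 9, 15, 22, 18, 10, 6]   # SCOs opened per month (illustrative)
closed = [1, 5, 11, 18, 20, 14, 9]   # SCOs closed per month (illustrative)

cum_opened = list(accumulate(opened))
cum_closed = list(accumulate(closed))
backlog = [o - c for o, c in zip(cum_opened, cum_closed)]

for month, (o, c, b) in enumerate(zip(opened, closed, backlog), start=1):
    print(f"month {month}: opened={o} closed={c} open backlog={b}")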
METRICS AUTOMATION:
Many opportunities are available to automate the project control activities of a
software project. A Software Project Control Panel (SPCP) is essential for
managing against a plan. This panel integrates data from multiple sources to show
the current status of some aspect of the project. The panel can support standard
features and provide extensive capability for detailed situation analysis. SPCP is
one example of metrics automation approach that collects, organizes and reports
values and trends extracted directly from the evolving engineering artifacts.
SPCP:
To implement a complete SPCP, the following are necessary.
Metrics primitives - trends, comparisons and progressions
A graphical user interface.
Metrics collection agents
Metrics data management server
Metrics definitions - actual metrics presentations for requirements progress,
implementation progress, assessment progress, design progress and other progress
dimensions.
Actors - monitor and administrator.
Monitor defines panel layouts, graphical objects and linkages to project data.
Specific monitors called roles include software project managers, software
development team leads, software architects and customers. Administrator installs
the system, defines new mechanisms, graphical objects and linkages. The whole
display is called a panel. Within a panel are graphical objects, which are types of
layouts such as dials and bar charts for information. Each graphical object displays
a metric. A panel contains a number of graphical objects positioned in a particular
geometric layout. A metric shown in a graphical object is labelled with the metric
type, summary level, and instance name (lines of code, subsystem, server1).
Metrics can be displayed in two modes: value, referring to a given point in time,
and graph, referring to multiple and consecutive points in time.
displayed with or without control values. A control value is an existing
expectation either absolute or relative that is used for comparison with a
dynamically changing metric. Thresholds are examples of control values.
The basic fundamental metrics classes are trend, comparison and progress.
The format and content of any project panel are configurable to the software
project manager's preference for tracking metrics of top-level interest. The basic
operation of an SPCP can be described by the following top-level use case:
i. Start the SPCP.
ii. Select a panel preference.
iii. Select a value or graph metric.
iv. Select to superimpose controls.
v. Drill down to trend.
vi. Drill down to point in time.
vii. Drill down to lower levels of information.
viii. Drill down to lower levels of indicators.
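To tie the SPCP concepts together, the sketch below models a panel of graphical objects, each bound to one metric (displayed in value or graph mode) and optionally compared against a control value such as a threshold. The class names, fields, and sample data are invented for illustration and are not from any specific SPCP implementation.

# Illustrative model of a Software Project Control Panel (SPCP): a panel holds
# graphical objects, each displaying one metric (in "value" or "graph" mode)
# and optionally comparing it against a control value such as a threshold.
# Names and sample data are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Metric:
    metric_type: str          # e.g. "SLOC", "open SCOs"
    summary_level: str        # e.g. "subsystem"
    instance_name: str        # e.g. "server1"
    history: list = field(default_factory=list)   # consecutive points in time

@dataclass
class GraphicalObject:
    metric: Metric
    mode: str = "value"                       # "value" (latest) or "graph" (trend)
    control_value: Optional[float] = None     # threshold used for comparison

    def render(self) -> str:
        data = self.metric.history[-1] if self.mode == "value" else self.metric.history
        flag = ""
        if self.control_value is not None and self.metric.history[-1] > self.control_value:
            flag = "  [exceeds control value]"
        return f"{self.metric.metric_type}/{self.metric.instance_name}: {data}{flag}"

@dataclass
class Panel:
    name: str
    objects: list = field(default_factory=list)

    def display(self) -> None:
        print(f"== {self.name} ==")
        for obj in self.objects:
            print(" ", obj.render())

open_scos = Metric("open SCOs", "subsystem", "display", history=[12, 18, 25])
panel = Panel("Quality indicators", [GraphicalObject(open_scos, mode="graph", control_value=20)])
panel.display()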
MODULE – V
CASE STUDIES (09)
CCPDS-R Case Study and Future Software Project Management Practices:
Modern Project Profiles, Next-Generation Software Economics, Modern Process Transitions
This appendix presents a detailed case study of a successful software project that
followed many of the techniques presented in this book. Successful here means
on budget, on schedule, and satisfactory to the customer. The Command Center
Processing and Display System-Replacement (CCPDS-R) project was performed
for the U.S. Air Force by TRW Space and Defense in Redondo Beach, California.
The entire project included systems engineering, hardware procurement, and
software development, with each of these three major activities consuming about
one-third of the total cost. The schedule spanned 1987 through 1994.
The software effort included the development of three distinct software systems
totaling more than one million source lines of code. This case study focuses on the
initial software development, called the Common Subsystem, for which about
355,000 source lines were developed. The Common Subsystem effort also
produced a reusable architecture, a mature process, and an integrated environment
for efficient development of the two software subsystems of roughly similar size
that followed. This case study therefore represents about one-sixth of the overall
CCPDS-R project effort.
Although this case study does not coincide exactly with the management process
presented in this book, nor with all of today's modern technologies, it used
most of the same techniques and was managed in the same spirit and with the same
priorities. TRW delivered the system on budget and on schedule, and the users got
more than they expected.
• The metrics histories were all derived directly from the artifacts of the project's
process. These data were used to manage the project and were embraced by
practitioners, managers, and stakeholders.
• CCPDS-R was one of the pioneering projects that practiced many modern
management approaches.
• This appendix provides a practical context that is relevant to the techniques,
disciplines, and opinions provided throughout this book.
TRW was awarded the Space and Missile Warning Systems Award for
Excellence in 1991 for "continued, sustained performance in overall systems
engineering and project execution." A project like CCPDS-R could be developed
far more efficiently today. By incorporating current technologies and improved
processes, environments, and levels of automation, this project could probably be
built today with equal quality in half the time and at a quarter of the cost.
Some of today's popular software cost models are not well matched to an
iterative software process focused on an architecture-first approach. Many cost
estimators are still using a conventional process experience base to estimate a
modern project profile. A next-generation software cost model should explicitly
separate architectural engineering from application production, just as an
architecture-first process does. Two major improvements in next-generation
software cost estimation models are:
Separation of the engineering stage from the production stage, which will force
estimators to differentiate between architectural scale and implementation size.
Rigorous design notations such as UML, which will offer an opportunity to define
units of measure for scale that are more standardized and therefore can be
automated and tracked.

Modern Software Economics
Changes that provide a good description of what an organizational manager
should strive for in making the transition to a modern process:
1. Finding and fixing a software problem after delivery costs 100 times more than
fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the
number of source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing; in 1955 it was
15:85, and in 1985 it was 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as
individual software programs.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.

Next-Generation Software Economics
Next-generation software economics is being practiced by some advanced
software
organizations. Many of the techniques, processes, and methods described in this
book's process framework have been practiced for several years. However, a
mature, modern process is nowhere near the state of the practice for the average
software organization. This module introduces several provocative hypotheses about
the future of software economics. A general structure is proposed for a cost
estimation model that would be better suited to the process framework. This
new approach would improve the accuracy and precision of software cost
estimates, and would accommodate dramatic improvements in software
economies of scale. Such improvements will be enabled by advances in software
development environments. Boehm's benchmarks of conventional software project
performance are used to describe, in objective terms, how the process framework
should improve the overall software economics achieved by a project or organization.
Key Points
numerous projects have been practicing some of these disciplines for years.
However, many of the techniques and disciplines suggested herein will
necessitate a significant paradigm shift. Some of these changes will be resisted by
certain stakeholders or by certain factions within a project or organization. It is
not always easy to separate cultural resistance from objective resistance. The
following summarizes some of the important culture shifts to be prepared for in
order to avoid as many sources of friction as possible in transitioning successfully
to a modern process.
Key Points
The transition to modern processes and technologies necessitates several culture
shifts that will not always be easy to achieve.