SEI ATAM Example
Mario Barbacci
Paul Clements
Anthony Lattanze
Linda Northrop
William Wood
July 2003
Technical Note
CMU/SEI-2003-TN-012
The Software Engineering Institute is a federally funded research and development center sponsored by the U.S.
Department of Defense.
Copyright 2003 by Carnegie Mellon University.
NO WARRANTY
THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS
FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO,
WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED
FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF
ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.
Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder.
Internal use. Permission to reproduce this document and to prepare derivative works from this document for internal use is
granted, provided the copyright and “No Warranty” statements are included with all reproductions and derivative works.
External use. Requests for permission to reproduce this document or prepare derivative works of this document for external
and commercial use should be addressed to the SEI Licensing Agent.
This work was created in the performance of Federal Government Contract Number F19628-00-C-0003 with Carnegie
Mellon University for the operation of the Software Engineering Institute, a federally funded research and development
center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the
work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the
copyright license under the clause at 252.227-7013.
For information about purchasing paper copies of SEI reports, please visit the publications portion of our Web site
(http://www.sei.cmu.edu/publications/pubweb.html).
Contents
About the Technical Note Series on Business and Acquisition Guidelines
1 Introduction
2 Context for the Architecture Evaluation
3 The ATAM
4 The Evaluation of the CAAS Software Architecture
5 Conclusions
  5.1 Benefits
  5.2 Summary
References
List of Tables
Table 1: Utility Tree for the Availability Quality Attribute
Table 2: Brainstormed Scenarios from Step 7
About the Technical Note Series on Business and
Acquisition Guidelines
The Product Line Systems Program at the Software Engineering Institute (SEI) is
publishing a series of technical notes designed to condense knowledge about architecture
tradeoff analysis practices into a concise and usable form for the Department of Defense
(DoD) acquisition manager and practitioner. This series is a companion to the SEI series on
product line acquisition and business practices.
Each technical note in the series will focus on applying architecture tradeoff analysis in the
DoD. Our objective is to provide practical guidance to early adopters on ways to integrate
sound architecture tradeoff analysis practices into their acquisitions. By investigating best
commercial and government practices, the SEI is helping the DoD to overcome challenges
and increase its understanding, maturation, and transition of this technology.
Together, these two series of technical notes will lay down a conceptual foundation for DoD
architecture tradeoff analysis and product line business and acquisition practices. Further
information is available on the SEI’s Product Line Systems Program Web page at
<http://www.sei.cmu.edu/activities/plp/plp_init.html>.
Software Engineering Institute is a service mark of Carnegie Mellon University.
1 Introduction
Over the past several years, the Software Engineering Institute (SEI) has developed the
Architecture Tradeoff Analysis Method (ATAM) and validated its usefulness in practice
[Clements 02b, Kazman 00]. This method not only permits the evaluation of specific
architectural quality attributes (e.g., modifiability, performance, security, and reliability) but
also allows engineering tradeoffs to be made among possibly conflicting quality goals.
This technical note describes an ATAM evaluation of the software architecture for an avionics
system developed for the Technology Applications Program Office (TAPO) of the U.S. Army
Special Operations Command Office. The system, called the Common Avionics Architecture
System (CAAS), is being developed by Rockwell Collins in Cedar Rapids, Iowa.
SEI, Architecture Tradeoff Analysis Method, and ATAM are service marks of Carnegie Mellon University.
2 Context for the Architecture Evaluation
The software architecture for a system represents the earliest software design decisions. As
such, these decisions are the most critical to get right and the most difficult to change
downstream in the development life cycle. Even more important, the software architecture is
the key to software quality; it permits or precludes the system’s ability to meet functional and
quality attribute goals and requirements such as reliability, modifiability, security, real-time
performance, and interoperability. The right software architecture can pave the way for
successful system development, while the wrong architecture results in a system that fails to
meet critical requirements and incurs high maintenance costs.
Modern treatment of software architecture takes advantage of the fact that there are many
relevant views of a software architecture. A view is a representation of some of the system’s
elements and the relationships associated with them. Views help us to separate concerns and
achieve intellectual control over an abstract concept. Different views also speak to different
stakeholders—those who have a vested interest in the architecture. Which ones are relevant
depends on the stakeholders and the system properties that interest them. If we consider the
analogy of a building’s architecture, various stakeholders (such as the construction engineer,
plumber, and electrician) all have an interest in how the building is to be constructed.
Although they are each interested in different elements and relationships, each of their views
is valid—each one represents a structure that maps to one of the building’s construction
goals. A suite of views is necessary to engineer the architecture of the building fully and to
represent that architecture to stakeholders.
Some experts prescribe using a fixed set of views. Rational’s Unified Process (RUP), for
example, relies on Kruchten’s “4+1 view” approach to software architecture. A current and
healthier trend, however, is to recognize that architects should choose a set of views
based on the needed engineering leverage that each view provides and the stakeholder
interests that each one serves. This trend is exemplified by the recent American National
Standards Institute/Institute of Electrical and Electronics Engineers (ANSI/IEEE)
recommended practice for architectural description of software-intensive systems [IEEE
00] and the “views and beyond” approach to architecture documentation from the SEI
[Clements 03].
TAPO hopes to reduce integration costs, logistical support, training resources, and technical
risk, and shorten the schedule by exploiting the commonality among the three classes of
aircraft that it supports: A/MH-6, MH-47, and MH-60. Best commercial practice has
demonstrated significant advantages through a product line approach to software. A software
product line is a set of software-intensive systems sharing a common, managed set of features
that satisfy the specific needs of a particular market segment or mission and that are
developed from a common set of core assets in a prescribed way [Clements 02a]. A software
product line is most effectively built from a common architecture that is used to structure a
common set of components and other assets from which the set of related products is built.
TAPO’s goal is to develop a single cockpit architecture for all MH-47s, MH-60s, and A/MH-
6s that will form the basis for a software product line approach for all the systems that TAPO
supports.
The CAAS was already well into development when the SEI was commissioned to perform
the architecture evaluation. While an argument could be made that an evaluation earlier in the
life cycle would have been a better risk-mitigation approach, this evaluation served three
important purposes:
1. It gave the government a concise idea of where its architecture was at risk and identified
immediate remedial actions.
2. It gave the government confidence in many areas where the architecture was shown to
be sound and well engineered.
3. It produced a cohesive community of stakeholders in the architecture. These
stakeholders articulated several possible new evolutionary directions for the system that
the program office may not have considered previously.
If the engine is an aircraft’s heart, then the avionics system is its brain. The avionics system is
responsible for getting critical information to the aircrew in a timely manner, managing
aircraft systems, and helping the crew do its job. As described in National Defense magazine:
…area and deliver the special operators. On the return, they descended from
altitudes near 20,000 feet in zero visibility, refueled once more from an MC-
130, and again used radar to negotiate the mountains. Overall, the mission
lasted 8.3 hours, including 6.3 hours in adverse weather over hostile territory
[Colucci 03].
3 The ATAM
The ATAM relies on the principle that an architecture is suitable (or not suitable) only in the
context of specific quality attributes that it must impart to the system. The ATAM uses
stakeholders’ perspectives to produce a collection of scenarios that define the qualities of
interest for the particular system under consideration. Scenarios give specific instances of
usage, performance and growth requirements, various types of failures, and various possible
threats and modifications. Once the important quality attributes are identified in detail, the
architectural decisions relevant to each one can be illuminated and analyzed with respect to
their appropriateness.
The steps of the ATAM leading up to analysis are carried out in two phases.[1] In Phase 1, the
evaluation team interacts with the system’s primary decision makers: the architect(s),
manager(s), and perhaps a marketing or customer representative. During Phase 2, a larger
group of stakeholders is assembled, including developers, testers, maintainers, administrators,
and users. The two-phase approach ensures that the analysis is based on a broad and
appropriate range of perspectives.
Phase 1:
1. Present the ATAM. The evaluators explain the method so that those who will be
involved in the evaluation understand it.
2. Present the business drivers. Appropriate system representative(s) present an overview
of the system, its requirements, business goals, and context, and the architectural quality
attribute drivers.
3. Present the architecture. The system or software architect (or another lead technical
person) presents the architecture.
4. Catalog the architectural approaches. The system or software architect presents
general architectural approaches to achieving specific qualities. The evaluation team
captures a list and adds to it any approaches observed during Step 3 or learned during
the pre-exercise review of the architecture documentation; for example, “A cyclic
executive is used to ensure real-time performance.” Known architectural approaches
have known quality attribute properties, and those approaches will help carry out the
analysis steps.
[1] These two phases, visible to the stakeholders, are bracketed by a preparation phase up front and a follow-up phase at the end; both are carried out behind the scenes.
5. Generate a quality attribute utility tree. Participants build a utility tree—a prioritized
set of detailed statements about which quality attributes are most important for the
architecture to carry out (such as performance, modifiability, reliability, or security) and
specific scenarios that express those attributes.
6. Analyze the architectural approaches. The evaluators and the architect(s) map the
utility tree scenarios to the architecture to see how it responds to each important
scenario.
Phase 2 begins with an encore of Step 1 and a recap of the results of Steps 2 through 6 for the
larger group of stakeholders. Then Phase 2 continues with these steps:
7. Brainstorm and prioritize scenarios. The larger group of stakeholders brainstorms
scenarios that express their concerns about the system and then prioritizes the scenarios
by voting.
8. Analyze the architectural approaches. The evaluators repeat the analysis of Step 6,
this time mapping the highest priority brainstormed scenarios onto the architecture.
9. Present the results. The evaluation team summarizes its findings (risks, non-risks,
sensitivity points, and tradeoff points) for the assembled stakeholders.
The number of scenarios that are analyzed during the evaluation is controlled by the amount
of time allowed for the evaluation, but the process ensures (via active prioritization) that the
most important ones are addressed.
After the evaluation, the evaluators write a report documenting the evaluation and recording
the information discovered. This report will also document the framework for ongoing
analysis that was discovered by the evaluators.
4 The Evaluation of the CAAS Software Architecture
4.1 Background
Phase 1 of the evaluation took place at the Rockwell Collins facility in Cedar Rapids, Iowa
on October 16, 2002. Twelve “decision maker” stakeholders were present, a number
somewhat above average. They included five members of the contractor organization
(architects and program managers), two members of the TAPO program office, and five
members of the 160th Special Operations Aviation Regiment from Ft. Campbell, Kentucky,
representing the user community. During Phase 1, six scenarios were analyzed.
Phase 2 took place at Fort Campbell on December 17 and 18, 2002. Fifteen stakeholders
were present, representing various roles with a vested interest in the CAAS software
architecture. Once again, the stakeholders came from the program office, the user community,
and the contractor. Here, an additional nine scenarios were analyzed, for a total of 15.
In both cases, the evaluation team consisted of four members of the technical staff from the
SEI’s Product Line Systems Program.
Overall, the goal of the CAAS is to create a scalable system that meets the modernization
needs of multiple helicopter cockpits. Its approach is to use a single, open, common avionics
architecture for all platforms to reduce the cost of ownership. This approach is based on
Rockwell Collins’ Cockpit Management System (CMS) in its Flight 2 family of avionics
systems, augmented with IAS[2] functionality.
[2] IAS is a legacy avionics system developed by another contractor.

4.2 Business Drivers

The business goals for the CAAS include the following:
• Leverage existing code and documentation where practical.
• Aim for a “plug and play” architecture in which hardware is portable between platforms.
• Move away from single proprietary components and use standards where appropriate.
• Provide cross-platform commonality (hardware, software, and training).
• Reduce the logistics base.
• Use the Service Life Extension Program (SLEP) as a fleet modification/installation
center as much as possible.
• Leverage other platform developments (both commercial and DoD).
• Employ a system infrastructure or framework that facilitates system maintenance and
enhancements (including third-party participation in them).
Additional business drivers that play a role are the constraints on the system and its
architecture, which, in the CAAS, include a large set of applicable standards.
The business drivers are elicited to establish the broad requirements and context for the
system, but also to identify the driving quality attribute requirements for the architecture. For
the CAAS, the important quality attributes (in order of customer importance) are
• availability
• performance (i.e., provide timely answers)
• modifiability
Modifiability, as is often the case, manifests itself in a number of ways. For the CAAS,
modifiability refers to the following capabilities:
• growth
• testability
• openness
• portability of hardware and software across different aircraft platforms
• reconfigurability
• repeatability (That is, every system must look like every other system and provide the
same answer on every platform.)
• supportability (i.e., the ability of a third party to maintain the system)
• reuse
• affordability
These quality attributes serve as the first approximation qualities of importance in the
generation of the quality attribute utility tree, detailed in Section 4.4.
4.3 Architectural Approaches
Step 4 of the ATAM captures the list of architectural approaches gleaned from the
architecture presentation, as well as from the evaluation team’s pre-exercise review of the
architecture documentation. Because architectural approaches exhibit known effects on
quality attributes (e.g., redundancy increases availability, whereas layering increases
portability), explicitly identifying the approaches provides input for the analysis steps that
follow.
The overarching strategies used in the CAAS are strongly partitioning applications and
structuring the software as a series of layers. POSIX provides the primary partitioning
mechanism that establishes inviolable timing and memory walls between applications to
prevent conflicts. For example, an important result of this design is that an application that
for some reason overruns its processing time limit cannot cause another application to be late.
Similarly, an application that overruns its memory allotment cannot impinge on another
application’s memory area.
Applications thus serve as units of encapsulation. One of the strongest aspects of the CAAS
architecture is that its design allows applications to be changed more or less independently of
each other. That aspect, along with location transparency (an application’s independence from
its hardware location), makes adding new applications a straightforward process.
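To make the partitioning idea concrete, the following is a minimal sketch of “brick-wall” time partitioning (in Python, purely as illustration; the partition names, budgets, and frame length are hypothetical, and in the actual system this enforcement is performed by the operating system, not by application code). Each partition receives a fixed window in a repeating major frame; a partition that overruns its budget is cut off at the window boundary and cannot make a neighbor late.

    # Illustrative "brick-wall" time partitioning (hypothetical values).
    # An overrunning partition is truncated at its window boundary, so it
    # cannot delay the partitions that follow it in the major frame.

    MAJOR_FRAME_MS = 50  # hypothetical major frame length

    # (partition name, time budget in ms, time the partition tries to use)
    partitions = [
        ("flight_display", 20, 18),   # stays within its budget
        ("comms",          15, 22),   # attempts to overrun its budget
        ("health_monitor", 15, 10),
    ]

    clock_ms = 0
    for name, budget, demand in partitions:
        start = clock_ms
        used = min(demand, budget)    # the "wall": the overrun is cut off
        clock_ms += budget            # the window ends on schedule regardless
        status = "OVERRUN (truncated)" if demand > budget else "ok"
        print(f"{name:>15}: window [{start:2d}..{start + budget:2d}] ms, "
              f"used {used} ms, {status}")

    assert clock_ms <= MAJOR_FRAME_MS  # the frame budget holds by construction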
The CAAS software architectural approaches are listed below along with the quality
attributes that each one nominally affects. The attributes are shown in parentheses.
1. consistent partitioning strategy: definition of a partition, “brick-wall partitioning”
(availability, safety, modifiability, testability, maintainability)
2. encapsulation: used to isolate partitions. Between partitions, applications can share only
their state via the network. The remote service interface (RSI) and remote service
provider (RSP) are examples of encapsulation that isolate the network implementation
details.
(modifiability, availability)
3. interface strategy: Accessing components only via their interfaces is strictly followed.
Besides controlling interactions and eliminating the back-door exploitation of
changeable implementation details, this strategy reduces the number of inputs and
outputs per partition.
(modifiability, maintainability)
4. layers: used to partition and isolate high-level graphics services
(portability, modifiability)
5. distributed processing: Predominantly, a client-server approach is used to decouple
“parts” of the system. Also, the Broadcast feature is used to broadcast information
periodically.
(maintainability, modifiability)
6. Access to sockets, bandwidth, and data is guaranteed.
(performance)
7. virtual machine: a flight-ready operating system that’s consistent with POSIX and that
has a standard POSIX application program interface (API) and Ada 95 support, which
both provide Level-A design assurance
(modifiability, availability)
8. health monitors: for checking the health of control display units (CDUs) and multi-
function displays (MFDs)
(availability)
9. use of commercial standards: including ARINC 661, POSIX, Common Object Request
Broker Architecture (CORBA), IEEE P1386/P1386.1, OpenGL, and DO 178B
(portability, maintainability, modifiability)
10. locational transparency: Applications do not know where other applications reside, and,
hence, are unaffected when applications migrate to other hardware for scheduling or
load-balancing reasons. The location is bound at configuration time.
(portability, modifiability)
11. isolation of system services: a by-product of the layering strategy
(portability, modifiability)
12. redundant software: For flight-critical functions, redundant software is introduced using
a master/slave protocol to manage failover.
(portability, availability)
13. Every application is resident on every box.
(portability)
14. Some applications are active on multiple boxes.
(availability)
15. memory and performance analysis: Partitions are cyclic; however, rate monotonic
analysis (RMA) is used to assign priorities to threads within partitions. The result is
assured schedulability (see the worked example after this list).
(performance)
16. application templates (the shell): A standard template for applications incorporates
application, common software, and common reusable elements (CoRE), and ensures that
complicated protocols (such as failover) are handled consistently across all applications.
(reuse, modifiability, repeatability, affordability)
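As a worked illustration of the rate monotonic analysis named in item 15, the sketch below applies the classic Liu-and-Layland utilization bound, U <= n(2^(1/n) - 1), to a thread set within a partition. The thread periods and execution times are invented for illustration; they are not CAAS data.

    # Rate monotonic analysis (RMA): the utilization-bound schedulability test.
    # Thread periods and worst-case execution times below are hypothetical.

    threads = [
        # (period_ms, worst_case_execution_ms); shorter period = higher priority
        (10, 2),
        (20, 4),
        (50, 10),
    ]

    n = len(threads)
    utilization = sum(c / t for t, c in threads)   # 0.2 + 0.2 + 0.2 = 0.6
    bound = n * (2 ** (1 / n) - 1)                 # ~0.7798 for n = 3

    print(f"U = {utilization:.3f}, bound = {bound:.4f}")
    if utilization <= bound:
        print("Schedulable: all deadlines met under rate monotonic priorities.")
    else:
        print("Bound exceeded: a finer-grained response-time analysis is needed.")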
4.4 Quality Attribute Utility Tree

In Step 5, the quality attribute goals are articulated in detail via a utility tree. The root of
the tree is “Utility,” an expression of the overall “goodness” of the system. Performance,
modifiability, security, and availability
are typical of the high-level nodes, placed immediately under “Utility.” For the CAAS
evaluation, the second-level nodes were identified as availability, performance, modifiability,
affordability, and reliability.
Under each quality factor are specific subfactors called “attribute concerns” that arise from
considering the quality-attribute-specific stimuli and responses that the architecture must
address. For example, for the CAAS, availability was defined by the stakeholders to mean
“having a non-crashing operational flight program (OFP),” “graceful degradation in the
presence of failures,” and “no degradation in the presence of failures for which there are
redundant components/paths.” Finally, each attribute concern is elaborated by a small number
of scenarios that are leaves of the utility tree. Thus, the tree has four levels: utility at the
root, quality attributes, attribute concerns, and scenarios at the leaves.
A scenario represents a use or modification of the architecture, applied not only to determine
if the architecture meets a functional requirement, but also (and more significantly) to predict
system qualities such as performance, reliability, modifiability, and so forth.
The scenarios at the leaves of the utility tree are prioritized along two dimensions:
1. importance to the system
2. perceived risk in achieving this goal
These nodes are prioritized relative to each other, using relative rankings of high, medium,
and low.
The portion of the utility tree covering the quality attribute of availability from the CAAS
evaluation is reproduced in Table 1, without the rankings. The full utility tree contained 29
scenarios covering 5 quality attributes.
Table 1: Utility Tree for the Availability Quality Attribute (Phase 1)

Quality Attribute: availability

Attribute Concern: The OFP doesn’t crash.
Scenarios:
1. Invalid data is entered by the pilot, and the system does not crash.
2. Invalid data comes from an actor on any bus, and the system does not crash.
3. When a 1.9-second power interruption occurs, the system will execute a warm boot and be fully operational in 2 seconds.

Attribute Concern: Graceful degradation in the presence of failures
Scenarios:
1. A loss of Doppler occurs, the pilot is notified, and the Doppler timer begins a countdown (for multi-mode radar [MMR] validity).
2. A partition fails, the rest of the processor continues working, and the system continues to function.

Attribute Concern: No degradation in the presence of failures for which there are redundant components/paths
Scenarios:
1. The data concentrator suffers battle damage, and all flight-critical information is still available.
2. The mission processor in the outboard MFD fails, and that display and the rest of the system continue to operate normally.
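To illustrate the four-level structure in code, the availability branch of Table 1 can be represented as nested data, with each leaf scenario carrying an (importance, risk) pair. The sketch below is illustrative only; its H/M/L rankings are placeholders, not the rankings actually assigned during the evaluation.

    # Four-level utility tree: utility -> quality attribute -> concern -> scenario.
    # Each leaf carries (importance, risk) rankings; the H/M/L values shown
    # here are placeholders, not the evaluation's actual rankings.

    utility_tree = {
        "availability": {
            "The OFP doesn't crash.": [
                ("Invalid pilot data does not crash the system.", ("H", "M")),
                ("Invalid bus data does not crash the system.", ("H", "M")),
            ],
            "Graceful degradation in the presence of failures": [
                ("A partition fails; the rest of the processor keeps working.", ("H", "L")),
            ],
        },
    }

    # High-importance, high-risk leaves are the first candidates for analysis.
    for attribute, concerns in utility_tree.items():
        for concern, scenarios in concerns.items():
            for text, (importance, risk) in scenarios:
                print(f"[{importance}/{risk}] {attribute} / {concern}: {text}")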
For the CAAS evaluation, the evaluation team chose six of the highest priority scenarios and
analyzed them during Phase 1. The scenarios that were not analyzed were distributed to
Phase 2 participants as “seed scenarios” that they could place into the brainstorming pool, if
desired.
After the scenarios were generated, the stakeholders were given the opportunity to merge
those that seemed to address closely related concerns. The purpose of scenario consolidation
is to prevent votes from being split across two almost-alike scenarios.
After merging, the scenarios were prioritized using a voting process in which participants
were given six votes[3] that they could allocate to any scenario or group of scenarios.
[3] The number of votes is 30% of the number of brainstormed scenarios, rounded up to the nearest integer. This is a common facilitated brainstorming and group-consensus technique.
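As a worked example of the voting arithmetic (assuming the brainstormed pool held 20 scenarios, which is consistent with the highest scenario number appearing in Table 2):

    import math

    brainstormed = 20                      # assumed pool size, not stated in the report
    votes = math.ceil(0.30 * brainstormed) # 30% rounded up to the nearest integer
    print(votes)                           # 6, matching the six votes each participant received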
Table 2: Brainstormed Scenarios from Step 7 (Phase 2)

Scenario 2 (5 votes): Changes to the CAAS are reflected in the simulation and training system concurrently with the airframe changes, without coding it twice (simulation and training stakeholder).

Scenario 3 (10 votes): No single point of failure in the system will affect the system’s safety or performance (system architect stakeholder).

Scenario 5 (1 vote): Multiple versions of the system must be fielded at the same time. Those versions should be distinguishable and should not have a negative impact on the rest of the system (system implementer stakeholder).

Scenario 9 (9 votes): 75% of the CAAS is built from reused components, increasing new business opportunities (from Phase 1, program manager stakeholder).

Scenario 13 (6 votes): Given maximum “knob twiddling” to the level that the system’s performance is degraded, the system can prioritize its flight-critical functions so they are NOT degraded (safety stakeholder).

Scenario 15 (2 votes): Given the need for a second ARC231, the radio can be incorporated into the existing system by reusing existing software at minimal or no cost (requirements stakeholder).

Scenario 20 (3 votes): An application doesn’t crash but starts producing bad data. The system can detect the errant data and when applications crash (reliability stakeholder).
Step 7 concludes with an examination of how the newly introduced high-priority scenarios
compare with the high-priority scenarios identified in the utility tree. A low degree of overlap
(in terms of the specific, detailed quality attributes addressed by the scenarios) could indicate
that the project’s decision makers and stakeholders had different expectations. That would
constitute a risk to the project.
The evaluation team concluded that the CAAS architecture seemed sound with respect to
most of the behavioral and quality attribute requirements levied on it. The architecture was
found to be well partitioned, and although the size of some of the partitions was a concern
with respect to their modifiability, overall, the partitioning scheme provided a robust
foundation for evolutionary flexibility. The evaluation found that the architecture was robust
with respect to the addition of new functionality and hardware.
The evaluation identified 18 risks related to the software architecture’s ability to satisfy its
behavioral, quality attribute, and evolutionary goals. In addition, 12 non-risks (areas of
design strength) were identified.[4] An example risk was “OFP has no built-in hooks to aid in
simulation/training capability,” referring to the goal of making the same operational software
drive both the simulators and the actual helicopters. An example non-risk was “the number of
sockets used by the system is known and guaranteed,” allowing reliable performance
estimation to be carried out.
Five sensitivity points (e.g., “isolating operating system dependencies enhances portability”)
and two tradeoff points (e.g., “letting users set I/O parameters [such as turbine gas
temperature limits] increases flexibility and usability, while decreasing safety”) were
cataloged.
In addition to the items described above, the evaluation team collected a series of issues—
areas of programmatic concern not directly related to the technical aspects of the software
architecture. For example, one issue involved the stakeholders’ expressed need for a new
functional capability that was out of the scope of the current requirements and that would be
hard to provide. These issues were then brought to the attention of the program office, where
they were handled appropriately.
The identified risks suggest a set of four “themes” in the architecture. These themes represent
the key architectural issues posing potential problems for future success and possibly
threatening the business drivers identified during Step 2. For example, several of the analyzed
scenarios dealt with performance, and revealed risks about unknown performance
requirements and certain performance goals not being met. These scenarios suggested a risk
theme: More attention should be focused on performance. Such concise syntheses allowed
the program office to focus on a few key areas that would improve its chances for success in
both current development and future evolution.
[4] The ATAM process concentrates on identifying risks rather than non-risks, and so oftentimes more risks are uncovered than non-risks. The relative numbers are not indicative of the architecture’s quality.
5 Conclusions
5.1 Benefits
Every ATAM exercise is followed by a survey of stakeholders in attendance. The survey asks,
“Do you feel you will be able to act on the results of the architecture evaluation? What
follow-on actions do you anticipate? How much time do you anticipate spending on them?”
Here is what those responding had to say verbatim:
• “Yes, greater review of latency”
• “Yes, I would expect the government to consider program changes to make
improvements to the system. There will be detailed review and action plans delivered for
risk items.”
• “Difficult to ascertain since this review was conducted during code and test phase of
program. We will act on those items that affect future business/enhancements in
conjunction with our customer.”
• “Minimal actions by a user are available.”
• “Because we are so far down the road with the CAAS, we don’t see any major changes,
but we are better aware of risk areas.”
• “I expect to spend at least hours in meetings/dialogue about the [risks and issues found].”
• “Immediate impact is limited due to the point we are at in the program. Little will be
done now. May have impact as the system is evolved.”
The survey asks the participants if they feel the exercise was useful. Here is what they said:
• “Yes, [it] caused a critical look at the CAAS. It validated some architectural decisions
and raised questions about others.”
• “In general, evaluation process seems worthwhile.”
• “As a maintainer/trainer, it helped me to understand the system.”
• “Unknown at this point”
• “Yes, very useful”
• “Yes, it brought issues and concerns to our attention.”
Several stakeholders lamented that the evaluation was not performed earlier. Since the project
was already in the development phase, the evaluation may have had a lesser impact than it
would have otherwise:
• “Would have been more useful earlier on in the design/requirements definition phase”
• “[It was useful], but done too late to be of significant impact.”
• “Do earlier! This type of exercise would have been more useful 12-14 months ago when
the design decisions were being made. If ATAM was accomplished then, more
issues/risks could have been addressed.”
• “The whole process would have been more useful if it had taken place one to two years
ago.”
5.2 Summary
Overall, this evaluation succeeded in
• raising awareness of the importance of stakeholders in the architecture process
• establishing a community of vested stakeholders and opening channels of communication
among them
• identifying a number of risk themes that can be made the subject of intense mitigation
efforts that, even though the system is in development, can be effective in heading off
disaster
• raising a number of issues with respect to previously unplanned capabilities for which
some stakeholders expressed an acute need
• elevating the role of software architecture in system acquisition
This evaluation demonstrates the applicability and usefulness of software architecture evaluation in a DoD
acquisition context. The evaluation comments underscore the SEI’s advice that software
architecture evaluations be performed before code is developed so that surfaced risks can be
mitigated when it is least costly to do so. There never seems to be time in a development
schedule to insert an architecture evaluation, but other ATAM evaluations have proven that
the time spent saves considerable time and cost later [Clements 02b].
References
[Bass 03] Bass, L.; Clements, P.; & Kazman, R. Software Architecture in Practice,
2nd edition. Boston, MA: Addison-Wesley, 2003.
[Clements 02a] Clements, P. & Northrop, L. Software Product Lines: Practices and
Patterns. Boston, MA: Addison-Wesley, 2002.
[Clements 02b] Clements, P.; Kazman, R.; & Klein, M. Evaluating Software
Architectures: Methods and Case Studies. Boston, MA: Addison-Wesley,
2002.
[Clements 03] Clements, P.; Bachmann, F.; Bass, L.; Garlan, D.; Ivers, J.; Little, R.;
Nord, R.; & Stafford, J. Documenting Software Architectures: Views and
Beyond. Boston, MA: Addison-Wesley, 2003.
[Colucci 03] Colucci, F. “Avionics Upgrade Underway for Special Ops Helicopters.”
National Defense 87, 591 (February 2003): 24-26.
<http://www.nationaldefensemagazine.org/article.cfm?Id=1029>.
[IEEE 00] Institute of Electrical and Electronics Engineers. IEEE Recommended
Practice for Architectural Description of Software-Intensive Systems
(IEEE Std 1471-2000). New York, NY: IEEE, 2000.
[Kazman 00] Kazman, R.; Klein, M.; & Clements, P. ATAM: Method for Architecture
Evaluation (CMU/SEI-2000-TR-004, ADA382629). Pittsburgh, PA:
Software Engineering Institute, Carnegie Mellon University, 2000.
<http://www.sei.cmu.edu/publications/documents/00.reports/00tr004.html>.