Manual Book
• Social Sciences
• Multi-disciplinary Teamwork
• Organizational Behavior
• Leadership
• Body of Knowledge
• Problem definition
• System boundaries
• Objectives hierarchy
• Concept of operations
• Originating requirements
• Concurrent engineering
• System life cycle phases
• Integration/Qualification
Architectures
• Functional / Logical
• Physical / Operational
• Interface
Trades
• Concept Level
• Risk Management
• Defined interfaces
• Solution architecture
• Product breakdown structure
• Possess a CONOPS & a System CONTEXT
• Have at least ONE mission event time line defined
• Unprecedented Systems
• Conceptual (at best, may not even be conceived)
• Need can be expressed
• Must develop CONOPS, define mission(s), & Context
• Describe FUNCTIONS, INTERFACES, and TESTING
[Figure residue: life cycle stage labels (truncated): Concept, Definition, Technology(?), Production, Deployment, Integration, Operation, Disposal]
System Engineering Knowledge
The diagram is divided into five sections, each describing how systems
knowledge is treated in the SEBoK.
The knowledge presented in this part of the SEBoK has been organized into these
areas to facilitate understanding; the intention is to present a rounded picture of
research and practice based on system knowledge. These knowledge areas should be
seen together as a “system of ideas” for connecting research, understanding, and
practice, based on system knowledge which underpins a wide range of scientific,
management, and engineering disciplines and applies to all types of domains.
The Vee Model endorses the INCOSE Systems Engineering Handbook (INCOSE 2012) definition
of life cycle stages and their purposes or activities, as shown in Figure 2 below.
The INCOSE Systems Engineering Handbook 3.2.2 contains a more detailed version of the Vee
diagram which incorporates life cycle activities into the more generic Vee model.
A similar diagram, developed at the U.S. Defense Acquisition University (DAU), can be seen in
Figure 3 below.
Figure 3. The Vee Activity Diagram (Prosnik 2010). Released by the Defense Acquisition University
(DAU)/U.S. Department of Defense (DoD).
Figure 5 shows the generic life cycle stages for a variety of stakeholders, from a standards
organization (ISO/IEC) to commercial and government organizations. Although these stages differ in
detail, they all have a similar sequential format that emphasizes the core activities as noted in Figure
2 (definition, production, and utilization/retirement).
Figure 5. Comparisons of Life Cycle Models (Forsberg, Mooz, and Cotterman 2005). Reprinted with
permission of John Wiley & Sons. All other rights are reserved by the copyright owner.
It is important to note that many of the activities throughout the life cycle are
iterated. This is an example of recursion (glossary).
• The term phase refers to the different steps of the program that support and manage the
life of the system; the phases usually do not overlap. The term “phase” is used in many
well-established models as an equivalent to the term “stage.”
Program management employs phases, milestones, and decision gates which are used to
assess the evolution of a system through its various stages. The stages contain the activities
performed to achieve goals and serve to control and manage the sequence of stages and the
transitions between each stage. For each project, it is essential to define and publish the terms
and related definitions used, to minimize confusion.
A typical program is composed of the following phases:
• The pre-study phase, which identifies potential opportunities to address user needs
with new solutions that make business sense.
• The feasibility phase consists of studying the feasibility of alternative concepts to reach a
second decision gate before initiating the execution stage. During the feasibility phase,
stakeholders' requirements and system requirements are identified, viable solutions are
identified and studied, and virtual prototypes (glossary) can be implemented. During this
phase, the decision to move forward is based on:
• whether a concept is feasible and is considered able to counter an identified threat or
exploit an opportunity;
• whether a concept is sufficiently mature to warrant continued development of a new
product or line of products; and
• whether to approve a proposal generated in response to a request for proposal.
• The execution phase includes activities related to four stages of the system life
cycle: development, production, utilization, and support. Typically, there are two decision
gates and two milestones associated with execution activities. The first milestone provides
the opportunity for management to review the plans for execution before giving the go-
ahead. The second milestone provides the opportunity to review progress before the
decision is made to initiate production. The decision gates during execution can be used to
determine whether to produce the developed Solution and whether to improve it or retire it.
These program management views apply not only to the Solution, but also to its elements and
structure.
• New projects typically begin with an exploratory research phase which generally includes the
activities of concept definition, specifically the topics of business or mission analysis and the
understanding of stakeholder needs and requirements. These mature as the project goes
from the exploratory stage to the concept stage to the development stage.
• The production phase includes the activities of system definition and system realization, as
well as the development of the system requirements (glossary) and architecture
(glossary) through verification and validation.
• The utilization phase includes the activities of system deployment and system operation.
• The support phase includes the activities of system maintenance, logistics, and product and
service life management, which may include activities such as service life extension or
capability updates, upgrades, and modernization.
• The retirement phase includes the activities of disposal and retirement, though in some
models, activities such as service life extension or capability updates, upgrades, and
modernization are grouped into the "retirement" phase.
Additional information on each of these stages can be found in the sections below (see links to
additional Part 3 articles above for further detail). It is important to note that these life cycle
stages, and the activities in each stage, are supported by a set of systems engineering
management processes.
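As a rough sketch only (the stage names and gate logic here are assumptions for illustration, not part of any standard), the stage-and-gate structure described above might be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of life cycle stages and decision gates.
# Stage names echo the generic life cycle discussed above; the gate
# logic is illustrative only.

@dataclass
class DecisionGate:
    name: str
    approved: bool = False

@dataclass
class Stage:
    name: str
    exit_gate: DecisionGate

stages = [
    Stage("Exploratory Research", DecisionGate("Concept approval")),
    Stage("Concept", DecisionGate("Development go-ahead")),
    Stage("Development", DecisionGate("Production decision")),
    Stage("Production", DecisionGate("Deployment decision")),
    Stage("Utilization/Support", DecisionGate("Retirement decision")),
]

def advance(stages):
    """Walk the stages in order, stopping at the first unapproved gate."""
    for stage in stages:
        if not stage.exit_gate.approved:
            return f"Program is in the {stage.name} stage, awaiting: {stage.exit_gate.name}"
    return "Program complete (retired)"

stages[0].exit_gate.approved = True
print(advance(stages))  # -> Program is in the Concept stage, awaiting: Development go-ahead
```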
Exploratory Research Stage
User requirements analysis and agreement is part of the exploratory research stage and is critical
to the development of successful systems. Without proper understanding of the user needs, any
system runs the risk of being built to solve the wrong problems. The first step in the exploratory
research phase is to define the user (and stakeholder) requirements and constraints. A key part
of this process is to establish the feasibility of meeting the user requirements, including
technology readiness assessment. As with many SE activities this is often done iteratively, and
stakeholder needs and requirements are revisited as new information becomes available.
A recent study by the National Research Council (National Research Council 2008) focused on
reducing the development time for US Air Force projects. The report notes that, “simply stated,
systems engineering is the translation of a user’s needs into a definition of a system and its
architecture through an iterative process that results in an effective system design.” The iterative
involvement with stakeholders is critical to the project success.
Except for the first and last decision gates of a project, each gate is performed simultaneously with another: the exit gate of one stage coincides with the entry gate of the next.
See Figure 6 below.
Concept Stage
During the concept stage, alternate concepts are created to determine the best approach to meet
stakeholder needs. By envisioning alternatives and creating models, including appropriate
prototypes, stakeholder needs will be clarified and the driving issues highlighted. This may lead
to an incremental or evolutionary approach to system development. Several different concepts
may be explored in parallel.
Development Stage
The selected concept(s) identified in the concept stage are elaborated in detail down to the
lowest level to produce the solution that meets the stakeholder requirements. Throughout this
stage, it is vital to continue with user involvement through in-process validation (the upward
arrow on the Vee models). For hardware, this is done with frequent program reviews and
resident customer representative(s) (if appropriate). In agile development, the practice is to have
the customer representative integrated into the development team.
Production Stage
The production stage is where the SoI is built or manufactured. Product modifications may be
required to resolve production problems, to reduce production costs, or to enhance product or
SoI capabilities. Any of these modifications may influence system requirements and may require
system re-qualification, re-verification, or re-validation. All such changes require SE
assessment before changes are approved.
Utilization Stage
A significant aspect of product life cycle management is the provisioning of supporting systems
which are vital in sustaining operation of the product. While the supplied product or service may
be seen as the narrow system-of-interest (NSOI) for an acquirer, the acquirer also must
incorporate the supporting systems into a wider system-of-interest (WSOI). These supporting
systems should be seen as system assets that, when needed, are activated in response to a
situation that has emerged in respect to the operation of the NSOI. The collective name for the
set of supporting systems is the integrated logistics support (ILS) system.
It is vital to have a holistic view when defining, producing, and operating system products and
services. In Figure 7, the relationship between system design and development and the ILS
requirements is portrayed.
The requirements for reliability, which in turn create the need for maintainability and testability, are driving factors.
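To make the link between reliability, maintainability, and availability concrete, here is a small sketch using the standard steady-state availability relation A = MTBF / (MTBF + MTTR); the MTBF and MTTR values are invented:

```python
# Steady-state (inherent) availability: A = MTBF / (MTBF + MTTR).
# Higher reliability (larger MTBF) or better maintainability (smaller
# MTTR) both raise availability; the numbers below are illustrative.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(f"{availability(1000, 10):.4f}")  # 0.9901
print(f"{availability(1000, 2):.4f}")   # 0.9980 -- better maintainability
print(f"{availability(4000, 10):.4f}")  # 0.9975 -- better reliability
```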
Support Stage
In the support stage, the SoI is provided services that enable continued operation. Modifications
may be proposed to resolve supportability problems, to reduce operational costs, or to extend
the life of a system. These changes require SE assessment to avoid loss of system capabilities
while under operation. The corresponding technical process is the maintenance process.
Retirement Stage
In the retirement stage, the SoI and its related services are removed from operation. SE
activities in this stage are primarily focused on ensuring that disposal requirements are satisfied.
In fact, planning for disposal is part of the system definition during the concept stage.
Experiences in the 20th century repeatedly demonstrated the consequences when system
retirement and disposal was not considered from the outset. Early in the 21st century, many
countries have changed their laws to hold the creator of a SoI accountable for proper end-of-life
disposal of the system.
• The system requirements review (SRR) is planned to verify and validate the set of system
requirements before starting the detailed design activities.
• The preliminary design review (PDR) is planned to verify and validate the set of system
requirements, the design artifacts, and justification elements at the end of the first
engineering loop (also known as the "design-to" gate).
• The critical design review (CDR) is planned to verify and validate the set of system
requirements, the design artifacts, and justification elements at the end of the last
engineering loop (the “build-to” and “code-to” designs are released after this review).
• The integration, verification, and validation reviews are planned as the components are
assembled into higher level subsystems and elements. A sequence of reviews is held to
ensure that everything integrates properly and that there is objective evidence that all
requirements have been met. There should also be an in-process validation that the system,
as it is evolving, will meet the stakeholders’ requirements (see Figure 7).
• The final validation review is carried out at the end of the integration phase.
Other management related reviews can be planned and conducted in order to control the correct
progress of work, based on the type of system and the associated risks.
Design synthesis often uncovers problems that were not initially obvious. When design synthesis is well-executed, it helps reduce the risk, cost, and time of product
development. An experienced Systems Engineer is able to distill informal
input from key stakeholders into actionable guidance and combine this
information with formal design input requirements to formulate a more
accurate picture of the design intent for the product and business goals of
the enterprise.
We need a framework because systems engineering has failed to fulfill 50 years of promises of providing
solutions to the complex problems facing society. (Wymore, 1994) pointed out that it was necessary for
systems engineering to become an engineering discipline if it was to fulfill its promises and thereby
survive. Nothing has changed in that respect since then. (Wymore, 1994) also stated that
“Systems engineering is the intellectual, academic, and professional discipline, the principal concern of which is to ensure that all requirements for bioware/hardware/software systems are satisfied throughout the lifecycles of the systems.” This statement defines systems engineering as a discipline, not as a process. The currently accepted processes of systems engineering are only implementations of systems engineering.
Elements of a discipline
Consider the elements that make up a discipline. One view was provided by Kline (1995, page 3), who states that “a discipline possesses a specific area of study, a literature, and a working community of paid scholars and/or paid practitioners”. Systems engineering has a working community of paid scholars and
paid practitioners. However, the area of study seems to be different in each academic institution but with
various degrees of commonality. This situation can be explained by the recognition that (1) systems
engineering has only been in existence since the middle of the 20th century (Johnson, 1997; Jackson and
Keys, 1984; Hall, 1962), and (2) as an emerging discipline, systems engineering is displaying the same
characteristics as did other now established disciplines in their formative years. Thus, systems
engineering may be considered as being in a similar situation to the state of chemistry before the
development of the periodic table of the elements, or similar to the state of electrical engineering
before the development of Ohm's Law. This is why various academic institutions focus on different areas of study, but with some degree of commonality.
MITRE SE Roles & Expectations: MITRE systems engineers (SE) are expected to assist in or
lead efforts to define an architecture, based on a set of requirements captured during the
concept development and requirements engineering phases of the systems engineering life
cycle. The architecture definition activity usually produces operational, system, and technical
views. This architecture becomes the foundation for developers and integrators to create
design and implementation architectures and views. To effectively communicate and guide the
ensuing system development activities, the MITRE SE should have a sound understanding of
architecture frameworks and their use, and the circumstances under which each available
framework might be used. They also must be able to convey the appropriate framework
that applies to the various decisions and phases of the program.
Getting Started
Because systems are inherently multidimensional and have numerous stakeholders with
different concerns, their descriptions are as well. Architecture frameworks enable the creation of
system views that are directly relevant to stakeholders' concerns. Often, multiple models and
non-model artifacts are generated to capture and track the concerns of all stakeholders.
MITRE SEs should be actively involved in determining key architecture artifacts and
content, and guiding the development of the architecture and its depictions at the
appropriate levels of abstraction or detail. MITRE SEs should take a lead role in
standardizing the architecture modeling approach. They should provide a "reference
implementation" of the needed models and views with the goals of:
(1) setting the standards for construction and content of the models, and (2) ensuring
that the model and view elements clearly trace to the concepts and requirements from
which they are derived.
Figure 3. Applying
A plan is a point of departure. There should be clear milestone development dates, and the
needed resources should be established for the development of the architecture views and
models. Some views are precursors for others. Ensure that it is understood which views are
"feeds" for others.
Know the relationships. Models and views that relate to each other should be consistent,
concordant, and developed with reuse in mind. It is good practice to identify the data or
information that each view shares, and manage it centrally to help create the different
views. Refer to the SEG Architectural Patterns article for guidance on patterns and their
use/reuse.
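One way to keep the "feeds" relationships explicit is to record them as a dependency graph and derive a working order. A minimal sketch, assuming hypothetical DoDAF-style view names and dependencies, using Python's standard graphlib:

```python
from graphlib import TopologicalSorter

# Hypothetical "feeds" relationships among architecture views:
# each view maps to the set of views it depends on (its feeds).
feeds = {
    "OV-1 (high-level concept)": set(),
    "OV-5 (activity model)": {"OV-1 (high-level concept)"},
    "SV-1 (system interfaces)": {"OV-5 (activity model)"},
    "SV-4 (system functionality)": {"OV-5 (activity model)", "SV-1 (system interfaces)"},
}

# static_order() yields a development sequence in which every view's
# feeder views come first.
for view in TopologicalSorter(feeds).static_order():
    print(view)
```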
Be the early bird. Inject the idea of architectures early in the process. Continuously influence
your project to use models and views throughout execution. The earlier the better.
No one trusts a skinny cook. By using models as an analysis tool yourself, particularly in day-to-day and key discussions, you maintain focus on key architectural issues and demonstrate
how architecture artifacts can be used to enable decision making.
Which way is right and how do I get there from here? Architectures can be used to help
assess today's alternatives and different evolutionary paths to the future. Views of architecture
alternatives can be used to help judge the strengths and weaknesses of different approaches.
Views of "as is" and "to be" architectures help stakeholders understand potential migration paths
and transitions.
Try before you buy. Architectures (or parts of them) can sometimes be "tried out" during
live exercises. This can either confirm an architectural approach for application to real-world
situations or be the basis for refinement that better aligns the architecture with operational
reality. Architectures also can be used as a basis for identifying prototyping and
experimentation activities to reduce technical risk and engagements with operational users to
better illuminate their needs and operational concepts.
Taming the complexity beast. If a program or an effort is particularly large, models and views
can provide a disciplined way of communicating how you expect the system to behave. Some
behavioral models such as business process models, activity models, and sequence diagrams are
intuitive, easy to use, and easy to change to capture consensus views of system behavior.
Refer to the SEG Approaches to Architecture Development article for guidance on model
characterization.
Keep it simple. Avoid diagrams that are complicated and non-intuitive, such as node connectivity
diagrams with many nodes and edges, especially in the early phases of a program. This can be a
deterrent for the uninitiated. Start with the operational concepts, so your architecture efforts flow
from information that users and many other stakeholders already understand.
Determining the right models and views. Once the frameworks have been chosen,
the models and views will need to be determined. It is not unusual to have to refer to
several sets of guidance, each calling for a different set of views and models to be
generated.
But it looked so pretty in the window. Lay out the requirements for your
architectures: what decisions they support, what they will help stakeholders reason about,
and how they will do so. A simple spreadsheet can be used for this purpose. This
should happen early and often throughout the system's life cycle to ensure that the
architecture is used. Figure 4 provides an example of a worksheet that was used to
gather architecture requirements for a major aircraft program.
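A worksheet of this kind is easy to mechanize; the sketch below writes a small CSV with invented columns and rows (not the actual content of Figure 4):

```python
import csv

# Hypothetical architecture-requirements worksheet rows: which decision
# each view supports, for which stakeholder, and how it will be used.
rows = [
    {"view": "Operational concept", "decision supported": "CONOPS approval",
     "stakeholder": "Operators", "how it is used": "Walk through mission threads"},
    {"view": "Interface diagram", "decision supported": "Integration contract",
     "stakeholder": "Developers", "how it is used": "Freeze external interfaces"},
]

with open("architecture_requirements.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```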
How do I create the right views? Selecting the right modeling approach to develop
accurate and consistent representations that can be used across program boundaries
is a critical systems engineering activity, and several questions must be answered along the way.
Bringing dolls to life. If your program is developing models for large systems supporting
missions and businesses with time-sensitive needs, insight into system behavior is crucial.
Seriously consider using executable models to gain it. Today, many architecture tools support
the development of executable models easily and at reasonable cost. Mission-Level Modeling
(MLM) and Model Driven or Architecture-Based/Centric Engineering are two modeling
approaches that incorporate executable modeling. They are worth investigating to support
reasoning about technology impacts to mission performance and internal system behavior,
respectively.
How much architecture is enough? The most difficult conundrum when deciding to launch
an architecture effort is determining the level of detail needed and when to stop
producing/updating artifacts. Architecture models and views must be easily changeable. There
is an investment associated with having a "living" architecture that contains current
information, and differing levels of abstraction and views to satisfy all stakeholders. Actively
discuss this sufficiency issue with stakeholders so that the architecture effort is "right-sized."
Refer to the Architecture Specification for CANES [2].
Penny wise, pound-foolish. It can seem a lot easier not to generate architecture models
and views. Before jumping on the "architecture is costly and has minimal utility"
bandwagon, consider the following:
If the answer to one or more of these questions is "yes," then consider concise, accurate,
concordant, and consistent models of your system.
UNIT II
The systems engineering process is applied to each level of system development, one
level at a time, to produce these descriptions commonly called configuration
baselines. This results in a series of configuration baselines, one at each
development level. These baselines become more detailed with each level.
In the Department of Defense (DoD) the configuration baselines are called the
functional baseline for the system-level description, the allocated baseline for the
subsystem/component performance descriptions, and the product baseline for the
subsystem/component detail descriptions. Figure 1-2 shows the basic relationships
between the baselines. The triangles represent baseline control decision points, and
are usually referred to as technical reviews or audits.
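A minimal sketch of the three baselines as data, assuming an invented structure purely for illustration:

```python
from dataclasses import dataclass, field

# The three DoD configuration baselines described above, modeled as
# successively more detailed descriptions. Contents are illustrative.

@dataclass
class Baseline:
    name: str
    level: str
    items: list = field(default_factory=list)

functional = Baseline("Functional baseline", "system",
                      ["System performance requirements"])
allocated = Baseline("Allocated baseline", "subsystem/component",
                     ["Subsystem performance descriptions"])
product = Baseline("Product baseline", "subsystem/component detail",
                   ["Build-to / code-to descriptions"])

for b in (functional, allocated, product):
    print(f"{b.name} ({b.level} level): {', '.join(b.items)}")
```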
Levels of Development Considerations
Significant development at any given level in the system hierarchy should not occur
until the configuration baselines at the higher levels are considered complete,
stable, and controlled. Reviews and audits are used to ensure that the baselines are
ready for the next level of development. As will be shown in the next chapter, this
review and audit process also provides the necessary assessment of system maturity,
which supports the DoD Milestone decision process.
The systems engineering process is a top-down, comprehensive, iterative and recursive problem-solving process, applied sequentially through all stages of development, that is used to:
• Transform needs and requirements into a set of system product and process descriptions (adding value and more detail with each level of development),
• Generate information for decision makers, and
• Provide input for the next level of development.
During the systems engineering process architectures are generated to better describe
and understand the system. The word “architecture” is used in various contexts in the
general field of engineering. It is used as a general description of how the subsystems
join together to form the system. It can also be a detailed description of an aspect of
a system: for example, the Operational, System, and Technical Architectures used in
Command, Control, Communications, Computers, Intelligence, Surveillance, and
Reconnaissance (C4ISR), and software intensive developments. However, Systems
Engineering Management as developed in DoD recognizes three universally usable
architectures that describe important aspects of the system: functional, physical, and
system architectures. This book will focus on these architectures as necessary
components of the systems engineering process.
The Functional Architecture identifies and structures the allocated functional and
performance requirements. The Physical Architecture depicts the system product by
showing how it is broken down into subsystems and components. The System
Architecture identifies all the products (including enabling products) that are
necessary to support the system and, by implication, the processes necessary for
development, production/construction, deployment, operations, support, disposal,
training, and verification.
Life Cycle Integration
Life cycle integration is achieved through integrated development—that is,
concurrent consideration of all life cycle needs during the development process.
DoD policy requires integrated development, called Integrated Product and Process
Development (IPPD) in DoD, to be practiced at all levels in the acquisition chain of
command, as will be explained in the chapter on IPPD. Concurrent consideration of
all life cycle needs can be greatly enhanced through the use of interdisciplinary
teams. These teams are often referred to as Integrated Product Teams (IPTs).
The objective of an Integrated Product Team is to:
• Produce a design solution that satisfies initially defined requirements, and
• Communicate that solution clearly, effectively, and in a timely manner.
Life cycle functions are the characteristic actions associated with the system life cycle. As
illustrated by Figure 1-4, they are development, production and construction,
deployment (fielding), operation, support, disposal, training, and verification. These
activities cover the “cradle to grave” life cycle process and are associated with major
functional groups that provide essential support to the life cycle process. These key life
cycle functions are commonly referred to as the eight primary functions of systems
engineering.
The customers of the systems engineer perform the life-cycle functions. The system
user’s needs are emphasized because their needs generate the requirement for the
system, but it must be remembered that all of the life-cycle functional areas
generate requirements for the systems engineering process once the user has established
the basic need. Those that perform the primary functions also provide life-cycle
representation in design-level integrated teams.
Primary Function Definitions
Development includes the activities required to evolve the system from customer needs
to product or process solutions.
Manufacturing/Production/Construction includes the fabrication of engineering test
models and “brass boards,” low rate initial production, full-rate production of systems
and end items, or the construction of large or unique systems or subsystems.
Operation is the user function and includes activities necessary to satisfy defined
operational objectives and tasks in peacetime and wartime environments.
Support includes the activities necessary to provide operations support, maintenance,
logistics, and material management.
Disposal includes the activities necessary to ensure that the disposal of
decommissioned, destroyed, or irreparable system components meets all
applicable regulations and directives.
Training includes the activities necessary to achieve and maintain the knowledge and
skill levels necessary to efficiently and effectively perform operations and support
functions.
Guidance
System engineering is applied during all acquisition and support phases for large- and
small-scale systems, new developments or product improvements, and single and
multiple procurements. The process must be tailored for different needs and/or
requirements. Tailoring considerations include system size and complexity, level of
system definition detail, scenarios and missions, constraints and requirements,
technology base, major risk factors, and organizational best practices and strengths.
For example, systems engineering of software should follow the basic systems
engineering approach as presented in this book. However, it must be tailored to
accommodate the software development environment and the unique progress
measurement needs of software development.
The x-axis depicts complicated, the simplest form of complexity, at the low end on the
left, and complex, representing the range of all higher forms of complexity, on the
right. The y-axis suggests how difficult it might be to engineer (or re-engineer) the
system to be improved, using Conventional (classical or traditional) SE at the low
end on the bottom, and Complex SE, representing all more sophisticated forms of SE, at
the top. This upper range is intended to cover system of systems (SoS) engineering
(SoSE) and enterprise systems engineering.
Case studies have been used for decades in medicine, law, and business to help students
learn fundamentals and to help practitioners improve their practice. A Matrix of
Implementation Examples is used to show the alignment of systems engineering case
studies to specific areas of the SEBoK. This matrix is intended to provide linkages
between each implementation example and the discussion of the systems engineering
principles illustrated. The selection of case studies covers a variety of sources, domains,
and geographic locations. Both effective and ineffective uses of systems engineering
principles are illustrated.
The number of publicly available systems engineering case studies is growing. Case
studies that highlight the aerospace domain are more prevalent, but there is a growing
number of examples beyond this domain.
The United States Air Force Center for Systems Engineering (AF CSE) has developed
a set of case studies "to facilitate learning by emphasizing the long-term consequences
of the systems engineering/programmatic decisions on cost, schedule, and operational
effectiveness." (USAF Center for Systems Engineering 2011) The AF CSE is using
these cases to enhance SE curriculum. The cases are structured using the Friedman-
Sage framework (Friedman and Sage 2003; Friedman and Sage 2004, 84-96), which
decomposes a case into contractor, government, and shared responsibilities in the
following nine concept areas:
• Requirements Definition and Management
• Systems Architecture Development
• System/Subsystem Design
• Verification/Validation
• Risk Management
• Systems Integration and Interfaces
• Life Cycle Support
• Deployment and Post Deployment
• System and Program Management
This framework forms the basis of the case study analysis carried out by the AF CSE.
Two of these case studies are highlighted in this SEBoK section, the Hubble Space
Telescope Case Study and the Global Positioning System Case Study.
The United States National Aeronautics and Space Administration (NASA) has a catalog
of more than fifty NASA-related case studies (NASA 2011). These case studies include
insights about both program management and systems engineering. Varying in the level
of detail, topics addressed, and source organization, these case studies are used to
enhance
learning at workshops, training, retreats, and conferences. The use of case studies is
viewed as important by NASA since "organizational learning takes place when
knowledge is shared in usable ways among organizational members. Knowledge is
most usable when it is contextual" (NASA 2011). Case study teaching is a method
for sharing contextual knowledge to enable reapplication of lessons learned. The
MSTI Case Study is from this catalog.
Value of System Design
Systems design is an interdisciplinary engineering activity that enables the realization
of successful systems. Systems design is the process of defining the architecture,
product design, modules, interfaces, and data for a system to satisfy specified
requirements.
Systems design could be seen as the application of systems theory to product
development. There is some overlap with the disciplines of systems analysis, systems
architecture and systems engineering.
A system may be defined as an integrated set of components that accomplish a defined
objective. The process of systems design includes defining software and hardware
architecture, components, modules, interfaces, and data to enable a system to satisfy a
set of well-specified operational requirements.
In general, systems design, systems engineering, and systems design engineering all
refer to the same intellectual process of being able to define and model complex
interactions among many components that comprise a system, and being able to
implement the system with proper and effective use of available resources. Systems
design focuses on defining customer needs and required functionality early in the
development cycle, documenting requirements, then proceeding with design synthesis
and system validation while considering the overall problem consisting of:
• Operations
• Performance
• Test and integration
• Manufacturing
• Cost and schedule
• Deployment
• Training and support
• Maintenance
• Disposal
Systems design integrates all of the engineering disciplines and specialty groups into a
team effort forming a structured development process that proceeds from concept to
production to operation. Systems design considerations include both the business and
technical requirements of customers with the goal of providing a quality product that
meets the user needs. Successful systems design is dependent upon project
management, that is, being able to control costs, develop timelines, procure resources,
and manage risks.
Information systems design is a related discipline of applied computer systems, which
also incorporates both software and hardware, and often includes networking and
telecommunications, usually in the context of a business or other enterprise. The
general principles of systems design engineering may be applied to information
systems design. In addition, information systems design focuses on data-centric
themes such as subjects, objects, and programs.
If the broader topic of product development "blends the perspective of marketing, design,
and manufacturing into a single approach to product development," then design is the act of
taking the marketing information and creating the design of the product to be manufactured.
Systems design is therefore the process of defining and developing systems to satisfy specified
requirements of the user.
The basic study of system design is the understanding of component parts and their
subsequent interaction with one another.[4]
Until the 1990s, systems design had a crucial and respected role in the data processing industry.
In the 1990s, standardization of hardware and software resulted in the ability
to build modular systems. The increasing importance of software running on generic
platforms has enhanced the discipline of software engineering.
Architectural design
The architectural design of a system emphasizes the design of the system architecture
that describes the structure, behavior, and other views of that system.
Logical design
The logical design of a system pertains to an abstract representation of the data flows, inputs, and
outputs of the system. This is often conducted via modelling, using an abstract (and
sometimes graphical) model of the actual system. Logical design includes
entity-relationship diagrams (ER diagrams).
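As a toy illustration of logical design, the entities and one-to-many relationship of a hypothetical order system can be captured before any physical (storage or I/O) decisions are made:

```python
from dataclasses import dataclass

# Logical design sketch: entities and a one-to-many relationship
# (one Customer has many Orders), independent of storage or I/O
# decisions. The entity names are hypothetical.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # foreign key: each Order belongs to one Customer
    total: float

alice = Customer(1, "Alice")
orders = [Order(10, alice.customer_id, 99.5), Order(11, alice.customer_id, 12.0)]
print(f"{alice.name} has {len(orders)} orders")
```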
Physical design
The physical design relates to the actual input and output processes of the system. This is
explained in terms of how data is input into a system, how it is verified/authenticated, how it
is processed, and how it is displayed. In physical design, the following requirements about
the system are decided.
• Input requirements,
• Output requirements,
• Storage requirements,
• Processing requirements,
• System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into
three sub-tasks:
User Interface Design is concerned with how users add information to the system and with
how the system presents information back to them. Data Design is concerned with how the
data is represented and stored within the system. Finally, Process Design is concerned with
how data moves through the system, and with how and where it is validated, secured and/or
transformed as it flows into, through and out of the system. At the end of the system design
phase, documentation describing the three sub-tasks is produced and made available for use in
the next phase.
Physical design, in this context, does not refer to the tangible physical design of an information
system. To use an analogy, a personal computer's physical design involves input via a keyboard,
processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual
layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard
drive, modems, video/graphics cards, USB slots, etc. It involves the detailed design of the
user and product database structures and of the control processes. The hardware/software
specification is developed for the proposed system.
Functional Analysis and Allocation
Functional Analysis and Allocation is a top-down process of translating system-
level requirements into detailed functional and performance design criteria. The
result of the process is a defined Functional Architecture with allocated system
requirements that are traceable to each system function.
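The traceability this process produces can be pictured as a mapping from system-level requirements to allocated functions; a minimal sketch with invented requirement and function names:

```python
# Hypothetical traceability from system-level requirements to the
# functions of the Functional Architecture. A quick check confirms
# every function traces back to at least one requirement.

allocation = {
    "REQ-001 Detect target at 50 km": ["F1 Search volume", "F2 Track target"],
    "REQ-002 Report track within 1 s": ["F2 Track target", "F3 Report track"],
}

functions = {f for funcs in allocation.values() for f in funcs}
for func in sorted(functions):
    sources = [req for req, funcs in allocation.items() if func in funcs]
    print(f"{func} <- {', '.join(sources)}")
```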
Making changes to a process gets more and more difficult as your business
grows because of habits and investments in old methods. But in reality, you
cannot improve processes without making changes. Processes have to be
reengineered carefully, since experiments and mistakes bring in a lot of
confusion.
What is business process re-engineering (BPR)?
Business process re-engineering is the radical redesign of business processes to
achieve dramatic improvements in critical aspects like quality, output, cost, service,
and speed. Business process reengineering (BPR) aims at cutting down enterprise
costs and process redundancies on a very huge scale.
Is business process reengineering (BPR) the same as business process improvement (BPI)?
On the surface, BPR sounds a lot like business process improvement (BPI).
However, there are fundamental differences that distinguish the two. BPI might
be about tweaking a few rules here and there. But reengineering is an
unconstrained approach to look beyond the defined boundaries and bring in
seismic changes.
Five steps of business process reengineering (BPR)
To keep business process reengineering fair, transparent, and efficient,
stakeholders need to get a better understanding of the key steps involved in
it. Although the process can differ from one organization to another, these steps
listed below succinctly summarize the process:
Below are the 5 Business Process Re-engineering Steps:
• Map the current state of your business processes
Gather data from all resources, both software tools and stakeholders.
Understand how the process is performing currently.
The telecom giant reviewed the situation and concluded that it needed
drastic measures to simplify things–a one-stop solution for all customer
queries. It decided to merge the various departments into one, let go of
employees to minimize multiple handoffs and form a nerve center of
customer support to handle all issues.
A few months later, they set up a customer care center in Atlanta and started
training their repair clerks as ‘frontend technical experts’ to do the new,
comprehensive job. The company equipped the team with new software that
allowed the support team to instantly access the customer database and
handle almost all kinds of requests.
Now, if a customer called with a billing query, they could also have that erratic
dial tone fixed or have a new service request confirmed without having to call
another number. While they were still on the phone, they could also make
use of the push-button phone menu to connect directly with another
department to make a query or input feedback about the call quality.
However, once an organization grows, reengineering its processes completely becomes
harder and more expensive. Yet larger organizations are also the ones most often
forced to change by competition and unexpected marketplace shifts.
But more than being industry-specific, the call for BPR is always based on what
an organization is aiming for. BPR is effective when companies need to break
the mold and turn the tables in order to accomplish ambitious goals. For
such measures, adopting any other process management options will only be
rearranging the deck chairs on the Titanic.
The third phase in systems engineering is design synthesis. Before design synthesis,
all use cases are ranked according to hierarchy.
Roles:
Required tasks:
• Architectural analysis
• Architectural design
Artifacts:
During design synthesis, you
develop a physical architecture that can perform the functions that you derived in
the functional analysis phase. You also account for performance constraints as
you develop the physical architecture.
When you perform system architectural analysis, you merge realized use cases
into an integrated architecture analysis project. This task is often based on a trade study
pertinent to the system you intend to design. During architectural analysis, use
cases are not mapped to functional interfaces. Instead, you take a black box
approach to examine functional entities and determine reuse of those entities. After
you examine functional entities, you can allocate the logical architecture into a
physical architecture.
You can use a white box activity diagram to allocate use cases to a physical or
logical architecture. Typically, this diagram is derived from a black box activity
diagram. The white box activity diagram is partitioned into swim lanes, which show
the hierarchical structure of the architecture. Then you can move system-level
operations into an appropriate swim lane.
Subsystem white box scenarios allow you to decompose system blocks
to the lowest level of functional allocation. At that level, you specify the operations
to be implemented in both the hardware and software of your system. You can derive
subsystem logical interfaces from white box sequence diagrams. The interfaces
belong to the blocks at the lowest level of your system.
Then, you can
define subsystem behavior, also known as leaf block behavior for each lowest level
of decomposition in your system. This type of derived behavior is the physical
implementation of decomposed subsystems and is shown in a state chart diagram.
Model execution of leaf block behavior is performed on both the leaf block behavior
itself, as well as the interaction between each of the decomposed subsystems.
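Leaf block behavior captured in a state chart can be approximated by a small executable state machine; the states, events, and transitions below are invented for illustration:

```python
# Minimal executable state machine standing in for a leaf block's
# state chart. States, events, and transitions are hypothetical.

transitions = {
    ("Idle", "start"): "Acquiring",
    ("Acquiring", "lock"): "Tracking",
    ("Tracking", "lost"): "Acquiring",
    ("Tracking", "stop"): "Idle",
}

def run(events, state="Idle"):
    for event in events:
        state = transitions.get((state, event), state)  # ignore invalid events
        print(f"event={event!r:12} -> state={state}")
    return state

run(["start", "lock", "lost", "lock", "stop"])
```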
Once the alternatives are developed, an action plan has to be drawn up. This
is essentially the implementation phase. In this phase, the decision-maker needs to
decide who would do what, where, when, how, etc. The process of arriving at these
decisions is just like the steps involved in the problem solving approach, except
that the chosen alternative becomes an input to this step. This phase would require
coordination skills to properly organize a variety of resources (human, material and
fiscal) and develop a time-phased programme for implementation.
Feedback and contingency planning
For a variety of reasons, the original decision (chosen alternative) may not work
well and the decision-maker may have to be ready with a contingency plan. This
implies devising feedback mechanisms allowing monitoring of the status of the
situation, including results of the action plan. It also implies anticipating the most
likely points of failure and devising appropriate contingency plans to handle the
possible failures.
The additional skills required in this step would be those of devising control
and feedback mechanisms.
UNIT III
ANALYSIS OF ALTERNATIVES - I
Cross-impact analysis, Structural modeling tools, System Dynamics models with case studies,
Economic models: present value analysis (NPV), benefits and costs over time, ROI, IRR; Work
and Cost breakdown structures.
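Since the unit lists present value analysis, NPV, ROI, and IRR, a short worked sketch may help; the cash flows are invented, and IRR is located by simple bisection under the assumption that NPV crosses zero once in the bracket:

```python
# Worked sketch of NPV / ROI / IRR for an invented cash flow stream:
# an outlay of 100 now, then 40 per year for four years.

def npv(rate, cash_flows):
    # cash_flows[t] occurs at the end of year t (t = 0 is today).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    # Bisection: assumes NPV changes sign exactly once between lo and hi.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100, 40, 40, 40, 40]
print(f"NPV @ 10%: {npv(0.10, flows):.2f}")   # 26.79
roi = (sum(flows[1:]) - (-flows[0])) / (-flows[0])
print(f"Simple ROI: {roi:.0%}")                # 60%
print(f"IRR: {irr(flows):.2%}")                # about 21.86%
```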
1.2 What is an AoA?
As defined in the A5R Guidebook, the AoA is an analytical comparison of the operational
effectiveness, suitability, risk, and life cycle cost of alternatives under consideration to satisfy
validated capability needs (usually stipulated in an approved ICD). Other definitions of an AoA
can be found in various official documents. The following are examples from DoDI 5000.02 and
the Defense Acquisition Guidebook:
• The AoA assesses potential materiel solutions that could satisfy validated capability
requirement(s) documented in the Initial Capabilities Document, and supports a decision on the
most cost effective solution to meeting the validated capability requirement(s). In developing
feasible alternatives, the AoA will identify a wide range of solutions that have a reasonable
likelihood of providing the needed capability.
• An AoA is an analytical comparison of the operational effectiveness, suitability, and life cycle
cost (or total ownership cost, if applicable) of alternatives that satisfy established capability
needs.
Though the definitions vary slightly, they all generally describe the AoA as a study that is used to
assess alternatives that have the potential to address capability needs or requirements that are
documented in a validated or approved capability requirements document. The information
provided in an AoA helps decision makers select courses of action to satisfy an operational
capability need.
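A drastically simplified flavor of the comparison an AoA performs is sketched below; the alternatives, weights, scores, and costs are all invented, and a real AoA is far more rigorous:

```python
# Toy comparison of alternatives on weighted effectiveness vs. life
# cycle cost. Alternatives, weights, and scores are all invented.

weights = {"effectiveness": 0.5, "suitability": 0.3, "risk (inverted)": 0.2}

alternatives = {
    "Alt A (upgrade)":   {"effectiveness": 6, "suitability": 8, "risk (inverted)": 9, "lcc_musd": 120},
    "Alt B (new build)": {"effectiveness": 9, "suitability": 6, "risk (inverted)": 5, "lcc_musd": 310},
}

for name, scores in alternatives.items():
    merit = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: merit={merit:.1f}, cost=${scores['lcc_musd']}M, "
          f"merit per $100M = {100 * merit / scores['lcc_musd']:.2f}")
```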
1.3 What is the Purpose of the AoA?
According to the A5R Guidebook, the purpose of the AoA is to help decision-makers understand
the tradespace for new materiel solutions to satisfy an operational capability need, while
providing the analytic basis for performance attributes documented in follow-on JCIDS
documents. The AoA provides decision-quality analysis and results to inform the Milestone
Decision Authority (MDA) and other stakeholders at the next milestone or decision point. In
short, the AoA must provide compelling evidence of the capabilities and military worth of the
alternatives. The results should enable decision makers to discuss the appropriate cost, schedule,
performance, and risk tradeoffs and assess the operational capabilities and affordability of the
alternatives assessed in the study. The AoA results help decision makers shape and scope the
courses of action for new materiel solutions to satisfy operational capability needs and the
Request for Proposal (RFP) for the next acquisition phase. Furthermore, AoAs provide the
foundation for the development of documents required later in the acquisition cycle such as the
Acquisition Strategy, Test and Evaluation Master Plan (TEMP), and Systems Engineering Plan
(SEP).
The AoA should also provide recommended changes, as needed, to validated capability
requirements that appear unachievable or undesirable from a cost, schedule, performance, or risk
point of view. It is important to note that the AoA provides the analytic basis for performance
parameters documented in the appropriate requirements documents (e.g., AF Form 1067, Joint
DOTmLPF-P Change Request (DCR), AF-only DCR, Draft Capability Development Document
(CDD), Final CDD, or Capability Production Document (CPD)).
1.4 When is the AoA Conducted?
As noted earlier, the AoA is an important element of both the capability development and
acquisition processes. As presented in the A5R Guidebook, Figure 1-1 highlights where AoA is
conducted in these processes. The capability development phases are shown across the top of the
figure, while the lower right of the figure illustrates the acquisition phases, decision points, and
milestones. In accordance with the Weapon Systems Acquisition Reform Act (WSARA) of 2009,
DoDI 5000.02, and the A5R Guidebook, for all ACAT initiatives, the AoA is typically conducted
during the Materiel Solution Analysis (MSA) phase. Follow-on AoAs, however, may be
conducted later during the Technology Maturation & Risk Reduction and the Engineering &
Manufacturing Development phases.
Cross-Impact Analysis
Cross-impact analysis, also known as cross-impact matrix or cross-impact balance analysis, is a
method used in systems thinking and scenario planning to explore the potential interactions
between different factors or variables in a complex system. It is a tool for assessing the
interdependencies and feedback loops between different elements within a system.
The basic idea behind cross-impact analysis is to analyze how changes in one variable can affect
other variables in the system and vice versa. By understanding these interconnections, it becomes
possible to identify potential consequences, unintended effects, and critical relationships within
the system.
Here's how cross-impact analysis typically works:
Identify factors: The first step is to identify the relevant factors or variables that influence the
system being studied. These factors can be social, economic, technological, environmental,
political, or any other relevant aspect of the system.
Construct a cross-impact matrix: A cross-impact matrix is created by systematically evaluating
how each factor influences or impacts the others. The matrix is usually filled with qualitative
judgments or expert opinions regarding the strength and direction of the impact. The interactions
are usually rated using a scale (e.g., strong positive, weak positive, neutral, weak negative, strong
negative).
Analyze the matrix: Once the cross-impact matrix is completed, it can be analyzed to identify
the most critical relationships and potential feedback loops within the system. Some variables
may have significant impacts on many other variables, making them central to the system's
behavior.
Scenario development: Cross-impact analysis can be used to develop scenarios that explore
different future states of the system. By combining the interactions identified in the matrix with
different initial conditions or assumptions, multiple scenarios can be constructed to understand
the range of possible outcomes.
Policy implications: The analysis helps decision-makers understand the implications of different
policies or actions on the system. By identifying critical relationships, decision-makers can focus
on leveraging positive interactions and mitigating negative ones.
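A minimal sketch of the matrix step, with invented factors and ratings on a -2 to +2 scale; absolute row sums suggest how strongly a factor drives the system ("activity"), absolute column sums how strongly it is driven ("passivity"):

```python
# Toy cross-impact matrix: impact[i][j] is the rated influence of
# factor i on factor j, on a -2 (strong negative) .. +2 (strong
# positive) scale. Factors and ratings are invented.

factors = ["Fuel price", "Public transit use", "Urban air quality"]
impact = [
    [0, 2, 1],   # Fuel price -> others
    [0, 0, 2],   # Public transit use -> others
    [0, 1, 0],   # Urban air quality -> others
]

for i, name in enumerate(factors):
    activity = sum(abs(v) for v in impact[i])        # how strongly it drives
    passivity = sum(abs(row[i]) for row in impact)   # how strongly it is driven
    print(f"{name}: activity={activity}, passivity={passivity}")
```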
Cross-impact analysis is a valuable tool for understanding complex systems and making
informed decisions in scenarios where various factors interact in intricate ways. It is commonly
used in fields such as strategic planning, environmental assessment, technology foresight, and
risk analysis. However, it is important to note that the accuracy and reliability of the analysis
depend heavily on the quality of data and expert judgment used to construct the cross-impact
matrix.
4.1 Introduction
In today’s competitive business situations characterized by globalization, short product
life cycles, open systems architecture, and diverse customer preferences, many managerial
innovations such as just-in-time inventory management, total quality management, Six
Sigma quality, customer–supplier partnership, business process reengineering, and supply
chain integration, have been developed. Value improvement of services based on value
engineering and systems approach (Miles, 1984) is also considered a method of managerial
innovation. It is indispensable for corporations to expedite the value improvement of
services and provide fine products satisfying the required function with reasonable costs.
This chapter provides a performance measurement system (PMS) for the value
improvement of services, which is considered an ill-defined problem with uncertainty
(Terano, 1985). To recognize a phenomenon as a problem and then solve it, it will be necessary
to grasp the essence (real substance) of the problem. In particular, for the value
improvement problems discussed in this chapter, they can be defined as complicated,
ill-defined problems since uncertainty in the views and experiences of decision makers,
called “fuzziness,” is present.
Building the method involves the following processes: (a) selecting measures and
building a system recognition process for management problems, and (b) providing the
performance measurement system for the value improvement of services based on the system
recognition process. We call (a) and (b) the PMS design process, also considered a core
decision-making process, because in the design process, strategy and vision are
interpreted, articulated, and translated into a set of qualitative and/or quantitative
measures under the “means to purpose” relationship.
We propose in this chapter a system recognition process that is based on system definition,
system analysis, and system synthesis to clarify the essence of the ill-defined problem.
Further, we propose and examine a PMS based on the system recognition process
as a value improvement method for services, in which the system recognition process
reflects the views of decision makers and enables one to compute the value indices for
the resources. In the proposed system, we apply the fuzzy structural modeling for building
the structural model of PMS. We introduce the fuzzy Choquet integral to obtain the
total value index for services by drawing an inference for individual linkages between the
scores of PMS, logically and analytically. In consequence, the system we suggest provides
decision makers with a mechanism to incorporate subjective understanding or insight
about the evaluation process, and also offers a flexible support for changes in the business
environment or organizational structure.
A practical example is illustrated to show how the system works, and its effectiveness
is examined.
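The discrete Choquet integral used here can be sketched directly. In the code below, the criteria, scores, and fuzzy measure values are invented for illustration; the algorithm itself is the standard one (sort scores ascending and weight each increment by the measure of the coalition scoring at least that much):

```python
# Discrete Choquet integral of scores x_i with respect to a fuzzy
# measure mu defined on subsets of the criteria. The measure values
# below are invented (note that mu need not be additive).

criteria = ("function", "cost", "timeliness")
scores = {"function": 0.8, "cost": 0.5, "timeliness": 0.6}

mu = {
    frozenset(): 0.0,
    frozenset({"function"}): 0.4, frozenset({"cost"}): 0.3,
    frozenset({"timeliness"}): 0.2,
    frozenset({"function", "cost"}): 0.8,
    frozenset({"function", "timeliness"}): 0.6,
    frozenset({"cost", "timeliness"}): 0.5,
    frozenset(criteria): 1.0,
}

def choquet(scores, mu):
    # Sort criteria by ascending score; sum the increments weighted by
    # the measure of the set of criteria scoring at least that much.
    order = sorted(scores, key=scores.get)
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

print(f"Total value index: {choquet(scores, mu):.3f}")  # 0.640
```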
4.2 System recognition process
Management systems are considered to cover large-scale, complicated problems.
However, for a decision maker, it is difficult to know where to start solving ill-defined
problems involving uncertainty.
In general, the problem is classified broadly into two categories. One is a problem
with preferable conditions—the so-called well-defined problem (structured or programmable),
which has an appropriate algorithm to solve it. The other one is a problem with
non-preferable conditions—the so-called ill-defined problem (unstructured or
nonprogrammable),
which may not have an existing algorithm to solve it or there may be only a partial algorithm.
Problems involving human decision making or large-scale problems with a complicated nature
are applicable to that case. Therefore, uncertainties such as fuzziness (ambiguity in decision
making) and randomness (uncertainty of the probability of an event) characterize the ill-defined
problem.
In this chapter, the definition of management problems is extended to semistructured
and/or unstructured decision-making problems (Simon, 1977; Anthony, 1965; Gorry and
Morton, 1971; Sprague and Carlson, 1982). It is extremely important to consider the way to
recognize the essence of an “object” when necessary to solve some problems in the fields
of social science, cultural science, natural science, etc.
This section will give a systems approach to the problem to find a preliminary way to
propose the PMS for value improvement of services. In this approach, the three steps taken
in natural recognition pointed out by Taketani (1968) are generally applied to the process of
recognition development. These steps—phenomenal, substantial, and essential—regarding
system recognition are necessary processes to go through to recognize the object.
With the definitions and the concept of systems thinking, a conceptual diagram of
system recognition can be described as in Figure 4.1. The conceptual diagram of system
recognition will play an important role to the practical design and development of the value
improvement system for services. Phase 1, phase 2, and phase 3 in Figure 4.1 correspond to the
respective three steps of natural recognition described above. At the phenomenal stage (phase 1),
we assume that there exists a management system as an object; for example, suppose a
management problem concerning
management strategy, human resource, etc., and then extract the characteristics of the problem.
Then, in the substantial stage, we may recognize the characteristics of the problem as available
information, which are extracted at the previous step, and we perform systems analysis to clarify
the elements, objective, constraints, goal, plan, policy, principle, etc., concerning the problem.
Next, the objective of the problem is optimized subject to constraints from the viewpoint
of systems synthesis so that the optimal management system can be obtained. The result of the
optimization process may, if necessary, be returned to phase 1 as feedback information and
compared with the phenomena observed at stage 1. The decision maker examines whether the result will
meet the management system he conceives in his mind (mental model). If the result meets the
management system conceived in the phenomenal stage, it becomes the optimal management
system and proceeds to the essential stage (phase 3). The essential stage is considered a step to
recognize the basic laws (rules) and principles residing in the object. Otherwise, going back to
the substantial stage becomes necessary, and the procedure is continued until the optimal system
is obtained.
FSM produces a graphical hierarchy with well-preserved contextual relations among the measured
elements. In FSM, a binary fuzzy relation on the closed interval [0, 1], based on fuzzy set
theory (Zadeh, 1965), is used to represent the subordination relations among the elements; this
relaxes the transitivity constraint, in contrast to ISM (Interpretive Structural Modeling)
(Warfield et al., 1975) or DEMATEL (Decision Making Trial and Evaluation Laboratory) (Gabus and
Fontela, 1975). The major advantage of these methods lies in the intuitive appeal of the
graphical picture they present to decision makers. First, the decision makers' mental model
(imagination) about the
given problem, which is the value improvement of services, is embedded in a subordination
matrix and then reflected on a structural model. Here, the measured elements are identified by
methods such as the nominal group technique (NGT) (Delbecq et al., 1975, 1995), questionnaire
surveys, or interviews, depending on the operational conditions. Thus, we may apply NGT to
extract the measured elements composing the service value and to regulate them, clarifying
the measurement elements and their attributes. Then, the contextual relations among the elements
are examined and represented on the assumption of “means to purpose.” The hierarchy of the
measurement system is constructed and regarded as an interpretative
structural model. Furthermore, to compare the structural model with the mental model, a
feedback for learning will be performed by group members (decision makers). If an agreement
among the decision makers is obtained, then the process proceeds to the next stage, and the result
is considered to be the outcome of stage A. Otherwise, the modeling process restarts from the
embedding process or from drawing out and representing the measurement elements process.
Then, the process may continue to make progress in the same way as illustrated in Figure 4.2
until a structural model with some consent is obtained. Thus, we obtain the models of the
function and the cost for services as the outcomes of stage A, which are useful for applying to the
value improvement of services. Further, we extract and regulate the functions used to perform
the value improvement of services by making use of the NGT method described above.
The importance degrees of service functions are computed by using the ratio between the
functions as follows:
Let F be a matrix determined by paired comparisons among the functions.
Assume that the reflexive law is not satisfied in F and that only the elements
f_{i,i+1} (i = 1, 2, …, n − 1) of the matrix are given as evaluation values,
where 0 ≤ f_{i,i+1} ≤ 1 and f_{i+1,i} = 1 − f_{i,i+1} (i = 1, 2, …, n − 1).
Then, the weight vector E = {E_i, i = 1, 2, …, n} of the functions F_i (i = 1, 2, …, n) can be
found as below.
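The weighting formulas themselves do not survive in this excerpt, so the following is only a minimal sketch of one plausible scheme, not the authors' formula: assume each evaluation f_{i,i+1} expresses the share of importance of F_i within the pair {F_i, F_{i+1}}, so that E_i/E_{i+1} = f_{i,i+1}/(1 − f_{i,i+1}); the ratios are then chained and normalized.

# Hedged sketch: weights from adjacent paired comparisons.
# Assumption (not from the source): f[i] holds f_{i,i+1}, and the
# importance ratio is E_i / E_{i+1} = f_{i,i+1} / (1 - f_{i,i+1}).

def weight_vector(f):
    """f: n-1 values f_{i,i+1} in (0, 1). Returns n weights summing to 1."""
    ratios = [fi / (1.0 - fi) for fi in f]   # E_i / E_{i+1}
    e = [1.0]                                # unnormalized, relative to E_n
    for r in reversed(ratios):               # build E_{n-1}, ..., E_1
        e.insert(0, e[0] * r)
    total = sum(e)
    return [ei / total for ei in e]

print(weight_vector([0.6, 0.5, 0.7, 0.4, 0.55]))   # six functions F_1..F_6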
We apply the weighting formulas to find the weights of the functions used. Then,
the matrix is constructed from paired comparisons by the decision makers (specialists) who
take part in the value improvement of services in the corporation. Figure 4.5 shows stages
B and C of the PMS.
(1) Importance degree of functions composing customer satisfaction
Suppose, in this chapter, that the functions composing customer satisfaction are
extracted and regulated as a set as follows:
F = {F_i, i = 1, 2, …, 6}
  = {Employee's behavior, Management of a store, Providing customers with information,
     Response to customers, Exchange of information, Delivery service}
Improvement of customer satisfaction becomes a main purpose of corporate
management, and Fi(i = 1, 2, … , 6) are respectively defined as the function to achieve
customer satisfaction.
Then, for example, let each cell of the matrix be filled in, intuitively and empirically,
in a paired-comparison manner, with values given by the ratio method and informed by the
knowledge and/or experience of the decision makers (specialists).
In general, the value index of an object in value engineering is defined by the following
formula:
Value index = satisfaction of necessity/use of resources (4.2)
The value index is interpreted as the degree to which necessity is satisfied by the resources
when they are utilized. On the basis of this formula, in this study, we define the value of
services composed of the four resources as below.
Value of services = function of services/cost of services (4.3)
Therefore, the value index, which is based on the importance degree and the cost of each
resource used to provide services, is obtained.
where λ is a parameter with values −1 < λ < ∞; note that g(⋅) becomes identical to a
probability measure when λ = 0. Here, since the factors are usually assumed to be independent
when the assessment of a corporation is considered, the fuzzy sets X1 and X2 are independent,
that is, λ = 0. Then, the total value index of services is expressed as in Equation 4.5,
where w_i (0 ≤ w_i ≤ 1; i = 1, 2, 3, 4) are the weights for the respective resources.
At stage D, if the integrated evaluation value is examined and its validity is shown, the
process goes to the final stage (stage E).
At stage E, the integrated value indices of services computed in the previous step are
ranked using the fuzzy outranking method (Roy, 1991; Siskos and Oudiz, 1986), and the graphic
structure of value control (Amagasa, 1986) is drawn. Then the process terminates.
In this study, each of the value indices of services is represented in the graphic structure
of value control.
4.4 Simulation for value improvement system of services
In this section, we carry out a simulation of the procedure for the value improvement
system of services and examine the effectiveness of the proposed system.
Here, as a specific services trade, we take up a fictitious household appliance store,
DD Company. This store is said to be a representative example providing “a thing and
services” to customers. The store sells “things” such as household electrical appliances,
which are essential necessities of life and commercial items used in everyday life. In
addition, it supplies customer services when customers purchase the “thing” itself. DD
Company was established in 1947 and the capital is 19,294 million yen, the yearly turnover
is 275,900 million yen, total assets are worth 144,795 million yen, the number of the
stores is 703 (the number of franchise stores is 582 on March 31, 2007), and the number
of employees is 3401. The store is well known to customers because it differentiates itself
from other companies through a customer-oriented management style that pursues customer
satisfaction. For example, salespersons have sufficient knowledge about the products they sell
and give suitable advice and suggestions according to customers' requirements, as often happens
on the sales floor. We conducted a survey of DD Company. The simulation was based on
the results of a questionnaire survey and performed by applying the PMS for the value
improvement of services shown in Figure 4.2.
4.4.1 Stage A: Structural modeling
Figures 4.3 and 4.4 show the structural models with respect to the functions composing
customer satisfaction, and the cost showing the use of resources.
4.4.2 Stage B: Weighting and evaluating
Table 4.2 shows the importance degrees of resources for the functions of customer satisfaction,
which are obtained by consensus among decision makers (specialists) with know-how
deeply related to the value improvement of services.
Table 4.4 shows the amounts distributed to the four resources and the actual ratios in which
those resources are used to attain customer satisfaction through the six functions.
From this, each of the value indices for the respective resources used to supply
customer services (human, material, financial, and information resources) is obtained
using Tables 4.1 through 4.4 and Equation 4.4.
(1) Value index of Human resources = 45.64/42 (≈ 1.09)
(2) Value index of Material resources = 4.08/28 (≈ 0.15)
(3) Value index of Financial resources = 13.19/12 (≈ 1.10)
(4) Value index of Information resources = 36.37/18 (≈ 2.02)
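As a quick arithmetic check of Equation 4.3, the four ratios can be recomputed directly; the short labels below are merely shorthand for the four resources named above.

# Arithmetic check of the value indices (function of services / cost of services).
functions = {"human": 45.64, "material": 4.08, "financial": 13.19, "information": 36.37}
costs     = {"human": 42.0,  "material": 28.0, "financial": 12.0,  "information": 18.0}

for name in functions:
    print(f"{name:12s} value index = {functions[name] / costs[name]:.2f}")
# human ~1.09, material ~0.15, financial ~1.10, information ~2.02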
From the value indices for the resources mentioned above, the chart of the value control
graphic structure is depicted as shown in Figure 4.5. Thus, the following results with respect
to the value improvement of services, from the viewpoints of function and cost, may be
ascertained from Figure 4.6.
(1) In this corporation, there is no need for value improvement related to human resources,
financial resources, or information resources, because these three of the four resources lie
below the curved line, implying a good balance between the cost and the function of services.
(2) For material resources, it will be necessary to exert every possible effort toward value
improvement, because the value index is 0.15, which is much smaller than 1.00.
(3) On the whole, the total value index of services comes to 1.23, as shown below, so the
value indices for the four resources fall within the optimal zone of the value control chart
shown in Figure 4.5. Therefore, it could be concluded that the corporation may not have to
improve the overall value of the services of its organization.
4.4.3 Stage C: Integrating (value indices)
At the integrating stage, MADM (multiple-attribute decision making) based on the Choquet
integral (Grabisch, 1995; Modave and Grabisch, 1998) can be introduced for the value improvement
of services, and the total value index of services is obtained by integrating the value indices
of the four resources as follows:
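The integration computation itself does not survive in this excerpt. Since the text sets λ = 0, the λ-fuzzy measure becomes additive, and the Choquet integral of the value indices reduces to a weighted sum. The sketch below only illustrates that mechanism: the weights w_i are assumed placeholders, not the study's values, although with these particular weights the sum happens to reproduce the reported total of 1.23.

# Hedged sketch of stage C. With lambda = 0 the fuzzy measure is additive,
# so the Choquet integral of the value indices reduces to a weighted sum.
# The weights below are hypothetical, not taken from the study.
indices = [1.09, 0.15, 1.10, 2.02]   # human, material, financial, information
weights = [0.40, 0.15, 0.15, 0.30]   # assumed w_i, summing to 1

total_value_index = sum(w * v for w, v in zip(weights, indices))
print(round(total_value_index, 2))   # 1.23 with these assumed weights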
Regarding DD Company, Nikkei Business reports that the store scored high in its
evaluation. The store advocates that customers "have trust and satisfaction with buying
the products all the time." Also, the store supplies "attractive goods" at "a reasonable
price" as well as "superior services," as a household appliance retail trade based on
this management philosophy. Furthermore, the store realizes a customer-oriented and
community-oriented business and supplies smooth services reflecting area features and
scale advantages by controlling the total stock across the whole group. From this, it can be
said that the validity of the proposed method was verified by the result of this simulation
experiment, which corresponds to the high assessment of DD Company by Nikkei
Business, as described above.
4.5 Conclusion
It is important for an administrative action to pursue profit of a corporation by making
effective use of four resources—capable persons, materials, capital, and information. In
addition, in the services trade it is important to encourage each employee to attach great
importance to services, in the hope that the employee will willingly improve service quality
and thus enhance the degree of customer satisfaction. These practices surely promise to bring
about profit improvement for the corporation.
We proposed in this chapter a system recognition process that is based on system
definition, system analysis, and system synthesis, clarifying the “essence” of an ill-defined
problem. Further, we suggest the PMS as a method for the value improvement of services
and examined it; in this method, the system recognition process reflects the views of decision
makers and enables one to compute the effective service scores. As an illustrative example,
we took up the evaluation problem of a household appliance store, selected from the viewpoint
of service functions, and came up with a new value improvement method by which
the value indices of services are computed. To verify the effectiveness of the new method,
we performed a questionnaire survey about the service functions of the household
appliance store. As a result, it was determined that the proposed method is significant for the
value improvement of services in corporations, and the soundness of the system was borne out
by the result of this simulation. With this procedure, it is possible to build a PMS for
services that is based on realities. Extending this part of the study remains a subject for
future work.
30.3 Investment analysis
There are various tools and techniques available for investment analysis, each with its own
advantages and disadvantages. Some of the tools are very simple to use, while others are not
applicable to all types of investment analysis. This section briefly discusses two simple
tools that are widely used in investment analysis: net present value (NPV) analysis and
internal rate of return (IRR), or simply rate of return (ROR), analysis.
30.3.1 Net present value analysis
The NPV criterion relies on the sound logic that economic decisions should be made on
the basis of costs and benefits pertaining to the time of decision making. We can apply our
best knowledge to forecast the future of investments and convert those into present value
to compare. Most decisions of life are made in the present assuming that the future will be
unchanged. Economic decisions should be no exception. In the NPV method, the economic
data are analyzed through the “window” of the present.
In engineering projects, cash flows are determined in the future in terms of costs and
benefits. Cost amounts are considered negative cash flows, whereas benefit amounts are
considered positive cash flows. Once all the present and future cash flows are identified
for a project, they are converted into present value. This conversion is necessary due to the
time value of money.
The NPV method of solving engineering economics problems involves determination
of the present worth of the project or its alternatives. Using the criterion of NPV, the better
of two projects, or the best of three or more projects, is selected. Thus, the application
of this method involves the following tasks:
1. Identify all cash flows pertaining to the project(s); in most cases, this information is
furnished by the investment experts.
2. Evaluate the NPV of the project(s).
3. Decide on the basis of the NPV criterion, according to which the project with the highest
positive NPV is selected.
Figure 30.1 depicts the acceptable and unacceptable regions of project selection based
on NPV method.
A project with a lower NPV may, however, be more attractive to investors if other project
selection criteria are included in the decision making.
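A minimal sketch of the NPV computation follows; the cash flows are hypothetical, and costs are entered as negative flows and benefits as positive flows, as described above.

# NPV: discount each cash flow to the present at interest rate i.
def npv(rate, cash_flows):
    """cash_flows[t] is the net cash flow at the end of year t (t = 0 is today)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: 1000 invested today, 400 returned each year for 3 years.
flows = [-1000.0, 400.0, 400.0, 400.0]
print(round(npv(0.08, flows), 2))   # about 30.84 at 8%: positive, so acceptable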
30.3.2 Internal rate of return
The IRR is a widely used investment analysis tool in industry. It is easier to comprehend,
but its analysis is relatively complex, and the pertaining calculations are lengthy. In NPV
analysis, the interest rate i is known; in the IRR analysis, i is unknown.
We evaluate a project's IRR to ensure that it is higher than the cost of capital, as
expressed by the minimum acceptable rate of return (MARR). MARR is a cutoff value, determined
on the basis of financial market conditions and the company's business "health," below
which investment is undesirable. In general, MARR is significantly higher than what
financial institutions charge for lending capital, since it includes associated project
risks and other business costs. If the IRR exceeds the MARR only marginally, doing the
engineering business does not make much business sense.
The IRR method of solving engineering economics problems involves determination
of the effective rate of return of the project or its alternatives. Using the criterion of IRR,
the better of two projects, or the best of three or more projects, is the one with the
highest IRR. Thus, the application of this method involves the following tasks:
1. Identify all cash flows pertaining to the project(s); in most cases, this information is
furnished by the investment experts.
2. Calculate the interest rate that makes the net present value of the project(s) zero.
3. Decide on the basis of the IRR criterion: any project with an IRR greater than
the MARR is acceptable, and among all acceptable projects, the best one is the one
with the highest IRR.
Figure 30.4 shows the acceptable RORs and unacceptable RORs of project selection
based on the IRR method.
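A minimal sketch of the IRR calculation follows. Because i is unknown, the rate is found numerically as the root of NPV(i) = 0; bisection is used here for simplicity and assumes a conventional investment (a single sign change in the flows), with the same hypothetical cash flows as before.

# IRR: find the rate at which NPV = 0 by bisection.
def npv(rate, cash_flows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-8):
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid   # NPV still positive, so the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-1000.0, 400.0, 400.0, 400.0]
print(f"IRR = {irr(flows):.4f}")   # about 0.0970; accept if this exceeds the MARR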
30.4 Summary
Engineering economics plays an increasingly important role in investment analysis. It is
basically a decision-making process. In engineering economics, one learns to solve engineering
problems involving costs and benefits. Interest can be thought of as the rent for
using someone’s money; interest rate is a measure of the cost of this use. Interest rates are
of two types: simple or compounded. Under simple interest, only the principal earns interest.
Simple interest is non-existent in today’s financial marketplace. Under compounding,
the interest earned during a period augments the principal; thus, interest earns interest.
Compounding of interest is beneficial to the lender. Owing to its capacity to earn interest,
money has time value. The time value of money is important in making decisions pertaining
to engineering projects. Among various tools and techniques available to solve engineering
economic problems, NPV analysis and IRR analysis are widely used in industry.
They are very simple but effective tools for investment analysis.
RELIABILITY
A significant advantage in working with reliability, rather than directly with human performance,
is the ability to avail ourselves of basic system models. A system’s functional and
physical decomposition can be used to construct a system-level reliability block diagram, the
structure of which is used to compute reliability in terms of component and subsystem
reliabilities.
In the I-SPY case, we considered the reliability block diagram shown in Figure 11.11.
This diagram was derived, with some adaptation, from a front-end analysis of the workflow
of an Air Force Predator pilot (Nagy et al., 2006). It was simplified such that the functions
depicted could be reasonably matched with those tasks assessed by Schreiber and colleagues
in their study: functions 3.2 and 3.4 correspond to the basic maneuvering task, function 3.3
corresponds to the reconnaissance task, and function 3.5 matches up with the landing task.
If we assume that functions 3.2 to 3.5 are functionally independent, then the set of functions
constitutes a simple series system. Thus, total system reliability was estimated by taking the
mathematical product of the three logistic regression models, meaning that we had an expression
for total system reliability that was a function of the personnel and training domains.
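Under the independence assumption, the total reliability of such a series structure is simply the product of the component reliabilities. A minimal sketch (the task reliabilities are hypothetical, not the study's logistic-regression estimates):

# Series-system reliability: the product of independent component reliabilities.
def series_reliability(component_reliabilities):
    total = 1.0
    for r in component_reliabilities:
        total *= r
    return total

# Hypothetical reliabilities for maneuvering, reconnaissance, and landing tasks.
print(series_reliability([0.95, 0.90, 0.92]))   # about 0.787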
A good plan for choosing a source of I-SPY Predator pilots, particularly from a system
sustainability perspective, is to seek a solution that most effectively utilizes personnel
given total system reliability and training resource constraints. In such a situation, the
quality of feasible solutions might then be judged in terms of maximizing total system
reliability for the personnel and training costs expended. This approach was adopted to
answer the I-SPY selection and training question. A non-linear program was formulated
to determine the optimal solution in terms of cost-effectiveness, the latter expressed as the
ratio of system reliability to total personnel and training costs. The feasible solution space
was constrained by a lower limit (i.e., minimum acceptable value) on total system reliability
and an upper limit (i.e., maximum acceptable value) on training time.
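A minimal sketch of that constrained selection follows. All candidate data and limits are hypothetical placeholders, and a simple feasibility filter plus a ratio comparison stands in for the nonlinear program.

# Hedged sketch: pick the feasible personnel source with the best
# reliability-per-cost ratio, subject to reliability and training limits.
candidates = [
    # (label, system_reliability, total_cost, training_weeks) -- all hypothetical
    ("experienced pilots", 0.92, 900.0, 8),
    ("cross-trained operators", 0.88, 600.0, 14),
    ("new trainees", 0.80, 450.0, 20),
]
MIN_RELIABILITY, MAX_TRAINING_WEEKS = 0.85, 16

feasible = [c for c in candidates
            if c[1] >= MIN_RELIABILITY and c[3] <= MAX_TRAINING_WEEKS]
best = max(feasible, key=lambda c: c[1] / c[2])   # cost-effectiveness ratio
print(best[0])   # "cross-trained operators" with these made-up numbers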
• Availability: When searching for additional information, sources that are more easily
accessed or brought to mind will be considered first, even when other sources are
more diagnostic.
• Reliability: The reliability of information sources is hard to integrate into the
decision-making process. Differences in reliability are often ignored or discounted.
Reliability, Availability, Maintainability, and Supportability (RAMS) are critical factors in the
design, operation, and maintenance of complex systems, particularly in engineering and
technology fields. These concepts help organizations ensure that their systems or products meet
performance and operational requirements while minimizing downtime and maximizing user
satisfaction. Let's delve into each of these models:
• Reliability: Reliability refers to the ability of a system or product to perform its intended
function without failure over a specific period of time under given operating conditions.
It's often measured using metrics like Mean Time Between Failures (MTBF) or Failure
Rate. High reliability indicates a low probability of failure and is essential for systems
that must operate consistently and safely, such as medical devices, aerospace systems,
and critical infrastructure.
• Availability: Availability is the proportion of time that a system or product is operational
and able to perform its intended function when needed. It is influenced by factors such as
maintenance practices, repair times, and system redundancy. Availability can be calculated
as MTBF / (MTBF + MTTR), where MTTR is the Mean Time To Repair; a short worked
sketch follows this list. Maximizing availability is crucial for systems that need to provide
uninterrupted services, like data centers and communication networks.
• Maintainability: Maintainability refers to the ease with which a system can be repaired,
restored, or updated. It encompasses factors such as how quickly faults can be diagnosed,
the availability of spare parts, and the simplicity of maintenance procedures. High
maintainability reduces downtime and repair costs, making it easier to manage and
operate systems over their lifecycle.
• Supportability: Supportability encompasses the overall ability to provide effective and
efficient support throughout a system's lifecycle, including its development, deployment,
and operation phases. This involves aspects like training, documentation, help desks, and
remote diagnostics. Supportability ensures that users and operators can receive assistance
when needed and that the system can be effectively managed by support teams.
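A short worked sketch of the availability formula from the list above (the MTBF and MTTR figures are hypothetical):

# Availability = MTBF / (MTBF + MTTR).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical server: fails every 2000 h on average, 4 h average repair time.
print(f"{availability(2000.0, 4.0):.4%}")   # about 99.80%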
Organizations often use various models, methodologies, and analyses to evaluate and improve
RAMS factors:
• FMEA (Failure Modes and Effects Analysis): Identifies potential failure modes, their
effects, and their likelihoods to prioritize improvement efforts.
• RBD (Reliability Block Diagram): Represents the reliability and redundancy of
components within a system using graphical diagrams.
• Fault Tree Analysis: Analyzes the combinations of events and conditions that could lead
to system failures.
• Reliability Testing: Involves subjecting systems to various stress conditions to assess
how they perform over time and identify weak points.
• Life Cycle Cost Analysis: Evaluates the total cost of ownership over a system's lifecycle,
considering factors like maintenance, repairs, and downtime.
• Spare Parts Management: Ensures that an appropriate inventory of spare parts is
maintained to minimize downtime during repairs.
• User Training and Documentation: Ensures users and operators have the necessary
knowledge and resources to operate and maintain the system effectively.
By incorporating these models and practices, organizations can design, build, and maintain
systems that meet performance expectations, minimize downtime, and deliver reliable and
efficient services to users.
Stochastic networks and Markov models are fundamental concepts in the field of probability
theory, applied mathematics, and various fields such as computer science, operations research,
telecommunications, and more. Let's explore each of these concepts:
• Stochastic Networks: Stochastic networks deal with systems that involve a certain
degree of randomness or uncertainty. These systems may include multiple interconnected
components or nodes, where the behavior of each node is subject to probabilistic
influences. Examples of stochastic networks can be found in various real-world
scenarios, such as computer networks, communication systems, transportation networks,
and manufacturing processes.
Key features of stochastic networks:
• Randomness: The behavior of the network components or the interactions
between them is subject to probabilistic or random effects.
• Queuing Theory: Stochastic networks often involve queuing systems, where
entities (e.g., customers, data packets) wait in lines or queues before being
processed by network components; a minimal M/M/1 sketch follows this list.
• Performance Analysis: Stochastic networks are analyzed to understand their
performance characteristics, such as throughput, delay, and resource utilization,
under various operating conditions.
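As a minimal illustration of the queuing behavior noted in the list above, the standard steady-state formulas for an M/M/1 queue (a single server with Poisson arrivals and exponential service times) can be sketched as follows; the rates are illustrative.

# M/M/1 steady-state metrics.
def mm1_metrics(lam, mu):
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu           # server utilization
    L = rho / (1.0 - rho)    # mean number in the system
    W = 1.0 / (mu - lam)     # mean time in the system (Little's law: L = lam * W)
    return rho, L, W

print(mm1_metrics(lam=8.0, mu=10.0))   # utilization 0.8, L = 4, W = 0.5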
There is no such thing as the best model for a given phenomenon. The pragmatic criterion of
usefulness often allows the existence of two or more models for the same event, but serving
distinct purposes. Consider light. The wave form model, in which light is viewed as a continuous
flow, is entirely adequate for designing eyeglass and telescope lenses. In contrast, for
understanding the impact of light on the retina of the eye, the photon model, which views light
as tiny discrete bundles of energy, is preferred. Neither model supersedes the other; both are
relevant and useful.
The word "stochastic" derives from the Greek (to aim, to guess) and means "random" or
"chance." The antonym is "sure," "deterministic," or "certain." A deterministic model predicts a
single outcome from a given set of circumstances. A stochastic model predicts a set of possible
outcomes weighted by their likelihoods, or probabilities. A coin flipped into the air will surely
return to earth somewhere. Whether it lands heads or tails is random. For a "fair" coin we
consider these alternatives equally likely and assign to each the probability 1/2.
However, phenomena are not in and of themselves inherently stochastic or deterministic.
Rather, to model a phenomenon as stochastic or deterministic is the choice of the observer. The
choice depends on the observer's purpose; the criterion for judging the choice is usefulness.
Most often the proper choice is quite clear, but controversial situations do arise. If the coin
once fallen is quickly covered by a book so that the outcome "heads" or "tails" remains unknown,
two participants may still usefully employ probability concepts to evaluate what is a fair bet
between them; that is, they may usefully view the coin as random, even though most people would
consider the outcome now to be fixed or deterministic. As a less mundane example of the
converse situation, changes in the level of a large population are often usefully modeled
deterministically, in spite of the general agreement among observers that many chance events
contribute to their fluctuations.
Scientific modeling has three components: (i) a natural phenomenon under study, (ii) a
logical system for deducing implications about the phenomenon, and (iii) a connection linking
the elements of the natural system under study to the logical system used to model it. If we
think of these three components in terms of the great-circle air route problem, the natural
system is the earth with airports at Los Angeles and New York; the logical system is the
mathematical subject of spherical geometry; and the two are connected by viewing the airports
in the physical system as points in the logical system.
The modern approach to stochastic modeling is in a similar spirit. Nature does not dictate
a unique definition of "probability," in the same way that there is no nature-imposed definition
of "point" in geometry. "Probability" and "point" are terms in pure mathematics, defined only
through the properties invested in them by their respective sets of axioms. (See Section 2.8
for a review of axiomatic probability theory.) There are, however, three general principles
that are often useful in relating or connecting the abstract elements of mathematical
probability theory to a real or natural phenomenon that is to be modeled. These are (i) the
principle of equally likely outcomes, (ii) the principle of long run relative frequency, and
(iii) the principle of odds making or subjective probabilities. Historically, these three
concepts arose out of largely unsuccessful attempts to define probability in terms of physical
experiences. Today, they are relevant as guidelines for the assignment of probability values
in a model, and for the interpretation of the conclusions of a model in terms of the
phenomenon under study.
We illustrate the distinctions between these principles with a long experiment. We will
pretend that we are part of a group of people who decide to toss a coin and observe the event
that the coin will fall heads up. This event is denoted by H, and the event of tails, by T.
Initially, everyone in the group agrees that Pr{H} = 1/2. When asked why, people give two
reasons: Upon checking the coin construction, they believe that the two possible outcomes,
heads and tails, are equally likely; and extrapolating from past experience, they also believe
that if the coin is tossed many times, the fraction of times that heads is observed will be
close to one-half.
The equally likely interpretation of probability surfaced in the works of Laplace in 1812,
where the attempt was made to define the probability of an event A as the ratio of the total
number of ways that A could occur to the total number of possible outcomes of the experiment.
The equally likely approach is often used today to assign probabilities that reflect some
notion of a total lack of knowledge about the outcome of a chance phenomenon. The principle
requires judicious application if it is to be useful, however. In our coin tossing experiment,
for instance, merely introducing the possibility that the coin could land on its edge (E)
instantly results in Pr{H} = Pr{T} = Pr{E} = 1/3.
The next principle, the long run relative frequency interpretation of probability, is a
basic building block in modern stochastic modeling, made precise and justified within the
axiomatic structure by the law of large numbers. This law asserts that the relative fraction of
times in which an event occurs in a sequence of independent similar experiments approaches, in
the limit, the probability of the occurrence of the event on any single trial. The principle is
not relevant in all situations, however. When the surgeon tells a patient that he has an 80-20
chance of survival, the surgeon means, most likely, that 80 percent of similar patients facing
similar surgery will survive it. The patient at hand is not concerned with the long run, but in
vivid contrast, is vitally concerned only in the outcome of his, the next, trial.
Returning to the group experiment, we will suppose next that the coin is flipped into the
air and, upon landing, is quickly covered so that no one can see the outcome. What is Pr{H} now?
Several in the group argue that the outcome of the coin is no longer random, that Pr{H} is
either 0 or 1, and that although we don't know which it is, probability theory does not apply.
Others articulate a different view, that the distinction between "random" and "lack of
knowledge" is fuzzy, at best, and that a person with a sufficiently large computer and
sufficient information about such factors as the energy, velocity, and direction used in
tossing the coin could have predicted the outcome, heads or tails, with certainty before the
toss. Therefore, even before the coin was flipped, the problem was a lack of knowledge and not
some inherent randomness in the experiment.
In a related approach, several people in the group are willing to bet with each other, at
even odds, on the outcome of the toss. That is, they are willing to use the calculus of
probability to determine what is a fair bet, without considering whether the event under study
is random or not. The usefulness criterion for judging a model has appeared. While the rest of
the mob were debating "random" versus "lack of knowledge," one member, Karen, looked at the
coin. Her probability for heads is now different from that of everyone else. Keeping the coin
covered, she announces the outcome "Tails," whereupon everyone mentally assigns the value
Pr{H} = 0. But then her companion, Mary, speaks up and says that Karen has a history of
prevarication. The last scenario explains why there are horse races; different people assign
different probabilities to the same event. For this reason, probabilities used in odds making
are often called subjective probabilities. Then, odds making forms the third principle for
assigning probability values in models and for interpreting them in the real world.
The modern approach to stochastic modeling is to divorce the definition of probability
from any particular type of application. Probability theory is an axiomatic structure (see
Section 2.8), a part of pure mathematics. Its use in modeling stochastic phenomena is part of
the broader realm of science and parallels the use of other branches of mathematics in modeling
deterministic phenomena. To be useful, a stochastic model must reflect all those aspects of the
phenomenon under study that are relevant to the question at hand. In addition, the model must
be amenable to calculation and must allow the deduction of important predictions or
implications about the phenomenon.
• Stochastic Processes
A stochastic process is a family of random variables X_t, where t is a parameter running over a
suitable index set T. (Where convenient, we will write X(t) instead of X_t.) In a common
situation, the index t corresponds to discrete units of time, and the index set is
T = {0, 1, 2, …}. In this case, X_t might represent the outcomes at successive tosses of a
coin, repeated responses of a subject in a learning experiment, or successive observations of
some characteristic of a certain population. Stochastic processes for which T = [0, ∞) are
particularly important in applications. Here t often represents time, but different situations
also frequently arise. For example, t may represent distance from an arbitrary origin, and X_t
may count the number of defects in the interval (0, t] along a thread, or the number of cars in
the interval (0, t] along a highway. Stochastic processes are distinguished by their state
space, or the range of possible values for the random variables X_t, by their index set T, and
by the dependence relations among the random variables X_t. The most widely used classes of
stochastic processes are systematically and thoroughly presented for study in the following
chapters, along with the mathematical techniques for calculation and analysis that are most
useful with these processes. The use of these processes as models is taught by example. Sample
applications from many and diverse areas of interest are an integral part of the exposition.
• Markov Models: Markov models are mathematical models used to describe systems that
exhibit a specific probabilistic property known as the Markov property or
memorylessness. This property states that the future behavior of the system depends only
on its current state and not on its past states. Markov models are widely used in the
analysis of systems that undergo transitions between a finite set of states.
Key features of Markov models:
• State Transitions: The system moves from one state to another according to
transition probabilities, which are often represented using a transition matrix
(see the sketch after this list).
• Markov Chains: A fundamental type of Markov model is the Markov chain, which
is a sequence of states with probabilistic transitions.
• Applications: Markov models are used in a wide range of applications, including
reliability analysis, queueing systems, machine learning (e.g., Hidden Markov
Models), finance (e.g., Markov Chain Monte Carlo methods), and more.
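A minimal sketch of a two-state Markov chain follows: an illustrative transition matrix and the long-run distribution obtained by applying it repeatedly.

# Two-state Markov chain: P[i][j] = probability of moving from state i to j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [1.0, 0.0]      # start in state 0
for _ in range(100):   # iterate dist <- dist * P until it settles
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
print([round(p, 4) for p in dist])   # converges to about [0.8333, 0.1667]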
Combining stochastic networks and Markov models allows for the analysis and modeling of
complex systems with randomness and state transitions. These concepts provide valuable
insights into system behavior, performance, and optimization, making them essential tools in
various fields where uncertainty and dynamic behavior are prevalent.
Figure 47.13 Comparison of triage policies. (From Giachetti, R., Queueing theory chapter in
Design of Enterprise Systems: Theory, Architecture, and Methods, CRC Press, Boca Raton, FL, 2010.)
Figure 47.14 Queuing network. (From Giachetti, R., Queueing theory chapter in Design of
Enterprise Systems: Theory, Architecture, and Methods, CRC Press, Boca Raton, FL, 2010.)
The network serves multiple customer classes that arrive at the queuing network at the first
node with different arrival rates. Each customer class follows a different route through the
network. The customers have different service times at each node they visit, depending on the
class they belong to. Finally, after being served, the customers depart from the queuing
network (Figure 47.14).
y = a + bx (31.1)
where
x = independent variable
y = dependent variable
x̄ = average of the independent variable
ȳ = average of the dependent variable
The mathematical expressions used to estimate a and b in Equation 31.1 are the standard
least-squares formulas:
b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)²
a = ȳ − b·x̄
The correlation coefficient r is a measure of the strength of the relationship between two
variables only if the variables are linearly related; it is given by
r = Σ(x − x̄)(y − ȳ) / √[Σ(x − x̄)² · Σ(y − ȳ)²]
A positive value of r indicates that the independent and the dependent variables
increase at the same rate. When r is negative, one variable decreases as the other increases.
If there is no relationship between these variables, r will be zero.
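A minimal sketch that computes the least-squares estimates and the correlation coefficient from the formulas above (the data points are illustrative):

# Least-squares fit of y = a + b*x and the correlation coefficient r.
def fit_line(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    b = sxy / sxx
    a = ybar - b * xbar
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

xs, ys = [1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1]
print(fit_line(xs, ys))   # slope near 2, intercept near 0, r close to +1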
The accuracy of the exponential costing method depends largely on the similarity
between the two projects and the accuracy of the cost-exponent factor. Generally, error
ranges from ±10% to ±30% of the actual cost.
31.2.3.5 Learning curves
In repetitive operations involving direct labor, the average time to produce an item or
provide a service is typically found to decrease over time as workers learn their tasks better.
As a result, cumulative average and unit times required to complete a task will drop
considerably as output increases.
A common formulation is the log-linear learning curve, in which the time required for the nth
unit (or, under another convention, the cumulative average time through n units) is
T_n = T_1 · n^b, where b = log(s)/log(2) for a learning rate s.
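A minimal sketch of this log-linear model under the unit-time convention; the parameters are illustrative, since the chapter's own numerical treatment is not reproduced in this excerpt.

# Log-linear learning curve: T_n = T_1 * n**b, with b = log(s) / log(2)
# for a learning rate s (e.g., s = 0.8 for an "80% curve").
import math

def unit_time(t1, n, learning_rate):
    b = math.log(learning_rate) / math.log(2.0)
    return t1 * n ** b

print(round(unit_time(100.0, 2, 0.8), 1))   # 80.0: unit 2 takes 80% of unit 1's time
print(round(unit_time(100.0, 4, 0.8), 1))   # 64.0: each doubling cuts time by 20%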
DECISION ASSESSMENT
41.1 Introduction
Maintenance is a combination of all technical, administrative, and managerial actions
during the life cycle of an item intended to keep it in or restore it to a state in which it can
perform the required function (Komonen, 2002) (see Figure 41.1). Traditionally, maintenance
has been perceived as an expense account, with performance measures developed
to track direct costs or surrogates such as the headcount of tradesmen and the total duration
of forced outages during a specified period. Fortunately, this perception is changing (Tsang,
1998; Kumar and Liyanage, 2001, 2002a; Kutucuoglu et al., 2001b). In the 21st
century, plant maintenance has evolved as a major area in the business environment and
is viewed as a value-adding function instead of a “bottomless pit of expenses” (Kaplan
and Norton, 1992). The role of plant maintenance in the success of business is crucial in
view of the increased international competition, market globalization, and the demand for
profitability and performance by the stakeholders in business (Labib et al., 1998; Liyanage
and Kumar, 2001b; Al-Najjar and Alsyouf, 2004). Today, maintenance is acknowledged as
a major contributor to the performance and profitability of business organizations (Arts
et al., 1998; Tsang et al., 1999; Oke, 2005). Maintenance managers therefore explore every
opportunity to improve on profitability and performance and achieve cost savings for
the organization (Al-Najjar and Alsyouf, 2004). A major concern has been the issue of
what organizational structure ought to be adopted for the maintenance system: should
it be centralized or decentralized? Such a policy should offer significant savings as well
(HajShirmohammadi and Wedley, 2004).
The maintenance organization is confronted with a wide range of challenges that
include quality improvement, reduced lead times, setup time and cost reductions, capacity
expansion, managing complex technology and innovation, improving the reliability
of systems, and related environmental issues (Kaplan and Norton, 1992; Dwight, 1994,
1999; De Groote, 1995; Cooke and Paulsen, 1997; Duffua and Raouff, 1997; Chan et al.,
2001). However, trends suggest that many maintenance organizations are adopting total
productive maintenance, which is aimed at the total participation of plant personnel in
maintenance decisions and cost savings (Nakajima, 1988, 1989; HajShirmohammadi and
Wedley, 2004). The challenges of intense international competition and market globalization
have placed enormous pressure on maintenance systems to improve efficiency
and reduce operational costs (Hemu, 2000). These challenges have forced maintenance
managers to adopt tools, methods, and concepts that could stimulate performance
growth and minimize errors, and to utilize resources effectively toward making the
organization a “world-class manufacturing” or a “high-performance manufacturing”
plant.
Maintenance information is an essential resource for setting and meeting maintenance
management objectives and plays a vital role within and outside the maintenance
organization. The need for adequate maintenance information is motivated by the following four
factors: (1) an increasing amount of information is available, and data and
information are required on hand and accessible in real time for decision-making
(Labib, 2004); (2) data lifetime is diminishing as a result of shop-floor realities (Labib, 2004);
(3) the way data are accessed has changed (Labib, 2004); and (4) it helps in building
knowledge and in measuring the overall performance of the organization. The computerized
maintenance management system (CMMS) is now a central component of many
companies' maintenance departments, and it offers support on a variety of levels in the
organizational hierarchy (Labib, 2004). Indeed, a CMMS is a means of achieving world-class
maintenance, as it offers a platform for decision analysis and thereby acts as a guide
to management (Labib, 1998; Fernandez et al., 2003).
Consequently, maintenance information systems must contain modules that can
provide management with value-added information necessary for decision support
and decision-making. Computerized maintenance management systems are computer-based
software programs used to control work activities and resources, as well as to
monitor and report work execution. Computerized maintenance management systems
are tools for data capture and data analysis. However, they should also offer the capability to
provide maintenance management with a facility for decision analysis (Bamber
et al., 2003).
cost. Here, the distribution of breakdown probabilities, maintenance cost, and number of
repairs over a period must be precisely computed.
41.3.2 Breakdown maintenance
Breakdown maintenance is sometimes referred to as emergency maintenance. It is a maintenance
strategy in which equipment is allowed to fail before it is repaired. Here, efforts are
made to restore the equipment to operational mode in order to avoid the serious consequences
that may result from the breakdown of the equipment. Such consequences may
take the dimension of safety, economic losses, or excessive idle time. When equipment
breaks down, it may pose safety and environmental risks to workers if it produces fumes
that may be injurious to the health of workers or excessive noise that could damage the
hearing mechanism of human beings. Other consequences may include high production
loss, which would result in economic losses for the company. Consequently, the maintenance
manager must restore the facility to its operating condition immediately.
Many decision situations are complex and poorly understood. No one person has all the
information to make all decisions accurately. As a result, crucial decisions are made by a
group of people. Some organizations use outside consultants with appropriate expertise
to make recommendations for important decisions. Other organizations set up their own
internal consulting groups without having to go outside the organization. Decisions can
be made through linear responsibility, in which case one person makes the final decision
based on inputs from other people. Decisions can also be made through shared responsibility,
in which case a group of people share the responsibility for making joint decisions.
The major advantages of group decision-making are:
1. Ability to share experience, knowledge, and resources. Many heads are better than one. A
group will possess greater collective ability to solve a given decision problem.
2. Increased credibility. Decisions made by a group of people often carry more weight in
an organization.
3. Improved morale. Personnel morale can be positively influenced because many people
have the opportunity to participate in the decision-making process.
4. Better rationalization. The opportunity to observe other people’s views can lead to an
improvement in an individual’s reasoning process.
Some disadvantages of group decision-making are:
1. Difficulty in arriving at a decision. Individuals may have conflicting objectives.
2. Reluctance of some individuals in implementing the decisions.
3. Potential for conflicts among members of the decision group.
4. Loss of productive employee time.
33.11.1 Brainstorming
Brainstorming is a way of generating many new ideas. In brainstorming, the decision
group comes together to discuss alternate ways of solving a problem. The members of
the brainstorming group may be from different departments, may have different backgrounds
and training, and may not even know one another. The diversity of the participants helps
create a stimulating environment for generating different ideas from different
viewpoints. The technique encourages free outward expression of new ideas no matter
how far-fetched the ideas might appear. No criticism of any new idea is permitted during
the brainstorming session. A major concern in brainstorming is that extroverts may
take control of the discussions. For this reason, an experienced and respected individual
should manage the brainstorming discussions. The group leader establishes the procedure
for proposing ideas, keeps the discussions in line with the group’s mission, discourages
disruptive statements, and encourages the participation of all members.
After the group runs out of ideas, open discussions are held to weed out the unsuitable
ones. It is expected that even the rejected ideas may stimulate the generation of other ideas,
which may eventually lead to other favored ideas. Guidelines for improving brainstorming
sessions are presented as follows:
• Focus on a specific problem.
• Keep ideas relevant to the intended decision.
• Be receptive to all new ideas.
• Evaluate the ideas on a relative basis after exhausting new ideas.
• Maintain an atmosphere conducive to cooperative discussions.
• Maintain a record of the ideas generated.
33.11.2 Delphi method
The traditional approach to group decision-making is to obtain the opinion of experienced
participants through open discussions. An attempt is made to reach a consensus among
the participants. However, open group discussions are often biased because of the influence
or subtle intimidation from dominant individuals. Even when the threat of a dominant
individual is not present, opinions may still be swayed by group pressure. This is
called the "bandwagon effect" of group decision-making.
The Delphi method, developed in 1964, attempts to overcome these difficulties by
requiring individuals to present their opinions anonymously through an intermediary.
The method differs from the other interactive group methods because it eliminates
face-to-face confrontations. It was originally developed for forecasting applications, but it has
been modified in various ways for application to different types of decision-making. The
method can be quite useful for project management decisions. It is particularly effective
when decisions must be based on a broad set of factors. The Delphi method is normally
implemented as follows:
1. Problem definition. A decision problem that is considered significant is identified and
clearly described.
2. Group selection. An appropriate group of experts or experienced individuals is formed
to address the particular decision problem. Both internal and external experts may
be involved in the Delphi process. A leading individual is appointed to serve as the
administrator of the decision process. The group may operate through the mail or
gather together in a room. In either case, all opinions are expressed anonymously on
paper. If the group meets in the same room, care should be taken to provide enough
room so that each member does not have the feeling that someone may accidentally
or deliberately observe their responses.
3. Initial opinion poll. The technique is initiated by describing the problem to be addressed
in unambiguous terms. The group members are requested to submit a list of major
areas of concern in their specialty areas as they relate to the decision problem.
4. Questionnaire design and distribution. Questionnaires are prepared to address the
areas of concern related to the decision problem. The written responses to the questionnaires
are collected and organized by the administrator. The administrator
aggregates the responses in a statistical format. For example, the average, mode, and
median of the responses may be computed. This analysis is distributed to the decision group.
Each member can then see how his or her responses compare with the
anonymous views of the other members.