Axel van Lamsweerde

Requirements Engineering
From System Goals to UML Models to Software Specifications

WILEY
A John Wiley and Sons, Ltd., Publication
Copyright © 2009 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester,
West Sussex PO19 8SQ, England

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any
means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and
Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, Saffron House, 6-10 Kirby Street,
London EC1N 8TS, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the
Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed
to [email protected], or faxed to (+44) 1243 770620.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names
used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not
associated with any product or vendor mentioned in this book.
This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the
understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is
required, the services of a competent professional should be sought.
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 6045 Freemont Blvd, Mississauga, Ontario, L5R 4J3, Canada
Wiley also publishes its books in a variety of electronic formats. Some content that appears
in print may not be available in electronic books.
A catalogue record for this book is available from the British Library
ISBN 978-0-470-01270-3
Preface xxi
3 Requirements Evaluation 87
3.1 Inconsistency management 88
3.1.1 Types of inconsistency 88
3.1.2 Handling inconsistencies 89
3.1.3 Managing conflicts: A systematic process 90
3.2 Risk analysis 93
3.2.1 Types of risk 94
3.2.2 Risk management 95
3.2.3 Risk documentation 101
3.2.4 Integrating risk management in the requirements lifecycle 102
3.3 Evaluating alternative options for decision making 105
3.4 Requirements prioritization 108
3.5 Conclusion 112
Summary 113
Notes and Further Reading 114
Exercises 116
Summary 279
Notes and Further Reading 280
Exercises 283
Part II Building System Models for Requirements Engineering 287
16.1.1 Checking the structural consistency and completeness of the model 538
16.1.2 Generation of other views for dedicated analyses 540
16.1.3 Traceability management 540
16.1.4 Analogical model reuse 541
16.2 Semi-formal analysis of goal-oriented models 544
16.2.1 Conflict analysis 544
16.2.2 Heuristic identification of obstacles 549
16.2.3 Threat analysis: From goal models to anti-goal models 551
16.3 Reasoning about alternative options 557
16.3.1 Qualitative reasoning about alternatives 557
16.3.2 Quantitative reasoning about alternatives 560
16.4 Model-driven generation of the requirements document 562
16.5 Beyond RE: From goal-oriented requirements to software architecture 566
16.5.1 Deriving a software data architecture from the object model 567
16.5.2 Deriving an abstract dataflow architecture from the agent and operation models 568
16.5.3 Selecting an architectural style from architectural requirements 570
16.5.4 Architectural refinement from quality requirements 571
Summary 574
Notes and Further Reading 576
Exercises 578
Bibliography 641
Index 669
During the past 60 years of software development for digital computers, development
technique, in one of its dimensions, has evolved in a cyclical pattern. At each successive
stage, developers recognize that their task has been too narrowly conceived: the heart
of the problem is further from the computer hardware than they had thought. Machine code
programming led to Fortran, Cobol and Algol, languages aimed at a more problem-oriented
way of programming. Then, as program size grew with increasing machine capacities, mere
program writing led to notions of program design, software architecture, and software function
specification in the large. In a culminating step, functional specification led to a more explicit
focus on system requirements - the needs and purposes that the system must serve.
As a wider range of applications embraced more ambitious systems, it gradually became
apparent that identifying and capturing system requirements was not an easy task. Published
surveys showed that many systems failed because their requirements had not been accurately
identified and analysed. Requirements defects proved enormously costly to repair at later stages.
By the mid-1980s requirements engineering became recognized as an inchoate discipline, or
sub-discipline, in its own right. Since the early 1990s it has had its own conferences and
a growing literature. It embraces a large spectrum of activities, from discovering the needs
and purposes of stakeholders - everyone who would be in any substantial way touched by
the proposed system - and resolving the inevitable conflicts, to devising detailed human and
computer processes to satisfy the identified system requirements. Requirements engineering
must therefore include investigation and analysis of the world in which the requirements have
their meaning, because it is in, and through, that world that the computer, executing the
developed software, must bring about the desired effects.
Requirements engineering is hard. It is hard to elicit human needs and purposes and to bring
them into harmony. Furthermore, there is an inherent dissonance between the quasi-formal
world of computer programs - defining the programmed machine in each system - and the
non-formal problem world of the system requirements. Programs can be treated as formal
mathematical objects, capable of being proved to satisfy a given formal specification. The
world of system requirements, by contrast, may comprise parts drawn from the natural world,
from human participants, from engineered devices, from the built environment, and from every
context with which the system interacts directly or indirectly. The problem world is typically
heterogeneous and inherently non-formal. We implant the machine in this world, and we
program the machine to monitor and control the world through the narrow interface of states
and events that it can sense and affect directly. To the extent that the system aims at automation,
we are handing to a formally programmed machine a degree of control over a complex and
non-formal reality. The common sense and everyday practical knowledge with which human
beings can deal with the world is replaced by the formal rules embodied in the software. Even
if a system is adaptive, or intelligent or self-healing, its abilities are rigidly bounded by the
machine's programmed behaviour, and by the narrow interface which provides it with its sole
window on the problem world.
Requirements engineers, then, must be at home in both formal and non-formal worlds, and
must be able to bring them together into an effective system. Axel van Lamsweerde has been
among the leaders of the requirements engineering discipline since the 1980s, well qualified
for this role by a strong background in formal computer science - his early publications
were formal papers on concurrency - and an intense practical interest in all aspects of the
engineering of computer-based systems. This splendid book represents the culmination of
nearly two decades of his research and practical experience. He and his colleagues have
developed the KAOS method associated with his name, and have accumulated much practical
experience in developing solutions to realistic problems for its customers and users.
As we might expect, the book does what a book on requirements engineering must ideally
do. The conceptual basis of the book and the KAOS method is the notion of a goal. A goal
is a desirable state or effect or property of the system or of any part of it. This notion is
flexible enough to apply through many levels of analysis and decomposition, from the largest
ambitions of the organization to the detailed specification of a small software module. This
book brings together the most formal and the most non-formal concerns, and forms a bridge
between them. Its subject matter ranges from techniques for eliciting and resolving conflicting
requirements of stakeholders, through the structuring of system goals and their allocation to
agents in the machine and the problem world, to the definition and use of a temporal logic by
which requirements can be formally analysed and the necessary software functionality derived
from the analysis results.
The explanations are copious. Three excellent running examples, drawn from very different
kinds of system, illuminate detailed points at every level. Each chapter includes exercises to
help the reader check that what has been read has also been understood, and often to stimulate
further thought about deeper issues that the chapter has recognized and discussed. Readers
who are practising requirements engineers will find the book an excellent source for learning
or recapitulating effective approaches to particular concerns. To take one example, there is
an incisive discussion - to be found in a section of Chapter 16 - of the task of evaluating
alternative architectures and how to set about it. Another example is the crisp account of
temporal logic, given in a few pages in the following chapter. This account is so clear and
well judged that it can act as both an introduction and a reference tool for all developers
who recognize the power and utility of the formalism and want to use it. The comprehensive
bibliographical commentaries in every chapter map out the terrain of what has by now become
a substantial literature of the requirements engineering discipline.
The author's friends and colleagues, who know him well, have been waiting for this book
with high expectations. These expectations have been amply fulfilled. Readers who have not
yet acquainted themselves deeply with the author's work should begin here, immediately. They
will not be disappointed.
Michael Jackson,
The Open University and Newcastle University
February 2008
Requirements Engineering (RE) is concerned with the elicitation, evaluation, specification,
analysis and evolution of the objectives, functionalities, qualities and constraints
to be achieved by a software-intensive system within some organizational or physical
environment.
The requirements problem has been with us for a long time. In their 1976 empirical study,
Bell and Thayer observed that inadequate, incomplete, inconsistent or ambiguous requirements
are numerous and have a critical impact on the quality of the resulting software. Noting this
for different kinds of projects, they concluded that 'the requirements for a system do not arise
naturally; instead, they need to be engineered and have continuing review and revision'. Some
20 years later, different surveys over a wide variety of organizations and projects in the United
States and in Europe have confirmed the requirements problem on a much larger scale. Poor
requirements have been consistently recognized to be the major cause of software problems
such as cost overruns, delivery delays, failures to meet expectations or degradations in the
environment controlled by the software.
Numerous initiatives and actions have been taken to address the requirements problem.
Process improvement models, standards and quality norms have put better requirements
engineering practices in the foreground. An active research community has emerged with
dedicated conferences, workshops, working groups, networks and journals. Requirements
engineering courses have become integral parts of software engineering curricula.
The topic has also been addressed in multiple textbooks. These fall basically into two
classes. Some books introduce the requirements engineering process and discuss general
principles, guidelines and documentation formats. In general they remain at a fairly high level
of coverage. Other books address the use of modelling notations but are generally more focused
on modelling software designs. Where are such models coming from? How are they built?
What are their underlying requirements? How are such requirements elaborated, organized and
analysed? Design modelling books do not address such issues.
In contrast, this book is aimed at presenting a systematic approach to the engineering of
high-quality requirements documents. The approach covers the entire requirements lifecycle.

Part I of the book introduces the fundamental concepts, principles and techniques for
requirements engineering. It discusses the aim and scope of requirements engineering, the
products and processes involved, requirements qualities to aim at and flaws to avoid, the
critical role of requirements engineering in system and software engineering, and obstacles to
good requirements engineering practices. Key notions such as 'requirement', 'domain property'
and 'assumption' are precisely defined. State-of-the-art techniques for supporting the various
activities in the requirements lifecycle are reviewed next.
• For requirements evolution, various techniques are available for change anticipation,
traceability management, change control and on-the-fly change at system runtime.
To conclude the first part of the book and introduce the next parts, goal orientation is put
forward as a basic paradigm for requirements engineering. Key elements such as goals, agents
and scenarios are defined precisely and related to each other.
Part II is devoted to system modelling in the specific context of engineering require-
ments. It presents a goal-oriented, multiview modelling framework integrating complementary
techniques for modelling the system-as-is and the system-to-be.
• AND/OR goal diagrams are used for capturing alternative refinements of functional and
non-functional objectives, requirements and assumptions about the system.
• AND/OR obstacle diagrams are used for modelling what could go wrong with the system
as modelled, with the aim of deriving new requirements for a more robust system. This
view is especially important for mission-critical systems where safety or security concerns
are essential.
• UML class diagrams are used for defining and structuring the conceptual objects manipu-
lated by the system and referred to in goal formulations.
• Agent diagrams are used for modelling active system components, such as people playing
specific roles, devices and software components, together with their responsibilities and
interfaces.
• Operationalization diagrams and UML use cases are used for modelling and specifying
the system's operations so as to meet the system's goals.
• UML sequence diagrams and state diagrams are used for modelling the desired system
behaviours in terms of scenarios and state machines, respectively.
Each modelling technique is explained separately first, with a strong emphasis on well-
grounded heuristics for model building. The full system model is obtained from those various
views through mechanisms for view integration.
To conclude the second part of the book, a constructive method is presented for elaborating
a full, robust and consistent system model through incremental integration of the goal, object,
agent, operation and behaviour sub-models. Goals and scenarios drive the elaboration and
integration of these sub-models. The elaboration proceeds both top down, from strategic
objectives, and bottom up, from operational material available. The requirements document
is then generated systematically by mapping the resulting model into some textual format
annotated with figures. The document produced preserves the goal-oriented structure and
content of the model, and fits prescribed standards if required.
The model-based requirements engineering approach described in Part II, known as KAOS,
has been developed and refined over more than 15 years of research, tool development and
experience in multiple industrial projects. KAOS stands for 'Keep All Objectives Satisfied'.
(Kaos happens to be the name of an allegorical movie by the Taviani brothers based on Luigi
Pirandello's five tales on the multiple facets of our world.)
Part III reviews goal-based reasoning techniques that support the various steps of this
requirements engineering approach. The transition from requirements to software architecture
is discussed as well. The analysis techniques fall into three complementary classes:
• Query-based techniques can be used for checking model well-formedness, for managing
traceability among model items, and for retrieving reusable model fragments.
• Qualitative and quantitative techniques help evaluate alternative options arising during
the requirements engineering process. Such options correspond to alternative goal
refinements, responsibility assignments, conflict resolutions or countermeasures to the
identified hazards or threats. The evaluation of options is based on the non-functional
goals identified in the goal model.
• Formal techniques can be used incrementally and locally, where and when needed,
to support goal refinement and operationalization, conflict management, analysis of
obstacles to goal achievement, analysis of security threats for countermeasure exploration,
synthesis of behaviour models, and goal-oriented model checking and animation. Such
techniques require the corresponding goals, operations and domain properties to be
specified formally.
Approach
The book presents both a comprehensive state of the art in requirements engineering (Part I)
and a systematic method for engineering high-quality requirements (Parts II and III), anchored
on this state of the art.
Like the method and supporting tools, this book is 'two-button' in nature. The material
covering formal methods for requirements engineering is optional and is concentrated near
the end of the book; the 'formal button' is mostly pressed in Chapters 17 and 18. Formal
techniques are useful in requirements engineering to enforce higher precision in specifications
and to support much richer forms of analysis for requirements quality assurance. They turn
out to be essential for reasoning about critical goals concerning system safety and security.
Formal techniques are, however, mostly hidden from Chapters 1 to 16, even though they are
to some extent involved at different places here and there. The aim is to make solid modelling
techniques more accessible to a much wider audience. For example, formal refinement patterns
are seen in Chapter 18 to produce goal refinements that are provably correct and complete
(Section 18.1). They are introduced informally in Chapter 8 to support the critical task of
refining goals in a systematic way (see the model-building heuristics in Section 8.8). Similarly,
obstacle analysis is handled formally in Chapter 18 but introduced informally in Chapter 9.
Extensive experience with students, tutorial attendees and practitioners over the years shows
that this way of hiding the underlying mathematical apparatus works remarkably well. Like
Molière's Monsieur Jourdain, who is speaking prose without being aware of it, they are using
temporal logic without really knowing it.
On the other hand, other readers with some background in formal methods might be
interested in a more formal treatment of model-based RE from the beginning. Such readers can
press the 'formal button' earlier, as they will have no difficulty in making the hidden formal
apparatus visible. The semi-formal techniques and numerous examples presented in Parts II
and III can easily be translated into the simple formalism based on temporal logic introduced
in Section 4.4.2 and further detailed in Chapter 17.
Unlike many books consisting of a mere exposition of a catalogue of notations and
illustrations of their use, this book puts a strong emphasis on constructive techniques for
building high-quality system models using a coherent subset of notations. A rich variety
of heuristic rules is provided that combines model-building strategies, tactics and patterns,
common pitfalls and bad smells. Much more than specific notations, what matters here is the
quality and usefulness of the models and documents elaborated, and the process according
to which such artefacts are built. Experience in teaching modelling for more than 20 years to
students and practitioners has convinced us that effective guidance in model building is what
is needed most - in the same way as good programming methods, techniques and patterns are
known to be much more important than the use of a specific programming language.
Speaking of notations, we will use standard ones wherever we can. In particular, we will see
how UML class diagrams, use cases, sequence diagrams and state diagrams can be systematically
derived from goal models, and vice versa. The only new notations introduced in the book refer
to abstractions that are crucially missing in the UML for requirements engineering; namely, goal
diagrams, obstacle diagrams and context diagrams.
The concepts, principles and techniques throughout the book are illustrated by numerous
examples from case studies to give the reader more concrete insights into how they can be
used in practical settings. The wide applicability of the techniques is demonstrated through
running examples from completely different domains: an information system, an embedded
control system and a distributed collaborative application to be developed as a product family.
These running examples arise from simplifications of real systems for library management,
train control and meeting scheduling, respectively. The method is also shown in action in
the stepwise elaboration of an entire multi-view model of a mine safety control system. The
requirements document generated semi-automatically from the latter model is shown in the
book's accompanying website.
For more active reading, each chapter ends with a series of exercises, problems and
bibliographical notes. Some of the exercises provide additional case studies for more sub-
stantial experimentation, in particular in student projects. The bibliographical notes are
intended to open the window on past achievements in the field and directions for further
study.
A professional modelling tool that supports the goal-oriented RE method in this book is
freely accessible to the reader for building limited-size models and requirements documents
(http://www.objectiver.com). The tool includes, among other components, a graphical model
editor, an HTML generator for navigation and zooming in/out through large models, a
model database query engine with pre-defined model consistency checks, and a requirements
document generator. The book does not assume that the reader will use this tool. However,
playing with it for building models involved in the book's exercises and case studies, and
generating requirements documents semi-automatically from the models, will result in more
active and enjoyable learning. As a side effect, further insight will be gained on the benefits of
using tools for requirements engineering.
Readership
The book is primarily intended for two categories of reader:
Parts I and II, covering the fundamentals of requirements engineering and model building,
have no real prerequisite. The more advanced techniques in Part III, and Chapters 17 and 18
in particular, assume some elementary background in the logical foundations of computing
science together with more analytical reasoning skills.
Reading graph

[Figure: the book's reading graph - Part I (Fundamentals), Part II (Modelling), Part III (Reasoning)]
The material in the book has been organised to meet different needs. Multiple tracks can
therefore be followed corresponding to different selections of topics and levels of study. Such
tracks define specific paths in the book's reading graph. Arrows in this graph denote reading
precedence, whereas dotted circles indicate partial reading of the corresponding chapter by
skipping some sections.
• Track 1: Model-free introduction to RE. Part I of the book can be used for an RE course
with very little modelling. Along this track, students are expected to follow or have fol-
lowed another course on system modelling. Section 4.3 is provided to summarize popular
modelling notations for RE, defining each of them concisely, highlighting their comple-
mentarity and illustrating their use in the running case studies. Optionally, Section 4.4 on
formal specification and Section 5.4 on formal verification may be skipped for shorter
courses or students with no background in the logical foundations of computing.
• Track 2: Model-based introduction to RE. This track is intended for an RE course with
substantial coverage of modelling techniques. The material in Part I up to Section 4.2
is taken. Section 4.3 is provided as a contextual entry point to subsequent chapters
emphasizing model construction. Section 4.4 (formal specification), Chapter 5 (require-
ments inspection, validation and verification) and/or Chapter 6 (requirements evolution)
are skipped depending on course length or reader focus. The track then proceeds with
Chapter 7 and key chapters from Part II; namely, Chapter 8 on goal modelling, Chapter 10
on object modelling, Chapter 12 on operation modelling and Chapter 15 showing how
the techniques introduced in these chapters fit together to form a systematic model-
building method. Ideally, Chapters 9, 11 and 13 should be included as well to cover risk,
responsibility and behaviour models.
• Track 3: Introduction to early model building for model-driven software engineering. This
track is intended for the RE part of a software engineering course. (I used to follow it
for the first third of my SE course.) It consists of Chapter 1, introducing the basics of RE,
Chapter 7, introducing system modelling from an RE perspective, and then Chapters 8-13
on model building, concluded by Chapter 15 showing a fully worked-out case study. For
shorter coverage, Chapter 11 may be skipped, as key material there is briefly introduced
in Chapters 7, 8 and 10 and briefly recalled in Chapter 12.
• Tracks 4.n: Hybrid RE tracks. Depending on student profile, teacher interests and
course length, multiple selections can be made out of Parts I and II so as to
cover essential aspects of RE and model-based RE. Chapter 1 is required in any
selection. Typical combinations include Chapter 1, Chapter 2, [Chapter 3], Chapter 4
limited to Sections 4.1 and 4.2, [Chapter 5], [Chapter 6], Chapter 7, Chapter 8, [Chapter 9],
Chapter 10, Chapter 12, [Chapter 13] and Chapter 15, where brackets indicate optional
chapters. (I have used such combinations on several occasions.)
• Track 5: The look-ahead formal track. Students with some background in formal methods
do not necessarily have to wait until Part III to see formal modelling and analysis in action.
They will have no difficulty making the material in Part II more formal by expressing the
specifications and patterns there in the temporal logic language introduced in Section 4.4
and detailed in Chapter 17.
• Track 6: The advanced track. A more advanced course on RE, for students who have
had an introductory course before, can put more emphasis on analysis and evolution
by in-depth coverage of the material in Chapter 3, Section 4.4 in Chapter 4, Chapter 5,
Chapter 6, Chapter 9 (if not covered before), Chapter 14, Chapter 16, Chapter 17 and
Chapter 18. This track obviously has prerequisites from preceding chapters.
Additional resources
Lecture slides, additional case studies, solutions to exercises and model-driven requirements
documents from real projects will gradually be made available on the book's Web site.
Acknowledgement
I have wanted (and tried) to write this book for a long time. This means that quite a few people
have been involved in some way or another in the project.
My first thanks go to Emmanuel Letier. The book owes much to our joint work over
10 years. Emmanuel contributed significantly to some of the techniques described in Parts
II and III, notably the techniques for agent-based refinement and goal operationalization. In
addition to that, he created initial models and specifications for several case studies, examples
and exercises in the book. Emmanuel was also instrumental in making some of the pillars of
the modelling framework more solid.
Robert Darimont deserves special thanks too. He initiated the refinement pattern idea and
provided initial insights on goal conflicts. Later he gave lots of feedback from his daily use of the
method and supporting tools in industry. This feedback had a large influence on enhancements,
notably through considerable simplification and polishing of the original framework.
Speaking of the original framework, Steve Fickas and Martin Feather had a strong influence
on it through their work on composite system design. I still believe that Martin's simple but
precise semantics for agent responsibility is the one to rely on.
Many people joined the research staff in the KAOS project and contributed in some way
or another. I wish to thank in particular Christophe Damas, Anne Dardenne, Renaud De
Landtsheer, Bruno Delcourt, Emmanuelle Delor, Françoise Dubisy, Bernard Lambeau, Philippe
Massanet, Cedric Neve, Christophe Ponsard, Andre Rifaut, Jean-Luc Roussel, Marie-Claire
Schayes, Hung Tran Van and Laurent Willemet.
Quite a few students provided valuable feedback from using some of the techniques in their
MS thesis or from studying draft chapters. I would like to acknowledge in particular Nicolas
Accardo, Pierre-Jean Fontaine, Olivier Haine, Laurent Hermoye, Jonathan Lewis, Florence
Massen, Junior F. Monfils, Alessandra de Schrynmakers and Damien Vanderveken.
Many thanks are also due to all those who provided helpful comments and suggestions on
earlier drafts of the book, including Alistair Sutcliffe, Klaus Pohl, Steve Fickas, Bill Robinson
and the Wiley reviewers. Martin Feather gave substantial feedback on my attempts to integrate
his DDP approach in the section on risk analysis. I am also very much indebted to Michael
Jackson for taking time to read the manuscript and write such a nice foreword.
Earlier colleagues at Philips Research Labs provided lifetime stimulation for technical
precision, highly needed in RE, including Michel Sintzoff, Philippe Delsarte and Pierre-Jacques
Courtois. François Bodart at the University of Namur opened a window on the area for me and
excited my attraction to real-world case studies.
Writing a book that in places tries to reconcile requirements engineering (RE) and formal
methods (FM) is quite a challenge. I am indebted to the many RE researchers and practitioners
I met for their scepticism about formal methods, and to the many FM researchers I met for their
scepticism about RE as a respectable area of work. Their combined scepticism contributed a
great deal to the never-ending quest for the Holy Grail.
Besides the multiple laptops and typesetting systems I used during the painful process of
book writing, I would like to acknowledge my cellos and Johann Sebastian Bach's genial suites,
which helped me a great deal in recovering from that pain.
Last but not least, the real thanks go to Dominique for her unbounded patience and support
through years and years - she would most probably have written this sort of book many
times faster; to Nicolas, Florence and Celine for making me look ahead and for joking about
book completion on every occasion; and to Agathe, Ines, Jeanne, Nathan, Louis and ... for
reminding me constantly that the main thing in life cannot be found in books.
The purpose of this part of the book is twofold:
• To introduce the motivation, conceptual background and terminology on which
the rest of the book will rely.
• To provide a comprehensive account of state-of-the-art techniques for require-
ments engineering.
Chapter 1 defines what requirements engineering (RE) is about, its aim and scope, its critical
role in system and software engineering, and its relationship to other disciplines. We will see
there what requirements are, what they are not, and what 'good' requirements are. The chapter
reviews the different categories of requirements found in a project, and the different types of
projects in which such requirements may need to be engineered. The requirements lifecycle
is also discussed together with the various products, activities and actors involved in the RE
process.
The next chapters explain the main techniques available for supporting this process. The
presentation is structured by the activity that such techniques support in the requirements
lifecycle; namely, domain understanding and requirements elicitation, requirements evaluation
and agreement, requirements specification and documentation, requirements quality assurance,
and requirements evolution.

Chapter 2 covers a variety of techniques for understanding the domain in which the software
project takes place and for eliciting the right requirements for a new system. Some techniques
are based on artefacts to help acquire relevant information, such as questionnaires, scenarios,
prototypes or reusable knowledge sources. Other techniques are based on specific kinds of
interaction with system stakeholders to drive the acquisition process, such as interviews,
observations or group sessions.
Chapter 3 addresses the process of evaluating the elicited objectives, requirements and
assumptions about the new system. The evaluation techniques discussed there may help
us manage conflicting concerns, analyse potential risks with the envisaged system, evaluate
alternative options for decision making and prioritize requirements for incremental development
under limited resources.
Once the system objectives, requirements and assumptions have been elicited and evaluated,
we must make them fully precise and organize them into some coherent structure to produce
the requirements document. Chapter 4 overviews specification techniques that may help us in
this task, such as templates in structured natural language, diagrammatic notations for capturing
specific aspects of the system, and formal specification of critical aspects for more sophisticated
analysis.
Chapter 5 reviews the main techniques available for requirements quality assurance. Such
techniques may help us check the requirements document for desired qualities such as
completeness, consistency, adequacy or measurability of statements. They range from informal
to semi-formal to formal techniques. The chapter discusses inspections and reviews, queries
we may submit on a requirements database, requirements validation through specification
animation and requirements verification through formal checks.
Chapter 6 addresses the important problem of managing requirements evolution. As the
world keeps changing, the system objectives, requirements and assumptions may need to be
frequently revised or adapted. The chapter discusses evolution along revisions and variants,
and reviews a variety of techniques for change anticipation, traceability management, change
control and dynamic adaptation at system runtime.
To conclude this first part of the book and introduce the second part, Chapter 7 introduces
goal orientation as a basic paradigm for RE. It defines what goals are and explains why goals
are so important in the RE process. The chapter also relates goals to other key ingredients
of this process, such as requirements regarding the software to be developed, assumptions
about its environment, domain properties, scenarios of interaction between the software and
the environment, and agents involved in such interactions.
This first part of the book provides a framework on which the model-driven techniques
detailed in Parts II and III will be anchored.
This chapter introduces requirements engineering (RE) as a specific discipline in relation
to others. It defines the scope of RE and the basic concepts, activities, actors and artefacts
involved in the RE process. In particular, it explains what requirements are with respect to
other key RE notions such as domain properties and environment assumptions. Functional
and non-functional requirements will be seen to play specific roles in the RE process. The
quality criteria according to which requirements documents should be elaborated and
evaluated will be detailed. We will also see why a careful elaboration of requirements and
assumptions in the early stages of the software lifecycle is so important, and what obstacles
may impinge on good RE practice.
The chapter also introduces three case studies from which running examples will be taken
throughout the book. These case studies will additionally provide a basis for many exercises
at the end of chapters. They are taken from quite different domains to demonstrate the wide
applicability of the concepts and techniques. Although representative of real-world systems,
the case study descriptions have been simplified to make our examples easily understandable
without significant domain expertise. The first case study is a typical instance of an information
system. The second captures the typical flavour of a system partly controlled by software. The
third raises issues that are typical of distributed collaborative applications and product families.
[Figure: statements about an electronic auction system - 'Item delivered only if paid', 'Payment notification sent to seller', 'Payment record created in database']
• The system-as-is, the system as it exists before the machine is built into it.
• The system-to-be, the system as it should be when the machine will be built and operated
in it.
In the previous example of an auction world, the system-as-is is a standard auction system
with no support for electronic bidding. The system-to-be is intended to provide such support
in order to make items biddable from anywhere at any time. In a flight management world, the
system-as-is might include some autopilot software with limited capabilities; the system-to-be
would then include autopilot software with extended capabilities. In the former example the
system-to-be is the outcome of a new software project, whereas in the latter example it results
from a software evolution project.
Note that there is always a system-as-is. Consider a project aimed at developing control
software for an MP4 player, for example. The system-as-is is the conventional system allowing
you to listen to your favourite music on a standard hi-fi subsystem. The system-to-be is intended
to mimic the listening conditions of the system-as-is while providing convenient, anywhere and
any-time access to your music.
As we are concerned with the problem world, we need to consider both the system-as-is, to
understand its objectives, regulating laws, deficiencies and limitations, and the system-to-be,
to elaborate the requirements on the software-to-be accordingly together with assumptions on
the environment.
The systems-to-be-next
If we want to build an evolvable machine in our problem world, we need to anticipate likely
changes at RE time. During software development or after deployment of the system-to-be,
new problems and limitations may arise. New opportunities may emerge as the world keeps
changing. We may then even need to consider more than two system versions and foresee
what the next system versions are likely to be. Beyond the system-as-is and the system-to-be,
there are systems-to-be-next. Requirements evolution management is an important aspect of the
RE process that will be discussed at length in Chapter 6.
*Source: Adapted from S. Fickas, A. Finkelstein, M. Feather and A. van Lamsweerde, 1997, with kind permission of Springer Science+Business Media.
[Figure 1.2: the WHY, WHAT and WHO dimensions of requirements engineering, relating the environment, the system-as-is and the system-to-be]
As we will see more thoroughly in subsequent chapters, such analysis along the WHY
dimension is generally far from simple.
Evaluating alternative options in the problem world There can be alternative ways of
satisfying the same identified objective. We need to assess the pros and cons of such alternatives
in order to select the most preferable one. (Chapters 3 and 16 will present techniques to support
this task.)
Handling conflicts The objectives that the system-to-be should satisfy are generally identified
from multiple sources which have conflicting viewpoints and interests. As a result, there
may be different perceptions of what the problems and opportunities are; and there may be
different views on how the perceived problems should be addressed. In the end, a coherent
set of objectives needs to emerge from agreed trade-offs. (Chapters 3, 16 and 18 will present
techniques to support that task.)
• Example: Library management. All parties concerned with the library system-to-be
will certainly agree that access to state-of-the-art books and journals should be made
more effective. There were sufficient complaints reported about this in the system-as-is.
Conflicts are likely to arise, though, when this global objective is refined into more
concrete objectives in order to achieve it. Everyone will acclaim the objective of improving
the effectiveness of bibliographical search. However, university authorities are likely to
emphasize the objective of cost reduction through integration of department libraries.
Departments might be reluctant to accede to the implications of this, such as losing their
autonomy. On the other hand, library staff might be concerned by strict enforcement of
rules limiting library opening periods, the length of loan periods or the number of loans to
the same patron. In contrast, library patrons might want much more flexible usage rules.
• Example: Train control. All parties will agree on the objectives of faster and
safer transportation. Conflicts will, however, appear between the railway company
management and the unions while exploring the pros and cons of alternative options
with or without drivers, respectively.
• Example: Train control. For the WAX train system-to-be, we must define the service of
computing train accelerations in terms that allow domain experts to establish that the
objective of avoiding collisions of successive trains will be guaranteed. There should be
critical constraints on maximum delays in transmitting acceleration commands to trains,
on the readability of such commands by train drivers so as to avoid confusion and
so forth. Assumptions about the train-tracking subsystem should be made explicit and
validated.
• Example: Library management. The objective of accurate book classification will not
be achieved if department faculty members, who might be made responsible for it, do
not provide accurate keywords when books are acquired in their area. The objective
of limited loan periods for increased availability of book copies will not be achieved if
borrowers do not respond to warnings or threatening reminders, or if the software that
might be responsible for issuing such reminders in time fails to do so.
• Example: Train control. The objective of safe train acceleration will not be achieved if
the software responsible for computing accelerations produces values outside the safety
range, or if the driver responsible for following the safe instructions issued by the software
fails to do so.
Responsibility assignments may also require the evaluation of alternative options. The same
responsibility might be assignable to different system components, each alternative assignment
having its pros and cons. The selected assignment should keep the risks of not achieving
important system objectives, services or constraints as small as possible.
Prescriptive statements state desirable properties about the system that may hold or not
depending on how the system behaves. Such statements need to be enforced by system
components. They are in the optative mood. For example, the following statements are
prescriptive:
• Train doors shall always remain closed when the train is moving.
• A patron may not borrow more than three books at the same time.
• The meeting date must fit the constraints of all important participants.
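Looking ahead, prescriptive statements such as these are what later chapters make fully precise. As a sketch only (the formalization is introduced in Section 4.4 and Chapter 17, not here), the first statement above could be written in temporal logic, with the operator □ reading 'always':

    □ (TrainMoving → DoorsClosed)

that is, in every future state, if the train is moving then its doors are closed.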
The distinction between descriptive and prescriptive statements is essential to make in the
context of engineering requirements. We may need to negotiate, weaken, change or find
alternatives to prescriptive statements. We cannot negotiate, weaken, change or find alternatives
to descriptive statements.
Statement scope
Section 1.1.1 introduced a partition of phenomena into world, machine and shared phenomena
to make the point that RE is concerned with the problem world only. If we focus our
attention on the software part of the machine we want to build, we obtain a similar partition
of phenomena and of the statements formulated in terms of them (see Figure 1.3).
[Figure 1.3: Phenomena and statements about the environment and the software-to-be - environmental phenomena (e.g. TrainMoving, TrainAtStation), phenomena shared between the environment and the software, and software phenomena (e.g. errorCode = 013); system requirements are formulated over environmental phenomena, software requirements over shared phenomena]

A system requirement is a prescriptive statement to be satisfied by the system-to-be and formulated in terms of environmental phenomena. For example:
• All train doors shall always remain closed while a train is moving.
• Patrons may not borrow more than three books at a time.
• The constraints of a participant invited to a meeting should be known as soon as possible.
Satisfying system requirements may require the cooperation of other system components in
addition to the software-to-be. In the first example above, the software train controller might
be in charge of the safe control of doors; the cooperation of door actuators is also needed,
however (passengers should also be required to refrain from opening doors unsafely).
As we will see in Section 1.1.6, the system requirements are to be understood and agreed
by all parties concerned with the system-to-be. Their formulation in terms of environmental
phenomena, in the vocabulary used by such parties, will make this possible.
A software requirement is a prescriptive statement to be enforced solely by the software-
to-be and formulated only in terms of phenomena shared between the software and the
environment. For example:
• The doorsState output variable shall always have the value 'closed' when the measuredSpeed input
variable has a non-null value.
• The recorded number of loans by a patron may never exceed a maximum number x.
• A request for constraints shall be e-mailed to the address of every participant on the meeting
invitee list.
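Because a software requirement refers only to shared phenomena, it can be enforced directly by the software. As an illustration only (the variable encodings below are assumptions for this sketch, not part of the book's method), the first software requirement above boils down to a trivial control rule:

    # Hypothetical sketch: enforce "doorsState = 'closed' whenever
    # measuredSpeed is non-null" over the shared variables.
    def commanded_doors_state(measured_speed: float) -> str:
        if measured_speed != 0:
            return 'closed'    # the requirement forces this output value
        return 'may_open'      # doors may be opened when no speed is measured

    assert commanded_doors_state(42.0) == 'closed'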
Note that a software requirement is a particular system requirement, as shared phenomena are
environmental phenomena; the converse is not true (see Figure 1.3). When no ambiguity arises,
we will often use the term requirement as a shorthand for 'system requirement'.
The notion of system requirement is sometimes referred to as 'user requirement' or 'customer
requirement' in the literature or in descriptions of good practice. The notion of software
requirement is sometimes referred to as 'product requirement', 'specification' or even, mislead-
ingly, 'system requirement'. We will avoid those phrases in view of possible confusion. For
example, many 'user requirements' do not come from any software user; a 'system' does not
only consist of software; a 'specification' may refer in the software engineering literature both
to a process and to a variety of different products along the software lifecycle (requirements
specification, design specification, module specification, test case specification etc.).
A domain property is a descriptive statement about the problem world. It is expected to hold
invariably regardless of how the system will behave - and even regardless of whether there
will be any software-to-be or not. Domain properties typically correspond to physical laws
that cannot be broken - for example, the property that a train is physically moving if and only
if its physical speed is non-null.

An assumption is a statement to be satisfied by the environment of the system-to-be. For
example:

• A train's measured speed is non-null if and only if its physical speed is non-null.
• The recorded number of loans by a borrower is equal to the actual number of book copies physically
borrowed by him or her.
• Borrowers who receive threatening reminders after the loan deadline has expired will return books
promptly.
• Participants will promptly respond to e-mail requests for constraints.
• A participant is on the invitee list for a meeting if and only if he or she is invited to that meeting.
Assumptions are generally prescriptive, as they constrain the behaviour of specific environmen-
tal components. For example, the first assumption in the previous list constrains speedometers
in our train control system.
The formulation of requirements, domain properties and assumptions might be adequate
or not. We will come back to this throughout the book. The important point here is their
difference in mood and scope.
Definitions are the last type of statement involved in the RE process. They allow domain
concepts and auxiliary terms to be given a precise, complete and agreed meaning - the same
meaning for everyone. For example:
• TrainMoving is the name for a phenomenon in the environment that accounts for the fact that the
train being considered is physically moving on a block.
• A patron is any person who has registered at the corresponding library for the corresponding period
of time.
• A person participates in a meeting if he or she attends that meeting from beginning to end.
Unlike statements of other types, definitions have no truth value. It makes no sense to say
that a definition is satisfied or not. However, we need to check definitions for accuracy,
completeness and adequacy. For example, we might question the above definition of what it
means for a person to participate in a meeting; as a result, we might refine the concept of
participation into two more specialized concepts instead - namely, full participation and partial
participation.
In view of their difference in mood and scope, the statements emerging from the RE process
should be 'typed' when we document them (we will come back to this in Section 4.2.1). Anyone
using the documentation can then directly figure out whether a statement is a requirement, a
domain property, an assumption or a definition.
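Such typing can be supported by very simple tooling. As a minimal sketch (the record structure and field names here are hypothetical, not a notation from the book), each statement in a requirements database could carry its type explicitly:

    from dataclasses import dataclass

    @dataclass
    class Statement:
        text: str
        # one of: 'system requirement', 'software requirement',
        # 'domain property', 'assumption', 'definition'
        stmt_type: str

    doc = [
        Statement("Train doors shall always remain closed when the train "
                  "is moving", "system requirement"),
        Statement("A train's measured speed is non-null if and only if its "
                  "physical speed is non-null", "assumption"),
    ]

    for s in doc:
        print(f"[{s.stmt_type}] {s.text}")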
These different types of variable yield a more explicit framework for control systems, known
as the four-variable model (Parnas and Madey, 1995); see Figure 1.4. As we can see there,
input/output devices are highlighted as special interface components between the control
software and its environment.
In this framework, we can define system requirements and software requirements as distinct
mathematical relations, where M, C, I and O denote the sets of values of the monitored,
controlled, input and output variables, respectively. Let us use the standard notations ⊆ and ×
for set inclusion and set Cartesian product, respectively:

SysReq ⊆ M × C
SofReq ⊆ I × O
[Figure 1.4: the four-variable model - monitored variables (e.g. trainSpeed) are sensed by input devices into input variables (e.g. measuredSpeed); output variables are transformed by output devices into controlled variables]
A software requirement SofReq 'translates' the corresponding system requirement SysReq in the
vocabulary of the software's input/output variables.
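To make these relations concrete, here is a hedged sketch encoding them as predicates for the train example (variable names and value encodings are illustrative assumptions):

    # SysReq ⊆ M x C: relates a monitored value (train speed) to a
    # controlled value (physical door state).
    def sys_req(train_speed: float, physical_doors: str) -> bool:
        return physical_doors == 'closed' if train_speed != 0 else True

    # SofReq ⊆ I x O: relates an input value (measured speed) to an output
    # value (doorsState), the only variables the software sees and sets.
    def sof_req(measured_speed: float, doors_state: str) -> bool:
        return doors_state == 'closed' if measured_speed != 0 else True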
Satisfaction arguments
Such translation of a system requirement into a software requirement is not a mere reformulation
obtained by mapping the environment's vocabulary into the software's one. Domain properties
and assumptions are often required to ensure the 'correctness' of the translation; that is, the
satisfaction of the system requirement when the corresponding software requirement holds.
Let us illustrate this very important point. We first introduce some shorthand notations:
A → B for 'if A then B', A ↔ B for 'A if and only if B'.
We may express the above examples of system requirement, software requirement, domain
property and assumption for our train system in the shorter form:
(SysReq:) TrainMoving → DoorsClosed
(SofReq:) measuredSpeed ≠ 0 → doorsState = 'closed'
(Dom:) TrainMoving ↔ trainSpeed ≠ 0
(Asm:) measuredSpeed ≠ 0 ↔ trainSpeed ≠ 0
       doorsState = 'closed' ↔ DoorsClosed
To ensure that the software requirement SofReq correctly translates the system requirement
SysReq in this simple example, we need to identify the domain property Dom and the
assumptions Asm, and make sure that those statements are actually satisfied. If this is the case,
we can obtain SysReq from SofReq by the following rewriting: (a) we replace measuredSpeed ≠
0 in SofReq by TrainMoving, thanks to the first equivalence in the assumptions Asm and then the
equivalence in the domain property Dom; and (b) we replace doorsState = 'closed' in SofReq by
DoorsClosed thanks to the second equivalence in Asm.
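The rewriting amounts to the following derivation chain, shown here as a sketch where each step applies one of the equivalences just named:

    measuredSpeed ≠ 0 → doorsState = 'closed'     (SofReq)
    trainSpeed ≠ 0 → doorsState = 'closed'        (first equivalence in Asm)
    TrainMoving → doorsState = 'closed'           (equivalence in Dom)
    TrainMoving → DoorsClosed                     (second equivalence in Asm)

The last line is exactly SysReq.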
The assumptions in Asm are examples of accuracy statements, to be enforced here by
the speedometer and door actuator, respectively. Accuracy requirements and assumptions
form an important class of non-functional statements to be considered in the RE process (see
Section 1.1.5). Overlooking them or formulating wrong ones has sometimes been the cause of
major software disasters. We will come back to this throughout the book.
Our job as requirements engineers is to elicit, make precise and consolidate requirements,
assumptions and domain properties. Then we need to provide satisfaction arguments taking
the following form:
if the software requirements in set SOFREQ are satisfied by the software, the assumptions in set ASM are
satisfied by the environment, the domain properties in set DOM hold and all those statements are consistent
with each other,
then the system requirements SysReq are satisfied by the system.
Such a satisfaction argument could not be provided in our train example without the statements
Asm and Dom previously mentioned. Satisfaction arguments require environmental assumptions
and domain properties to be elicited, specified and validated. For example, is it the case that
the speedometer and door actuator will always enforce the first and second assumptions in
Asm, respectively?
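Since this example is purely propositional once each atomic condition is abstracted as a Boolean (an abstraction made here for illustration only), the satisfaction argument can even be checked mechanically by brute force; a minimal sketch:

    from itertools import product

    # Atomic conditions of the train example, abstracted as Booleans.
    atoms = ('TrainMoving', 'trainSpeedNonNull', 'measuredSpeedNonNull',
             'doorsStateClosed', 'DoorsClosed')

    def argument_holds() -> bool:
        for values in product((False, True), repeat=len(atoms)):
            v = dict(zip(atoms, values))
            dom = v['TrainMoving'] == v['trainSpeedNonNull']
            asm = (v['measuredSpeedNonNull'] == v['trainSpeedNonNull']
                   and v['doorsStateClosed'] == v['DoorsClosed'])
            sof_req = (not v['measuredSpeedNonNull']) or v['doorsStateClosed']
            sys_req = (not v['TrainMoving']) or v['DoorsClosed']
            if dom and asm and sof_req and not sys_req:
                return False  # counterexample: premises hold, SysReq violated
        return True

    print(argument_holds())  # True: Dom, Asm and SofReq together entail SysReq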
In Chapter 6, we will see that satisfaction arguments play an important role in managing
the traceability among requirements and assumptions for requirements evolution. In Part II of
this book, we will extend them to higher-level arguments for goal satisfaction by requirements
and assumptions.

1.1.5 Categories of requirements
In the above typology of statements, the requirements themselves are of different kinds.
Roughly, functional requirements refer to services that the software-to-be should provide,
whereas non-functional requirements constrain how such services should be provided.
Functional requirements define the functional effects that the software-to-be is required to have
on its environment. They address the 'WHAT' aspects depicted in Figure 1.2. Here are some
examples:
• The bibliographical search engine shall provide a list of all library books on a given subject.
• The train control software shall control the acceleration of all the system's trains.
• The meeting scheduler shall determine schedules that fit the diary constraints of all invited
participants.
Functional requirements characterize units of functionality that we may want to group into
coarser-grained functionalities that the software should support. For example, bibliographical
search, loan management and acquisition management are overall functionalities of the library
software-to-be. Units of functionality are sometimes called features in some problem worlds;
for example, call forwarding and call reactivation are features generally provided in telephony
systems.
Non-functional requirements
Non-functional requirements define constraints on the way the software-to-be should satisfy its
functional requirements or on the way it should be developed. For example:
• The format for submitting bibliographical queries and displaying answers shall be accessible to
students who have no computer expertise.
• Acceleration commands shall be sent to every train every 3 seconds.
• The diary constraints of a participant may not be disclosed to any other invited participant.
The wide range of such constraints makes it helpful to classify them in a taxonomy (Davis,
1993; Robertson & Robertson, 1999; Chung et al., 2000). Specific classes can then be char-
acterized more precisely. Browsing through the taxonomy may help us acquire instances of
the corresponding classes that might have been overlooked (Section 2.2.7 will come back to
this).
Figure 1.5 outlines one typical classification. The taxonomy there is not meant to be
exhaustive, although it covers the main classes of non-functional requirements.
Quality requirements
Quality requirements state additional, quality-related properties that the functional effects of the
software should have. They are sometimes called 'quality attributes' in the software engineering
literature. Such requirements complement the 'WHAT' aspects with 'HOW WELL' aspects. They
appear on the left-hand side in Figure 1.5.
Safety requirements are quality requirements that rule out software effects that might result
in accidents, degradations or losses in the environment. For example:
• The controlled accelerations of trains shall always guarantee that a worst-case stopping distance is
maintained between successive trains.
[Figure 1.5: A taxonomy of non-functional requirements. Quality-of-service requirements specialize into safety, security (confidentiality, integrity, availability), reliability, accuracy, performance (time, space) and interface requirements (user interaction - useability, convenience - device interaction, software interoperability); the other branches cover compliance requirements, architectural constraints and development constraints (cost, deadline, variability). Arrows in the figure denote sub-class links.]
Security requirements are quality requirements that prescribe the protection of system assets
against undesirable environment behaviours. This increasingly critical class of requirements is
traditionally split into subcategories such as the following (Amoroso, 1994; Pfleeger, 1997).
Confidentiality requirements state that some sensitive information may never be disclosed
to unauthorized parties. For example:
• A non-staff patron may never know which books have been borrowed by others.
Among these, privacy requirements state that some private information may never be disclosed
without the consent of the owner of the information. For example:
• The diary constraints of a participant may never be disclosed to other invited participants without
his or her consent.
Integrity requirements state that some information may be modified only if correctly done and
with proper authorization. For example:
• The return of book copies shall be encoded correctly and by library staff only.
Availability requirements state that some information or resource can be used at any point in
time when it is needed and its usage is authorized. For example:
• A blacklist of bad patrons shall be made available at any time to library staff.
• Information about train positions shall be available at any time to the vital station computer.
Reliability requirements constrain the software to operate as expected over long periods of time.
Services must be provided in a correct and robust way in spite of exceptional circumstances.
For example:
• The train acceleration control software shall have a mean time between failures of the order of
10⁹ hours.
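To give such a figure operational meaning, we may relate MTBF to a failure probability. Assuming an exponential failure model - an assumption added here for illustration, as the requirement itself prescribes no model - the probability of a failure within operating time t is

\[
P(\text{failure within } t) \;=\; 1 - e^{-t/\mathit{MTBF}},
\qquad\text{e.g.}\quad 1 - e^{-10/10^{9}} \;\approx\; 10^{-8}
\]

for a ten-hour journey.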
Accuracy requirements are quality requirements that constrain the state of the information
processed by the software to reflect the state of the corresponding physical information in the
environment accurately. For example:
• A copy of a book shall be stated as available by the loan software if and only if it is actually available
on the library shelves.
• The information about train positions used by the train controller shall accurately reflect the actual
position of trains up to X metres at most.
• The constraints used by the meeting scheduler should accurately reflect the real constraints of
invited participants.
Performance requirements are quality requirements that constrain the software's operational
conditions, such as the time or space required by operations, the frequency of their activation,
their throughput, the size of their input or output and so forth. For example:
Performance requirements may concern other resources in addition to time or space, such as
money spent in operational costs. For example:
Interface requirements are quality requirements that constrain the phenomena shared by the
software-to-be and the environment (see Figure 1.3). They refer to the static and dynamic
aspects of software-environment interactions; input/output formats and interaction sequences
should be compatible with what the environment expects. Interface requirements cover a wide
range of concerns depending on which environmental component the software is interacting
with.
For human interaction, useability requirements prescribe input/output formats and user
dialogues to fit the abstractions, abilities and expectations of the target users. For example:
• The format for bibliographical queries and answers shall be accessible to students from any
department.
Other human interaction requirements may constrain software effects so that users feel them
to be 'convenient' in some system-specific sense. For example:
• To ensure smooth and comfortable train moves, the difference between the accelerations in two
successive commands sent to a train should be at most x.
• To avoid disturbing busy people unduly, the amount of interaction with invited participants for
organizing meetings should be kept as low as possible.
• The meeting scheduling software should be interoperable with the wss Agenda Manager product.
Figure 1.5 covers other categories of non-functional requirements in addition to quality require-
ments.
Compliance requirements
Compliance requirements prescribe software effects on the environment to conform to national
laws, international regulations, social norms, cultural or political constraints, standards and the
like. For example:
• The value for the worst-case stopping distance between successive trains shall be compliant with
international railways regulations.
• The meeting scheduler shall by default exclude official holidays associated with the target market.
Architectural requirements
Architectural requirements impose structural constraints on the software-to-be to fit its environ-
ment, typically:
• The on-board train controllers shall handle the reception and proper execution of acceleration
commands sent by the station computer.
• The meeting scheduling software should cooperate with email systems and e-agenda managers of
participants distributed worldwide.
• The meeting scheduling software should run on Windows version X.x and Linux version Y.y.
Architectural requirements reduce the space of possible software architectures. They may guide
developers in the selection of an appropriate architectural style, for example an event-based
style. We will come back to this in Section 16.5.
Development requirements
Development requirements are non-functional requirements that do not constrain the way the
software-to-be should satisfy its functional requirements but rather the way it should be developed
(see the right-hand part of Figure 1.5). These include requirements on development costs,
delivery schedules, variability of features, maintainability, reusability, portability and the like.
For example:
• The overall cost of the new UWON library software should not exceed x.
• The train control software should be operational within two years.
• The software should provide customized solutions according to variations in type of meeting
(professional or private, regular or occasional), type of meeting location (fixed, variable) and type of
participant (same or different degrees of importance).
• The safety injection signal shall be on whenever there is a loss of coolant except during normal
start-up or cool down.
• There are requirements that rule out unacceptable behaviours. For example, any train
controller behaviour that results in trains being too close to each other must be avoided.
Many safety, security and accuracy requirements are of this kind.
• There are requirements that indicate preferred behaviours. For example, the requirement
that 'participants shall be notified of the scheduled meeting date as soon as possible' states a
preference for scheduler behaviours where notification is sooner over behaviours where
notification is later. Likewise, the requirement that 'interactions with participants should
be kept as limited as possible' states a preference for scheduler behaviours where there
are fewer interactions (e.g. through e-agenda access) over behaviours where there are
more interactions (e.g. through e-mail requests and pestering). Many performance and
'-ility' requirements are of this kind, for example useability, reuseability, portability or
maintainability requirements. When alternative options are raised in the RE process,
we will use such requirements to discard alternatives and select preferred ones (see
Chapters 8 and 16).
3. Differentiation between confined and cross-cutting concerns. Functional requirements tend
to address single points of functionality. In contrast, non-functional requirements tend
to address cross-cutting concerns; the same requirement may constrain multiple units of
functionality. In the library system, for example, the useability requirement on accessibility of
input/output formats to non-expert users constrains the bibliographical search functionality.
It may, however, constrain other functionalities as well, for example user registration or
book reservation. Similarly, the non-disclosure of participant constraints might affect multiple
points of functionality such as meeting notification, information on the current status of
planning, replanning and so on.
4. Basis for RE heuristics. The characterization of categories in a requirements taxonomy yields
helpful heuristics for the RE process. Some heuristics may help elicit requirements that were
overlooked, for example:
• Is there any accuracy requirement on information x in my system?
• Is there any confidentiality requirement on information Y in my system?
Other heuristics may help discover conflicts among instances of requirements categories
known to be potentially conflicting, for example:
• Is there any conflict in my system between hiding information on display for better useability
and showing critical information for safety reasons?
• Is there any conflict in my system between password-based authentication and useability
requirements?
• Is there any conflict in my system between confidentiality and accountability requirements?
We will come back to such heuristics in Chapters 2 and 3 while reviewing techniques for
requirements elicitation and evaluation. As we will see there, conflict detection is a prerequisite
for the elaboration of new requirements for conflict resolution.
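To make this concrete, here is a minimal sketch of how taxonomy-driven heuristics might be generated mechanically. The question templates, conflict-prone category pairs and domain items below are illustrative assumptions, not material from this chapter:

# Sketch: generating elicitation and conflict-detection questions
# from a requirements taxonomy. All data below are assumptions.

QUESTION_TEMPLATES = {
    "accuracy": "Is there any accuracy requirement on information '{item}' in my system?",
    "confidentiality": "Is there any confidentiality requirement on information '{item}' in my system?",
    "availability": "Is there any availability requirement on '{item}' in my system?",
}

# Pairs of requirement categories known to be potentially conflicting.
CONFLICT_PRONE_PAIRS = [
    ("confidentiality", "accountability"),
    ("useability", "safety"),
]

def elicitation_questions(items):
    """Instantiate every question template on every domain item."""
    return [template.format(item=item)
            for template in QUESTION_TEMPLATES.values()
            for item in items]

def conflict_questions():
    """One conflict-detection question per conflict-prone category pair."""
    return [f"Is there any conflict in my system between {a} and {b} requirements?"
            for a, b in CONFLICT_PRONE_PAIRS]

for question in elicitation_questions(["train position", "participant constraints"]):
    print(question)
for question in conflict_questions():
    print(question)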
Domain understanding
This activity consists of studying the system-as-is within its organizational and technical context.
The aim is to acquire a good understanding of the context of the system-to-be.
More specifically, we need to get an accurate and comprehensive picture of the following
aspects:
• The organization within which the system-as-is takes place: its structure, strategic objec-
tives, business policies, roles played by organizational units and actors, and dependencies
among them.
• The scope of the system-as-is: its underlying objectives, the components forming it, the
concepts on which it relies, the tasks involved in it, the information flowing through it,
and the constraints and regulations to which the system is subject.
• The set of stakeholders to be involved in the RE process.
• The strengths and weaknesses of the system-as-is, as perceived by the identified stake-
holders.
The product of this activity typically consists of the initial sections in a preliminary draft
proposal that describe those contextual aspects. This proposal will be expanded during the
elicitation activity and then used by the evaluation activity that comes after.
In particular, a glossary of terms should be established to provide definitions of key concepts
on which everyone should agree. For example, in the library system-as-is, what precisely is a
patron? What does it mean to say that a requested book is being reserved? In the train system,
what precisely is a block? What does it mean to say that a train is at a station? In the meeting
scheduling system, what is referred to by the term 'participant'? What does it mean to say that
a person is invited to a meeting or participates in it? What precisely are participant constraints?
A glossary of terms will be used throughout the RE process, and even beyond, to ensure
that the same term does not refer to different concepts and the same concept is not referred to
under different terms.
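Once recorded as data, a glossary can even be checked mechanically for these two pitfalls. The following minimal sketch is an illustration over assumed term/definition pairs, not a tool described in this book:

# Sketch: checking a glossary for terminological consistency.
# The term/definition pairs are illustrative assumptions.
from collections import defaultdict

glossary = [
    ("patron", "person registered as a library user"),
    ("borrower", "person registered as a library user"),  # same concept, second term
    ("block", "track segment protected by a signal"),
    ("block", "unit of memory allocation"),               # same term, second concept
]

term_to_defs = defaultdict(set)
def_to_terms = defaultdict(set)
for term, definition in glossary:
    term_to_defs[term].add(definition)
    def_to_terms[definition].add(term)

for term, defs in term_to_defs.items():
    if len(defs) > 1:
        print(f"Ambiguous term '{term}' denotes several concepts: {sorted(defs)}")
for definition, terms in def_to_terms.items():
    if len(terms) > 1:
        print(f"Synonyms {sorted(terms)} denote the same concept: '{definition}'")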
Domain understanding is typically performed by studying key documents, investigating
similar systems and interviewing or observing the identified stakeholders. The cooperation of
the latter is obviously essential for our understanding to be correct. Chapter 2 will review
techniques that may help us in this task.
Requirements elicitation
This activity consists of discovering candidate requirements and assumptions that will shape
the system-to-be, based on the weaknesses of the system-as-is as they emerge from domain
understanding. What are the symptoms, causes and consequences of the identified deficiencies
and limitations of the system-as-is? How are they likely to evolve? How could they be addressed
in the light of new opportunities? What new business objectives could be achieved then?
The aim is thus to explore the problem world with stakeholders and acquire the following
information:
• The opportunities arising from the evolution of technologies and market conditions that
could address the weaknesses of the system-as-is while preserving its strengths.
• The improvement objectives that the system-to-be should meet with respect to such
weaknesses and opportunities, together with alternative options for satisfying them.
• The organizational and technical constraints that this system should take into account.
• Alternative boundaries that we might consider between what will be automated by the
software-to-be and what will be left under the responsibility of the environment.
• Typical scenarios illustrating desired interactions between the software-to-be and its
environment.
• The domain properties and assumptions about the environment that are necessary for
the software-to-be to work properly.
• The requirements that the software-to-be should meet in order to conform to all of the
above.
The requirements are by no means there when the project starts. We need to discover them
incrementally, in relation to higher-level concerns, through exploration of the problem world.
Elicitation is a cooperative learning process in which the requirements engineer and the system
stakeholders work in close collaboration to acquire the right requirements. This activity is
obviously critical. If done wrong, it will result in poor requirements and, consequently, in poor
software.
The product of the elicitation activity typically consists of additional sections in the
preliminary draft proposal initiated during the domain understanding activity. These sections
document the items listed above. The resulting draft proposal will be used as input to the
evaluation activity coming next.
The elicitation process can be supported by a variety of techniques, such as knowledge
reuse, scenarios, prototyping, interviews, observation and the like. Chapter 2 will discuss these.
Requirements evaluation
This activity consists of evaluating and negotiating the material elicited in the previous activity.
In particular:
• Conflicting concerns must be identified and resolved. These often arise from multiple
viewpoints and different expectations.
• There are risks associated with the system that is being shaped. They must be assessed
and resolved.
• The alternative options identified during elicitation must be compared with regard to
quality objectives and risks, and best options must be selected on that basis.
• Requirements prioritization is often necessary for a number of reasons:
a. Favouring higher-priority requirements is a standard way of resolving conflicts.
b. Dropping lower-priority requirements provides a way of integrating multiple wishlists
that would together exceed budgets and deadlines.
c. Priorities make it easier to plan an incremental development process, and to replan
the project during development as new constraints arise such as unanticipated delays,
budget restrictions, deadline contractions etc.
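As a small illustration of point b above, the sketch below keeps the highest-priority requirements that fit a given budget. The requirement names, priority levels, cost figures and the greedy selection policy are all assumptions made for the example, not a method prescribed by the text:

# Sketch: dropping lower-priority requirements to fit a budget.
# Names, priorities (1 = highest) and costs are illustrative assumptions.

requirements = [
    ("send acceleration commands every 3 seconds", 1, 30),  # (name, priority, cost)
    ("customize solutions per meeting type",       3, 25),
    ("maintain worst-case stopping distance",      1, 40),
    ("interoperate with e-agenda managers",        2, 20),
]

def select_within_budget(reqs, budget):
    """Greedily keep requirements in priority order while the budget lasts."""
    selected, remaining = [], budget
    for name, priority, cost in sorted(reqs, key=lambda r: r[1]):
        if cost <= remaining:
            selected.append(name)
            remaining -= cost
    return selected

print(select_within_budget(requirements, budget=75))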
The product of this activity typically consists of final sections in the preliminary draft proposal
initiated during the preceding activities. These sections document the decisions made after
assessment and negotiation. They highlight the agreed requirements and assumptions about
the selected system-to-be. The system proposal thereby obtained will serve as input to the
specification activity coming next.
The evaluation process can be supported by a variety of qualitative and quantitative
techniques. Chapters 3 and 16 will provide a comprehensive sample of these.
Requirements consolidation
The purpose of this activity is quality assurance. The specifications resulting from the preceding
activity must be carefully analysed. They should be validated with stakeholders in order to
pinpoint inadequacies with respect to actual needs. They should also be verified against each
other in order to find inconsistencies and omissions before the software requirements are
transmitted to developers. Any error found must be fixed. The sooner an error is found, the
cheaper the fix will be.
The main product of this activity is a consolidated requirements document, where the
detected errors and flaws have been fixed throughout the document. Other products may
include a prototype or mock-up built for requirements validation, additional test data coming
out of verification, a proposed development plan, the contract linking the client and the
software developer, and a call for tenders in the case of development subcontracting.
Section 1.1.7 will detail the quality criteria addressed by this activity more precisely, together
with the various types of errors and flaws that may need to be fixed. Section 1.2 will discuss the
consequences of not fixing them. Chapter 5 will present techniques for requirements quality
assurance.
Iterations of the RE process may be required whenever requirements, assumptions or domain
properties need to be revised:
• Within the RE process itself, as such statements are found during consolidation to be
missing, inadequate or inconsistent with others.
• During software development, as such statements turn out to be missing, unfeasible or
too costly to implement, incompatible with new implementation constraints, or no longer
adequate as the problem world has evolved in the meantime.
• After software deployment, as the problem world has evolved or must be customized to
specific contexts.
'Late' iterations of the RE process will be further discussed in Chapter 6 on evolution manage-
ment and in Section 16.5 where the interplay between RE and architectural design will appear
more clearly.
The spiral process model depicted in Figure 1.6 is fairly general and flexible. It may need
to be specialized and adapted to the specificities of the problem world and to the standards
of the host organization, for example by further defining the nature of each increment or
the intertwining with software development cycles. The important points, though, are the
range of issues to consider, the complementarity and difference among RE activities, their data
dependencies and the iterative nature of the RE process.

[Figure 1.6: The RE process as a spiral of iterated activities - domain understanding and elicitation yielding alternative proposals; evaluation and agreement yielding agreed requirements; specification and documentation yielding documented requirements; and quality assurance yielding consolidated requirements.]
• Feasibility. The requirements must be realizable in view of the budget, schedule and
technology constraints.
• Comprehensibility. The formulation of requirements, assumptions and domain properties
must be comprehensible to the people who need to use them.
• Good structuring. The requirements document should be organized in a way that
highlights the structural links among its elements - refinement or specialization links,
dependency links, cause-effect links, definition-use links and so forth. The definition of
a term must precede its use.
• Modifiability. It should be possible to revise, adapt, extend or contract the requirements
document through modifications that are as local as possible.
• Traceability. The context in which an item of the requirements document was created,
modified or used should be easy to retrieve. This context should include the rationale
for creation, modification or use. The impact of creating, modifying or deleting that
item should be easy to assess. The impact may refer to dependent items in the require-
ments document and to dependent artefacts subsequently developed - architectural
descriptions, test data, user manuals, source code etc. (Traceability management will be
discussed at length in Section 6.3.)
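As a minimal illustration of such impact assessment, traceability links can be recorded as a graph and dependent artefacts retrieved transitively. The items and links below are invented for the example:

# Sketch: forward impact analysis over traceability links.
# Items and dependency links are illustrative assumptions.
from collections import deque

# depends_on[x] lists the items that directly depend on item x.
depends_on = {
    "goal: safe train acceleration": ["req: worst-case stopping distance"],
    "req: worst-case stopping distance": ["arch: on-board controller", "test: braking scenario"],
    "arch: on-board controller": ["code: controller module"],
}

def impact(item):
    """Return every artefact transitively impacted by changing 'item'."""
    impacted, queue = set(), deque([item])
    while queue:
        for dependant in depends_on.get(queue.popleft(), []):
            if dependant not in impacted:
                impacted.add(dependant)
                queue.append(dependant)
    return impacted

print(sorted(impact("goal: safe train acceleration")))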
Note that critical qualities such as completeness, adequacy and pertinence are not defined in an
absolute sense; they are relative to the underlying objectives and needs of a new system. The
latter may themselves be implicit, unclear or even unidentified. Those qualities can therefore
be especially hard to enforce.
Section 1.2.1 will review some facts and figures about project failures that are due to
poor-quality requirements. Elaborating a requirements document that meets all of the above
qualities is essential for the success of a software project. The techniques described in this book
are aimed at supporting this task. As a prerequisite, we should be aware of the corresponding
types of defect to avoid.
• Omissions may result in the software failing to implement an unstated critical requirement,
or failing to take into account an unstated critical assumption or domain property.
• We cannot produce a correct implementation from a set of requirements, assumptions
and domain properties that contradict each other.
In addition to errors, there are flaws whose consequences are in general less severe. In the
best cases they result in a waste of effort and associated risks:
• Useless effort in finding out that some noisy or overspecified aspects are not
needed - with the risk of sticking to overspecified aspects that may prevent better
solutions from being taken.
• Useless effort in determining what requirements to stick to in unfeasible situations - with
the risk of dropping important requirements.
The various types of defect in Table 1.1 may originate from any RE activity - from elicitation
to evaluation to documentation to consolidation (see Figure 1.6). Omissions, which are the
hardest errors to detect, may happen at any time. Contradictions often originate from conflicting
viewpoints that emerged during elicitation and were left unresolved at the end of the RE
process. Inadequacies often result from analyst-stakeholder mismatch during elicitation and
negotiation. Some flaws are more likely to happen during documentation phases - such as
noise, unintelligibility, forward reference and remorse, poor structuring, poor modifiability and
opacity.
Overspecifications are frequently introduced in requirements documents written by devel-
opers or people who want to jump promptly to technical solutions. They may take the form of
flowcharts, variables that are internal to the software (rather than shared with the environment,
cf. Figure 1.3), or statements formulated in terms of programming constructs such as sequential
composition, iterations or go-tos. 'Algorithmic requirements' implement declarative require-
ments that are left implicit. They might incorrectly implement these hidden requirements. They
cannot be verified or tested against them. They may preclude some alternative 'implementation'
of the hidden requirements that might prove more effective with respect to other quality
requirements.
In view of their potentially harmful consequences, requirements errors and flaws should
be detected and fixed in the requirements document. Chapter 5 will review a variety of
techniques for requirements quality assurance. In particular, Table 1.1 may be used as a basis
for requirements inspection checklists (see Section 5.1.3). Model-based quality assurance will
be discussed at length in Parts II and III.
Greenfield projects are sometimes specialized further into normal design vs radical design
projects (Vincenti, 1993). In a normal design project, engineers solve problems by making
improvements to existing technologies or by using them in new ways. They have a good
idea of what features the target artefact will provide. In contrast, radical design projects
result in fundamentally new technologies. The creators of the target artefact have little
idea at the beginning of how this artefact will work and how its components should
be arranged. Radical design projects are much less common. They are exploratory by
nature.
• The WAX train transportation system is a single-product project (at least at inception).
• The meeting scheduler is a product-line project. Variability might refer to the type of
customer or the type of meeting.
• If we consider the in-car light-control software for a car manufacturer, variability might
refer to different car categories where the software should be installed.
A software project is generally multi-type along the above dimensions. For example, the
meeting scheduler might be a greenfield, market-driven, in-house, product-line project.
As far as RE is concerned, these project types have commonalities and differences. On the
commonality side, they all need to be based on some form of requirements document at some
development stage or another. For example, there is no way of developing a high-quality
software product in a brownfield, market-driven, in-house, product-line project without any
formulation of the requirements for the software and the assumptions on the environment.
Differences from one project type to the other may lie in the following aspects of the RE
process:
• Use of specific techniques to support RE activities. For example, greenfield projects may
require prototyping techniques for requirements elicitation and risk-based evaluation
techniques for decision making (see Chapters 2 and 3). Product-line projects may require
feature diagrams for capturing multiple system variants (see Chapter 6).
• Intertwining between requirements engineering and product design. In greenfield
projects, and in radical design projects in particular, requirements might emerge only
once critical design decisions have been made or a product prototype is available.
• Respective weights of functional and non-functional requirements. Brownfield projects
are often concerned with improving product quality. Non-functional requirements are
therefore prominent in such projects.
• Types of stakeholder involved in the process. A market-driven project might involve
specific types of stakeholder such as technology providers, service providers, retailers,
consumers, legislators and the like.
• Types of developer involved. The skills required in an outsourced project might be limited
to implementation skills, whereas an in-house, greenfield project might require advanced
analysis skills.
• Specific uses of the requirements document. In an outsourced project, the RD is often
used as an annex to the call for tenders, as a reference for evaluating submitted proposals
and as a basis for progress monitoring and product evaluation.
[Figure: The requirements document as a core artefact of the software lifecycle. It impacts software prototypes and mock-ups, the software architecture, acceptance test data, quality assurance checklists, implementation directives, user manuals, software evolution directives and software documentation.]

Architectural design A software architecture defines the organization of the software in terms
of configurations of components, connectors capturing the interactions among components,
and constraints on the components, connectors and configurations (Shaw & Garlan, 1996;
Bosch, 2000). The architecture designed must obviously meet the software requirements. In
particular, architectural choices may have a deep impact on non-functional requirements (Perry
& Wolf, 1992). The requirements document is therefore an essential input for architectural
design activities.
Software quality assurance The requirements document provides the ultimate reference for
quality assurance activities. In particular:
• Requirements provide the basis for elaborating acceptance test data that cover them.
• They are used to define checklists for software inspections and reviews.
Implementation and integration These later steps of the software lifecycle must take non-
functional requirements such as interface and installation requirements into account.
Maintenance The requirements document, together with problem reports and approved
modification requests, provides the input material for revising, adapting, extending or contracting
the software product.
Project management The requirements provide a solid basis for project management tasks
such as:
• Estimating project size, cost and schedules, e.g. through function points (Low & Jeffery,
1990).
• Planning development activities.
• Writing a call for tenders and evaluating proposals (for outsourced projects).
• Writing the contract linking the developer and the customer.
• Reviewing progress during an incremental development.
• Assessing development team productivity.
• Evaluating the final product.
Many software process models and development methodologies recognize the important role
of requirements throughout the software lifecycle. For example, the diagrams summarizing the
Rational Unified Process (RUP) show how the requirements document permeates all project
phases from inception to elaboration to construction to transition (Jacobson et al., 1999).
The inevitable intertwining of RE, system design and software architecture design
We might think of RE and design ideally as two completely separate processes coming one after
the other in a waterfall-like fashion. This is rarely the case in practice. A complex problem is
solved by identifying subproblems, specifying them and solving them, which recursively yields
new subproblems (Nilsson, 1971). The recursive nature of problem solving makes the problem
and solution spaces intertwined. This applies, in particular, when we elaborate requirements,
a corresponding system-to-be and a corresponding software architecture.
Such intertwining occurs at places where we need to make decisions among alternative
options based on quality requirements, in particular:
• When we have elicited a system objective and want to decompose it into sub-
objectives - different decompositions might be envisioned, and we need to select a
preferred one.
• When we have identified a likely and critical risk - different countermeasures might be
envisioned, and we need to select a preferred one.
• When we have detected a conflict between requirements and want to resolve it - different
resolutions might be envisioned, and we need to select a preferred one.
• When we realize a system objective through a combination of functional services,
constraints and assumptions - different combinations might be envisioned, and we need
to select a preferred one.
All these situations involve system design decisions. Once such a decision has been made,
we need to recursively elicit, evaluate, document and consolidate new requirements and
assumptions based on it. Different decisions may result in different proposals for the system-
to-be, which, in turn, are likely to result in different software architectures. Conversely, while
elaborating the software architecture we might discover new requirements or assumptions that
had been overlooked thus far.
Let us illustrate this intertwining of RE and design in our case studies.
In the meeting scheduler, the objective of knowing the constraints of invited participants
might be decomposed into a sub-objective of knowing them through e-mail requests or,
alternatively, a sub-objective of knowing them through access to their electronic agenda. The
.architecture of a meeting scheduler based on e-mail communication for getting constraints
will be different in places from one based on e-agendas. Likewise, there will be architectural
differences between an alternative where meeting initiators are taking responsibility for handling
constraint requests and a more automated version where a software component is responsible
for this.
In our train control system, the computation of train accelerations and the transmission
of acceleration commands to trains might be under the responsibility of software
components located at specific stations. Alternatively, this responsibility might be assigned,
for the acceleration of a specific train, to the on-board software of the train preced-
ing it. These are system design options that we need to evaluate while engineering the
system requirements, so that preferred options can be selected for further requirements
elaboration. Those two alternatives result in very different software architectures - a semi-
centralized architecture and a fully distributed one. The alternative with an ultra-reliable
component at specific stations is likely to be selected in order to better meet safety requirements.

Requirements evolution
From a managerial perspective, this area intersects with the area of change management in
management science. From a technical perspective, it intersects with the area of version control
and configuration management in software engineering.
The requirements problem is among the oldest in software engineering. An early empirical
study of a variety of software projects revealed that incomplete, inadequate, inconsistent
or ambiguous requirements are numerous and have a critical impact on the quality of the
resulting software (Bell & Thayer, 1976). These authors concluded that 'the requirements for
a system do not arise naturally; instead, they need to be engineered and have continuing
review and revision'. This was probably the first reference to the phrase 'requirements
engineering', suggesting the need for systematic, repeatable procedures for building high-quality
artefacts.
A consensus has been rapidly growing that such engineering is difficult. As Brooks noted in
his landmark paper on the essence and accidents of software engineering, 'the hardest single
part of building a software system is deciding precisely what to build ... Therefore, the most
important function that the software builder performs for the client is the iterative extraction
and refinement of the product requirements' (Brooks, 1987).
A well-known example is the accident of an Airbus A320 landing at Warsaw airport in 1993,
which can be explained in simplified form as follows (Jackson, 1995a). The autopilot had the
system requirement that reverse thrust be enabled if and only if the plane is moving on the runway:

'Reverse thrust shall be enabled if and only if the aircraft is moving on the runway.'

The software requirement given to developers in terms of software input/output variables was:

'The reverse thrust command shall be "on" if and only if wheel pulses are on.'

An argument that this software requirement entails the corresponding system requirement had
to rely on assumptions about the wheel sensor and the reverse thrust actuator, respectively:

'Wheel pulses are on if and only if the wheels are turning.'
'Reverse thrust is enabled if and only if the reverse thrust command is "on".'

together with the domain property:

'The wheels are turning if and only if the aircraft is moving on the runway.'

This domain property proved to be inadequate on the waterlogged Warsaw runway. Due to
aquaplaning, the plane there was moving on the runway without its wheels turning.
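This satisfaction argument can be replayed mechanically. The short sketch below is my own propositional encoding of the statements reconstructed above; the variable names are mine, and for simplicity the actuator assumption is folded in by identifying the thrust command with its effect:

# Sketch: brute-force check of the reverse-thrust satisfaction argument.
from itertools import product

def iff(a, b):
    return a == b

# Variables: moving (plane moving on runway), turning (wheels turning),
# pulses (wheel pulses on), thrust (reverse thrust enabled).
for moving, turning, pulses, thrust in product([False, True], repeat=4):
    sofreq = iff(thrust, pulses)   # software requirement
    asm = iff(pulses, turning)     # wheel-sensor assumption
    dom = iff(turning, moving)     # domain property that failed in Warsaw
    if sofreq and asm and dom:
        assert iff(thrust, moving)  # the system requirement is entailed

# Aquaplaning: the plane moves while its wheels do not turn, so the domain
# property is false; the system requirement is then violated even though
# the sensor and the software behave exactly as specified.
moving, turning = True, False
pulses = turning  # sensor assumption still holds
thrust = pulses   # software requirement still holds
print("system requirement satisfied?", iff(thrust, moving))  # -> False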
A similar case occurred recently where a car driver was run over by his luxurious
computerized car while opening a gate in front of it. The software controlling the handbrake
release had the system requirement:

'The handbrake shall be released if and only if the driver wants to start.'

The corresponding software requirement was:

'The handbrake control shall be "off" if and only if the normal running of the motor is raised.'

The satisfaction argument relied on the assumption:

'The driver wants to start if and only if he presses the acceleration pedal'

and on the domain property:

'The normal running of the motor is raised if and only if the acceleration pedal is pressed'.

The latter proved to be inadequate on a hot summer's day. The car's air conditioner started
automatically, due to the car's door being open while the driver was opening the gate in front,
which resulted in the normal running of the motor being raised and the handbrake being released.
In addition to cases of wrong assumptions or wrong domain properties, there are cases
where failure originates from environmental changes that render the original assumptions no
longer adequate. A concrete example showing the problems with changing environments,
in the context of our train control case study, is the June 1995 New York subway crash.
The investigation revealed that the distance between signals was shorter than the worst-case
stopping distance of trains; the assumption that a train could stop in the space allowed after
the signal was adequate for 1918 trains but inadequate for the faster, longer and heavier trains
running in 1995 (16 June 1995 New York Times report, cited in Hammond et al., 2001).
The well-known Ariane 5 rocket failure is another example where environmental assump-
tions, set for requirements satisfaction, were no longer valid. Software components were reused
from the Ariane 4 rocket with ranges of input values that were different from the expected
ones due to changes in rocket features (Lions, 1996). In the same vein, the Patriot anti-missile
system that hit US military barracks during the first Gulf War had been used for more than
100 hours. The system was assuming missions of 14 hours at most (Neumann, 1995).
Missing or inadequate requirements/assumptions may have harmful consequences in
security-critical systems as well. For example, a Web banking service was reported to have no
adequate requirements about how the software should behave when a malicious user is search-
ing for all bank accounts that match some given 4-digit PIN number (dos Santos et al., 2000).
As we will see in Chapter 5, there are fortunately techniques for spotting errors in
requirements and assumptions. For example, such techniques uncovered several dangerous
omissions and ambiguities in TCAS II, a widely used aircraft collision-avoidance system
(Heimdahl & Leveson, 1996). This important topic will be covered in depth in Chapters 5, 9, 16
and 18.
"
1.2.2 The role and stakes of requirements engineering
The bottom line of the previous section is that engineering high-quality requirements is essential,
as errors in requirements, assumptions and domain properties tend to be numerous, persistent,
costly and dangerous. To support that conclusion, we may also observe the prominent role
that RE plays with respect to multiple stakes.
Technical stakes As we saw in Section 1.1.9, the requirements document (RD) provides a
core artefact of the software lifecycle on which many subsequent development activities rely.
Communication stakes The RD provides the main reference through which the various
parties involved in a software project can communicate with each other.
Project management stakes The RD provides a basis for determining the project costs,
required resources, development steps, milestones, review points and delivery schedules.
Legal stakes The RD forms the core of the contract linking the software provider, customers
and subcontractors (if any).
Certification stakes Quality norms are increasingly enforced by law or regulations on projects
in specific domains such as medical, transportation, aerospace or nuclear. They may also be
requested by specific customers in other domains. Such norms constrain the development
process and products. At the process level, maturity models such as CMMI, SPICE or ISO 9001
require RE to be taken seriously. For example, CMMI Maturity Level 2 imposes a requirements
management activity as a necessary condition for process repeatability; Level 3 requires
a repeatable requirements development process (Ahern et al., 2003). At the product level,
standards such as IEEE-STD-830 or ESA PSS-05 impose a fairly elaborate structure on the
requirements document (see Section 4.2.2).
Economic stakes The consequences of numerous, persistent and dangerous errors related to
requirements can be economically devastating.
Social stakes When not sufficiently user centred, the RE process may overlook important
needs and constraints. This may cause severe deteriorations in working conditions, and a
wide range of reactions from partial or diverted use of the software to mere rejection of it.
Such reactions may have severe consequences beyond user dissatisfaction. For example, in the
London ambulance system mentioned in the previous section, misuse and rejection of the new
system by ambulance drivers were reported to be among the main causes of failure.
In spite of those stakes, the effort devoted to RE in industrial projects often remains limited,
for several reasons:
• Such effort generally needs to be spent before the project contract is signed, without a
guarantee that a contract will be signed.
• There might be stronger concerns and pressure on tight schedules, short-term costs and
catching up on the latest technology advances.
• Too little research work has been devoted to RE economics. On one hand, the benefits
and cost saving from using RE technology have not been quantified. They are hard to
measure and not enough evidence has been gained from large-scale empirical studies.
On the other hand, progress in RE activities is harder to measure than in design or
implementation activities.
• Practitioners sometimes feel that the requirements document is exceedingly big and
complex (Lethbridge et al., 2003). In such cases it might not be maintained as the project
evolves, and an outdated document is no longer of any use.
• The requirements document may be felt to be too far away from the executable product
for which the customer is paying. In fact, the quality of requirements does not indicate
much about the quality of the executable product.
• RE technology is sometimes felt to be too heavyweight by some practitioners, and too
vague by others.
• Beyond general guidelines, the transfer of effective RE techniques through courses,
textbooks and pilot studies has been much more limited than in other areas of software
engineering.
We need to be aware of such obstacles to find ways of overcoming them. Chapters 2-6 will
review standard techniques to support the RE process more effectively. In this framework,
the next parts of the book will detail a systematic method for building a multifaceted system
model from which a well-structured requirements document can be generated. This method
will make the elicitation, evaluation, documentation, consolidation and evolution efforts more
focused and more effective.
In agile development processes, RE cycles and implementation cycles are short and intertwined:
• An RE cycle is shortened by eliciting some useful functional increment directly from the
user, and by shortcutting the evaluation, specification and consolidation phases; or by
making these very rudimentary to expedite them. For example, the specification phase
may amount to the definition of test cases that the implementation must pass.
• The implementation cycle next to an RE cycle is shortened as (a) the functional increment
from this RE cycle is expected to be small; and (b) this increment is implemented by a
small team of programmers working at the same location, following strict programming
rules, doing their own unit testing and staying close to the user to get instant feedback
for the next RE cycle.
The functional increment elicited at an RE cycle is sometimes called a user story. It captures some
unit of functionality of direct value that the user can write and deliver easily to the programming
team.
Agile processes have emerged in certain development communities and projects as a
reaction against overly heavyweight practices, sometimes resulting from the misinterpretation
of process models and the amount of 'ceremony' and reporting they require. However, it is
important to highlight the underlying assumptions that a project must fulfil for an agile process
to work successfully. Such assumptions delimit the applicability of agile processes:
• All stakeholder roles, including the customer and user roles, can be reduced to one single
role.
• The project is sufficiently small to be assignable to a single, small-size, single-location
development team.
• The user can be made available at the development site or can interact promptly and
effectively.
• The project is sufficiently simple and non-critical to disregard or give little consideration
to non-functional aspects, environmental assumptions, underlying objectives, alternative
options and risks.
• The user can provide functional increments quickly, consistently (so that no conflict
management is required) and gradually from essential to less important requirements (so
that no prioritization is required).
• The project requires little documentation for work coordination and subsequent product
maintenance. Precise requirements specification before coding is not an issue.
• Requirements verification before coding is less important than early release.
• New or changing requirements are not likely to require major code refactoring and
rewrite, and the people in charge of product maintenance are likely to be the product
developers.
These assumptions are quite strong. Many projects obviously do not meet them all, if any - in
particular, projects for mission-critical systems. We would obviously not like our air traffic
control, transportation, power plant, medical operation or e-banking systems to be obtained
through agile development of critical parts of the software.
Agility is not a binary notion, however. Depending on which of the preceding assumptions
can be fulfilled and which cannot, we can achieve more or less agility by paying more or less
attention to the elicitation, evaluation, specification and consolidation phases of an RE cycle,
making it longer or shorter.
From this perspective, the approach discussed in Parts II and III is intended to make RE
cycles shorter by:
Summary
• The focus of RE is the investigation, delineation and precise definition of the problem
world that a machine solution is intended to improve. The scope of investigation
is broad. It involves two system versions. Next to the system-as-is, the system-to-be
comprises the software to be developed and its environment. The latter may comprise
people playing specific roles, physical devices operating under physical laws, and
pre-existing software. The questions to be addressed about the system-to-be include
WHY, WHAT, HOW WELL and WHO questions. Such questions can be answered in
a variety of ways, leading to a range of alternative options to consider, each having
associated strengths and risks.
• Requirements engineers are faced with multiple transitions to handle: from the problem
world to the machine interface with it; from a partial set of conflicting concerns
to a complete set of consistent statements; from imprecise formulations to precise
specifications; from unstructured material to a structured document; from informal
wishes to a contractual document. There are multiple levels of abstraction to consider,
with strategic objectives at the top and technical requirements at the bottom. Multiple
abstraction levels call for satisfaction arguments, as we need to show that the higher-
level concerns are satisfied by the lower-level ones.
• The RE process is an iteration of intertwined activities for eliciting, evaluating, doc-
umenting, consolidating and changing the objectives, functionalities, assumptions,
qualities and constraints that the system-to-be should meet based on the opportunities
and capabilities provided by new technologies. Those activities involve multiple stake-
holders that may have conflicting interests. The relative weight of each activity may
depend on the type of project.
• The RE process involves different types of statements. Requirements are prescriptive
statements about software functionalities, qualities and development constraints.
They are expressed in the vocabulary of the problem world. Domain properties are
descriptive statements about this world. Assumptions are statements about expected
behaviours of environmental components. We need to make appropriate assumptions
and identify correct domain properties to elaborate the right requirements.
• These different types of statements have to be specified and structured in the require-
ments document. Their specification must meet multiple qualities, among which
completeness and adequacy are most critical. The requirements document is a core
artefact in the software lifecycle, as many software engineering activities rely on it. Its
quality has a strong impact on the software project - notably, its successful completion,
the development and maintenance costs, the rate of user acceptance and satisfaction,
system security and safety. Studies on the requirements problem have consistently
shown that requirements errors are numerous, persistent, costly and dangerous. Wrong
hidden assumptions can be the source of major problems.
• There are a few misconceptions and confusions about RE to avoid:
a. The target of investigation is not the software but a system of which the software is
one component.
b. RE does not amount to some translation of pre-existing problem formulations.
c. RE and design are not sequentially composed in a waterfall-like fashion. RE involves
system design. In view of the alternative options arising in the RE process, we need
to make decisions that may subsequently influence software design. Conversely,
some requirements might sometimes emerge only in the later stages of software
design.
d. Unlike domain properties, requirements may need to be negotiated, weakened or
changed.
e. 'Precise' does not mean 'formal'. Every statement must have a unique, accurate
interpretation without necessarily being machine processable.
f. A set of notations may be a necessary condition for an RE method but certainly not a
sufficient one. A method should provide systematic guidance for building complex
requirements documents.
Later, Zave consistently argued that the relationship between objectives, functionalities,
constraints and software requirements is a key aspect of the RE process (Zave, 1997).
Requirements evolution along variants and revisions is also discussed there.
The important distinction between descriptive and prescriptive statements appeared
first in Jackson & Zave (1993) and was echoed in Jackson (1995a) and Zave and Jackson
(1997). The differentiation between system requirements and software requirements is
discussed in Jackson (1995a), where the latter are called 'specifications'. Similar distinctions
were made in the more explicit setting of the four-variable model in Parnas and Madey
(1995).
Satisfaction arguments have been known for a long time in programming methodology.
When we build a program P in some environment E the program has to satisfy its
specification S. Therefore we need to argue that P, E ⊨ S. Such argumentation was first
lifted up to the RE phase in Yue (1987). The need for satisfaction arguments at RE time
is discussed in Jackson (1995a) and convincingly illustrated in Hammond et al. (2001) in
the context of the REVEAL methodology for requirements engineering. Such arguments
were made explicit in terms of goal refinement and goal operationalization in Dardenne
et al. (1991), Dardenne et al. (1993) and van Lamsweerde (2000b).
The spiral model of software development is described in Boehm (1988). An adaptation
to requirements development was suggested first in Kotonya & Sommerville (1997). Agile
processes in the context of RE are briefly introduced in Leffingwell and Widrig (2003).
The need for early delivery of useful subsystems was recognized in Parnas (1979).
Numerous books and papers propose requirements taxonomies, notably Thayer and
Dorfman (1990), Davis (1993), Robertson and Robertson (1999) and Chung et al. (2000).
A thorough discussion of specification errors will be found in Meyer's paper on the
specifier's 'seven sins' (Meyer, 1985). Those 'sins' are illustrated there on a published
specification of a text formatting problem, where most defects are found in a few
lines! Yue was probably the first to define requirements completeness and pertinence
with respect to underlying objectives (Yue, 1987). The best discussion on requirements
measurability is in Robertson and Robertson (1999), which proposes so-called fit criteria
as a way of checking whether a requirement is measurable (we come back to this in
Sections 4.2 and 5.1). Some of the qualities expected for a requirements document are
also presented in Davis (1993).
The distinction between customer-specific and market-driven projects is discussed
from an RE perspective in Lubars et al. (1993). Radical design projects are contrasted with
normal design ones from an engineering perspective in Vincenti (1993).
The view of RE as a composite system design activity is elaborated technically in
Feather (1987) and Fickas and Helm (1992). The inevitable intertwining of RE and
architectural design is argued in Nuseibeh (2001). To some extent it is a transposition, to
the earlier phases of the software lifecycle, of an argument made before for specification
and implementation (Swartout & Balzer, 1982).
This preliminary phase of the RE process involves a great deal of knowledge acquisition. We
need to acquire the contextual knowledge under which the system-to-be will be elaborated.
This knowledge generally covers the following:
• Knowledge about the organization - its structure, business objectives, policies, roles and
responsibilities.
• Knowledge about the domain in which the problem world is rooted - the concepts
involved in this domain, the objectives specific to it, the regulations that may be imposed
in it.
• Knowledge about the system-as-is - its objectives, the actors and resources involved, the
tasks and workflows, and the problems raised in this context.
The output of the understanding and elicitation phase typically consists of a preliminary
draft proposal report describing the system-as-is, its surrounding organization, the underlying
domain, the problems identified in it, the opportunities to be exploited, and alternative ways
in which the problems might be addressed in view of such opportunities. This draft proposal
will be used as input to the evaluation phase coming next. A glossary of terms should be
appended to it.
An effective process for domain understanding and requirements elicitation combines
different techniques that vary by their degree of interaction with stakeholders:
• Artefact-driven techniques rely more on specific types of artefact to support the elicitation
process. They are described in Section 2.2.
• Stakeholder-driven techniques rely more on specific types of interaction with stakeholders.
They are described in Section 2.3.
As a prerequisite, we must identify the right stakeholders for effective knowledge acquisition.
Stakeholder analysis
For comprehensive understanding and exploration of the problem world, the determination
of a representative sample of stakeholders should be based on their respective roles, stakes,
interests and type of knowledge they can contribute. The following criteria may be used for this:
The set of stakeholders may need to be updated during the RE process as new perspectives
emerge.
• Distributed and conflicting knowledge sources. There are in general many different
sources to consider - multiple stakeholders and large volumes of documents and data.
Such sources are often spread out. They may conflict with each other for a variety of
reasons: competition among representatives of different departments, diverging interests
and perceptions, different priorities and concerns, outdated documents and the like.
• Difficult access to sources. Knowledge sources may not be easily available. Key people
are generally very busy. They may not be convinced that it is worth spending time on
the elicitation process. Others are sometimes reluctant to provide important information
as they do not feel free to do so, or are suspicious about the consequences of moving from
one system to another. Relevant data may be hard to collect.
• Obstacles to good communication. There may be significant communication barriers
originating from people with different backgrounds, terminology and cultures.
• Tacit knowledge and hidden needs. Getting key information from stakeholders may be
quite hard. Knowledge is often tacit; it is implicit in the stakeholder's mind or felt to
be common sense. For example, expert people might not explain details or connections
among particular elements as they assume that we know what they are. People involved
in routine tasks may have a hard time explaining things from a distance. On the other
hand, stakeholders often don't know what they really want, or have difficulties expressing
what they want and why they want it. They may jump straight into solutions without
being able to make explicit what the underlying problems are. They may be unable to
distinguish between essential aspects and subsidiary details. They may find it difficult to
map hypothetical descriptions of the system-to-be onto real working conditions in the
future. They may also have unrealistic expectations.
• Sociopolitical factors. External factors may interfere significantly with the process, such
as politics, competition, resistance to change, time/cost pressures etc.
• Unstable conditions. The surrounding world may be volatile. The structure of the
organization may change, people may appear or disappear, the perceived needs or
priorities may change and so forth.
Effective knowledge acquisition also calls for specific skills and practices:

• Communication skills. The ability to interact effectively with a variety of people is
essential. We must be able to address the right issues for the people we interact with, in
view of their specific role in the knowledge-acquisition process. We need to use the right
terminology for them in view of their specific background. We must be able to listen
carefully and ferret out the key points. We should be able to form trusting interpersonal
relationships in order to be accepted by stakeholders and appear as a partner.
• Knowledge reformulation. Review meetings must be organized where the relevant
knowledge about the problem world, acquired from multiple sources, is presented in an
integrated, structured way. This is essential for validating and refining such knowledge,
for keeping stakeholders involved and for increasing their confidence in the way the
system-to-be is shaping up. Review meetings should take place at appropriate milestones
during the elicitation process in order to redirect it if necessary.
Background study consists of collecting and examining the available documentation:

• To learn about the organization, we may study documents such as organizational charts,
business plans, policy manuals, financial reports, minutes of important meetings, job
descriptions, business forms and the like.
• To learn about the domain, we may study books, surveys and published articles. We
should study regulations enforced within the domain, if any. We may also look at reports
on similar systems in that domain.
• To learn about the system-as-is specifically, we may study reports that document infor-
mation flows, work procedures, business rules, forms exchanged between organizational
units and so forth. When available, reports about defects, complaints and change
requests are especially helpful in spotting problems with the system-as-is. If the system
is already software based, relevant software documentation such as user manuals should
be considered as well.
Background study is sometimes called content analysis. An obvious strength of this tech-
nique is that it supplies basic information that will be needed afterwards, in particular the
terminology used, the objectives and policies to be taken into account, the distribution of
responsibilities among stakeholders and so forth. This technique allows us to prepare before
meeting stakeholders. It therefore appears as a prerequisite to other elicitation techniques.
The main problem with background study is the amount of documentation that we may
need to consider. There can be many documents, some of which can be quite voluminous.
Key information has to be extracted from a mass of irrelevant details. Some documents can
also be inaccurate or outdated.
To address this data-mining problem, we should first acquire some meta-knowledge for guid-
ing the background reading process; that is, we should first know what we need to know and
what we don't need to know. We may then use such meta-knowledge to prune the documenta-
tion space and focus on relevant aspects only. As we will see later on, a model-driven approach
provides a solution to this problem. When we know what kind of model should emerge from
the elicitation process, we may use that information to drive the process accordingly.
2.2.3 Questionnaires
This technique consists of submitting a list of specific questions to selected stakeholders.
Each question may be given a brief context and requires a short, standardized answer from
a pre-established list of possible answers. Stakeholders just need to return the questionnaire
marked with the answers that they feel are most appropriate. There can be different types of
question (Whitten & Bentley, 1998):
• A multiple-choice question merely requires selecting one answer from the associated list
of possible answers.
• A weighting question provides a list of statements that need to be weighted by the
respondent to express the perceived importance, preference or risk of the corresponding
statement. Weights may be qualitative values (such as 'very high', 'high', 'low' etc.) or
quantitative values (such as percentages).
On the plus side, questionnaires may allow us to acquire subjective information promptly, at
low cost (in terms of elicitation time), remotely and from a large number of people.
On the minus side, the acquired information is likely to be biased on several grounds: the
sample of people to whom the questionnaire was sent, the subset of people who were willing
to respond, the set of questions and the set of predetermined answers. There is no direct
interaction with respondents and little room for providing the context underlying the questions.
Respondents may not comprehend the implications of their answers. Different respondents may
interpret the same question or answer in different ways. As a consequence, some answers may
provide inaccurate, inadequate or inconsistent information.

The bottom line is that we need to design our questionnaires very carefully. We have to
evaluate them prior to their use in order to make sure that such pitfalls are avoided or mitigated.
Evaluation criteria include:
We may use some tricks to discard inconsistent answers, such as the use of implicitly redundant
questions that a respondent might answer differently. We should favour closed-ended questions
with accurate answers to ensure reliable input from respondents.
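To illustrate, the cross-checking of implicitly redundant questions lends itself to simple automation once answers are collected. The following Python fragment is a minimal sketch under assumed representations; the question identifiers, the pairing of redundant questions and the equivalence predicate are all hypothetical, not prescribed by any questionnaire tool:

# Hypothetical pairs of implicitly redundant questions.
REDUNDANT_PAIRS = [('Q3', 'Q17'), ('Q5', 'Q12')]

def inconsistent_respondents(responses, equivalent):
    # responses: {respondent_id: {question_id: answer}}
    # equivalent: assumed predicate telling whether two answers agree
    flagged = set()
    for rid, answers in responses.items():
        for q1, q2 in REDUNDANT_PAIRS:
            if q1 in answers and q2 in answers \
               and not equivalent(answers[q1], answers[q2]):
                flagged.add(rid)
    return flagged

Respondents flagged this way gave diverging answers to questions that should elicit the same information, suggesting their input is unreliable and may be discarded.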
High-quality questionnaires are generally considered as a useful complement to interviews.
They are typically used prior to interviews to prepare for them. The factual information
and perceptions acquired through questionnaires may allow us to better target subsequent
interviews (see Section 2.3.1).
Repertory grids
In this technique, stakeholders are given a set of domain concepts that have already been
elicited. They are asked to further characterize each of them through attributes and correspond-
ing value ranges, to be provided in a concept-attribute matrix.
For example, a grid associated with the concept of Meeting might be filled in with attributes
such as Date, Location and Attendees together with corresponding value ranges, e.g. Mon-Fri for
Date.
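Such a grid amounts to a small concept-attribute matrix. A minimal sketch of one possible representation follows; the attribute names and value ranges are illustrative assumptions taken from the meeting example:

# A repertory grid as a nested dictionary:
# concept -> attribute -> elicited value range (None if still missing).
grid = {
    'Meeting': {
        'Date': 'Mon-Fri',
        'Location': None,          # range still to be elicited
        'Attendees': '2..30',
    },
}

def missing_ranges(grid):
    # Attributes lacking a value range point to further elicitation.
    return [(concept, attr)
            for concept, attrs in grid.items()
            for attr, rng in attrs.items() if rng is None]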
Card sorts
Stakeholders are given a set of cards. Each card is associated with a specific domain concept.
The card may represent this concept textually (by a word or phrase), graphically (by a picture),
or a mix of these. Each stakeholder is asked to partition the set of cards into subsets based on
his or her own criteria. For each subset, he or she is then asked the reason for grouping the
cards together in this subset. We may thereby obtain implicit properties as classification criteria.
For each of these we may further ask whether the property is descriptive or prescriptive in order
to consider it as a candidate domain property or requirement, respectively (see Section 1.1.4).
The process may be repeated with the same cards, yielding new groupings resulting in new
properties explaining them.
For example, a stakeholder might group the Meeting and Participant cards together. The
elicited reason for such grouping might be, on the first iteration, the underlying property that
participants need to be invited to a meeting; this property would be classified as prescriptive.
On the second iteration, we might obtain another reason for the Meeting and Participant cards
to be again grouped together; namely, the prescriptive property that the scheduler must know
the constraints of invited participants to attend the meeting.
Conceptual laddering
To complement the previous techniques, we may also ask stakeholders to arrange some of
the concepts submitted into taxonomical trees. For example, the RegularMeeting and
SporadicMeeting concepts might be categorized as subclasses of the Meeting concept.
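The resulting taxonomical trees can be recorded as plain subclass links. A minimal sketch with illustrative concept names; the representation is an assumption, not a prescribed notation:

# A taxonomy recorded as child -> parent subclass links.
is_a = {
    'RegularMeeting': 'Meeting',
    'SporadicMeeting': 'Meeting',
}

def superclasses(concept, is_a):
    # Walk up the subclass links to collect all ancestors.
    chain = []
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

superclasses('RegularMeeting', is_a)   # ['Meeting']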
These three concept-acquisition techniques are simple, cheap, easy to use and sometimes
effective in prompt elicitation of missing information about domain concepts. However, they
may produce subjective results with no guarantee of accuracy and relevance to the problem
world. They may also become fairly complex to manage for large sets of concepts.
• In passive mode, stakeholders are told the story. The storyboard is used for explanation
or validation.
• In active mode, stakeholders contribute to the story. The storyboard is used for joint
exploration.
A storyboard can be made more structured by making the following aspects explicit alongside
the story:

This kind of question-based structuring provides complementary dimensions for exploring the
problem world.
Scenarios
A scenario illustrates a typical sequence of interactions among system components that
meets an implicit objective. It amounts to a structured form of storyboard covering the who,
what and how dimensions.
Scenarios are widely used throughout the software lifecycle (Weidenhaupt et al., 1998). In
requirements engineering there are two main uses:
• Explaining how the system-as-is is running is often made simpler through concrete
examples of real-life interaction sequences.
• Exploring how the system-to-be should be running is often made easier through concrete
examples of hypothetical interaction sequences. Such examples provide a good basis for
further elicitation:
a. We may ask specific questions about them.
b. We may in particular elicit the underlying objectives.
c. We may generalize them into models of desired system behaviour (as we will see it in
Chapters 13 and 18).
Let us consider our meeting scheduling case study. One typical scenario for organizing a
meeting in the system-to-be might involve the following interactions between a meeting
initiator, the software scheduler and meeting participants:
• Positive vs negative scenarios. A positive scenario illustrates what should happen in terms
of one behaviour that the system should cover. A negative scenario is a counter-example:
it illustrates what may not happen in terms of one behaviour that the system should
exclude. For example, the previous scenario is a positive one. A negative scenario might
be the following:
1. A participant returns a list of constraints covering all dates within the prescribed date range.
2. The scheduler forwards this message to all participants asking them for alternative constraints
within an extended date range.
Note that the reason for this scenario to be a negative one is kept implicit - in this
case, the scenario illustrates the implicit requirement that information about participant
behaviours and constraints should not be disclosed to others.
• Normal vs abnormal scenarios. A normal scenario captures a course of interaction
where everything proceeds as normally expected. Abnormal scenarios capture desired
interaction sequences under exceptional circumstances that depart from the normal
course of interaction. Normal and abnormal scenarios are positive. For example, the
previous six-step scenario is a normal one. There should be abnormal scenarios to cover
cases such as the following:
- The meeting initiator is not among the authorized ones.
- A participant's constraints are not valid (in some sense to be made precise).
- The participant constraints are not all received in due time.
Scenarios have strengths and limitations as elicitation vehicles. On the positive side, they
are concrete and support a narrative style of description. Examples of desired or undesired
behaviour naturally arise during the elicitation process. Scenarios can be used easily by
stakeholders with different backgrounds to build a shared understanding of how components
do interact in the system-as-is or how they should or should not interact in the system-to-
be. Moreover, their usage extends beyond the elicitation phase - in particular, as animation
sequences when we validate requirements, as counter-examples when we verify behavioural
requirements, and as test cases when we define acceptance tests from the requirements (see
Chapter 5).
On the downside, scenarios are inherently partial. As they are just examples, they do not
cover all possible system behaviours under all possible circumstances. This is somewhat similar
to the coverage problem for test cases. A reasonably comprehensive set of scenarios requires us
to enumerate multiple combinations of individual component behaviours; this inevitably results
in a combinatorial explosion problem. On the other hand, multiple stakeholders may state their
scenarios at different levels of granularity, which raises integration problems. Such scenarios
may contain details that are irrelevant to the point the scenario is trying to make. Complex
scenarios may also be hard to comprehend for lack of structure. Moreover, too early use of
scenarios may introduce some risk of overspecification; the sequencing of some interactions
might not be strictly required, or the allocation of responsibilities among interacting components
might be premature. Last but not least, scenarios keep properties about the system implicit.
They capture interaction sequences, but not the reasons why such sequences should or should
not take place; that is, the requirements underlying them (see the preceding negative scenario
example). In the end, implicit properties need to be made explicit to support negotiation,
analysis, implementation and evolution.
Despite these numerous limitations, we cannot live without scenarios. They arise spontaneously
during elicitation and provide useful information that we might not obtain otherwise.
Easy-to-use notations are therefore required to express scenarios unambiguously,
together with dedicated techniques for inferring useful properties and models from them. We
will come back to this in Chapters 4, 8, 13, 15 and 18.
For prototyping to be cost effective, we must be able to build prototypes very quickly. To
achieve this we may use executable specification languages, very high-level programming
languages (such as functional or logic programming languages), program generators (such as
simulation generators or user interface generators), generic services and the like.
The prototyping process is generally iterative and combines requirements validation and
elicitation as follows:
repeat
    build next prototype version from selected requirements;
    show prototype executions;
    get feedback from stakeholders;
    update requirements from feedback
until prototype gets full agreement from stakeholders
[Figure 2.1: UML activity diagram of this process, with 'Elaborate requirements' and 'Prototype requirements' as parallel activities followed by 'Demonstrate proto and get feedback', iterating on [not Proto_OK] and terminating on [Proto_OK].]
Figure 2.1 gives a pictorial representation of this process as a UML activity diagram. Require-
ments elaboration and prototyping are shown there as parallel activities. The process terminates
on an agreed prototype together with a corresponding set of requirements. Some of the
requirements may have been revised during the process; others may have been elicited from
stakeholder feedback.
The resulting set of requirements is the primary target of prototyping. In general, it is
the only product kept for subsequent development. The prototype is then called a mock-up
or throwaway prototype. Alternatively, the prototype may be converted into a final software
product through a series of semantics-preserving optimizations (Balzer et al., 1982). The term
evolutionary prototype is used in such a case.
Prototyping as an elicitation vehicle has a built-in advantage: experimenting with some
concrete flavour of what the system might look like helps us understand the implications of
some requirements, clarify others, turn inadequate requirements into adequate ones, and elicit
requirements that are hidden in the stakeholder's mind. We can use the prototype for other
purposes beyond elicitation and validation, such as user training before the final product is
available and simulation of stub components during integration testing.

Prototyping has limitations too. By definition, a prototype does not cover all aspects
of the software-to-be. Most often it leaves some functionalities aside and restricts itself to
a few non-functional requirements (such as useability requirements). Requirements about
performance, cost, reliability, real-time constraints or interoperability with other software
are generally ignored. Prototypes can therefore be misleading and set stakeholder expectations too
high. Moreover, throwaway prototypes are often built through 'quick and dirty' means; the
resulting code is generally inefficient and poorly structured. Converting throwaway
prototypes into evolutionary ones may therefore be quite hard. Conversely, evolutionary
prototypes are costly and may take too much time to develop with regard to the primary
concern of eliciting adequate requirements promptly. In all cases, there can be inconsis-
tencies between the updated requirements and the prototype code; the confidence in the
results obtained through prototyping will be decreased in case of requirements-code mis-
match.
[Figure: excerpt of a requirements taxonomy with sub-class links. Space requirements specialize into main memory and secondary storage requirements; Time requirements into Response time and Throughput requirements; Throughput into OffPeakThroughput and PeakThroughput; PeakThroughput into PeakMeanThroughput and PeakUniformThroughput.]
For example, we might ask what the peak throughput of the scheduler should be and
what the corresponding average response time should be. This might result in assumptions
about the expected response time for participants to send back their constraints, or requirements
on the deadline before which the meeting schedule should be notified.
Here are a few more examples of taxonomy-based questions to elicit requirements or
assumptions that might have been overlooked:
Elicitation then proceeds by traversing the meta-model, in an order prescribed by the modelling
method, in order to acquire corresponding meta-class instances (Dardenne et al., 1993). We
just state the principle here as we will come back to this in detail in Chapter 14.
For our target library system, we can specialize abstract elements such as:
[Figure content: abstract domain elements such as 'Wide accessibility' and 'GetUnit' are specialized into concrete library-domain elements such as 'Wide availability', 'Book' and 'BorrowCopy'.]
Figure 2.4 Reusing an abstract domain to drive elicitation
respectively (see Figure 2.4). It is indeed the case that any instance of a book copy that can
be borrowed is an instance of a useable resource unit, for example. When the proposed
specializations are agreed, we select the features attached to the abstract counterpart for
inheritance and library-specific renaming. The abstract specifications of tasks, objectives,
requirements and domain properties on resources are thereby transposed to the library
domain, for example:

'A patron may not borrow more than x book copies at a time.'

We must then validate the transposed specifications for adequacy and adapt them where
necessary. Only those features that are relevant to our target domain need to be retrieved,
transposed and adapted, of course.
An abstract domain can thereby be reused for multiple target domains. In the previous
example, we might retrieve, specialize and adapt relevant abstract elements for different systems
such as a CD/DVD loan management system, seat allocation in a flight reservation system, seat
allocation in a concert booking system and so forth.

To increase the adequacy of reused knowledge, the abstract domain should be made more
structured and more accurate. For example, suppose that the resource management domain is
structured in terms of multiple specializations of the resource concept. We might distinguish
between returnable and consumable resource units, sharable and non-sharable resource units
and so forth. Each abstract specialization would have specific tasks, objectives, requirements
and domain descriptions associated with it. The reusable features for one specific target domain
are then likely to be more accurate and more adequate.
For example, a copy of a book turns out to be a returnable and non-sharable resource unit.
We would then get a specific domain property stating that 'a copy of a book can be borrowed by at
most one patron at a time'. In a stock management system, stock items turn out to be consumable
resources; no requirement about limiting item usage time would then be considered.
We can also increase the effectiveness of reuse by considering different parts of our target
domain as specializations of different abstract domains. Multiple inheritance of appropriate
features from different domains is then supported.
A library-specific requirement on the accuracy of recorded loan information, for example,
would then be inherited, after appropriate renaming, from a requirement in the data manage-
ment domain stating:
'The managed database must accurately reflect the state of the corresponding environment data.'
Reuse-based elicitation has strengths and limitations. On the plus side, the elicitation effort
may be considerably reduced when the target system is sufficiently 'close' to the known
systems being reused, where 'close' refers to some conceptual, intentional or behavioural
distance. Arguments about the benefits of design patterns apply equally here (Gamma et al.,
1995; Buschmann et al., 1996). In particular, the reused knowledge fragments may codify
high-quality RE done in the past. The result is therefore likely to be of better quality and
obtained through a more guided process. Reuse-based elicitation also encourages abstraction
and a common terminology for recurring patterns of organizations, concepts, objectives, tasks,
behaviours and problems.
On the downside, it may be sometimes hard to identify the right abstractions, to structure
them and to specify them appropriately for significant reusability. It may also be hard to
determine whether a candidate fragment is worth reusing; similarity distances are not easy to
define and measure. The composition of fragments from multiple domains and their integration
in the target system may raise problems of consistency and compatibility, especially in the case
of a complex domain. Too much time might be spent in validation of inappropriate features
and tricky adaptations. Last but not least, the scaleability of reuse-based techniques is bound
by the availability of effective tool support.
Section 16.1 will further discuss the analogical reuse of models along the lines introduced
here.
The three case study descriptions in Section 1.1.2 illustrate the kind of report that might
summarize a first series of stakeholder interviews for the library, train control and meeting
scheduling systems, respectively.
Two kinds of interview are traditionally distinguished:

• In a structured interview, the interviewer follows a predetermined set of questions prepared in advance.
• In an unstructured interview, there is no predefined agenda; the discussion evolves freely according to the interviewee's responses and concerns.
Structured and unstructured interviews have their respective merits. A structured interview
supports more focused discussion and avoids rambling among unrelated issues. An unstructured
interview allows for exploration of issues that might otherwise be overlooked. Effective
interviews should therefore mix the two modes, starting with structured parts, followed by
unstructured ones.
The effectiveness of an interview can be measured by a weighted ratio between:
Interviews sometimes involve multiple stakeholders. This may help save people's time. Multi-
stakeholder interviews are, however, likely to be less effective due to weaker interpersonal
communication, more limited involvement of individuals and potential barriers to speaking
freely.
Interviews have strengths and limitations. On the plus side, they support the elicitation of
potentially important information that cannot be obtained through background study - typically,
descriptions of how things really proceed in practice, personal complaints, suggestions for
improvement, perceptions and feelings and the like. Interviews also allow for a direct, flexible,
on-the-fly search for relevant information through new questions triggered from answers to
previous ones.
On the downside, it is sometimes hard to compare what different interviewees are saying
and integrate their input into a coherent body of knowledge. Subjective information has to be
interpreted, and the borderline between subjective and objective information is not necessarily
obvious to establish. Last but not least, the effectiveness of an interview is fairly dependent on
the interviewer's attitude and the appropriateness of questions.
Interviewing guidelines
Some practical rules can therefore be followed to increase the effectiveness of interviews:
• Identify the right sample of people to interview, in order to build a complete and reliable
picture - people with different responsibilities, expertise, tasks and exposure to potential
problems.
• Come prepared to the interview so that you can focus on the right issue for that
interviewee at the right time. Keep control over the interview - without making it too
obvious in order to avoid the impression of everything being preset.
• Make the interviewee feel comfortable from the very beginning. The starting point is
especially critical. Find an appropriate trick to break the ice; consider the person first
rather than the role; ask permission to record. Introduce context and motivation. Ask
easy questions first.
• Ask why questions about decisions already made, about pre-established 'solutions' or any
other questionable aspect without appearing to offend.
• In view of the goal of acquiring as much useful information as possible from the
interviewee, there are some types of questions to be banished:
a. Avoid opinionated or biased questions, in which you express your own opinion or bias on
an issue.
One big question still remains: how should the structured parts of an interview actually be
structured? As we will see in Part II of the book, a model-driven approach may provide
an answer to this question. When the target of the elicitation process is a comprehensive,
multifaceted model of the system-as-is and the system-to-be, we can structure our interviews
according to the structure of the underlying meta-model; that is, the model in terms of which
the system model has to be built.
• In the case of passive observation, the requirements engineer does not interfere with the
people involved in the task. He or she is just watching from outside and recording what
is going on through notes, video cameras etc. As in data collection, these records must
then be sorted out and interpreted correctly.
a. Protocol analysis is a particular case of passive observation where a subject is
performing a task and concurrently explaining it.
b. Ethnographic studies are another particular case of passive observation where the
requirements engineer tries, over long periods, to discover emergent properties of the
social group involved in the observed process (Hughes et al., 1995). The observation
does not only refer to task performance but also to attitudes of task participants, their
reactions in specific situations, their gestures, conversations, jokes etc.
• In the case of active observation, the requirements engineer gets involved in the task,
sometimes to the point where he or she becomes a member of the work team.
The main strength of observation techniques is their ability to reveal tacit knowledge that
would not emerge through other techniques. (The tacit knowledge problem was discussed in
Section 2.1.) There has been limited experience substantiating this argument, notably in the
air traffic control domain. Ethnography-based observation was applied there to analyse how
controllers handle paper strips representing flight plans. The observation revealed an implicit
mental model of air traffic that an automated version of the system needed to preserve (Bentley
et al., 1992). More generally, the anthropological roots of ethnographic techniques make them
especially suited to complex organizational systems where tacit, culture-specific features need
to be discovered and taken into account. Another obvious strength of such techniques is their
contextualization of the acquired information.
However, observation-based techniques have serious limitations. First of all, they are costly
to deploy. To reach meaningful conclusions, observation must take place over significant
periods, at different times and under different workload conditions. Even so, the conclusions
can be inaccurate, as people tend to behave differently when they are being observed. The
observer must be accepted by the group of observed people, which may be difficult and require
extra time. Analysing records to infer emerging features may also be quite hard. Pointing out
relevant features from a mass of irrelevant details may be far from trivial and subject to
interpretation errors. Last but not least, observation-based techniques are by essence oriented
towards the understanding of how the system-as-is is working. They are weaker at pointing
out problems and opportunities to be addressed by the system-to-be.
Some of the guidelines for interviewing people apply here as well, in particular:
Looking at tricky ways of doing things may also result in discovering problems that the working
person is trying to overcome.
• In structured group sessions, the role of each participant is clearly defined, for example
leader, moderator, reporter, user, manager or developer. Each participant has to contribute
to the joint elaboration of requirements according to his or her specific role and viewpoint.
Such elaboration is generally focused on high-level features of the target product. Group
synergies are expected to emerge at some point. Techniques such as focus groups, JAD
(Joint Application Development) or QFD (Quality Function Deployment) are variants
of this approach that differ by the definition of the roles and document templates
used to support and document the joint elaboration process (Wood & Silver, 1995;
Macaulay, 1996).
• In unstructured group sessions, also called brainstorming sessions, the respective roles
of participants are less clearly established:
a. In the first stage, each participant must spontaneously generate as many ideas as
possible to improve a task or address a recognized problem. Idea generation must be
free from prejudice, censorship or criticism by others.
b. In the second stage, the participants need jointly to evaluate each idea with respect to
agreed criteria such as effectiveness, feasibility and cost, in order to prune out some of
the ideas and prioritize the others according to these criteria (Robertson & Robertson,
1999).
Group sessions have several benefits. Their less formal style of interaction can reveal aspects
of the system-as-is or issues about the system-to-be that might remain hidden under formal
interactions during interviews. Synergies in structured groups may result in better and much
easier resolution of conflicting viewpoints. Freedom of thought in brainstorming sessions may
result in more inventive ways of addressing the problems recognized. A broad range of ideas
may also be rapidly collected.
Group-based techniques raise problems and difficulties as well. The composition of the
group is critical. Key actors need to be involved. Such people in general are very busy
and may be unable to spend significant time in successive workshops. The leader must
have a high profile, both technically and in terms of communication skills. There are risks
associated with group dynamics that may result in biased, inadequate or incomplete information
being elicited - in particular, dominance by some individuals and difficulty for others in
communicating. A lack of focus and structure in sessions may result in a paucity of concrete
results and a waste of time. Last but not least, more technical issues are likely to be addressed
only superficially in view of the time allotted and the average level of expertise of the group
in such issues.
2.4 Conclusion

Getting the right system-to-be is critically dependent on domain understanding and require-
ments elicitation. The more support that can be provided for these intertwined activities, the
better.
One single technique does not do the job. Each was seen to have strengths and limitations.
A combination of techniques based on their respective strengths is therefore needed to get
a complete, adequate and accurate picture. Which combination to consider may depend on
the organization, the domain and the specific project. In any case we should use a mix of
artefact-driven and stakeholder-driven techniques in view of their complementarity.
Some reported examples of such combinations include the following:
• RAD (Rapid Application Development) combines JAD group sessions, where the reporter
role is played by the software development team, evolutionary prototyping and code-
generation tools (Wood & Silver, 1995).
mechanisms may include specialization with single or multiple inheritance (Reubenstein &
Waters, 1991), traversal of a specialization hierarchy of domains (Sutcliffe & Maiden, 1998);
or structural and semantic matching based on analogical reasoning techniques (Maiden &
Sutcliffe, 1993; Massonet & van Lamsweerde, 1997). Knowledge reuse is closely related to
analogical reasoning, an area studied extensively in artificial intelligence (Prieditis, 1988;
Hall, 1989).
Gause and Weinberg provide comprehensive coverage of issues related to
stakeholder-based elicitation techniques (Gause & Weinberg, 1989).
Principles and guidelines for effective interviews are discussed extensively in textbooks
on user-centred system analysis (Beyer & Holtzblatt, 1998; Whitten & Bentley, 1998) and
knowledge acquisition (Carlisle Scott et al., 1991; Hart, 1992).
Observation-based approaches to task understanding for requirements elicitation
are discussed in greater detail in Goguen and Linde (1993), Goguen and Jirotka
(1994), Hughes et al. (1995) and Kotonya and Sommerville (1997).
Macaulay provides a thorough coverage of requirements elicitation from group ses-
sions (Macaulay, 1996), including focus groups, workshops and approaches such as Joint
Application Design (JAD), Quality Function Deployment (QFD) and Cooperative Require-
ments Capture (CRC). Guidelines for effective brainstorming are proposed in Robertson
and Robertson (1999). A detailed account of the process of designing and running
effective workshops for requirements elicitation can be found in Gottesdiener (2002).
The ACRE framework is intended to assist requirements engineers in the selection of
the most appropriate combinations of elicitation techniques (Maiden & Rugg, 1996). The
selection there is based on a series of questions driven by a set of facets associated with
the strengths and weaknesses of each technique.
The techniques discussed in the previous chapter help us identify stakeholder
needs together with alternative ways of addressing these in the system-to-be.
Following the spiral model of the RE process introduced in Chapter 1 (see
Figure 1.6), we now need to evaluate the elicited requirements and assumptions on
several grounds:
• Some of them can be inconsistent with each other, especially in cases where they
originate from multiple stakeholders having their own focus and concerns. We
need to detect and resolve such inconsistencies. Conflicting viewpoints must be
managed in order to reach a compromise agreed by all parties.
• Some requirements or assumptions can be overexposed to risks, in particular
safety hazards, security threats or development risks. We need to analyse such
risks carefully and, when they are likely and critical, overcome or mitigate them
through more realistic and robust versions of requirements or assumptions.
• The alternative options we may have identified must be compared in order to
select the 'best' options for our system. As introduced in Chapter 1, alternative
options may arise from different ways of satisfying the same objective or from
different responsibility assignments in which more or less functionality is auto-
mated. They may also arise from different ways of resolving conflicts or managing
risks. Alternative options should be evaluated in terms of their contribution to
non-functional requirements and their reduction of risks and conflicts.
• In the selected alternatives, it might not be possible to implement all the requirements
in the first place, in view of development constraints such as budgets,
project phasing and the like. We need to prioritize requirements in such cases.
• Terminology clash. The same concept is given different names in different statements.
For example, one statement states some condition for 'participating' in a meeting whereas
another statement states an apparently similar or related condition for 'attending' a meeting.
• Designation clash. The same name designates different concepts in different statements.
For example, one stakeholder interprets 'meeting participation' as full participation until
the meeting ends, whereas another interprets it as partial participation.
• Structure clash. The same concept is given different structures in different statements. For
example, one statement speaks of a participant's excluded dates as 'a set of time points',
whereas another speaks of it as 'a set of time intervals'.
• Strong conflict. There are statements that cannot be satisfied when taken together; their
logical conjunction evaluates to false in all circumstances. This amounts to classical
inconsistency in logic. In our meeting scheduler, there would be a strong conflict
between one statement stating that 'the constraints of a participant may not be disclosed
to anyone else' and another stating that 'the meeting initiator should know the participants'
constraints'. (Those statements might originate from stakeholders having the participant's
and initiator's viewpoint, respectively.)
• Weak conflict or divergence. There are statements that are not satisfiable together
under some condition. This condition, called a boundary condition, captures a particular
combination of circumstances that makes the statements strongly conflicting when it
becomes true. The boundary condition must be feasible; that is, it can be made true.
• Multiple stakeholders have different objectives and priorities. Such objectives are some-
times incompatible. Conflicts between requirements should therefore be analysed in terms
of differences between their underlying objectives. Once such differences are resolved,
the resolution is to be propagated down to the requirements level (Robinson, 1989).
• In addition to incompatibilities between multiple viewpoints, there are inherent incompat-
ibilities between non-functional requirements, or between functional and non-functional
requirements. For example:
- Password-based authentication for increased security often conflicts with useability
requirements.
Confidentiality and accountability requirements tend to conflict.
- Performance requirements about system throughput may conflict with safety require-
ments.
Increasing system maintainability may result in increasing development costs.
Conflict resolution often includes some form of negotiation. The resolution process may then
proceed iteratively as follows (Boehm et al., 1995):
• Stakeholders are identified together with their personal objectives with regard to the
system-to-be (these are called win conditions).
• Differences between these win conditions are captured together with their associated
risks and uncertainties.
• The differences are reconciled through negotiation to reach a mutually agreed set of
objectives, constraints and alternatives to be considered at the next iteration.
Documenting conflicts Once they have been detected, conflicts should be documented for
later resolution. Documentation tools can record conflicts and point out statements involved
in multiple conflicts, most conflicting statements, non-conflicting statements, overlapping state-
ments and so on. This may be useful for impact analysis.
A standard documentation technique consists of building an interaction matrix (Kotonya &
Sommerville, 1997). Each row and column in the matrix is associated with a single statement. The
matrix element S_ij has the value 1 if statement S_i conflicts with statement S_j, 0 if these statements
are distinct and do not overlap, and 1000 (say) if they overlap without conflicting.
A simple spreadsheet can then count the number of non-conflicting overlaps and the
number of conflicts involving a single statement: total the corresponding row and take the
quotient and remainder of the integer division of this total by 1000, respectively.
If we now consider all statements together, the overall number of non-conflicting overlaps and
conflicts is obtained by a similar division on the sum across the bottom total line. Table 3.1
shows an interaction matrix. The total number of non-conflicting overlaps and conflicts is given
by the quotient and remainder of the integer division of 2006 by 1000, respectively; that is, 2
and 6.
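Under this encoding, the row totals can be decoded programmatically. A minimal Python sketch; the statement names and matrix values are illustrative:

# Interaction matrix entries: 1 for a conflict, 1000 for a
# non-conflicting overlap, 0 for distinct non-overlapping statements.
matrix = {
    ('S1', 'S2'): 1000,
    ('S1', 'S3'): 1,
    ('S2', 'S3'): 1,
}

def counts_for(statement, matrix):
    total = sum(v for pair, v in matrix.items() if statement in pair)
    # Quotient = non-conflicting overlaps; remainder = conflicts.
    return total // 1000, total % 1000

overlaps, conflicts = counts_for('S1', matrix)   # (1, 1)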
A more scaleable technique can be used when the statements are recorded as objectives,
requirements and assumptions in a requirements database. Conflict links are then created
between conflicting items, and the previous type of analysis is performed through a standard
database query engine. This kind of use of a requirements database will be detailed in
Sections 5.2 and 16.1.
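As an illustration of this database-backed variant, the following sketch uses SQLite through Python's standard sqlite3 module; the table and column names are assumptions, not a prescribed schema:

import sqlite3

# Hypothetical schema: statements and conflict links recorded in a
# requirements database.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE statement (id TEXT PRIMARY KEY, text TEXT);
    CREATE TABLE conflict (s1 TEXT, s2 TEXT);
""")
db.executemany("INSERT INTO conflict VALUES (?, ?)",
               [("S1", "S3"), ("S2", "S3")])

# Statements involved in the most conflicts, via a standard query.
rows = db.execute("""
    SELECT s, COUNT(*) AS n FROM
      (SELECT s1 AS s FROM conflict UNION ALL SELECT s2 FROM conflict)
    GROUP BY s ORDER BY n DESC
""").fetchall()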
Yet another approach consists of using specific notations for recording multiple stakeholder
viewpoints and inter-viewpoint consistency rules. Conflicts are then documented by marking
the rules being violated (Nuseibeh et al., 1994).
• Not too late - that is, before software development starts - otherwise we could develop
anything from inconsistent statements.
• Not too soon, to allow for further elicitation of useful information within individual
viewpoints in spite of their inconsistency with others (Hunter & Nuseibeh, 1998).
Using elicitation techniques We may use the techniques reviewed in Chapter 2 to elicit
alternative conflict resolutions with stakeholders - notably, stakeholder-based techniques such
as interviews and group sessions (see Section 2.3). The target resolutions should capture a
reasonable compromise for all parties involved in the conflict. One extreme alternative to
consider in the resolution space is the appeal to some appropriate authority.
Using resolution tactics We may also produce resolutions systematically by use of operators
that encode conflict-resolution tactics. Some operators transform the conflicting statements or
the objects involved in such statements. Other operators introduce new requirements (Robinson
& Volkov, 1997; van Lamsweerde et al., 1998). Let us give a sample of such operators:
• Avoid boundary condition. Ensure in some way or another that the boundary condition
for conflict can never become true. For example, consider again the divergence between
the statements 'A borrower should return a borrowed book copy within two weeks' and 'A
borrower should keep a borrowed book copy as long as he or she needs it'. The boundary
condition for strong conflict was seen to be 'Needing the borrowed book copy for more than
two weeks'. Avoiding this boundary condition might be achieved by keeping some copies
of popular books permanently unavailable for loan; such copies are available for direct use
in the library at any time when needed (this tactic is often implemented in university
libraries).
• Restore conflicting statements. Ensure in some way or another that the conflicting
statements become jointly satisfiable again reasonably soon after the boundary condition
has occurred. This might be achieved in the previous example by forcing borrowers to
return book copies even if they are still needed and then allowing them to borrow the
required book copies again soon after.
• Weaken conflicting statements. Make one or several of the conflicting statements less
restrictive so that the conflict no longer exists. This tactic is frequently used. In general, the
statements being weakened are those that have lower priority. For example, the statement
'A borrower should return a borrowed book copy within two weeks' might be weakened into
'A borrower should return a borrowed book copy within two weeks unless he or she gets explicit
permission to keep it longer for some good reason'. The divergence would then disappear.
• Drop lower-priority statements. This is an extreme case of the previous tactic where one
or several lower-priority statements involved in the conflict are weakened to the point
that they are universally true.
• Specialize conflict source or target. Identify the source (or target) objects involved in
conflicting statements and specialize these so that the conflict disappears. For example,
let us come back to the conflict between the statements 'Allow users to be informed about
the loan status of books' and 'Do not allow students to know which user has borrowed what'. This
conflict can be resolved by specializing the conflict source object 'user' into 'staff user' so
that the first statement is transformed into the conflict-free version 'Allow staff users to be
informed about the loan status of books'. Alternatively, the conflict target object 'loan' can be
specialized into an anonymized version in which status information no longer covers the
identity of borrowers.
Requirements engineers and stakeholders tend to make unrealistic assumptions - the environment and the
software will behave as expected, the development project will run as planned. However,
moving from the system-as-is to the system-to-be inevitably raises several types of risk. If risks
go unrecognized or underestimated, the requirements will be incomplete or inadequate as they
will not take such risks into account.
This section presents principles and techniques for early risk management at RE
time. Section 3.2.1 defines the notion of risk and introduces the various types of risk
found in an RE project. Section 3.2.2 reviews techniques that may help us along the
various steps of risk management; namely, risk identification, risk assessment and risk
control. Risk documentation is briefly discussed in Section 3.2.3. A systematic method
integrating risk identification, assessment and control in the RE process is presented in
Section 3.2.4.
Risk identification
An obvious prerequisite to risk control is the awareness of possible risks impacting negatively
on the objectives of our project. We can use several techniques for this.
Risk checklists We may consider checklists of common risks for instantiation to the project's
specifics. Such checklists can be built from risk categories that negatively impact on corre-
sponding requirements categories introduced in Section 1.1.5:
The product- and process-related risk categories listed here cover Boehm's list of top ten risks
(Boehm, 1989). In a similar spirit, the Software Engineering Institute has elaborated a process-
oriented risk taxonomy together with a comprehensive list of questions to help in spotting
project-specific risks along this taxonomy (Carr et al., 1993). Note that poor risk management
is itself the most important risk, as it may give rise to all the other types of risk.
Figure 3.3 Portion of fault tree for train door control system
Risk trees The identification of risks through component inspection can be made more
systematic by the use of risk trees. Such trees organize failures, causes and consequences along
causal links. They are sometimes called fault trees when the failures relate to safety hazards
(Leveson, 1995) or threat trees when they relate to security threats (Amoroso, 1994). Figure 3.3
shows a simple fault tree for our train control system.
Risk trees have two kinds of node. Failure nodes capture independent failure events or
conditions. They are represented by circles or rectangles depending on whether they are basic
or decomposed further into causes. Logical nodes are AND or OR nodes that capture causal
links. In the case of an AND node, the causing child nodes must all occur for the parent node
to possibly occur as a consequence. In the case of an OR node, only one of them needs to
occur.
Such trees may be used to capture process-related risks as well. In the most general case,
they are directed acyclic graphs where one child failure node may be causally linked to multiple
parent failure nodes.
To identify failure nodes in a risk tree, we may use risk checklists and guidewords (Jaffe
et al., 1991; Leveson, 1995; Pfleeger, 2001). Guidewords capture patterns of failure through
specific words such as:
• MORE: 'there are more things than expected'; LESS: 'there are fewer things than expected'
• BEFORE: 'something occurs earlier than expected'; AFTER: 'something occurs later than expected'
Once a risk tree has been built, we can enumerate all minimal AND combinations of leaf events
or conditions, each of which is sufficient for causing the root failure node. The set of such
combinations is called the cut set of the risk tree. This set is obtained by taking all leaf nodes
of another tree, called the cut-set tree, derived top down from the risk tree as follows:
• The top node of the cut-set tree is the top logical node of the risk tree.
• If the current node in the cut-set tree is an OR node, it is expanded in as many child
nodes as there are alternative child nodes in the risk tree; if it is an AND node, it is
expanded into one single aggregation node composed of all conjoined child nodes in
the risk tree.
• The process terminates when the child nodes obtained are all basic events or conditions
or aggregations of basic events or conditions.
Figure 3.4 shows a fault tree together with its associated cut-set tree. The fault tree
corresponds to the one given in Figure 3.3, where all leaves are assumed to represent basic
conditions. The cut set is given by the set of leaf nodes of the cut-set tree; that is, the set
{{TM, WR}, {TM, WA}, {TM, WS}, {TM, WI}, {TM, DAF}, {TM, SF}, {TM, PFDO}}.
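The cut-set derivation described above can be sketched recursively. In the following Python fragment, a risk-tree node is represented either as a basic event name or as an (operator, children) pair; this representation is an assumption for illustration only:

from itertools import product

# A risk-tree node is either a basic event (a string) or a pair
# ('AND' | 'OR', list_of_children).
def cut_set(node):
    if isinstance(node, str):
        return [{node}]                 # basic event: one singleton cut
    op, children = node
    child_cuts = [cut_set(child) for child in children]
    if op == 'OR':
        # Any single child's cuts suffice to cause the parent failure.
        return [cut for cuts in child_cuts for cut in cuts]
    # 'AND': one cut from every child must occur together.
    return [set().union(*combo) for combo in product(*child_cuts)]

# Tree shaped like Figure 3.4: TM together with any one alternative cause.
tree = ('AND', ['TM', ('OR', ['WR', 'WA', 'WS', 'WI', 'DAF', 'SF', 'PFDO'])])
cuts = cut_set(tree)   # [{'TM','WR'}, {'TM','WA'}, ..., {'TM','PFDO'}]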
Using elicitation techniques The elicitation techniques reviewed in Chapter 2 can also be
used to identify system-specific risks:
• Scenarios may be used to raise WHAT IF questions and point out failure situations. For
some given scenario, we may systematically explore potential deviations, for example
expected interactions that do not occur, that occur too late, that occur under different
conditions and so forth.
• Knowledge reuse techniques may be applied to risks previously experienced with similar
systems or within the same organization.
• Group sessions may be specifically dedicated to the identification of project-specific risks.
Risk assessment
The identified risks should be assessed in terms of the likelihood of their occurrence and
the severity of their possible consequences (see Figure 3.2). We need to do this in order to
prioritize the risks and determine an appropriate response for likely risks that have severe
consequences.
Qualitative assessment In general it is hard to estimate the likelihood and severity of a risk
in a precise way. Risk levels based on qualitative estimations are therefore often used. Such
estimations typically range over qualitative scales, for example:
• From 'very unlikely' to 'very likely' for the likelihood of a risk or consequence.
• From 'low' to 'catastrophic' for the severity of a consequence.
We may then elaborate a risk assessment table for each identified risk to support the subsequent
risk control step. For example, the result of assessing the risk 'Doors open while train is moving'
in Figure 3.3 might be captured by the assessment table in Table 3.2.
Risk assessment tables provide the basis for a rough prioritization of risks. Having defined
one such table for every identified risk, we may compare them and give higher consideration
to risks that have higher severity levels.
Table 3.2 Severity of consequences by risk likelihood levels for 'Doors open while train moving'
This technique is quite easy to use, but its conclusions are limited. The severity values are
coarse-grained and may be subjective; the likelihood of consequences is not taken into account.
Quantitative assessment Alternatively, we may use numerical scales for risk estimation and
comparison:
• The likelihood of a risk and the likelihood of a consequence are estimated in a discrete
range of probability values, such as (0.1, 0.2, ..., 0.9, 1.0), or in a discrete range of
probability intervals, such as (0-0.3, 0.3-0.5, 0.5-0.7, 0.7-1.0).
• The severity of a consequence is estimated on a scale of 1 to 10, say.
We may then estimate the risk exposure for a risk r with independent consequences c as
follows:
Exp(r) = Σ_c L(c) x S(c),
where L(c) and S(c) are the likelihood and severity of consequence c, respectively. We may then
compare the exposures of the various identified risks, possibly weighted by their likelihood of
occurrence, and give higher consideration to risks with higher exposure.
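The exposure computation and comparison are straightforward to script. A minimal sketch, in which the likelihood and severity scores are illustrative assumptions:

# Exposure of a risk with independent consequences:
# Exp(r) = sum over consequences c of L(c) * S(c).
consequences = {
    # consequence: (likelihood, severity on a 1-10 scale) - illustrative
    'serious injuries': (0.4, 9),
    'service disruption': (0.7, 4),
}

def exposure(cons):
    return sum(l * s for l, s in cons.values())

def weighted_exposure(risk_likelihood, cons):
    # Optionally weight the exposure by the risk's own likelihood.
    return risk_likelihood * exposure(cons)

print(exposure(consequences))   # 0.4*9 + 0.7*4 = 6.4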
Qualitative and quantitative scales share a common weakness: the scores used for risk
assessment and comparison may be inaccurate, because they are based on subjective values.
Such values cannot be measured and validated in terms of physical phenomena in the
environment. What does it really mean to say that the risk 'Doors open while train moving' has
a likelihood of 0.3, that the likelihood of the consequence 'Serious injuries' is 0.4, or that
the severity of the consequence 'no. of airport passengers decreased' is 6 on a 1-10 scale? For
comparison purposes, however, the problem is attenuated as long as the scores are assigned
consistently from one risk being compared to the other.
Still, the question remains of where such scores are coming from. The elicitation techniques
reviewed in Chapter 2 might be used to obtain them from domain experts. A historical database
of accumulated measurements might be helpful as well. Even though the accuracy of score
values may remain questionable, risk-based decision making based on such expert estimates
will be much more effective than decision making without any basis.
Risk control
Once we have identified and assessed product- and process-related risks, we need to address
these in some way or another (see Figure 3.2). High-exposure risks must be reduced through
countermeasures. This reduction should be cost-effective.
Countermeasures yield new requirements or modified versions of elicited requirements.
For product-related risks, the effectiveness of countermeasures should ideally be monitored
at system runtime. If alternative countermeasures are anticipated at RE time, the system can
then shift from one countermeasure to the other at runtime in case the currently selected one
appears ineffective (see Section 6.5).
Exploring countermeasures
We may identify countermeasures through several means.
Using elicitation techniques The techniques in Chapter 2 can be applied for eliciting
countermeasures as well; in particular, stakeholder-based techniques such as interviews or
group sessions.
• Reduce risk likelihood. Introduce new requirements to ensure that the likelihood
of occurrence of the risk is significantly reduced. For example, let us assume that
train drivers were assigned the responsibility of executing the acceleration commands
generated by the software controller. Consider the risk of drivers failing to do so, for
example because they fall asleep or are unduly distracted by some other activity. The
likelihood of occurrence of this risk might be reduced by requiring prompts for driver
reaction to be generated regularly by the software.
• Avoid risk. Introduce new requirements ensuring that this specific risk may never occur.
This is a boundary case of the previous strategy, where the likelihood is reduced to
zero. For example, the risk of passengers forcing doors to open might be avoided by
requiring that (a) the doors actuator reacts to the software controller exclusively, and
(b) the software controller checks the train's speed before responding to any opening
request from passengers.
• Reduce consequence likelihood. Introduce new requirements ensuring that the likelihood
of occurrence of this consequence of the risk is significantly reduced. For example, the
likelihood of severe injuries or loss of life in the case of unexpected door opening might
be reduced by requiring that the software controller generates an alarm within train cars
in the case of door opening during train moves.
• Avoid risk consequence. Introduce new requirements prescribing that a severe conse-
quence of this tolerated risk may never occur. For example, new requirements might
be introduced to ensure specifically that train collisions cannot occur in case the risk of
inaccurate train position or speed information does occur.
• Mitigate risk consequence. Introduce new requirements to reduce the severity of
consequences of this tolerated risk. For example, consider the risk of important meeting
participants having a last-minute impediment. The absence of such participants can
be mitigated in the system by integrating new facilities such as videoconferencing,
appointment of proxies and the like.
The risk-reduction leverage (RRL) of a countermeasure cm for a risk r relates the resulting
decrease in risk exposure to the countermeasure's cost:

RRL(r, cm) = (Exp(r) - Exp(r, cm)) / Cost(cm)

where Exp(r, cm) denotes the new risk exposure if the countermeasure cm is selected. The
countermeasures with the highest risk-reduction leverages should then normally be selected.
The comparison of countermeasures can be refined by considering:
• Cumulative countermeasures in the preceding definition of RRL, to account for the fact
that the same risk may be reduced by multiple countermeasures.
• Cumulative RRLs, to account for the fact that the same countermeasure may reduce
multiple risks.
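Under the RRL definition above, the comparison of countermeasures can be sketched in a few lines of Python; the exposure and cost figures and the countermeasure names below are hypothetical illustrations.

    # Hypothetical exposures/costs for countermeasures addressing one risk.
    EXP_BEFORE = 12.0  # Exp(r): exposure without any countermeasure

    countermeasures = {
        'regular driver prompts': {'exp_after': 4.0, 'cost': 2.0},
        'software-only door actuation': {'exp_after': 1.0, 'cost': 5.0},
    }

    def rrl(cm):
        """Risk-reduction leverage: exposure decrease per unit of cost."""
        return (EXP_BEFORE - cm['exp_after']) / cm['cost']

    for name, cm in countermeasures.items():
        print(f"{name}: RRL = {rrl(cm):.2f}")
    print('preferred:', max(countermeasures, key=lambda n: rrl(countermeasures[n])))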
This documentation can be organized around risk trees. We will come back to this in Chapter 9.
Risks are thereby prioritized by their critical impact on all objectives. The last column of the
impact matrix yields the overall loss of proportion of attainment of each objective.

Table 3.3 A DDP risk-consequence table for the library management case study. Internal cells give the severity of
consequences measured in proportion of objective being lost if the risk occurs

In the effectiveness matrix, the combined reduction of each risk is obtained by multiplying the
individual reductions down the corresponding column, according to the following formula:

CombinedReduction(r) = 1 - PROD_cm (1 - Reduction(cm, r))
The terms 1 - Reduction(cm, r) in the above product represent reduction rates of the likelihood
that the risk r still occurs in spite of the corresponding countermeasure cm. Risks are thereby
compared by their global reduction through the combined application of countermeasures. The
last column of the effectiveness matrix yields the overall single effect of each countermeasure.
It is obtained by multiplications across the corresponding row, according to the following
formula:

Effect(cm) = SUM_r Criticality(r) x Reduction(cm, r)
The most globally effective countermeasures are thereby highlighted. In this calculation of
single effect, risk criticality is determined in terms of the risk's initial likelihood, as if no other
countermeasure were applied that could reduce this likelihood. DDP offers a more refined
option for overall effect of a countermeasure based on risks with their likelihoods as already
reduced by whichever of the other countermeasures have already been selected. It also allows
for the possibility that a countermeasure, while reducing some risks, increases others. Those
more refined calculations are detailed in Feather and Cornford, 2003.
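As an illustration of the combined-reduction calculation, the following Python sketch computes the residual likelihood of each risk under a selected set of countermeasures; the reduction fractions are hypothetical, not those of Table 3.4.

    from math import prod

    # reduction[cm][r]: fractional reduction of risk r's likelihood by cm (hypothetical)
    reduction = {
        'cm1': {'r1': 0.5, 'r2': 0.0},
        'cm2': {'r1': 0.3, 'r2': 0.6},
    }
    initial_likelihood = {'r1': 0.4, 'r2': 0.2}

    def residual_likelihood(r, selected):
        """Initial likelihood times the product of the terms (1 - Reduction(cm, r))."""
        return initial_likelihood[r] * prod(1 - reduction[cm][r] for cm in selected)

    for r in initial_likelihood:
        print(r, residual_likelihood(r, ['cm1', 'cm2']))  # r1: 0.14, r2: 0.08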
Table 3.4 shows an effectiveness matrix for the risks in Table 3.3.
Table 3.4 A DDP risk-countermeasure table for the library management case study. Internal cells give the fractional
reduction of risk likelihood
Step 3: Determine an optimal balance between risk reduction and cost of countermeasure
Each countermeasure has a benefit in terms of risk reduction, but also some cost associated
with it (as introduced before). We need to estimate costs with domain experts. The DDP tool
may then visualize the effectiveness of each countermeasure together with its cost. A risk
balance chart shows the residual impact of each risk on all objectives if the corresponding
countermeasure is selected. We can then explore optimal combinations of countermeasures
that achieve risk balance with respect to cost constraints. In general it may be worth considering
an 'optimal' combination of countermeasures to select. In the simple example of Table 3.4,
there would be 16 possible combinations to explore (ranging from none to all four). DDP has
a simulated annealing optimization procedure to find near-optimal selections. The optimality
criterion can be set by the user, for example 'maximize the total expected attainment of
objectives under some given cost threshold' or 'minimize the total cost for remaining above
some given level of attainment'.
The DDP approach provides a good illustration of the kind of technology supporting
the risk Identify-Assess-Control cycle during requirements evaluation. It covers most of the
risk management concepts discussed in this section; links risks explicitly to objectives and
requirements; exhibits typical quantitative reasoning schemes that are available for requirements
evaluation; and has convenient tool support for carrying out such reasoning and for visualizing
the results.
Risk management is an essential aspect of RE. The goal-based models in Part II of the book
will allow for more in-depth coverage of this topic in Chapters 9, 16 and 18.
3.3 Evaluating alternative options for decision making

For each type of alternative option, we need to make decisions based on specific evaluation
criteria.
The main criterion for comparing options is their respective degree of contribution to
the various non-functional requirements that were elicited (see Section 1.1.5, Figure 1.5) or,
equivalently, their degree of reduction of the risks of not meeting such requirements. Other
criteria may need to be considered, such as the degree of resolution of identified conflicts.
Once the evaluation criteria have been set up, we need to compare the various options in
order to select the most preferred one.
• The NFR framework assumes a structured model like the ones we will introduce in Part II
of the book. A goal model captures, among other things, alternative ways of refining
system objectives into sub-objectives and requirements. The resulting AND/OR graph can
also represent degrees of satisfaction of objectives and degrees of contribution, positive
or negative, of lower-level objectives to higher-level ones.
• Rules for propagating the qualitative labels along the contribution paths in this graph
allow us to qualitatively evaluate the degree to which the higher-level goals are satisfied
in each alternative option. The option where the critical higher-level objectives are best
satisfied is then selected.
We can thereby compare options by their overall score with respect to all criteria. The best-score
option may then be considered for selection.
Table 3.6 shows a weighted matrix for two alternative ways of getting a participant's
constraints in our meeting scheduling system. Three non-functional requirements, including
Minimal inconvenience and Reliable response, are used as evaluation criteria (see Figure 1.5
in Chapter 1). In Table 3.6, the e-agenda alternative is estimated to satisfy the
Minimal inconvenience requirement perfectly, whereas it appears fairly poor with respect to the
Reliable response requirement; it is felt that e-agendas may not be perfectly up to date in 70%
of the cases. The option of asking participants' constraints through e-mail is seen to emerge
as the preferred one according to such estimates. As in the previous section, an objective
comparative conclusion is reached from subjective estimates of weights and contributions.
The latter may be tuned in a spreadsheet-like manner.
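Such a weighted-matrix evaluation is easy to mechanize. The Python sketch below uses hypothetical weights and contribution scores in the spirit of Table 3.6; it is not the actual table.

    # Hypothetical criteria weights and option scores per criterion.
    weights = {'Minimal inconvenience': 0.5, 'Reliable response': 0.3, 'Low cost': 0.2}
    options = {
        'access e-agendas': {'Minimal inconvenience': 1.0, 'Reliable response': 0.3, 'Low cost': 0.8},
        'ask by e-mail':    {'Minimal inconvenience': 0.7, 'Reliable response': 0.9, 'Low cost': 0.9},
    }

    def weighted_score(scores):
        """Sum of weight x estimated contribution over all evaluation criteria."""
        return sum(weights[c] * scores[c] for c in weights)

    for name, scores in options.items():
        print(f"{name}: {weighted_score(scores):.2f}")  # higher score = preferred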
Table 3.6 Weighted matrix for evaluating alternative options in the meeting scheduler
3.4 Requirements prioritization

Prioritizing requirements becomes necessary in situations such as the following:
• The development of all the features desired by stakeholders may exceed resource
limitations in terms of available budget, manpower or time to delivery.
• The development may need to be planned by successive increments and releases, and
replanned in the case of unexpected circumstances arising during development, such
as unanticipated delays, budget restrictions, personnel shortages or pressure on time to
deliver.
• Priority information may be used in conflict management to weaken or even drop
lower-priority requirements (see Section 3.1.3).
In such cases, we need to decide which requirements are mandatory, which are superfluous (at
least in the first project phase) and which would be nice to have if resource constraints allow.
Any prioritization process should rely on the following premises:
a. Priorities should be ordered by levels, each level containing requirements of equal priority.
For easier prioritization, the number of such levels should be kept small.
b. The characterization of levels should be qualitative rather than quantitative, and relative
rather than absolute, e.g. 'higher than' rather than 'high', 'medium' or 'low'.
c. The requirements being compared should be comparable. They should refer to the same
level of granularity and abstraction.
d. The requirements being compared should be independent, or at least not mutually depen-
dent, so that one requirement can be kept while the other is discarded or deferred.
e. The classification of requirements by priority level should be negotiated with stakeholders
so that everyone agrees on it.
Premises (c) and (d) are satisfied when a goal-based model is used to support the prioritization
process. We can then select nodes with a common parent node in the goal refinement graph,
or at least at the same refinement level, as candidate items for comparison (see Chapters 7
and 8).
Prioritization techniques
A simple, straightforward way of setting priorities among requirements is to gather key players
in the decision process and ask them to rank requirements under the above constraints.
The result of this ranking might be highly subjective and produce arbitrary, inadequate
or inconsistent results in some cases. The value-cost comparison method provides a more
systematic approach for requirements prioritization (Karlsson & Ryan, 1997). This method
meets premises (a) to (c) and globally works as follows:
• We calculate the relative contribution of each requirement to the project's overall value.
• We calculate the relative contribution of each requirement to the project's overall cost.
• We plot the result on a value-cost diagram partitioned into subareas associated with
priority levels. In this diagram, the x axis represents cost percentage whereas the y axis
represents value percentage (see Figure 3.6).
To calculate the relative contribution of each requirement to the project's overall value and
cost, we use a standard technique in decision theory (Saaty, 1980). This technique, known as
Analytic Hierarchy Process (AHP), is applied twice - once for the case where the comparison
criterion is value, once for the case where it is cost.
Given the comparison criterion and a set of requirements R1, R2, ..., Rn contributing to
it, the AHP procedure determines in what proportion each requirement contributes to the
criterion. The procedure has two basic steps.

Step 1: Compare all requirement pairs with respect to the criterion A comparison matrix
is built whose cell for a pair (Ri, Rj) estimates how much more Ri contributes to the
criterion than Rj, on an ordered scale of the form {equally, slightly, strongly, very
strongly, extremely}; the cell for (Rj, Ri) takes the reciprocal estimate.
Figure 3.6 Value-cost requirements prioritization for the meeting scheduler: outcome of the AHP process
Step 2: Estimate how the criterion distributes among all requirements The criterion
distribution is given by the normalized principal eigenvector of the comparison matrix. Its
components are estimated by averaging over normalized columns as follows:
• Normalize the columns of the comparison matrix. Each element of the comparison matrix is
replaced by the result of dividing this element by the sum of the elements in its column.
• Average across rows. The estimated proportion in which Ri contributes to the criterion
is then obtained by taking the sum of the elements on the ith row of the normalized matrix,
divided by the number of elements in the row.
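The normalize-and-average estimation translates directly into a few lines of Python. The 3x3 comparison matrix below is a hypothetical reciprocal matrix, not the one of Table 3.7.

    # Hypothetical AHP comparison matrix: M[i][j] estimates how much more
    # requirement Ri contributes to the criterion than Rj (reciprocal matrix).
    M = [[1.0, 3.0, 5.0],
         [1/3, 1.0, 3.0],
         [1/5, 1/3, 1.0]]

    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    # Normalize each column, then average across each row.
    priorities = [sum(M[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    print([round(p, 2) for p in priorities])  # relative contributions, summing to ~1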
Table 3.7 shows a comparison matrix resulting from Step 1 applied to our meeting scheduler
case study. The criterion there is instantiated to the project's overall value. For example, the
requirement of determining a best possible schedule fitting the excluded/preferred dates of
invited participants is estimated to contribute very strongly more to the project's overall value
than the requirement of providing a meeting assistant that would help manage the meeting
agenda, minutes, attendance list and so on.
Table 3.8 shows the result of Step 2 applied to the comparison matrix in Table 3.7 (the
values were rounded to two significant digits). The last column appended to the normalized
matrix shows each requirement's relative contribution to the overall value of the project. For
example, the requirement of determining a best possible schedule is seen to account for 49% of
the project's overall value, whereas the requirement of providing a meeting assistant accounts
for 7% of it.

Table 3.7 AHP comparison matrix with relative values of requirements on the meeting scheduler
Table 3.8 AHP normalized matrix and relative contributions of requirements to the project's overall value
Table 3.9 AHP comparison matrix with relative costs of requirements on the meeting scheduler
Replaying now Step 1 and Step 2 of the AHP process for the case where the criterion
is requirements cost - that is, the cost for implementing the corresponding requirement - we
obtain Tables 3.9 and 3.10, respectively.
The resulting relative contributions to the project's value and cost may now be plotted on
a value-cost diagram partitioned into three priority levels, say. Figure 3.6 shows how the five
requirements on the meeting scheduler are prioritized accordingly. The requirements 'Produce
optimal dates' and 'Handle preferred locations' are seen to emerge at the higher-priority level,
the requirement 'Parameterize conflict resolution strategy' is of medium priority, whereas the
requirements 'Support multilingual communication' and 'Provide a meeting assistant' are relegated to
the lower-priority level.

Table 3.10 AHP normalized matrix and relative contributions of requirements to the project's overall cost
One difficulty with this prioritization technique is the potential for inconsistent estimations
in the comparison matrix built at Step 1 of the AHP process. For consistent comparison, the
pairwise requirements ordering must be transitive; that is, if R1 is estimated to contribute to
the criterion x more than R2 and R2 is estimated to contribute to it y more than R3, then R1
must contribute z more than R3, with x, y, z in the ordered set {slightly, strongly, very strongly,
extremely} and z at least as strong as x and y. The AHP process also provides means for assessing
consistency ratios and comparing them with acceptability thresholds (Saaty, 1980).
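Consistency ratios in Saaty's sense can be estimated mechanically as well. A self-contained Python sketch, reusing the same hypothetical matrix as above:

    # Estimate Saaty's consistency ratio CR for a hypothetical comparison matrix.
    M = [[1.0, 3.0, 5.0], [1/3, 1.0, 3.0], [1/5, 1/3, 1.0]]
    n = len(M)
    cols = [sum(row[j] for row in M) for j in range(n)]
    w = [sum(M[i][j] / cols[j] for j in range(n)) / n for i in range(n)]  # priorities

    # lambda_max: average of (M.w)_i / w_i; CI and CR per Saaty (1980).
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)          # consistency index
    RI = {3: 0.58, 4: 0.90, 5: 1.12}  # random indices for small n
    CR = CI / RI[n]                   # values below 0.1 are usually deemed acceptable
    print(round(CR, 3))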
3.5 Conclusion
The evaluation techniques in this chapter support the early identification of potential problems
with elicited material, the exploration of alternative options to address them, and the selection
of best options. To determine the relative value of the options being compared, the techniques
often involve some form of qualitative or quantitative assessment. An objective conclusion
is reached from subjective estimates of weighted contributions of options to evaluation
criteria.
The adequacy and accuracy of such estimates are critical. Their determination requires
judgement and experience. We need to obtain them from domain experts, and may therefore
use some of the elicitation techniques reviewed in Chapter 2 to get adequate and accurate
estimates. Such estimates should ideally be cross-checked by other stakeholders and validated
from empirical data. In any case, the outcome of the evaluation process should be discussed
with stakeholders to reach a common agreement.
One recurring issue raised by evaluation techniques is the identification and comparability
of the items to be evaluated. These items should be overlapping (in the case of conflict
analysis) and comparable (in the case of option evaluation and prioritization).
• The structure of the RD should make it easy to understand it, retrieve and analyze its
items, follow dependency links, trace items back to their rationale and make appropriate
changes.
This chapter reviews the wide range of techniques that we may use for requirements specifi-
cation and documentation, from informal to semi-formal to formal.
The semi-formal and formal techniques will provide a basis for the techniques detailed in
Parts II and III, respectively. The focus here is on the notations and constructs that we can use
in the specification process, whereas in the next parts of the book we will see how we can use
these to build and analyse useful models for RE.
'Full braking shall be activated by any train that receives an outdated acceleration command or that enters
a station block at a speed higher than X m.p.h. and to which the preceding train is closer than Y metres.'
This safety-critical requirement might be interpreted in two ways. In the case of a train entering
a station block too fast:
• The first interpretation requires full braking to be activated when an outdated command
is received or when the preceding train is too close.
• The second interpretation requires full braking only in the case where the preceding train
is too close.
Whatever the right interpretation might be, taking the wrong one is clearly harmful in this
example.
There are other frequent problems with poor use of natural language, notably confusion
between the 'and' and 'or' connectives. A frequent mistake arises in case analysis situations
where people write:

(F1) (Case1 -> Spec1) or (Case2 -> Spec2)

instead of:

(F2) (Case1 -> Spec1) and (Case2 -> Spec2)

Assuming that the two cases do not overlap and cover all possible cases, so that Case2 amounts
to not Case1, we can easily see that formulation (F1) does not require anything as it reduces
to universal truth. By standard manipulations in propositional logic, formulation (F1) reduces to

(not Case1 or Spec1) or (not Case2 or Spec2)

that is,

(not Case1 or Case1) or Spec1 or Spec2

which reduces to true. Similar manipulations show that formulation (F2) is what we want as it
amounts to:

(not Case1 or Spec1) and (Case1 or Spec2)

prescribing Spec1 in the first case and Spec2 in the second.
In addition to such problems with natural language, there are problems with unstructured
prose. Forward references and remorse are frequent. Specific information is hard to localize.
There is no guidance for organizing the requirements document. Last but not least, the absence
of formalization precludes any form of automated analysis.
A number of local rules on statement structure may reduce such problems:
• Never include more than one requirement, assumption or domain property in a single
sentence.
• Keep sentences short.
• Use 'shall' for prescriptive statements that are mandatory and 'should' for desirable ones.
• Avoid unnecessary jargon and acronyms.
• Use suggestive examples to clarify abstract statements.
• Use bulleted lists for explaining related items that detail a preceding statement.
• Annotate text with diagrams to express complex relationships among items.
• Introduce figures to provide visual overviews and emphasize key points.
• Use tables to collect related facts.
• Use equations to relate quantitative information.
• Avoid complex combinations of conditions with nested or ambiguously associated
conditions.
Decision tables may be used to specify such combinations precisely. The upper and lower parts
of a decision table are associated with atomic input and output conditions, respectively. The
upper part of columns is filled in with truth values (T or F) for the corresponding input
conditions; the filling is made systematic through binary decomposition
of groups of adjacent cells. The lower part of the table indicates which output conditions must
hold in the corresponding case. Cases are combined through conjunction down a column and
disjunction across columns.
In general, the table can be reduced through two kinds of simplification:
• A column has to be eliminated when the AND combination of its input conditions turns
out to be impossible in view of known domain properties.
• Two columns may be merged when their input conditions result in the same combination
of output conditions. For example, the first and third columns above may be merged,
with the truth value for the second input condition becoming '-', meaning 'T or F'.
• The tables can be checked for completeness and redundancy. We can easily detect
missing or redundant cases just by counting columns before the table is simplified and
reduced. Assuming N input conditions, there are missing cases if the number of columns
with truth values is less than 2^N. Detecting such incompleteness at specification time is
obviously beneficial. If this number is greater than 2^N, there are redundant cases. (We
will come back to this in Section 5.1.)
• Decision tables provide acceptance test data almost for free. Each column defines a
class of input-output test data. Selecting representatives for each such class ensures
a satisfactory coverage criterion known as cause-effect coverage in the literature on
black-box testing (Myers, 1979).
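The completeness and redundancy counts above are straightforward to mechanize. A minimal Python sketch, assuming a hypothetical encoding of a decision table as a list of columns of truth values:

    from itertools import product

    # Hypothetical decision table: each column gives truth values for N input conditions.
    N = 3
    columns = [(True, True, False),
               (True, False, False),
               (False, True, True),
               (True, True, False)]   # duplicate column, to illustrate redundancy

    all_cases = set(product([True, False], repeat=N))
    covered = set(columns)

    missing = all_cases - covered             # incompleteness: cases never handled
    redundant = len(columns) - len(covered)   # redundancy: duplicated columns
    print(f"{len(missing)} missing case(s), {redundant} redundant column(s)")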
• Statement identifier, for unique reference throughout the RD; it might be a suggestive
name or a hierarchically numbered identifier expressing the decomposition of statement
Si into sub-statements Sij.
Complementing some statements with a fit criterion ensures that they are measurable (Robertson
& Robertson, 1999). The importance of making requirements, assumptions and domain
properties measurable was introduced in Section 1.1.7. A fit criterion associated with a
statement quantifies the extent to which this statement must be satisfied. It is often associated
with non-functional requirements but can complement other types of statements as well. A fit
criterion can be used for assessing alternative options against it, and for checking whether the
associated statement is adequately satisfied by the implementation. Here are a few examples
for our running case studies:
Specification: The bibliographical search facility shall deliver prompt responses to queries.
Fit criterion: Responses to bibliographical queries should take less than 2 seconds in 90%
of cases and no more than 5 seconds in other cases.
Specification: Information displays inside trains shall be informative and easy to understand.
Fit criterion: A survey after 3 months of use should reveal that at least 75% of trav-
ellers experienced in-train information displays as helpful for finding their
connection.
Specification: The scheduled meeting dates shall be convenient to invited participants.
Fit criterion: Scheduled dates should fit the diary constraints of at least 90% of invited
participants in at least 80% of cases.
Specification: The meeting scheduling system shall be easy for secretaries to learn.
Fit criterion: X% of secretaries shall successfully complete a meeting organization after a
Y-day training.
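Fit criteria of this kind lend themselves to direct checks against measured data. A minimal Python sketch for the bibliographical-query criterion above, over a hypothetical list of measured response times:

    # Hypothetical measured response times (in seconds) for bibliographical queries.
    times = [1.2, 0.8, 1.9, 2.4, 1.1, 0.6, 4.7, 1.3, 1.8, 0.9]

    under_2s = sum(t < 2.0 for t in times) / len(times)
    worst = max(times)

    # Fit criterion: less than 2 s in 90% of cases, no more than 5 s otherwise.
    satisfied = under_2s >= 0.9 and worst <= 5.0
    print(f"under 2 s: {under_2s:.0%}, worst: {worst} s, criterion met: {satisfied}")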
Grouping rules
For greater document cohesion, RD items that directly relate to a common factor should be
grouped within the same section (Davis, 1993). A common factor might be, for example:
• a system objective
• a conceptual object
• a task
• a subsystem
• a system component
• an environmental condition
• a software feature
Requirements document templates
Templates may also be used for imposing a standard structure on RDs. Figure 4.1 shows a
well-known example of such a template (IEEE, 1998).
1. Introduction
   1.1 Purpose of the document
   1.2 Scope of the product
   1.3 Definitions, acronyms and abbreviations
   1.4 References
   1.5 Overview of the document
2. General description
   2.1 Product perspective
   2.2 Product functions
   2.3 User characteristics
   2.4 General constraints
   2.5 Assumptions and dependencies
   2.6 Apportioning of requirements
3. Specific requirements
   3.1 Functional requirements
   3.2 - 3.6 Non-functional requirements
Appendices
Index
Figure 4.1 The IEEE Std-830 standard template for organizing a requirements document
To build a requirements document according to the IEEE Std-830 template, we first need
to write an Introduction to the document (Section 1.1 and Section 1.5) and to the system-to-be:
its domain, scope and purpose (Section 1.2). We need to make the terminology precise and
define all domain-specific concepts (Section 1.3). The elicitation sources have to be listed as
well (Section 1.4).
In the General description part, the relationship of the software-to-be to its environment has
to be specified in terms of interfaces and modes of interaction with users, devices and other
software (Section 2.1). Then we need to overview the expected functionalities of the software-
to-be (Section 2.2). The assumptions about expected software users must be made explicit, for
example in terms of experience and expertise (Section 2.3). The next section must overview
constraints that will restrict development options, such as hardware limitations, implementation
platform, critical concerns, regulations and the like (Section 2.4). Then we need to document
environmental factors that might affect the requirements if they change (Section 2.5). This General
description part ends by identifying which requirements are optional and might be delayed until
future versions (Section 2.6).
Next comes the core of the RD; all requirements are to be detailed there (Section 3). The
IEEE Std-830 standard provides alternative templates for this section. The specifier may select
the one felt most appropriate for the domain and type of system. Figure 4.1 shows one
of those. Note the structuring in terms of functional requirements (Section 3.1) and various
categories of non-functional requirements (Sections 3.2-3.6); see Figure 1.5 in Chapter 1.
The last section gathers quality requirements related to security, availability, reliability and
maintainability (Section 3.6).
Numerous similar templates are used by practitioners. They are usually specific to companies,
government agencies (e.g. MIL-STD-498) or international organizations (e.g. NASA's
SMAP-DID-P200-SW or ESA's PSS-05).
The VOLERE documentation template is another variant of the IEEE Std-830 structure
for organizing requirements documents (Robertson & Robertson, 1999). It makes an explicit
distinction between users, clients and other stakeholders. It also proposes additional sections
for other relevant RD items such as:
• costs
• risks
• development work plan
• procedures for moving from the system-as-is to the system-to-be
The combined use of strict rules on natural language usage and RD organization addresses some
of the problems with free documentation in unrestricted natural language while preserving
expressive power and high accessibility. Ambiguities and noise may be reduced. Fit criteria
increase measurability. A predefined RD structure provides some guidance in writing the
documentation and ensures document standardization. It also makes it easier to localize
specific RD items. However, the absence of formalized information still precludes any form of
automated analysis.
Context diagrams
As shown in Figure 4.2, a context diagram is a simple graph where nodes represent system
components and edges represent connections through shared phenomena declared by the
labels (DeMarco, 1978; Jackson, 2001). For example, the Initiator component in Figure 4.2
controls the meetingRequest event, whereas the Scheduler component monitors it; the Scheduler
component controls the constraintsRequest event, whereas the Participant component controls
the constraintsSent event.
A component in general does not interact with all other components. A context diagram
provides a simple visualization of the direct environment of each component; that is, the set of
'neighbour' components with which it interacts, together with their respective interfaces.
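Since a context diagram is essentially a labelled graph, the 'direct environment' view of a component can be computed mechanically. A Python sketch with hypothetical interfaces in the spirit of Figure 4.2:

    # Context diagram as labelled edges: (component, component, shared phenomena).
    edges = [
        ('Initiator', 'Scheduler', {'meetingRequest'}),
        ('Scheduler', 'Participant', {'constraintsRequest', 'constraintsSent'}),
    ]

    def environment(component):
        """Neighbour components of a component, with their shared interfaces."""
        return {(b if a == component else a): phenomena
                for a, b, phenomena in edges if component in (a, b)}

    print(environment('Scheduler'))  # neighbours: Initiator and Participant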
Problem diagrams
A context diagram can be further detailed by indicating explicitly which component controls a
shared phenomenon, which component constitutes the machine we need to build, and which
components are affected by which requirements. The resulting diagram is called a problem
diagram (Jackson, 2001).
Figure 4.3 shows a problem diagram excerpt for the meeting scheduling system. A rectangle
with a double vertical stripe represents the machine we need to build. A rectangle with a
single stripe represents a component to be designed. An interface can be declared separately;
the exclamation mark after a component name prefixing a declaration indicates that this
component controls the phenomena in the declared set. For example, the f label declaration
in Figure 4.3 states that the Scheduler machine controls the phenomena determineDate and
determineLocation.
A dashed oval represents a requirement. It may be connected to a component through a
dashed line, to indicate that the requirement refers to it, or by a dashed arrow, to indicate
that the requirement constrains it. Such connections may be labelled as well, to indicate which
corresponding phenomena are referenced or constrained by the requirement. In Figure 4.3,
for example, the g and h labels connect the requirement 'Meeting date and location shall be
convenient to invited participants' to the components it references and constrains.
Frame diagrams
Instead of writing problem diagrams from scratch for every problem world we need to
delimit, we might predefine a number of frequent problem patterns. A specific problem
diagram can then be obtained in matching situations by instantiating the corresponding pattern
(Jackson, 2001). This is another illustration of the knowledge reuse technique discussed in
Section 2.2.7.
A frame diagram is a generic problem diagram capturing such a problem pattern (called
a frame). The interface labels are now typed parameters; they are prefixed by 'C', 'E' or 'Y',
depending on whether they are to be instantiated to causal, event or symbolic phenomena,
respectively. A generic component in a frame diagram can be further annotated by its type:
• A causal component, marked by a 'C', has some internal causality that can be enforced,
e.g. it reacts predictably in response to external stimuli. A machine component is
intrinsically causal.
• A biddable component, marked by a 'B', has no such enforceable causality, e.g. it consists
of people.
• A lexical component, marked by an 'X', is a symbolic representation of data.
The upper part of Figure 4.4 shows two frame diagrams. The one on the left-hand side
represents the Simple Workpieces frame. It captures a problem class where the machine is a tool
allowing a user to generate information that can be analysed and used for other purposes.
The frame diagram on the right-hand side represents the Information Display frame. It captures a
problem class where the machine must present information in a required form to environment
components. The frame diagram specifies that the lnformationMachine component monitors a
causal phenomenon C1 from the RealWorld component and produces an event phenomenon
E2 for a Display component as a result. The requirement constraining the latter component is a
generic accuracy requirement, as indicated by the '~' symbol; it prescribes that the information
displayed should accurately reflect a causal phenomenon C3 from the RealWorld component.
(Accuracy requirements were introduced in Section 1.1.5.)
The lower part of Figure 4.4 shows corresponding frame instantiations yielding problem
diagrams. The phenomenon instantiations, compatible with the corresponding parameter type,
are shown on the bottom. The component instantiations, compatible with the corresponding
Component type, are annotated with the name of the generic component to indicate their role in
the frame instantiation. For example, the instantiated right-hand side requirement states that the
notified meeting date and location must be the one determined by the Scheduler component.
Other frames can be similarly defined and instantiated, for example for problems where
the environment behaviours must be controlled by the machine in accordance with commands
issued by an operator, or for problems where the machine must transform input data into
output data (Jackson, 2001).
Figure 4.4 Frame diagrams and their instantiations for the meeting scheduler - Simple Workpieces: User component,
E1: {determineDate, determineLocation}, E3: {meetingRequest}, Y4: {Date, Location}; Information Display: Display
component, C1: {Date, Location}, E2: {Notification}, C3: {Date, Location}, Y4: {NotifiedDate&Location}
Context and problem diagrams provide a simple, convenient notation for delimiting the
scope of the system-to-be in terms of components relevant to the problem world and their
static interconnections. There is a price to pay for such simplicity. The properties of the
interaction among pairs of components are not made precise. The granularity of components
and the criteria for a component to appear in a diagram are not very clear either. For example,
a Network component might be part of the problem world of scheduling meetings involving
participants who are geographically distributed. According to which precise criteria should this
component appear or not in Figure 4.3? Problem diagrams may also become clumsy for large
sets of requirements. How do we compose or decompose them? What properties must be
preserved under composition or decomposition? Chapter 11 will come back to those issues.
A more precise semantics will be given for components and connections, providing criteria
for identifying and refining components and interconnections. We will also see there how the
useful view offered by context and problem diagrams can be derived systematically from goal
diagrams.
An entity is a class of concept instances that have distinct identities and share common features.
Its features may be attributes or relationships (as defined below). In an ER diagram, entities are
represented by rectangles. For example, the Meeting concept is captured as an entity in
Figure 4.5; each Meeting instance has a distinct identity, and like any other instance it is
characterized by a Date.
An informal but precise definition should annotate every entity in a diagram. For example,
consider the Participant entity in Figure 4.5 that captures the set of all possible participant
instances. We should make it clear what the concept of 'meeting participant' really means, for
example:

Person expected to attend the meeting, at least partially, in a specific role. Appears in the system
when the meeting is initiated and disappears when the meeting is no longer relevant to the system.

An attribute is an intrinsic feature of an entity regardless of other entities. It has a name and
a range of values. For example, Date appears as an attribute of Meeting in Figure 4.5. The
Participant entity is characterized by the attributes Name, Address and Email.

Relationships
A relationship is a feature that conceptually links several entities together. Each entity plays a
specific role in the conceptual link. The arity of the relationship is the number of entities linked
by it. Binary relationships are represented by plain lines labelled by their name. For example,
the Invitation concept appears as a binary relationship in Figure 4.5; it links Participant, playing
the role invitedTo, and Meeting, playing the role Invites.
Relationships can be characterized by attributes as well. For example, the constraints a
participant may have on meeting dates and locations are captured in Figure 4.5 through
the Constraints relationship linking the Participant and Meeting entities. This relationship is
characterized by two attributes; namely, excludedDates and preferredDates. The range of those
attributes might be declared as a set of time intervals (not represented there).
• Actigrams declare activities by their input/output data and interconnect them through
data dependency links.
• Datagrams declare system data by their producing/consuming activities and interconnect
them through control dependency links.
• A data-activity duality principle requires actigram items to have some counterpart in a
datagram, and vice versa.
The SADT actigram in Figure 4.6 specifies the activity of HandlingConstraints in our meeting
scheduling system. The concept of meetingConstraints, appearing as an output there, is specified
by the datagram in Figure 4.7.
An actigram specifies system activities. The latter can be refined into sub-activities. For
example, the HandlingConstraints activity is decomposed in Figure 4.6 into three sub-activities;
namely, AskConstraints, ReturnConstraints and MergeConstraints. Each activity may be charac-
terized by four types of labelled arrows: 'west' and 'east' arrows declare input and output
data, respectively; 'north' arrows declare data or events that control the activity; 'south' arrows
denote system components that process it. For example, the ReturnConstraints sub-activity has
constraintRequest as input, individualConstraints as output, dateRange and Deadline as controlling
data and Participant as processing component.
In a similar way, datagrams specify system data through four types of labelled arrows: 'west'
and 'east' arrows declare activities that produce and consume the data, respectively; 'north'
arrows declare activities that control data integrity; 'south' arrows denote resources needed for
processing the data. For example, the meetingConstraints data in Figure 4.7 have MergeConstraints
as producing activity, PlanMeeting as consuming activity, CheckValidity as controlling activity and
constraintsRepository as memory support. Datagrams are refinable as well.
Tools can analyse the specifications produced in the SADT graphical language. They check
rules of consistency and completeness such as the following:
• The input/output data of an activity must appear as input/output data of sub-activities for
the refinement to be consistent (see meetingRequest and meetingConstraints in Figure 4.6).
• Any activity (or data) must have an input and an output (or a producer and a consumer).
• A controlling activity in a datagram must be defined in an actigram.
These tools would detect that a CheckValidity activity is missing in the refining actigram of
Figure 4.6, as the latter rule is violated when checking the diagrams in Figures 4.6 and 4.7.
The SADT specification technique was a precursor of others in many respects. It supports
multiple views that are linked through consistency rules. The language is conceptually richer
for RE than many of the semi-formal notations that were developed afterwards. In addition
to data and activities, it supports some rudimentary representation of events, triggers and
responsibility assignments to system components. SADT also supports stepwise refinement of
global specifications into more detailed ones - an essential feature for complex specifications.
Figure 4.8 shows a DFD diagram for the meeting scheduling system-to-be. Some of the
constraint-handling operations there correspond to those introduced with the same name in
Figure 4.6.
Bubbles in a DFD represent operations that are processed by an implicit system component
associated with the DFD (here, the scheduler). The arrows capture the incoming/outgoing
flows of the data labelling them. Boxes denote system components originating or terminating
a flow. Double bars denote data repositories.
The semantics of a bubble with incoming and outgoing arrows is simply that the operation
needs the data flowing in to produce the data flowing out. There is no control flow implied
by this; DFD diagrams capture data dependencies among operations without prescribing any
ordering of events or sequencing of operations. Making DFD specifications executable requires
a precise operational semantics for the dataflow language, including rules for firing and
synchronizing data transformations within the same operation and among different operations.
It also requires an executable formalization of the informal rules for transforming input data into
output data.
The simplicity of DFD diagrams explains their popularity for the communication and
documentation of operational aspects of the system in a structured, summarized way. DFD tools
can check the graphical declarations against some forms of consistency and completeness - in
much the same way as SADT tools or static semantics checkers in compilers. The price to pay
for simplicity is, of course, the limitation in which requirements-related aspects we can really
capture and analyse automatically. We come back to this in Section 4.3.10.
The MSC and UML variants of ET diagrams have more sophisticated features. The limited
form considered here is simple enough to be used by scenario specifiers and to be understood
by stakeholders. As in the case of the other semi-formal notations reviewed in this section, we
need to annotate the diagrams with informal statements to provide full details on the scenarios.
ET diagrams provide a natural, straightforward way of declaring scenarios. The strengths
and limitations of ET scenarios are those discussed in Section 2.2.5.
If the controlled item is in state S1 and event ev occurs, then this item moves to state S2.
States
A SM state captures the set of all situations where some variable characterizing the controlled
item always has the same value regardless of other characterizing variables, whose values may
differ from one situation in this set to the other. These variables may correspond to attributes
or relationships controlled by the component and declared in an associated entity-relationship
diagram. For example, the state MeetingScheduled in Figure 4.11 corresponds to the set of
all situations where the meeting has a determined value for its attributes Date and Location
Requirements Specification and Documentation II
(see Figure 4.5), regardless of other characterizing variables, such as who is invited to that
meeting. Similarly, the state doorsOpen for a train controlled by a train controller corresponds to
the set of all situations where the controlled attribute DoorsState has the value 'open' regardless
of other train attributes such as Speed, which might be '0 m.p.h.' in one situation of this set and
'30 m.p.h.' in another.
Two particular states can be introduced in an SM diagram. The initial state, represented by a
black circle in Figure 4.11, corresponds to the state of the controlled item when it appears in the
system. Symmetrically, the final state, represented by a bull's eye in Figure 4.11, corresponds
to the state of the controlled item when it disappears from the system.
Figure 4. 11 State machine diagram for a meeting controlled by the meeting scheduler
A transition without an event label fires automatically. A guarded, label-free transition is thus
fired as soon as the guard condition becomes true.
A trace is a sequence of states that may be visited by traversing the SM diagram from its
initial state; the diagram in Figure 4.11 covers many such traces. An SM diagram may have
infinitely many traces, whereas a trace by definition is always finite.
If we annotate the Scheduler timeline in the scenario of Figure 4.10 with explicit state
information about the meeting it controls, we notice that this timeline corresponds to a trace.
This trace is a subtrace of an SM trace in Figure 4.11. The scenario is covered by a path
in the SM graph in Figure 4.11. A SM diagram generalizes ET diagrams along two dimensions:
it refers to any instance of a system component, not just a specific one, and it covers more
traces.
Non-deterministic behaviours
As introduced earlier, a non-deterministic behaviour is captured in an SM diagram by multiple
outgoing transitions labelled with the same event name. Figure 4.12 illustrates this on part
of an SM diagram for a train controlled by our train controller. In many safety-critical and
security-critical systems, this kind of source of uncertainty has to be ruled out. Tools can check
for deterministic SM behaviour automatically (see Section 5.4).
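Such a determinism check amounts to a simple scan of the transition relation. A minimal Python sketch, with hypothetical transitions in the spirit of Figure 4.12:

    from collections import defaultdict

    # Transitions as (state, event, next state) triples; values are hypothetical.
    transitions = [
        ('doorsClosed', 'opening', 'doorsOpen'),
        ('doorsClosed', 'opening', 'doorsPartlyOpen'),  # same state, same event!
        ('doorsOpen', 'closing', 'doorsClosed'),
    ]

    targets = defaultdict(set)
    for state, event, nxt in transitions:
        targets[(state, event)].add(nxt)

    nondet = {k: v for k, v in targets.items() if len(v) > 1}
    print('non-deterministic on:', nondet or 'none')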
When parallel SM diagrams are fully formalized, tools can automatically check desired
properties on them and generate counterexample traces in case of property violation (see
Sections 4.4.4 and 5.4). The semantics usually taken by tools for concurrency is an interleaving
semantics; in the case of two transitions being fired in the same state, one is taken after the
other according to a non-deterministic choice.
SM diagrams are frequently used for specifying reactive systems and user interfaces. They
provide a convenient notation for specifying the system's dynamics. Identifying the right states
and the right level of granularity for states and transitions can, however, be difficult. Another
problem is the diversity of semantics of different SM notations - even sometimes of the same
notation! For example, labelled transition systems assume that, when a component is in a
particular state, no events other than those labelling its outgoing transitions can occur; many
other SM notations assume that other events can occur, leaving the component in the same state.
In addition to a simple semantics for concurrent behaviours, we need heuristics for iden-
tifying the right states and transitions, and techniques for building SM diagrams incrementally
and systematically. Chapters 13 and 18 will come back to those issues.
View integration
For comprehensive and coherent coverage, these different views should be complementary
and integrated. Inter-view consistency rules are a standard mechanism for integrating diagrams
of different types (Rumbaugh et al., 1991; Nuseibeh et al., 1994). They prescribe constraints
that the specifier should enforce to ensure view compatibility and complementarity.
Here is a typical sample of inter-view consistency rules that we might consider for the
specification constructs introduced in this section.
Such rules may be included in requirements inspection checklists (see Section 5.1.3). They
can also be checked automatically by query tools on a multiview specification database (see
Section 5.2). They are a specification counterpart of the static semantics checks automated by
programming language compilers.
Inter-view consistency rules are also helpful for requirements evolution. They provide
explicit constraints that are to be maintained when changes are made to items to which they
refer. Section 6.3 will come back to this.
These different types of diagram will be further studied in Part II of the book. The techniques
there will help us build models in a systematic way using UML notations together with other,
RE-dedicated ones.
'Is there any SM state other than the final state with no outgoing transition?'
Those benefits of semi-formal notations, combined with recent standardization efforts,
explain the growing popularity of some specific subsets of the UML language (Dobing &
Parsons, 2006).
On the other hand, semi-formal notations have limitations. By their nature, they allow us to
specify surface-level features of RD items without paying too much attention to the properties
of such items. For example, what are the invariant properties of an entity or a relationship?
What is the precise effect of the application of an operation? What is the precise meaning of
an SM state beyond its name and incoming/outgoing event labels? The deep semantics of RD
items has to be stated informally, with all the problems incurred by natural language. As a
consequence, semi-formal notations are only amenable to fairly limited forms of analysis.
The 'box-and-arrow' semantics of graphical notations often lacks precision. It is therefore
easy to use such notations in the wrong way. The same specification can also be interpreted in
different ways by different people.
The semi-formal constructs in this chapter address mainly functional and structural aspects.
There are other important aspects that we need to consider in a requirements document, notably
system objectives, non-functional requirements and assumptions about the environment. Part
II of the book will introduce other semi-formal constructs for such aspects.
A semi-formal specification declares some items of the requirements document (RD) formally,
but leaves the descriptive and prescriptive statements about those items informal. Formal
specification goes one step further by formalizing such statements as well. The benefits
expected from formalization are a higher degree of precision in the formulation of statements,
precise rules for their interpretation and much more sophisticated forms of validation and
verification that can be automated by tools.
As the collection of statements we may want to specify formally can be large, the formalism
in general provides mechanisms for organizing the specification into units linked through
structuring relationships, such as unit instantiation, specialization, import or enrichment. Each
unit has a declaration part, where the variables of interest are declared, and an assertion part,
where the intended properties of the declared variables are formalized.
This section overviews the main paradigms available for specifying some aspects of
the requirements document formally. (Remember that formal means 'in some machine-
processable form'.) This will provide a basis for some of the specification analysis techniques
reviewed in Chapter 5 and for the more advanced requirements analysis techniques studied in
Chapters 17-18.
As the various paradigms for formal specification are grounded in logic, we start by briefly
reviewing some necessary rudiments of classical logic.
* This section is provided here for comprehensive coverage of the topic of this chapter. It may be skipped by fast-track readers
only interested in a general overview of RE fundamentals or with no background in rudimentary discrete mathematics. Its material is,
however, a prerequisite for Chapters 17-18.
• The syntax is a set of rules that defines the grammatically well-formed statements.
• The semantics is a set of rules that defines the precise meaning of such statements.
• The proof theory is a set of inference rules that allows new statements to be derived from
given ones.
Propositional logic
This logical system is the simplest one. It allows us to compose propositions recursively through
logical connectives such as AND, OR, NOT, -> ('implies') and <-> ('equivalent to').
For example, we may write a propositional statement such as:

trainMoving -> doorsClosed

Syntactically well-formed statements may be defined by rules such as:

<statement> ::= true | false | <proposition>
              | NOT <statement> | <statement> AND <statement> | <statement> OR <statement>
              | <statement> -> <statement> | <statement> <-> <statement>
In the preceding rules, '<s>' in the definition meta-language means 'any instance of
syntactic category s', '::=' is the definition meta-symbol, and 'I' denotes an alternative choice.
Semantic rules tell us how to evaluate the meaning of a statement for a given way of
interpreting its atomic elements. In propositional logic this is quite simple. An interpretation
of a set of statements assigns truth values to all their proposition symbols. The meaning of a
statement under that interpretation is its truth value.
Let val_I be the interpretation function that assigns truth values T (for true) or F (for false)
to every proposition symbol of a statement under interpretation I. (The truth values T and F
in the definition meta-language should not be confused with the symbols true and false in
the defined language, respectively.) Let VAL_I be the semantic evaluation function that returns
the truth value of the entire statement under interpretation I. The semantics of propositional
logic is recursively defined through the following rules, where S, S1 and S2 denote arbitrary
propositional statements.
VAL_I(true) = T, VAL_I(false) = F, VAL_I(P) = val_I(P) for any proposition symbol P
VAL_I(NOT S) = T if VAL_I(S) = F, F otherwise
VAL_I(S1 AND S2) = T if VAL_I(S1) = T and VAL_I(S2) = T, F otherwise
VAL_I(S1 OR S2) = T if VAL_I(S1) = T or VAL_I(S2) = T, F otherwise
VAL_I(S1 -> S2) = T if VAL_I(S1) = F or VAL_I(S2) = T, F otherwise
VAL_I(S1 <-> S2) = T if VAL_I(S1) = VAL_I(S2), F otherwise
For example, consider the preceding statement 'trainMoving -> doorsClosed' and an interpretation
I that assigns the truth value T to the proposition symbol trainMoving and F to doorsClosed, say.
According to the semantic rule for implications, the propositional semantics of that statement
under I is then F.
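The semantic rules above translate directly into a recursive evaluator. A toy Python sketch, with statements represented as nested tuples (a hypothetical encoding, not a standard one):

    # Statements: proposition symbols (strings) or tuples ('not', s),
    # ('and', s1, s2), ('or', s1, s2), ('implies', s1, s2), ('iff', s1, s2).
    def VAL(stmt, I):
        """Evaluate a propositional statement under interpretation I."""
        if isinstance(stmt, str):
            return I[stmt]
        if stmt[0] == 'not':
            return not VAL(stmt[1], I)
        a, b = VAL(stmt[1], I), VAL(stmt[2], I)
        return {'and': a and b, 'or': a or b,
                'implies': (not a) or b, 'iff': a == b}[stmt[0]]

    I = {'trainMoving': True, 'doorsClosed': False}
    print(VAL(('implies', 'trainMoving', 'doorsClosed'), I))  # False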
The inference rules from a proof theory enable us to derive new statements from given
ones systematically. For each rule, the given statements are called the premise and the new
derived statement is called the conclusion. A sound rule guarantees that the conclusion is true
in all the interpretations that make the premise true. To analyse statements or derive their
consequences (e.g. for adequacy checking), a tool can perform derivations automatically by
repeated application of sound inference rules. No semantic evaluation of the conclusions is then
needed if the initial statements are accepted as being true in the interpretations of interest.
Here are a few sound rules of inference in propositional logic (the upper and lower parts
of a rule are the premise and conclusion, respectively):

modus ponens: from P and P -> Q, derive Q
resolution: from P OR Q and (NOT Q) OR R, derive P OR R
Let us provide a simple example of derivation using the resolution rule. From the premise

NOT trainMoving OR doorsClosed, trainStopped OR trainMoving

we may derive the conclusion:

doorsClosed OR trainStopped
The semantics of first-order predicate logic is again provided by a set of rules for evaluating
the truth value of a statement under a given way of interpreting its atomic elements.
To define an interpretation for a set of statements, we first need to define the domain of
interest as a set of objects that the terms in those statements may represent - for example, are
we talking about trains, meetings or books? Then we need to define what specific object in
that domain a constant or unquantified variable in those statements designates, what specific
function over the domain a function symbol designates, and what n-ary relation over the
domain a predicate symbol on n arguments designates.
To illustrate this, suppose that we would like to specify in first-order logic that the distance
between two successive trains should be kept sufficient to avoid collisions if the first train stops
suddenly:

FORALL tr1, tr2: Following(tr2, tr1) -> Dist(tr2, tr1) > WCS-Dist(tr2)
To evaluate this statement semantically, we first need to fix an interpretation for its building
blocks by saying that:
• The domain of interpretation for the statement is the set of trains in our system.
• The atomic predicate Following (tr2, tr1) is true if and only if the pair (tr2, tr1) is a member
of the binary relation Following over trains, defined as the set of pairs of trains in which
the first train in the pair directly follows the second.
• The function symbol Dist designates the real-valued function that, for two given trains,
returns their exact distance from each other.
• The function symbol WCS-Dist designates the real-valued function that, for a given train,
returns the worst-case distance needed for the train to stop in an emergency.
• The predicate symbol '>', used in infix form, designates the '>' binary relation over real
numbers.
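Under such an interpretation, the statement can be evaluated over a concrete domain. A toy Python sketch with hypothetical train positions and stopping distances:

    # Hypothetical domain: train positions and worst-case stopping distances (metres).
    trains = {'t1': {'pos': 500.0, 'wcs': 50.0},
              't2': {'pos': 380.0, 'wcs': 60.0}}
    following = [('t2', 't1')]  # t2 directly follows t1

    def dist(a, b):
        return abs(trains[a]['pos'] - trains[b]['pos'])

    # For all tr2, tr1: Following(tr2, tr1) -> Dist(tr2, tr1) > WCS-Dist(tr2)
    ok = all(dist(tr2, tr1) > trains[tr2]['wcs'] for tr2, tr1 in following)
    print(ok)  # True: 120.0 > 60.0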
• It makes no sense to say that a formal statement is true or false in an absolute sense;
truth is always relative to a given interpretation of interest.
• Whatever the specification formalism might be, making this interpretation fully precise in
the documentation through such designations is essential to avoid ambiguity, inadequacy
or misunderstanding of the specification. (This point, often neglected in practice, will be
re-emphasized throughout the book.)
In first-order logic, the interpretation function val_I on the atomic elements of a set of
statements under interpretation I captures the following:
The semantic rules for first-order predicate logic are then recursively defined under an
interpretation I as follows. (S, S1 and S2 denote arbitrary predicate statements, and {x |-> d}.I
denotes the interpretation that extends the interpretation I by forcing variable x to represent
the domain element d.)
The proof theory of first-order predicate logic enables much more expressive statements to
be derived systematically from given ones. It includes the inference rules of propositional logic
plus specific ones such as universal instantiation: from (FORALL x) S(x), derive S(v) for any
term v.
Consider again the statement

FORALL tr1, tr2: Following(tr2, tr1) -> Dist(tr2, tr1) > WCS-Dist(tr2)

where the variables tr1 and tr2 designate arbitrary instances of the Train entity, the atomic
predicate Following corresponds to a binary reflexive relationship on Train, and the function
symbols Dist and WCS-Dist correspond to attributes of Following and Train, respectively. A state
of variable tr2 might be characterized by the fact that the designated train is following another
train, designated by tr1, at a distance of 100 metres, say, and with a worst-case stopping
distance of 50 metres in that state.
A formal specification consists of a structured set of such statements. It defines a logical
theory; that is, a set of axioms from which new statements can be derived automatically as
theorems using the inference rules from the proof theory. Such derivations may be used for
adequacy checking, for example. Stakeholders may be shown the derived theorems, after
translation into natural language, and asked whether they really want the consequences
of what was specified. The inference rules can also be used for consistency checking; a
sequence of derivations yielding the predicate false means that the theory formalizing the
requirements, assumptions and domain properties is logically inconsistent. (We come back to
this in Section 5.4.)
In this setting, the semantics of first-order logic allows for a more precise characterization
of some of the specification flaws introduced in Section 1.1.7. A specification is contradictory
if there is no interpretation of interest that can make all its statements true together. It is
ambiguous if there are different interpretations of interest that make all its statements
true together. It is redundant if some of the statements can be inferred as theorems from
others.
In addition to the usual propositional connectives and first-order constructs, LTL provides the
following temporal connectives:

◊ (some time in the future)          ♦ (some time in the past)
□ (always in the future)             ■ (always in the past)
W (always in the future unless)      B (always in the past back to)
U (always in the future until)       S (always in the past since)
○ (in the next state)                ● (in the previous state)
A system history in LTL is an infinite temporal sequence of system states. Time is iso-
morphic to the set Nat of natural numbers, and a history H is more precisely defined as
a function:

H: Nat → State(X)

This function assigns to every time point i in H the system state at that time point. X is
the set of system variables and State(X) is the set of all possible states for the corresponding
variables in X; see Section 4.4.1.
As mentioned before, LTL assertions are interpreted over linear system histories. To define
the LTL semantics more precisely, we use the notation

(H, i) ⊨ P

to express that the LTL assertion P is satisfied by history H at time position i (i ∈ Nat). The
assertion P is satisfied by the entire history H if it is satisfied at the initial time position:
(H, 0) ⊨ P.

The semantic rules for LTL temporal connectives are then recursively defined as follows:

(H, i) ⊨ ○P    iff (H, i+1) ⊨ P
(H, i) ⊨ □P    iff for all j ≥ i: (H, j) ⊨ P
(H, i) ⊨ ◊P    iff for some j ≥ i: (H, j) ⊨ P
(H, i) ⊨ P U Q iff for some j ≥ i: (H, j) ⊨ Q and, for all k with i ≤ k < j, (H, k) ⊨ P
(H, i) ⊨ P W Q iff (H, i) ⊨ P U Q or (H, i) ⊨ □P
The semantic rules for the connectives over the past are similar. Two other frequently used
temporal connectives are ⇒ (entails) and ⇔ (congruent), defined by

P ⇒ Q iff □(P → Q)
P ⇔ Q iff □(P ↔ Q)
Multiple units can be used (e.g. second, day, week); they are implicitly converted into the
smallest time unit. The ○-operator then yields the nearest subsequent time position according
to this smallest unit.
The semantics of the real-time operators is defined accordingly, for example:

(H, i) ⊨ ◊≤d P iff for some j ≥ i with dist(i, j) ≤ d: (H, j) ⊨ P
(H, i) ⊨ □≤d P iff for all j ≥ i such that dist(i, j) ≤ d: (H, j) ⊨ P
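These rules can be prototyped directly. The following Python sketch checks □, ◊ and the bounded ◊≤d over a finite prefix of a history; LTL proper is defined over infinite histories, so this is only an illustrative approximation (all names hypothetical):

    # A history prefix: a list of states, each state mapping variables to values.
    def always(H, i, P):                # (H, i) |= []P, over the finite prefix
        return all(P(H[j]) for j in range(i, len(H)))

    def eventually(H, i, P):            # (H, i) |= <>P, over the finite prefix
        return any(P(H[j]) for j in range(i, len(H)))

    def eventually_within(H, i, P, d):  # (H, i) |= <>(<=d) P, with dist(i, j) = j - i
        return any(P(H[j]) for j in range(i, min(i + d, len(H) - 1) + 1))

    H = [{"moving": True}, {"moving": True}, {"moving": False}]
    print(eventually_within(H, 0, lambda s: not s["moving"], 2))  # True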
As with any other logic, LTL has a proof theory enabling assertions to be derived sys-
tematically from others. For example, the following LTL-specific rules of inference can be
used:

• state particularization: from □P, infer P;
• entailment modus ponens: from □P and P ⇒ Q, infer □Q;
• monotonicity: from P ⇒ Q, infer □P ⇒ □Q.

In the first rule above, the conclusion P means 'P holds in some arbitrary current state'.
Let us give a sample of typical LTL specifications. The first-order predicate statement
introduced in Section 4.4.1 can now be made stronger, and closer to what we want, by
requiring a safe distance always to be maintained between successive trains:

∀ tr1, tr2:
Following(tr2, tr1) ⇒ Dist(tr2, tr1) > WCS-Dist(tr2)

Note that the '→' propositional connective was replaced by the '⇒' temporal connective
to prescribe that the implication should hold in every future state from any current one (in
particular, from the initial state of the system). Other statements for our train control system
might be specified in LTL as follows:
(Requirement:) Train doors shall always remain closed between platforms unless the train is
stopped in an emergency.
∀ tr: Train, pl: Platform
●At(tr, pl) ∧ ¬At(tr, pl) ⇒ tr.Doors = 'closed' W (At(tr, next(pl)) ∨ (Alarm(tr) ∧ ¬Moving(tr)))
(Requirement:) Trains shall reach their next platform within at most 5 minutes.

∀ tr: Train, pl: Platform
At(tr, pl) ⇒ ◊≤5m At(tr, next(pl))
For our meeting scheduling system, we might write LTL assertions such as the following:
(Requirement:) Intended participants shall be notified of the meeting date and location at
least 3 weeks before the meeting starts.
∀ p: Person, m: Meeting
Holds(m) ∧ Intended(p, m) ⇒ ♦≥3w Notified(p, m)
(Assumption:) An intended participant will participate in a meeting if the meeting date and
location are convenient and notified to him or her.
∀ p: Person, m: Meeting
Intended(p, m) ∧ Notified(p, m) ∧ Convenient(m, p) ⇒ ◊ Participates(p, m)

∀ p: Person, m: Meeting
Convenient(m, p) ⇒ □ Convenient(m, p)

∀ p: Person, m: Meeting
Notified(p, m) ⇒ □ Notified(p, m)
Branching temporal logics such as CTL combine path quantifiers with temporal operators
over trees of possible histories:

AG P means 'for all paths, P holds globally for all states along that path'
AF P means 'for all paths, P holds finally in some state along that path'
EG P means 'there exists a path where P holds globally for all states along that path'
EF P means 'there exists a path where P holds finally in some state along that path'
The logic is somewhat more expressive than LTL, in that reachability properties can also be
formalized through EF quantifiers.
The mapping between the natural language formulation and its formalization is not
straightforward in this case. Specification patterns address this problem by providing templates
for frequent assertions requiring such nesting of temporal connectives (Dwyer et al., 1999). For
example, the preceding assertion in their pattern catalogue is specified instead by the pattern:

Q precedes R after P
Another problem with temporal logics as specification formalisms taken in isolation is their
lack of structuring mechanisms. We cannot structure the variables to which the assertions
refer or group assertions into cohesive chunks. This problem can be addressed by combining
multiple formalisms, as we will see in Chapter 17.
State-based specification
Instead of characterizing the admissible histories of the system-to-be, we may characterize
its admissible system states at some arbitrary snapshot. The requirements, assumptions and
domain properties are specified in a sorted logic through system invariants and pre- and
post-conditions on the system's operations:
• An invariant is a condition constraining the system's states at this snapshot. It must thus
always hold in any state along the system's admissible histories.
• Pre- and post-conditions are conditions constraining the application of system operations
at this snapshot. A pre-condition is a necessary condition on input variables for the
operation to be applied; it captures the operation's applicability and must always hold
in the state in which the operation is applied. A post-condition is a condition on
output variables if the operation is applied; it captures the operation's effect and must
always hold in the state right after the operation has been applied. For specification
completeness we are interested in the least restrictive applicability condition - that is, the
weakest pre-condition - and the most complete effect condition - that is, the strongest
post-condition. A post-condition may be constructive or not depending on whether or
not the effect condition defines the output variables explicitly through equations (a small
executable rendering of this view is sketched below).
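As a minimal illustration of this view, pre- and post-conditions can be rendered as executable checks around an operation (a hypothetical toy operation, not from our running examples):

    def withdraw(balance, amount):
        # Pre-condition: weakest condition on the inputs for applicability.
        assert 0 < amount <= balance, "pre-condition violated"
        new_balance = balance - amount
        # Post-condition: constructive effect relating input and output states.
        assert new_balance == balance - amount, "post-condition violated"
        return new_balance

    print(withdraw(100, 30))  # 70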
Languages such as Z, VDM, B, Alloy or OCL rely on the state-based paradigm. The main
differences between them lie in the constructs available for structuring the system states, the
mechanisms for organizing the specification into manageable units and the implementation
bias of some of them. These languages were designed with different objectives in mind:
systematic specification refinement towards an implementation (Jones, 1990; Abrial, 1996),
efficient verification of bounded models (Jackson, 2006) or integration in UML design models
(Warmer & Kleppe, 2003).
Let us have a closer look at Z as a good representative of state-based specification languages
(Spivey, 1992). This will allow typical issues in formal specification to be covered in greater
detail. Z is probably the easiest language to learn because of its conceptual simplicity and its
standard notations. The language is conceptually simple as it is based just on sets and tuples as
primitives for structuring the state space. Most notations come from elementary mathematics
for manipulating sets, tuples, relations, functions and sequences. These notations are pre-
defined in terms of sets and tuples in an extendable specification toolkit. The language also
provides simple yet powerful mechanisms for importing specification units and combining
them piecewise to form the specification of the entire system. Moreover, Z's implementation
bias is lower than that of other state-based languages such as B or VDM.
A Z specification is a collection of schemas together with some textual definitions. Each
schema has a declaration part where the variables used in the schema are declared or imported
from other schemas, and an assertion part where the assertions constraining the state space
are specified. There are basically two kinds of schema:
• A data schema specifies a portion of the system's state space by declaring an aggregate
of tightly coupled state variables and by stating invariants on them.
• An operation schema specifies a system operation by declaring the operation's input and
output variables and by stating pre-conditions and post-conditions on these input and
output variables, respectively.
The initial state of the system is defined through initialization schemas that particularize
each data schema in this state. The specifier must also declare, in textual definitions, what
the given types are in the specification. These are 'primitive' sets of conceptual instances or
values on which Z declarations rely. It is therefore important to make precise what given types
designate in the system.
Let us review the main features of Z through specification excerpts for our library system.
We might specify our given types by the following declaration:

[Book, BookCopy, Person, Topic]

A first data schema can then be introduced:

Directory ----------
WhichBook: BookCopy ⇸ Book
WrittenBy: Book ⇸ P Person
Covers: Book ⇸ P Topic

dom WrittenBy ⊆ ran WhichBook
dom Covers ⊆ ran WhichBook
This data schema specifies the Directory as an aggregation of three state variables: WhichBook,
WrittenBy and Covers. Each one is declared in the upper part of the schema. The semantics
of such a declaration is that the possible values for the variable on the left belong to the set
represented on the right. For example, the possible values for the state variable Covers belong
to the set Book ⇸ P Topic; that is, a value for Covers is a partial function from the set Book of
all possible books to the set P Topic of all possible subsets of topics. The notations '⇸' and 'P'
are the standard ones in elementary mathematics to denote partial mappings and powersets,
respectively. They are not primitives in Z. For example, a function is a specific kind of binary
relation, which is a subset of a Cartesian product, which is a set of (binary) tuples. The function
Covers is partial here because it is not defined everywhere on its input set Book; in any current
state of our library system, Covers is defined only on a strict subset of all possible books (to
which corresponds a specific subset of topics among all possible subsets of topics).
The two invariants in the assertion part of the Directory data schema constrain the three tightly
coupled state variables further by interrelating the domains where they are everywhere defined
and the ranges of values they return. The notations '⊆', 'dom' and 'ran' are the standard ones
in mathematics to denote set inclusion and the domain and range of a function, respectively.
For example, the second invariant states that every book for which Covers returns a subset of
topics is a book corresponding to a book copy currently found in the library's directory.
In the preliminary description of the library system, access to certain facilities is restricted
to specific user categories (see Section 1.1.2). We therefore introduce another data schema to
structure our state space:
LibraryAgents ----------
OrdinaryPatron: P Person
Staff: P Person

OrdinaryPatron ∩ Staff = ∅
The invariant in this schema states that the two introduced sets have an empty intersection;
that is, the same person cannot be both an ordinary patron and a staff member. We should
specify and structure the system state space further by introducing a data schema for library shelves:
LibraryShelves ----------
LibraryAgents
Available, OnLoan: P BookCopy
BorrowedBy: BookCopy ⇸ Person

Available ∩ OnLoan = ∅
OnLoan = dom BorrowedBy
ran BorrowedBy ⊆ OrdinaryPatron ∪ Staff
∀ p: OrdinaryPatron • # BorrowedBy⁻¹(| {p} |) ≤ LoanLimit
There is a schema inclusion in the declaration part of this schema. It amounts to importing
all declarations and assertions from the included schema LibraryAgents to the including schema
LibraryShelves. As a result, the declaration and invariant on OrdinaryPatron and Staff are implicit
in the LibraryShelves schema.
The assertion part of LibraryShelves contains four invariants. (Assertions on multiple lines
are implicitly conjoined.) The first invariant states a domain property; namely, that a book
copy may not be both checked out and available for check-out at the same time. The second
invariant is a definition; the variable OnLoan is defined as the set of book copies currently
borrowed by people. The third invariant is a requirement; it restricts the set of borrowers to
persons currently registered as ordinary patrons or staff members. The fourth invariant is a
requirement as well; it restricts the number of book copies an ordinary patron may borrow
at the same time. The notations '# S', 'R⁻¹' and 'R(| S |)' are the standard ones in mathematics
to denote the number of elements in a set S, the inverse of a relation R, and the relational
image of a set S by a relation R, respectively. The '•' symbol, not to be confused with the LTL
'previous' operator, delimits quantifiers.
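The four invariants translate directly into executable checks. A Python sketch, with sets and a dictionary standing for the Z sets and partial function (the LoanLimit value is hypothetical):

    LOAN_LIMIT = 3  # hypothetical value for LoanLimit

    def shelves_invariants_hold(available, on_loan, borrowed_by,
                                ordinary_patron, staff):
        return (
            available.isdisjoint(on_loan) and                     # Available and OnLoan disjoint
            on_loan == set(borrowed_by) and                       # OnLoan = dom BorrowedBy
            set(borrowed_by.values()) <= ordinary_patron | staff  # borrowers are registered
            and all(list(borrowed_by.values()).count(p) <= LOAN_LIMIT
                    for p in ordinary_patron)                     # loan limit per patron
        )

    print(shelves_invariants_hold({"bc2"}, {"bc1"}, {"bc1": "alice"},
                                  {"alice"}, {"bob"}))            # True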
We may now complete our structuring of the system's state space through a new data
schema that includes two previously defined ones plus a specific invariant:
LibrarySystem - - - - - - - - -
Directory
LibraryShelves
This invariant states that any copy of a book currently listed in the library's directory is
either checked out or available for check-out. (Note that this property is a global one; it could
not be specified in the schema LibraryShelves as the variable WhichBook is not defined there.) In
conjunction with the first invariant in LibraryShelves, we are saying that the set of book copies
listed in the library's directory is partitioned, at any system snapshot, into the subset of copies
checked out and the subset of copies available for check-out.
As mentioned earlier, every data schema has an associated initialization schema to define
the initial state of the corresponding state variables. This is used in particular for inductive
reasoning about properties of the specification (see Section 5.4). For example, the initial state of
library shelves is specified by associating the following schema with the LibraryShelves schema,
to add the property that the corresponding sets are initially empty:
InitLibraryShelves ----------
LibraryShelves

Available = ∅ ∧ OnLoan = ∅ ∧ BorrowedBy = ∅
Operation schemas
In a Z specification building process, the elaboration of data schemas is highly intertwined
with the elaboration of operation schemas. The introduction of state variables in the latter, for
easier specification of pre- or post-conditions, has to be propagated to the former.
As in many specification languages, Z makes a distinction between two kinds of operation:
modifiers change the state of some system variables whereas observers don't.
For our library system, the operation of checking out a copy of a book is a modifier. We
specify it by the following operation schema:
Checkout ----------
Δ LibrarySystem
Ξ Directory; Ξ LibraryAgents
p?: Person
bc?: BookCopy

p? ∈ OrdinaryPatron ∪ Staff
bc? ∈ Available
# BorrowedBy⁻¹(| {p?} |) < LoanLimit
Available' = Available \ {bc?}
OnLoan' = OnLoan ∪ {bc?}
BorrowedBy' = BorrowedBy ∪ {bc? ↦ p?}
The declaration part of this schema states that the Checkout operation modifies the state of
the variables imported from the included schema LibrarySystem (as expressed by the 'Δ' prefix).
Among those, the variables imported from the included schemas Directory and LibraryAgents
are, however, left unchanged (as expressed by their 'Ξ' prefix). The operation has two instance
variables as input arguments: p and bc, whose sets of possible values are the given types Person
and BookCopy, respectively. The '?' suffix to their name declares them as input arguments for
the operation.
The notations '∈', '\' and '∪' appearing in the assertion part of the Checkout operation
schema are the standard ones for set membership, difference and union, respectively. The first
three conditions refer to input arguments only; they are thus implicitly pre-conditions for the
Checkout operation. The first pre-condition states that the input borrower must be currently
registered as an ordinary patron or a staff member. The second pre-condition states that the
input book copy must be among the available ones. The third pre-condition states that the input
borrower may not have reached his or her loan limit in the initial state before the operation
is applied. The three next conditions are implicitly post-conditions; they are equations defining
the effect, on the modified state variables, of checking out the book copy declared as an input
argument. The first two post-conditions state that this book copy has migrated from the set of
available copies to the set of borrowed copies. The third post-condition states that the function
BorrowedBy includes a new functional pair bc? ↦ p? in the operation's final state.
As in most state-based formalisms, the prime suffix decorating a modified state variable is
necessary to distinguish the state of this variable before and after application of the operation
(the corresponding equality would not be satisfiable here without such a distinction). Also
note that, without the 'Ξ' prefix on Directory and LibraryAgents in the declaration part of the
Checkout operation, we should have included additional equations stating that the initial and
final states of all variables declared in Directory and LibraryAgents are the same. The 'Ξ' prefix is
a partial Z answer to the so-called frame problem of specifying in a state-based language that
the operations make no other changes than the ones explicitly specified (Borgida et al., 1993).
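Rendering the Checkout schema as a state transition makes the pre-conditions and the frame discipline explicit. A Python sketch (the state is a dictionary; only the LibraryShelves variables are rebuilt, mirroring the Ξ prefix on Directory and LibraryAgents; the loan limit value is hypothetical):

    def checkout(state, p, bc, loan_limit=3):
        # The three implicit pre-conditions of the schema:
        assert p in state["ordinary_patron"] | state["staff"]
        assert bc in state["available"]
        assert list(state["borrowed_by"].values()).count(p) < loan_limit
        # The primed state: unmentioned variables are copied unchanged.
        new = dict(state)
        new["available"] = state["available"] - {bc}          # Available'
        new["on_loan"] = state["on_loan"] | {bc}              # OnLoan'
        new["borrowed_by"] = {**state["borrowed_by"], bc: p}  # BorrowedBy'
        return new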
The inverse operation of returning a borrowed book may be specified by the following
schema:
Return ----------
Δ LibrarySystem
Ξ Directory; Ξ LibraryAgents
bc?: BookCopy

bc? ∈ OnLoan
Available' = Available ∪ {bc?}
OnLoan' = OnLoan \ {bc?}
BorrowedBy' = {bc?} ⩤ BorrowedBy
The last post-condition in this schema says that the function BorrowedBy in the operation's
final state is the same as the one in the initial state except that it is no longer defined on its
argument bc?; that is, the function no longer includes a functional pair whose first element
is bc?. (The notation 'S ⩤ R' is the standard one in mathematics for restricting the domain of a
relation R to the complement of set S.)
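In Python terms, this domain anti-restriction is a one-line filter over the function viewed as a dictionary (a sketch, not Z syntax):

    def domain_subtract(keys, rel):
        # keys anti-restricted from rel: drop pairs whose first element is in keys.
        return {k: v for k, v in rel.items() if k not in keys}

    print(domain_subtract({"bc1"}, {"bc1": "alice", "bc2": "bob"}))  # {'bc2': 'bob'}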
The Checkout and Return operations are modifiers. Observers in our system include query
operations. Among them, bibliographical search is a basic service to be provided by our system:
BiblioSearch ----------
Ξ Directory
tp?: Topic
booklist!: P Book

booklist! = {b: dom Covers | tp? ∈ Covers(b)}
As seen from the '!' suffix, this schema declares the variable booklist as an external output
variable; the output of the BiblioSearch operation is not among the variables in the tuple of
state variables defining the system's state space. (The '!' suffix on external output variables
should not be confused with the prime suffix on state variables in an operation's output
state.)
Note how simple the specification of BiblioSearch is thanks to the built-in relational style of
expression supported by Z. Also note that the inclusion of the function Covers among the state
variables declared in the Directory schema was motivated by this specification of the BiblioSearch
functionality.
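The same relational style carries over directly to executable form. A Python sketch of BiblioSearch, with Covers rendered as a dictionary from books to sets of topics (the data are hypothetical):

    def biblio_search(covers, tp):
        # All directory books whose topic set contains the requested topic.
        return {b for b, topics in covers.items() if tp in topics}

    covers = {"b1": {"logic", "RE"}, "b2": {"databases"}}
    print(biblio_search(covers, "RE"))  # {'b1'}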
Combining schemas
Z has an elegant structuring mechanism for defining new schemas from finer-grained ones.
The new schema is defined as a logical combination of the finer-grained ones using the
propositional connectives introduced in Section 4.4.1. Such a definition amounts to introducing
a new schema explicitly whose declaration part would include all declarations from the
finer-grained schemas and the assertion part would include all assertions from these schemas,
interconnected through the corresponding logical connective. For example, the specification

NewSchema ≙ Schema1 ∧ Schema2 ∧ Schema3

amounts to introducing:

NewSchema ----------
All declarations from Schema1, Schema2, Schema3

All assertions from Schema1, Schema2 and Schema3, conjoined

This mechanism is useful, in particular, for building robust versions of operations that handle
exceptions to their pre-conditions; an exception for the Checkout operation might be specified as:
NotRegisteredAgent - - - - - - - - -
Ξ LibraryAgents
p?: Person
mes!: Message
p? ∉ OrdinaryPatron ∪ Staff
mes! = 'this person is currently not registered'
Doing so for every identifiable exception, we obtain a robust version of the Checkout
operation built from the previous one and from similarly specified exceptions:

RobustCheckout ≙ Checkout ∨ NotRegisteredAgent ∨ UnauthorizedAgent ∨ UnknownCopy

The operation schemas for the exceptions UnauthorizedAgent and UnknownCopy might then
be reused in a robust version of the operation Return.
This section has provided a sufficiently detailed account of a simple state-based specification
language to give deeper insight into typical notations and mechanisms used for elaborating
a formal specification. Other features were left aside; notably, mechanisms for parameterizing
and instantiating a specification.
This is not the end of the story, though. We then need to declare the PrecedingLoan variable
in the LibraryShelves schema, specify invariants there to define the domain and range of this
function, and write adequate post-conditions to define the final state of this function in every
Δ-operation schema on LibraryShelves, in particular in the Checkout and Return operation
schemas.
To sum up, state-based languages such as Z turn out to be especially appropriate in domains
involving complex objects that need to be structured and inter-related. They produce more
operational specifications than those obtained with history-based formalisms. They appear
more appropriate for the later stages of the specification process where specific software
services have been elicited from more abstract objectives. Parts II and III of the book will come
back to this important point.
SCR specifications have a tabular format close to decision tables for better structuring, readability and checkability of
complex combinations of conditions and events (see Section 4.2.1). It is output driven, allowing
the specifier to concentrate on one single input-output function at a time and to investigate
all the conditions under which the corresponding output must be produced. Last but not least,
SCR is supported by a rich toolset automating a variety of analyses (see Chapter 5).
SCR is built on the four-variable model that defines requirements as a relation between
monitored and controlled variables (see Section 1.1.4). The system globally consists of two com-
ponents: the machine, consisting of the software-to-be together with its associated input-output
devices, and the environment. The machine defines values for the controlled variables, whereas
the environment defines values for the monitored variables.
An SCR specification defines the machine through a set of tables together with associated
information such as variable declarations, type definitions, initial state definitions and
assumptions. Each table defines a mathematical input-output function. The specification
thus prescribes deterministic machine behaviours. The behaviour of the environment is non-
deterministic.
An SCR table may be a mode transition table, an event table or a condition table.
Events capture changes of value of system variables from one state to the next. A conditioned
event is written

@T(V) WHEN C,

which means, in terms of the prime notation introduced in the previous section,

C ∧ ¬V ∧ V',

where C and V are evaluated in the current state whereas V' is V evaluated in the next state. For
example, @T(Reset = On) WHEN Alarm = On amounts to Alarm = On ∧ ¬(Reset = On) ∧ Reset' = On.
This event occurs when Alarm is 'On' and Reset is not 'On' in the current state, and Reset is 'On'
in the next state.
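The event definition can be checked over a pair of successive states. A Python sketch following C ∧ ¬V ∧ V' (variable and condition names hypothetical):

    def at_T(var, when, old, new):
        # @T(var) WHEN when: the condition holds now, var is false now, true next.
        return when(old) and not old[var] and new[var]

    old = {"Alarm": True, "Reset": False}
    new = {"Alarm": True, "Reset": True}
    print(at_T("Reset", lambda s: s["Alarm"], old, new))  # True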
Table 4.1 illustrates a mode transition table for our train control system. The mode class is
MovementState; the variable measuredSpeed is monitored by the train controller. The first row
in the table states that if MovementState is 'MovingOK' and the event @T(measuredSpeed = 0)
occurs, then MovementState is switched to 'Stopped'. Rows must be disjoint. If none of the
rows applies to the current state, MovementState does not change.
Complex machines may be defined in terms of several mode classes operating in parallel.
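A mode transition table itself can be sketched as a list of disjoint rows. A Python rendering (the single row below is the one from Table 4.1, with the event written as a plain string):

    ROWS = [  # (current mode, event, new mode); rows must be disjoint
        ("MovingOK", "@T(measuredSpeed = 0)", "Stopped"),
    ]

    def next_mode(mode, event):
        for current, trigger, target in ROWS:
            if mode == current and event == trigger:
                return target
        return mode  # no row applies: the mode class keeps its value

    print(next_mode("MovingOK", "@T(measuredSpeed = 0)"))  # 'Stopped'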
Event tables
An event table defines the various values of a controlled variable or a term as a function
of a mode and events. The mode belongs to an associated mode class (AMC). A term is an
auxiliary variable defined by a function on monitored variables, mode classes or other terms.
Using term names instead of repeating their definition helps make the specification more
concise.
Table 4.2 illustrates an event table defining the auxiliary term Emergency to capture emer-
gency situations in which appropriate actions must be taken. This term is defined as a function
of the AMC MovementState and the monitored variables Alarm and Reset. The last column in
Table 4.2 states that 'if the reset button is pushed in a state where the alarm is "On" then
Emergency must become false whatever the current mode is'.
Condition tables
A condition table defines the various values of a controlled variable or a term as a total function
of an AMC mode and conditions. A condition is a predicate defined on one or more monitored,
controlled or internal variables. Conditions in a row are expected to be disjoint (for the table
to be a function) and covering the entire state space (for the function to be total).
Table 4.3 illustrates the use of a condition table to specify the controlled variable DoorsState
as a function of the AMC MovementState and the term Emergency. The first row and column state
that if MovementState is 'Stopped' with AtPlatform or Emergency being true, then DoorsState must
be 'Open'. An entry False in an event (or condition) table means that no event (or condition)
may cause the variable defined by the table to take the value in the same column as the entry.
Note that there is always one output value whose corresponding condition is true.
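A condition table row then amounts to a total function on conditions. A Python sketch of the 'Stopped' row of Table 4.3 (the behaviour for the remaining modes is a hypothetical placeholder):

    def doors_state(movement_state, at_platform, emergency):
        if movement_state == "Stopped":
            # The conditions are disjoint and cover the whole state space.
            return "Open" if (at_platform or emergency) else "Closed"
        return "Closed"  # placeholder for the other modes

    print(doors_state("Stopped", at_platform=True, emergency=False))  # 'Open'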
SCR is built on the synchrony hypothesis; that is, the machine is assumed to react infinitely
fast to changes in its environment. It handles one input event completely before the next one
is processed. This hypothesis explains why (a) a mode transition table may specify the next
value of the mode class in terms of the current and next values of monitored variables (and the
current value of the mode class); (b) an event table may specify the next value of the target
variable in terms of the current and next values of other variables (and the current value of this
variable); and (c) a condition table may define the next value of the target variable in terms of
the next value of other variables.
It may be worth pointing out that event-based languages such as SCR have a different kind
of semantics than specification languages based on temporal logic or state-based languages
such as Z. The former have a generative semantics; that is, every state transition in the
system is forbidden except the ones explicitly required by the specification. The latter have
a pruning semantics; that is, every state transition in the system is allowed except the ones
explicitly forbidden by the specification. Unlike Z, for example, event-based languages make it
unnecessary to specify explicitly that 'nothing else changes' - in other words, there is a built-in
solution to the frame problem.
One assertion to specify these operations further might be a law of composition defining the
effect of applying one operation to the result of another.
The paradigm differs significantly from the previous ones in that there is no explicit or
implicit notion of state. A system history here corresponds to a trace of successive applications
of operations.
Specification languages such as OBJ, ASL, PLUSS or LARCH rely on the algebraic paradigm.
They differ notably in the mechanisms available for structuring the specification into manageable
Units.
In addition to the signature of each operation associated with the target concept, the
declaration part of a specification unit may import concepts and operations specified in other units.
In some languages it is also possible to specify the structure of the target concept by the
use of type constructors such as SetOf[T], to construct sets of elements of type T; SequenceOf[T], to
construct sequences of elements of type T; Tuple(T1, ..., Tn), to construct tuples of elements of
corresponding type; and so on. Such constructors are parameterized types that are algebraically
pre-defined, together with their standard operations, in a specification toolkit. For example, we
might add declarations of this kind to the preceding signatures.
The effect undef_leave of the operation combination in the latter case corresponds to an
exception (to be subsequently implemented by, for example, an error message).
Writing a consistent, complete and minimal set of composition laws in the assertion
part of an algebraic specification unit is not necessarily obvious. Which pairwise operation
combinations should be considered; which ones should not? The following heuristic answers
this question and provides a systematic way of building the specification. It is based on a
stateless counterpart of the classification of operations introduced in Section 4.4.3:
• Modifiers allow any instance of the target concept to be obtained by composition with
other modifiers. A necessary condition for an operation to be a modifier is to have the
target concept among the components of its output set. For example, the operations of
creating an empty sequence, appending an element to a sequence, removing an element
or concatenating two sequences are modifiers for the concept of sequence.
• Generators form a minimal subset of modifiers that allow any instance of the target
concept to be generated through a minimal number of compositions of them. For
example, the operations of creating an empty sequence and appending an element to
a sequence form a generator set, as any sequence can be generated from these in a
minimal number of compositions. Adding to this set the operation of element removal
would result in redundant compositions to generate the target sequence. In our train
example, the operation EnterBlock is a modifier in the generator set.
• Observers allow us to get information about any instance of the target concept. A
necessary condition for an operation to be an observer is to have the target concept
among the components of its input set. For example, the operations of getting the length
of a sequence or checking whether some element occurs in it are observers. In our train
example, the operation On is an observer.
In this procedure, <term> specifies the effect of the corresponding composition, by means of
operations defined in the same unit or imported from other units; <case> specifies the condition
for the equation to hold (if any), in terms of arguments appearing in the left-hand side. The
term <term> must denote an instance of the output set declared in the operation's signature.
The conditions <Case> often cover a base case, where the defined operation is applied to the
generator of an empty concept instance, and a recursive case, where the defined operation
occurs in the right-hand side as well, where it is applied to a strictly 'smaller' concept instance.
The correctness argument underlying this specification-building procedure is that the terms
Gen (... ) capture the various ways of generating any instance of the target concept, and all other
operations are defined on each of these.
To get further insight into the difference between the algebraic and state-based paradigms,
let us algebraically specify some of the library system operations that were specified in Z
in Section 4.4.3. The specification unit for the Library concept might include the following
declarations:
EmptyLib: → Library
AddCopy: Library × BookCopy → Library
RemoveCopy: Library × BookCopy → Library
Checkout: Library × BookCopy → Library
Return: Library × BookCopy → Library
CopyExists: Library × BookCopy → Boolean
CopyBorrowed: Library × BookCopy → Boolean
We then need to find out what the generators are. For the portion of the specification we are
considering, we need to be able to generate an arbitrary set of book copies managed by our
library system and an arbitrary set of copies on loan. The operations EmptyLib and AddCopy
are the generators for the former set, whereas the operation Checkout is the generator for the
latter set. The other operations to be composed with them are RemoveCopy, Return, CopyExists
and CopyBorrowed. Hence the following equations for a minimally complete set of composition
laws regarding our declared operations:
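For instance (a partial sketch of such laws; the full set covers all required pairwise compositions):

CopyExists(EmptyLib(), bc) = false
CopyExists(AddCopy(lib, bc'), bc) = (bc = bc') ∨ CopyExists(lib, bc)
RemoveCopy(AddCopy(lib, bc'), bc) = lib                                 if bc = bc'
RemoveCopy(AddCopy(lib, bc'), bc) = AddCopy(RemoveCopy(lib, bc), bc')   if bc ≠ bc'
Return(Checkout(lib, bc'), bc) = lib                                    if bc = bc'
Return(Checkout(lib, bc'), bc) = Checkout(Return(lib, bc), bc')         if bc ≠ bc'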
As we can see, the right-hand side of many operation compositions consists of a base case and
a recursive one on a smaller structure, for example a library with one copy less. In particular,
the composition RemoveCopy;AddCopy has no effect if these operations apply to the same book
copy bc; otherwise it amounts to the composition AddCopy;RemoveCopy where the RemoveCopy
operation is applied to the remainder of the library's set of copies of books. Similarly, the
composition Return;Checkout has no effect if these operations apply to the same book copy bc;
otherwise it amounts to the composition Checkout;Return where the Return operation is applied
to a smaller set of borrowed copies of books.
Also note that we dropped the compositions Return(EmptyLib(), bc) and Remove-
Copy(EmptyLib(), bc) as they are ruled out by the stated pre-conditions. Similarly, the composition
Return(AddCopy(lib, bc'), bc) is ruled out in case bc = bc' because of the stated pre-condition
CopyBorrowed(lib, bc) on the Return operation (and a domain property stating that an added
copy is not borrowed). Without such pre-conditions, some exception handling should have
been specified instead.
The commutativity axiom pattern often applies to modifiers Op that are not in the generator
set - see the modifiers Return and RemoveCopy. The independence axiom pattern often applies
to observers Op - see the observers CopyExists and CopyBorrowed. Such patterns may be used
by the specifier in writing a first specification sketch or to check it.
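As an executable rendering of this base-case/recursive-case scheme, library values can be represented as terms built from the generators EmptyLib and AddCopy, with observers defined by recursion on them (a Python sketch; Checkout terms are omitted for brevity):

    # A library term is either ('EmptyLib',) or ('AddCopy', sub_term, copy).
    def copy_exists(lib, bc):
        if lib == ("EmptyLib",):
            return False                          # base case
        _, sub, bc2 = lib                         # ('AddCopy', lib', bc')
        return bc == bc2 or copy_exists(sub, bc)  # recursive case

    lib = ("AddCopy", ("AddCopy", ("EmptyLib",), "bc1"), "bc2")
    print(copy_exists(lib, "bc1"))  # True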
The algebraic specification of the BiblioSearch operation, specified in Z in Section 4.4.3,
illustrates the importation of other specification units.
In this specification, the imported operations nil and AddList are the standard operations on lists
for creating an empty list and appending an element to a list, respectively. They are defined in
the parameterized specification unit List[T] imported by Directory.
A frequently cited benefit is that algebraic specifications are more abstract; hiding states
removes the need for notational devices, such as the prime notation in state-based languages
to distinguish the input and output states of an operation, or frame axioms to specify that the
operations make no other changes.
Algebraic languages also provide rich mechanisms for structuring and reusing specifications
such as parameterization, inheritance or enrichment of specification units.
On the other hand, there are serious limitations for requirements engineering. Equational
languages restrict what we can express. As in state-based languages, there is no built-in
historical referencing. Casting a specification as a set of recursive equations defining operation
compositions may be felt to be unnatural and turns out to be difficult in practice. We must
identify the right set of generators and perform inductive reasoning on well-founded sets to
build correct recursive equations - much as when programming in a functional language.
Algebraic specifications thus appear to be not that abstract on second thoughts.
Process algebras
This paradigm characterizes systems as collections of concurrent and communicating processes
that can be executed by more or less abstract machines under specific laws of interaction (Hoare,
1985; Milner, 1989). This paradigm is used for specifying and analysing design solutions - in
particular, required interactions among components of a software architecture, communication
protocols or security protocols. It is therefore not relevant for specification at requirements
engineering time.
4.4.7 Formal specification: strengths and limitations
The formal specification paradigms reviewed in this chapter aim to express both the declaration
and assertion parts of a specification in a formal language. This language is logic based and
provides a formal syntax, a precise semantics for interpreting statements, and rules for
inferring new statements from given ones. The specification paradigms differ by their focus
(histories, states, event-based transitions or operation compositions), their style (declarative
or operational) and their structuring mechanisms for specification-in-the-large. Each paradigm
has a built-in semantic bias making it more effective in specific situations. State-based and
algebraic specifications focus on sequential behaviours while providing rich structures for
defining complex objects. They are better targeted at transactional systems. Conversely, history-
based and event-based specifications focus on concurrent behaviours while providing only
fairly simple structures for defining the objects being manipulated. They are better targeted at
reactive systems.
Formal specification approaches have common strengths. Unlike statements in natural
language, formal assertions are less inclined to some of the specification defects discussed
in Section 1.1.7, notably ambiguities, noises, forward references, remorse and unmeasurable
statements. The reason is that the language in which they are written offers precise rules for
interpreting statements and built-in mechanisms for structuring the specification into pieces.
Formal specification languages support much more sophisticated forms of analysis of the
specification, such as specification animation for adequacy checking, algorithmic or deductive
verification of desired properties and formal completeness checks. (We come back to these in
the next chapter.) Such analyses can be automated by tools. Moreover, formal specifications
allow other useful artefacts to be generated automatically, such as counterexamples to claims,
failure scenarios, test cases, proof obligations, specification refinements and source code.
The by-products of formal specification and analysis are often recognized as being as important
as the formal product itself. Better informal documentation is obtained by feedback from formal
expression, structuring and analysis. The architecture, source code and test data are more likely
to satisfy the specification.
On the downside, formal specification languages have limited expressiveness. They mostly
address the functional aspects of the target system. The main exceptions are the timing
properties we can capture with history-based or some event-based languages.
Formal specifications are hard to write and hard to read. Getting adequate, consistent
and complete specifications requires expertise and training. The input-output formats of
many analysis tools require encoding or decoding by experts. Formal specification approaches
are thus not easily accessible to practitioners. We could hardly imagine showing them to
stakeholders.
Process models that integrate formal specification approaches with conventional develop-
ment practices, including inspections, reviews and testing, are also lacking (Craigen et al.,
1995).
Despite such limitations, there are many success stories using formal specifications for
real systems (see the bibliographical notes at the end of this chapter). They range from the
reengineering of existing systems to the development of new systems. Evidence has been
reported that the projects where they were used, while resulting in products of much higher
quality, did not incur higher costs; on the contrary. Although many of the stories concern
safety-critical systems, notably in the transportation domain, there are other target areas such as
information systems, telecommunication systems, power plant control, protocols and security. A
fairly impressive example is the Paris metro system. The traffic on Line 14 (Tolbiac-Madeleine)
is entirely controlled by software. The safety-critical components of the software were formally
developed using the B state-based specification and refinement method (Abrial, 1996). The
refinement-based development was entirely validated by formal, fully automated proofs. Many
errors were found and fixed during development.
Formal specifications are mostly used in the design phase of a software project to elaborate,
specify and analyse a functional model of the software-to-be. Chapters 17-18 will further
describe how they can be used earlier for RE-specific tasks such as conflict management,
risk analysis, the refinement of system objectives into software requirements and environment
assumptions, and requirements animation. This requires a more lightweight and flexible
framework where multiple specification paradigms are integrated, statements can be formalized
only when and where needed, and RE-specific abstractions are supported.
4.5 Conclusion
The requirements emerging from the elicitation and evaluation phases of the RE process
must be organized in a coherent structure and specified precisely to form the requirements
document. The latter should meet the various qualities discussed in Section 1.1.7. This chapter
has reviewed the main techniques available for requirements specification and documentation,
from free to structured text, and from tabular formats to semi-formal diagrams to formal
specifications.
Each technique was seen to be more suited to specific aspects of the target system, and
to have its own merits and limitations in terms of expressive power, structuring mechanisms,
analysability, usability and communicability. As those merits and limitations are complementary
to each other, an optimal trade-off should be reached by combined use of multiple techniques.
Which combination works best may depend on the domain, the prominence of non-
functional concerns such as safety-critical or security-critical ones and the project specifics,
including the level of expertise and training of project participants.
Beyond such mutual reinforcement, the semi-formal and formal approaches in this chapter
have common limitations with respect to the nature of the RE process discussed in Chapter 1.
As noted in Section 1.1.4, this distinction is essential in the context of engineering require-
ments. We can negotiate, weaken, change or find alternatives to requirements or assumptions;
we cannot do so for domain properties.
The approaches reviewed essentially provide sets of notations together with tools for a
posteriori analysis. As a result, they induce a process of elaborating specifications by iterative
debugging. In view of the inherent complexity of the RE process, we should consider a more
constructive approach where the quality of the requirements documentation is ensured by the
method followed.
The above limitations provide the main motivation for the goal-oriented, multiparadigm
modelling and specification method presented in Parts II and III of the book.
Summary
• The requirements, assumptions and domain properties emerging from the elicitation
and evaluation phases of the RE process must be organized into a coherent structure
and specified precisely in the requirements document (RD). The specifications must
be complete, consistent, adequate, unambiguous, measurable, pertinent, realistic and
comprehensible.
• Free documentation in unrestricted natural language has no limitation in terms of
expressiveness and communicability. However, it is much more likely to result in
ambiguities, noises, forward references, remorse, unmeasurable statements and opacity.
• Disciplined documentation in structured natural language addresses some of these
problems. The specifier may be guided by local rules on how statements should be
formulated and structured, and by global rules on how the requirements document
should be organized. Locally, we should follow technical writing principles, introduce
decision tables for structuring and checking complex combinations of conditions, and
use statement templates to further document statements with useful information such as
their identifier, type, fit criterion, rationale, elicitation source and priority level. Globally,
we may follow specific rules for grouping RD items within sections. Global templates
allow us to organize the RD into specific sections according to organization-specific,
domain-specific or international standards.
• Diagrammatic notations provide a means for summarizing portions of the requirements
document in graphical language. This language is semi-formal in that the items
relevant to a system view are declared formally, whereas the statements describing or
prescribing their properties are informally stated in natural language. As they are formal,
the declarations are amenable to surface-level analysis by automated tools to check their
consistency and completeness. As they are graphical, they are easier to communicate.
Diagrammatic notations differ by the specific aspect of the target system that they are
intended to capture. They should be integrated, as those aspects are complementary.
Integration can be achieved by enforcement of inter-diagram consistency rules.
• Context diagrams allow us to define the system scope in terms of relevant components
and phenomena shared among them. A problem diagram further highlights the machine
together with the controlled phenomena and the requirements constraining them. Some
problem diagrams can be obtained by instantiation of frame diagrams that capture
generic components and connections for common problem classes.
• Entity-relationship diagrams are a popular notation for capturing the structural aspects
of the target system. Conceptual items are characterized by attributes and linked to
others through domain-specific relationships. They can be aggregated or specialized
into other conceptual items. Multiplicities formalize simple requirements and domain
properties.
• SADT, DFD and use case diagrams allow us to capture the functional aspects of
the system. SADT actigrams declare system operations by their input/output data,
controlling data or events, and processing agents. Datagrams declare data in a dual
way by their producing or consuming activities, controlling activities and the resources
required to process them. DFD diagrams capture similar information in a simpler but
less expressive way through bubbles interconnected by dataflows, with data repositories
or system components as start and end points. Use case diagrams outline the system
operations, grouping them by component and suggesting their interaction with external
components. Those three types of diagram support functional decomposition.
• Event trace diagrams provide an easy-to-use notation for specifying examples of actual
or desired interactions among system component instances. The concrete, narrative style
of this notation makes it especially appropriate for scenario capture during requirements
elicitation and for counterexample visualization during requirements verification.
• State machine diagrams focus on the behavioural aspects of the target system. The
admissible behaviours of a system component are captured by sequences of possible
state transitions for items that the component controls. Such sequences are paths
in a directed graph where nodes represent states and arrows represent transitions
triggered by events. Transitions may be guarded by conditions. Concurrent behaviours
are represented by the parallel composition of state machine diagrams; a behavioural
trace in the composite diagram is then a finite sequence of aggregated states from the
composing SM diagrams.
• R-net diagrams visualize answers to WHAT IF? questions about external stimuli. An
R-net specifies all the operations that a system component is required to perform,
possibly under a particular condition, in response to a single input stimulus.
• Formal specification languages go one step further by formalizing the statements about
RD items as well, not just their declaration. The benefits expected from this formalization
are a higher degree of precision in the formulation of statements, precise rules for
their interpretation and much more sophisticated forms of validation and verification
that can be automated by software tools. In addition, the languages provide structuring
mechanisms for organizing the specification into manageable units.
• Formal specification languages are grounded on some logic - often a sorted first-order
logic to represent typed variables that can be quantified universally or existentially.
The languages are defined by syntax rules, semantic rules for assigning a precise
meaning to formal statements under some interpretation, and inference rules for
generating new statements as consequences. When formal constructs are used for RE,
it is essential to document explicitly and precisely what they designate in the target
domain. Formal assertions generally refer to state variables designating objects involved
in the requirements, assumptions or domain properties. Depending on the specification
paradigm, the assertions are interpreted over system states or histories, being satisfied
by some of them and falsified by the others.
• In the history-based paradigm, assertions capture admissible histories of system objects,
where a history is a temporal sequence of states. Temporal logics support a declarative
style of specification where such sequences are kept implicit thanks to temporal
connectives for historical referencing to past and future states. LTL formulations tend to
be close to their natural language counterpart. Specification patterns can be used when
this is not the case.
• In the state-based paradigm, assertions capture admissible system states at some
arbitrary snapshot. They are formulated as system invariants and pre- and post-
conditions on system operations. State-based languages provide a variety of constructs
for structuring the state space and for decomposing the specification into manageable
units. Among them, Z is a fairly simple relational language that relies on sets and
tuples as sole primitives for structuring the state space. A Z specification is essentially
a collection of data and operation schemas that can be logically composed stepwise to
form the entire specification.
• In the event-based paradigm, assertions capture event-driven transitions between classes
of states along admissible system histories. A specification makes the transition function
underlying the corresponding SM diagram more precise by formally characterizing the
input and output states of transitions, their triggering events and their guard conditions.
SCR is an event-based language, based on the four-variable model, especially suitable
for formal specification of requirements for control systems. SCR specifications have
a tabular format close to decision tables. Various kinds of tables are used to define
controlled variables as functions of states of monitored variables, events and conditions.
Each table represents a single input-output function. The specifier may concentrate on
one table at a time and investigate all the conditions under which the corresponding
output must be produced. Such output-driven specification provides a better guarantee
of the completeness of requirements.
• In the algebraic specification paradigm, assertions are conditional equations capturing
the admissible laws for composing system operations. They are grouped by the
concept to which they refer. There is no notion of state; histories correspond to traces
of successive applications of operations. A set of equations can be built systematically
by identifying the set of generators for the associated concept and then by defining
the effect of pairwise combinations of all other operations with each generator. The
equations are most often decomposed into base cases and recursive cases; this requires
some inductive reasoning. This more elaborate reasoning is the price to pay for the
executability of the specification through term rewriting.
• Semi-formal and formal approaches may improve the structuring of the specification,
augment its precision and increase its analysability. Their limitations with respect to
the nature of the RE process were discussed as well.
Notes and Further Reading
A reason for such rejection was probably the major focus of UML on modelling notations for designers
and programmers; data flows among architectural components contradict the principle of
information hiding (Parnas, 1972) used in object-oriented programming.
Event trace diagrams were proposed in multiple forms, variants and extensions.
Message Sequence Charts (MSC), the original ITU standard, is described in ITU (1996).
A simple form of MSC was integrated into OMT (Rumbaugh et al., 1991). Sequence
diagrams, the UML variant, are described in Booch et al. (1999). The main extensions of
event trace (ET) diagrams concern the representation of time and durations, co-regions to
capture commutative interactions, ET prefixes as guards for ET suffixes to take place, and
high-level MSCs to 'flowchart' MSCs (Harel & Thiagarajan, 2003). The language of Live
Sequence Charts (LSCs) is a formal extension that makes an explicit distinction between
possible, necessary and forbidden interactions (Harel & Marelly, 2003).
Variants of state machine diagrams have been known in automata theory since the
early days as Mealy or Moore machines (Kain, 1972). They were used for modelling
behaviours of a wide range of systems, including telephone systems (Kawashima, 1971)
and user interfaces (Wasserman, 1979). Harel introduced statecharts as a significant
extension to support concurrency and hierarchical structuring of states (Harel, 1987,
1996). An informal variant of statecharts was tentatively integrated into OMT (Rumbaugh
et al., 1991) and incorporated in the UML standards (Rumbaugh et al., 1999). RSML
(Leveson et al., 1994) is a formal variant of statecharts further discussed below. Labelled
transition systems are formal SM diagrams that support concurrency through a simple
parallel composition operator; traces there are sequences of events rather than sequences
of states (Magee & Kramer, 2006). The use of SM diagrams for specifying reactive systems
is further discussed in Wieringa (2003).
There are many textbooks on logic. Manna and Waldinger (1993) and Gries (1981)
are highly recommended for their orientation towards applications in computing science.
The importance of explicitly documenting what formal predicates and terms designate
in the domain is argued further in Zave and Jackson (1997). The ERAE specification
language established an important bridge between ER diagrams and first-order logic
(Dubois et al., 1991).
There are also numerous textbooks on formal specification. A comprehensive presen-
tation of the state-based and algebraic paradigms can be found in Turner and McCluskey
(1994) or Alagar and Periyasamy (1998). A widely accessible introduction is Wing (1990).
A road map on specification techniques and their strengths and limitations is given in van
Lamsweerde (2000a).
The formal systems underlying history-based specification in linear and branching-time
temporal logics are best introduced in Manna and Pnueli (1992) and Clarke et al. (1999),
respectively. Time can be linear (Pnueli, 1977) or branching (Emerson & Halpern, 1986).
Time structures can be discrete (Manna & Pnueli, 1992; Lamport, 1994), dense (Greenspan
et al., 1986) or continuous (Hansen et al., 1991). The specified properties may refer to
time points (Manna & Pnueli, 1992; Lamport, 1994), time intervals (Moser et al., 1997)
or both (Greenspan et al., 1986; Jahanian & Mok, 1986; Allen & Hayes, 1989; Ghezzi
& Kemmerer, 1991). Most often it is necessary to specify properties over time bounds;
real-time temporal logics are therefore necessary (Koymans, 1992; Morzenti et al., 1992;
Moser et al., 1997).
Invariants and pre-/post-conditions as abstractions of program executions were first
proposed in Turing (1949). Different axiomatic systems were introduced almost simulta-
neously and independently to formalize this principle (Floyd, 1967; Hoare, 1969; Naur,
1969). The term 'snapshot' in the context of state-based specification is coined from the
latter paper. A calculus for weakest preconditions appeared in Dijkstra (1976).
State-based specifications are sometimes called model-based specifications. The latter
terminology is somewhat confusing, as virtually every type of specification for a complex
system is based on a model whatever the underlying specification paradigm. The initial
design of the Z language was reported in Abrial (1980). It was rooted in early work
on the semantics of the binary relational model (Abrial, 1974). The Z user manual
was developed from extensive experience at Oxford (Spivey, 1992). An interesting
collection of specification case studies in Z is presented in Hayes (1987). Good textbooks
on specification and refinement in Z include Potter et al. (1996) and Woodcock and
Davies (1996). These books also provide further background on the underlying discrete
mathematics. Object-oriented variants such as Object-Z and Z++ are described in Lano
(1995). There have been many books on other state-based languages, their use, dedicated
analysis techniques and refinement calculi. For VDM, the original book remains the
main reference (Jones, 1990); it comes with a series of case studies (Jones & Shaw,
1990). A more recent book puts emphasis on the process of modelling in VDM-SL, the
standardized version of the language (Fitzgerald & Larsen, 1998). For B, 'the' book is
Abrial (1996); shorter introductions include Lano (1996) and Schneider (2001). A variety
of B specification case studies is provided in Sekerinski and Sere (1999). For Alloy,
the language and specification analyser are described in Jackson (2006). For OCL, the
best reference probably remains Warmer and Kleppe (2003); as the language is being
standardized by the Object Management Group (OMG), the latest version should be
checked on the OMG website (www.omg.org/docs). The frame problem in state-based
languages is discussed in great detail in Borgida et al. (1993).
There have been quite a few event-based specification languages, with fairly different
semantics. The SCR notation was first introduced in Heninger (1980) based on experience
in specifying the flight software for the A-7 aircraft (Heninger et al., 1978). The language
was updated and further detailed in Parnas and Madey (1995). The formal semantics
used in the SCR toolset is described in Heitmeyer et al. (1996). RSML is an event-based
language that extends statecharts with interface descriptions and direct communication
among parallel state machines; state transitions are more precisely defined there (Leveson
et al., 1994). Like statecharts, RSML is a graphical formalism supporting hierarchical
state machines. It integrates decision tables for the definition of outputs under complex
combinations of conditions. Like SCR, the technique has been validated by experience in
specifying the TCAS air traffic collision avoidance system (Heimdahl & Leveson, 1996).
• Requirements animation translates the formal specification into some equivalent executable form. By submitting events simulating the environment to
an animation tool, we can check the appropriateness of specified behaviours in response
to such events. The primary quality being checked here is requirements adequacy. This
technique is discussed in Section 5.3.
• Formal verification covers a wide range of more sophisticated checks that tools
can perform on a formal specification. These include type consistency checks, com-
pleteness checks on decision tables, and algorithmic or deductive checking that a
behaviour model satisfies a desired property. This family of techniques is discussed in
Section 5.4.
A preliminary phase determines the size and membership of the inspection team; the
mode of the inspection process; the schedule and scope of each review meeting; and the
format of inspection reports. Guidelines may be used for this; see Section 5.1.2.
Each inspector then reads the RD or part of it individually to look for defects. This phase can be
operated in several modes:
• Free mode. The inspector receives no directive on what part of the RD to consider
specifically or what type of defect to look for. The review entirely relies on his or her
initiative and expertise.
• Checklist based. The inspector is given a list of questions and issues to guide the defect
search process. He or she may be directed to a specific part of the RD. (Checklists are
discussed at the end of this section.)
• Process based. Each inspector is given a specific process to follow for defect search.
The RD is distributed among inspectors playing different roles, according to different
perspectives. Each of them is assigned specific targets, checklists and procedures or
techniques for checking a specific class of defect. For example, one inspector for
our train control system might be assigned the role of domain expert to check all
safety-related requirements and assumptions using fault tree analysis (see Section 3.2.2).
Another inspector might play the developer role and check all performance-related
requirements and assumptions on the train-station communication infrastructure. In our
meeting scheduling system, one inspector might be assigned the meeting initiator role
to focus on functional requirements and check them for adequacy, consistency and
completeness; another might play the meeting participant role to check convenience-
related requirements; another might play the developer role to focus on interoperability
requirements and so on.
The aim of this phase is thus to discard false positives; these are concerns pointed out
by one inspector which on second thoughts are perceived by the meeting participants not
to be a real problem. The authors of the RD may sometimes participate in order to provide
clarifications and counterarguments.
RD consolidation
The requirements document is revised to address all concerns expressed in the inspection
report.
There have been quite different opinions on the importance of review meetings. People have
argued that the primary source of defect detection is the individual reviewing phase (Parnas
& Weiss, 1985). Empirical studies suggest that individual reviewing in process-based mode
generally results in higher defect-detection rates and more effective reviews than reviewing in
free or checklist-based mode, even to the point that inspection meetings bring no improvement
in view of their cost (Porter et al., 1995; Regnell et al., 2000). On the other hand, review meetings
appear effective in reducing false positives (Porter & Johnson, 1997).
• WHAT? The inspection report should be accurate and informative on specific points.
It should contain substantiated facts, not opinions. It should be constructive and not
offensive to the authors of the RD. It must be approved by all inspectors. A report
structure may be suggested to provide inspector guidance in individual reading and
defect collection. To reduce writing overheads, the report structure and format should be
lightweight. To encourage active inspection, it should leave room for free comments.
• WHO? The primary objective of inspection and reviews is to find as many actual defects
as possible. The inspectors should therefore be independent from the authors of the RD.
They should not have a conflict of interest with them, or be in charge of evaluating
them personally. To increase the coverage of the defect space, the inspection team
should be representative of all stakeholder viewpoints. It should include people with
different backgrounds, for example a domain expert, an end-user and a developer. A
quality assurance specialist may be appropriate as well. The minimum team size usually
advocated is three.
• WHEN? Requirements inspection should not be applied too soon, to avoid detecting
defects that would have subsequently been caught by the authors anyway, nor too late,
to avoid their downward propagation to subsequent project phases. Shorter, repeated
meetings are more productive than longer, fewer ones. Two-hour meetings are generally
recommended.
• WHERE? Empirical evidence from software testing suggests that the more defects are
found at a particular place, the more scrutiny is required at that place and the places
impacting on it or impacted by it. In any case, the inspection should carefully consider
places where critical aspects of the system are presented, such as safety-related or
security-related ones.
Defect-based checklists
These are lists of questions structured according to the various types of defects that we can
find in a requirements document (see Section 1.1.7). Table 5.1 provides such a checklist for the
defect table given in Chapter 1 (see Table 1.1). The table is split between errors and flaws. The
granularity of an RD item may vary from a single statement to a group of statements following
each other in the RD.
Defect-based checklists cover the entire defect search space in terms of an extensible set of
concrete questions for each defect type. Inspectors are thereby instructed on what to look for.
The checklists remain fairly generic, though.
Quality-specific checklists
Such checklists specialize defect-based ones to specific categories of non-functional require-
ments, for example safety, security, performance, usability and so forth (see Section 1.1.5).
For example, Lutz has defined a checklist for safety-related errors based on her studies
of prominent error patterns in NASA safety-critical software. Her checklist specializes Jaffe's
guidewords and correctness criteria to the context of interface and robustness requirements.
(Guidewords were introduced in Section 3.2.2.) Omissions are the primary target defects here
as their consequences are in general the most serious. Here is a sample (Lutz, 1996).
Domain-specific checklists
These may specialize generic and quality-specific checklists to the specific concepts and
standard operations found in the domain. The aim is to provide increasingly specific guidance
in defect search. For example, we might define defect checklists specific to the meeting
scheduling domain for the operations of initiating a meeting or determining a meeting schedule
from participants' constraints. We might do the same in the train control domain for the
operations of controlling train accelerations or doors opening.
Language-based checklists
Such checklists specialize the defect-based ones to the specific constructs of the structured,
semi-formal or formal specification language used in the requirements document. The richer
the language is, the more specific and dedicated the checklist will be. Moreover, most
checks can be automated when those constructs are formalized (see Sections 5.2 and 5.4
hereafter).
For the statement templates discussed in Section 4.2.1, a checklist might be structured
according to the specific template used.
For the decision tables discussed in Section 4.2.1, completeness and redundancy checks can
be performed almost for free - just by counting the number of columns and entry conditions.
For example, consider the decision table in Table 5.2, inspired from our train braking example
in Section 4.2.1.
This table has seven columns. For three input conditions, there should be eight columns
to enumerate all possible combinations of conditions. One combination is thus missing. It
turns out to be the critical case where the train does not receive an outdated command but is
entering the station block too fast with a preceding train too close. In this missing case, full
braking should be activated.
For a binary decision table with N entry conditions, there must be 2^N columns for the
table to list all possible combinations of conditions exhaustively. If the number of columns
is strictly less than 2^N, the table is incomplete; if this number is strictly greater than 2^N, the
table is redundant. The missing combinations must be identified. Some might be impossible
in view of domain properties, but this has to be documented in the RD. For those that are
missing and actually possible, a corresponding effect condition must be specified to complete
the specification.
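As a small illustration of this counting rule, here is a sketch in Python; the condition names are hypothetical stand-ins for the entry conditions of the train braking table, not the book's exact wording:

from itertools import product

# Hypothetical names for the three input conditions of the braking table.
CONDITIONS = ('OutdatedCommand', 'EnteringTooFast', 'PrecedingTrainTooClose')

# The seven condition combinations actually listed as table columns.
columns = [
    (True, True, True),
    (True, True, False),
    (True, False, True),
    (True, False, False),
    (False, True, False),
    (False, False, True),
    (False, False, False),
]

all_combinations = set(product((True, False), repeat=len(CONDITIONS)))
present = set(columns)

missing = all_combinations - present        # incompleteness: absent columns
duplicates = len(columns) - len(present)    # redundancy: repeated columns

for combination in sorted(missing):
    print('Missing column:', dict(zip(CONDITIONS, combination)))
print('Duplicated columns:', duplicates)

Run on these seven columns, the sketch reports the single missing combination - no outdated command received, entering too fast, preceding train too close - which is precisely the critical case noted above.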
For the global templates discussed in Section 4.2.2, a checklist might contain questions
regarding the conformance of the RD organization to the structure prescribed by the template,
and the matching of each section's actual content to the prescribed section heading.
For the diagrammatic languages discussed in Section 4.3, the checklists may include
surface-level consistency and completeness checks within diagrams and between diagrams, for
example:
• Does each input data flowing in this DFD operation appear as input data flowing in
some upstream sub-operation in the DFD refining this operation? Does each output data
flowing out of this DFD operation appear as output data flowing out of some downstream
sub-operation in the DFD refining this operation?
• Are the input and output data of this DFD operation declared in an ER diagram?
• Does this relationship in this ER diagram have an adequate multiplicity?
• For the normal scenario described by this ET diagram, is there any possible exception
scenario associated with it and not specified in the set of corresponding abnormal
scenarios?
• Does the event labelling this interaction in this ET diagram trigger a transition in the SM
diagram generalizing it? Is the event trace in the former diagram covered by a path in the
latter diagram?
• Is there any state other than the final one in this SM diagram with no outgoing transition?
• Are the dynamic attributes or relationships defining this state in this SM diagram declared
in an ER diagram?
Note that some of these questions correspond to the inter-view consistency rules discussed in
Section 4.3.9, and to the kind of static semantics check performed by compilers - like 'Is this
variable declared?' or 'Is this declared variable used?'. These checks need not be manually
performed by inspectors, as tools can automate them (see Section 5.2).
For the formal specification languages discussed in Section 4.4, the checklists may include
semantically richer consistency and completeness checks within specification units and between
units. In Z, for example, a checklist may include questions such as the following:
• Is the type of the right-hand-side expression defining the output variable in this equational
post-condition compatible with the declared type of that variable?
• If this variable is declared as a partial function in this data schema, is there an invariant
in the schema to specify the input domain where this function is everywhere defined?
• If the output variable in this operation schema is a partial function, is there a corresponding
pre-condition in the schema to specify where this variable is fully defined?
• Is this pre-condition consistent with invariants stated in the imported data schemas?
• Is there a corresponding exception schema for the case where this pre-condition does
not hold?
• Does this OR-combination of schemas cover all possible cases?
• Does every imported variable in this Δ-operation schema have a post-condition to define
its final state after the operation is applied?
If the specification is fully formalized, many of these checks can be automated by tools (see
Section 5.4 hereafter).
Inspections and reviews are an effective technique for requirements quality assurance. This
technique appears to be even more effective than code inspection, in terms of the type of defects
found and their potential impact (Laitenberger & DeBaud, 2000). It is the widest in scope and
applicability, and can be used to search for any kind of defect in any kind of specification
format. For individual reviewing, a process-based mode relying on a blend of defect-based,
quality-specific, domain-specific and language-based checklists appears most effective.
Is there any output data flowing out of this operation that is not flowing out of any of the refining
sub-operations?
This check is a consistency check between adjacent levels of operation refinement, and
a completeness check on the refining DFD. (It corresponds to the first question in the
diagram-specific checklist in the previous section, simplified here for sake of clarity.)
In a DFD-specific query language, the check might look like the following:
Query DFD-refinementConsistency
set out-data = Data
which FlowsOut Operation with Operation.Name = 'myOperation'
and which not FlowsOut ref-ops
where set ref-ops = Operation which Refines Operation with Operation.Name = 'myOperation'
[Figure: meta-model fragment declaring the InputTo (1..*, 0..*), OutputFrom (1..*) and Refinement relationships between Data and Operation.]
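Equivalent checks can be scripted over any requirements database recording such FlowsOut and Refines links. A minimal sketch in Python, with hypothetical tuples standing in for the database relations:

# Hypothetical in-memory requirements database: each tuple records one
# link of the meta-model (FlowsOut between Data and Operation,
# Refines between Operations).
flows_out = {('schedule', 'DetermineSchedule'), ('dates', 'GetConstraints')}
refines = {('GetConstraints', 'DetermineSchedule')}

def refinement_consistency(operation):
    # Output data of 'operation' not output by any refining sub-operation.
    out_data = {d for (d, op) in flows_out if op == operation}
    ref_ops = {sub for (sub, op) in refines if op == operation}
    covered = {d for (d, op) in flows_out if op in ref_ops}
    return out_data - covered

print(refinement_consistency('DetermineSchedule'))   # {'schedule'} is flagged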
According to this definition of NextState, two transitions found to be conflicting with each other
are both discarded; we do not consider alternative subsets of transitions with non-conflicting
target states. We could be less strict and keep one of the conflicting transitions while discarding
the others (Heimdahl & Leveson, 1996). The problem then is the arbitrary choice of which
transition to keep. The user interacting with the animator might control this selection by
moving the animation one step back and dropping one of the conflicting transitions of his or
her choice.
• Textual. The input events are entered as textual commands; the model reactions are
displayed as execution traces.
• Diagrammatic. The input events are entered by event selection among those applicable
in the current state; the model reactions are displayed as tokens progressing along the
model diagrams together with corresponding state visualization.
• Domain-specific visualization. The input events are entered through domain-specific
control devices displayed on the screen; the model reactions are displayed as new values
on domain-specific control panels. The entire animation may even be visualized as an
animated scene in the software environment.
The third format for visualizing the simulation is obviously most appealing to stakeholders
and, in particular, to domain experts. The SCR and RSML animators support control devices
and panels. The LTSA and FAUST animators support domain scenes as well. Figure 5.3 shows
a screenshot from the FAUST animator. As we can see there, the visualization is a mix of
textual, diagrammatic and domain-specific formats. The textual window displays the trace of
an animation scenario being replayed. The lower right window shows parallel state machines
for train doors and train moves where the current state is highlighted. The two upper windows
show snapshots, in this state, of two domain scenes taking place in parallel. One scene shows
doors of train no. 1 opening and closing, whereas the other shows trains no. 1 and no. 2
moving along a single track with two stations and two railroad crossings. The window in the
left middle of the screen shows an input-output control panel containing a speedometer and
two joysticks for starting and stopping the train and opening and closing doors, respectively.
Also note that this animation snapshot reveals a pretty bad problem in the simulated model, as
seen visually but also pointed out by a property monitor on the lower left window - namely,
the train is moving (see the speedometer) with the doors open.
5.3.4 Conclusion
Requirements animation is a concrete technique for checking a specification. It may reveal
subtle inadequacies and other defects as well, in particular missing items - in Figure 5.3, a
guard 'doors closed' missing on the 'start' transition from 'train stopped' to 'train moving'.
Due to its principle of 'What You See Is What You Check', animation is among the best ways
of getting stakeholders and practitioners into the quality assurance loop. Moreover, 'interesting'
animation scenarios may be recorded by the animator for later replay. In particular, these
scenarios may provide acceptance test data for free. Animators can also be coupled with other
analysis tools, such as monitors that detect property violations on the fly during animation (see
the lower left window in Figure 5.3) or model checkers (discussed in the next section).
On the downside, there is a price to pay - we need a formal specification. Moreover,
there is no guarantee that rewarding animation sequences will be played; these are interaction
sequences revealing defects in the specification. To provide such a guarantee, the users of the
animator should be carefully selected to be representative of experienced people who know
about tricky things that can happen in the environment. Like test data, the animation scenarios
should be carefully elaborated beforehand to ensure comprehensive model coverage. It should
ideally be possible to simulate multiple events occurring independently and in parallel in the
environment.
The gap between the animated model and the original specification may also be a problem.
Is the model adequately capturing what was intended in the original, non-formal specification?
If the animation reveals a bad symptom, where can the causes be found in the original
specification?
• Language checks are similar to those usually performed by compilers. They include
syntax checks, type checks, static semantics checks, circularity checks and the like.
• The formal constructs provided by some languages allow for certain forms of consistency
and completeness checking. We can use them to check that a specified input-output
relationship is a function, in order to preclude non-deterministic behaviours, and is total,
to cover all possible cases on its input set.
• A more sophisticated class of checks allows us to verify that the model we have
specified formally satisfies some domain-specific property. Such verification can be done
algorithmically by searching through the model for property violations; this is referred to
as model checking. Alternatively, the verification can be done deductively by application
of language-specific rules of inference to prove the property taken as candidate theorem.
Syntax checking
Every expression in the specification must be grammatically well formed according to the
syntax rules of the language. For example, a mistakenly written precondition 'bc?: Available'
in the CheckOut operation schema in Section 4.4.3 would be detected by a Z syntax checker;
declaration symbols may not appear in unquantified predicates.
Type checking
Each variable must have a specified type and all uses of this variable must be consistent with
the type declaration. For example, a declaration 'Available: BookCopy' and pre-condition 'bc? ∈
Available' would be detected as inconsistent by a Z type checker, as the variable Available is not
declared as representing a set whereas it is used as a set. Similarly, in the CheckOut operation
schema in Section 4.4.3, post-conditions such as
* This section is provided here for comprehensive coverage of the topic of this chapter. It is based on Section 4.4 and may be
skipped by fast-track readers only interested in a general overview of RE fundamentals or with no background in rudimentary discrete
mathematics. Its material is, however, a prerequisite for Chapters 17-18.
where '→' is the function declaration symbol, would be detected by a Z type checker as
being inconsistent with the declaration 'bc?: BookCopy' and the imported declarations 'OnLoan:
ℙ BookCopy', 'BorrowedBy: BookCopy ⇸ Person'.
Such structure clashes are found fairly often in informal requirements documents. For
example, the constraints of a meeting participant could be defined at one place as a pair of
sets of excluded and preferred dates, respectively, whereas at another place they are referred
to as a set of excluded day slots. Finding such inconsistency through type checking may be
quite helpful.
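A rough analogue of how a static type checker exposes such a clash, sketched here in Python with hypothetical names (not the book's notation):

from typing import Set, Tuple

Date = str

# One part of the RD treats a participant's constraints as a pair of sets:
# (excluded dates, preferred dates).
Constraints = Tuple[Set[Date], Set[Date]]

def record_constraints(excluded: Set[Date], preferred: Set[Date]) -> Constraints:
    return (excluded, preferred)

# Another part of the RD treats constraints as a plain set of excluded slots.
def exclude_slot(constraints: Set[Date], slot: Date) -> None:
    constraints.add(slot)

c = record_constraints({'Mon'}, {'Fri'})
# exclude_slot(c, 'Tue')   # structure clash: a static type checker rejects
#                          # this call, as Tuple[...] is not Set[Date]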
in the LibraryShelves data schema in Section 4.4.3 would be easily detected as being outside the
scope of the variable WhichBook. The Z schema import and initialization mechanisms make it
easy to automate this type of check.
Circular definitions are fairly frequent in technical reports such as requirements documents.
These properties can be easily checked when the relations are explicitly defined at single
places in the specification.
Let us consider the SCR tables introduced in Section 4.4.4 to illustrate this kind of checking.
We saw there that a condition table must specify the various values of a controlled variable or
term as a total function of an associated mode class and conditions.
Consistency checking amounts to verifying that the entry conditions in each row are pairwise disjoint:

c1 AND c2 = false
For example, consider the condition table in Table 5.3. (This table is a modified version of
Table 4.3 in Section 4.4.4.)
Checking the entry conditions in the first input row for disjointness we obtain, by distribu-
tivity of AND over OR:
(AtPlatform OR Emergency) AND NOT AtPlatform
= (AtPlatform AND NOT AtPlatform) OR (Emergency AND NOT AtPlatform)
= false OR (Emergency AND NOT AtPlatform)
= Emergency AND NOT AtPlatform
≠ false
We thus have a problem here. Indeed, in situations where Emergency AND NOT AtPlatform holds,
the table prescribes a non-deterministic behaviour where the doors may either be open or
closed. This inconsistency must clearly be fixed by making the two conditions disjoint - here,
by adding a conjunct NOT Emergency to the second condition.
Completeness checking amounts to verifying that the entry conditions in each row cover all possible cases:

c1 OR c2 OR ... OR cn = true

For example, consider Table 5.4, the corrected version of Table 5.3. Checking the entry
conditions in the first input row for coverage we obtain:

(AtPlatform OR Emergency) OR (NOT AtPlatform AND NOT Emergency)
= (AtPlatform OR Emergency OR NOT AtPlatform) AND (AtPlatform OR Emergency OR NOT Emergency)
= true AND true
= true

This formal derivation relies on rewritings based on propositional tautologies such as
P OR NOT P = true. Getting the same result for the second input row, we conclude that the table exhaustively
covers all possible input cases.
In practice, tautology checkers are used by tools to automate such checks effi-
ciently (Heitmeyer et al., 1996). Similar techniques can be applied for other transition-based
specification languages. In RSML, for example, the definition of the NextState relationship
makes it possible to check such consistency and exhaustiveness compositionally for the entire
system, structured as a set of hierarchical state machines (Heimdahl & Leveson, 1996).
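The brute-force essence of such tautology checking can be sketched in a few lines of Python; the two condition functions below encode the corrected first row of Table 5.4:

from itertools import product

# Entry conditions of the first row of Table 5.4, as Boolean functions
# of the two input conditions.
def c1(at_platform, emergency):
    return at_platform or emergency

def c2(at_platform, emergency):
    return (not at_platform) and (not emergency)

conditions = [c1, c2]
environments = list(product((True, False), repeat=2))

# Disjointness: no pair of entry conditions may hold simultaneously.
disjoint = all(
    not (ci(*env) and cj(*env))
    for env in environments
    for i, ci in enumerate(conditions)
    for cj in conditions[i + 1:]
)

# Coverage: in every environment, at least one entry condition holds.
covering = all(any(c(*env) for c in conditions) for env in environments)

print('disjoint:', disjoint, '- covering:', covering)   # True, True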
[Figure 5.4: the general model-checking scheme - inputs: a behaviour model and a property to verify; output: 'Yes', or 'No' together with a counterexample.]
Figure 5.5 A faulty SM model for the behaviour of a controller of train doors and movements
With this model and assertion as inputs, a model checker might produce the following
counterexample trace, where each state is labelled by its incoming event:
• The reachability graph is explored exhaustively by recursively generating all next states
from current ones, and testing whether those next states are 'bad' ones that result in
property violation.
• The algorithm terminates when a bad state is reached, in which case the violating path
from the initial state to this bad state is produced, or when all states have been visited.
In the latter case, the output is 'yes' if no bad state was found, and 'no' if a good state
prescribed by the property was not found.
Model checkers can verify different types of properties on a parallel state machine:
• Reachability (or unreachability) properties are the easiest to verify. Such properties state
that a particular situation can (or cannot) be reached. They are checked just by inspecting
the reachability graph.
• Safety properties state that an undesirable condition may never hold (or a desirable
condition must always hold). In a linear temporal logic, they take the form □ P. As soon
as a trace is found to satisfy ¬ P, the search algorithm terminates; a sketch of this search
is given after this list. (The word 'safety' used in this context is not necessarily related to
safety-critical requirements.)
• Liveness properties state that a desirable condition must eventually hold. In a linear
temporal logic, they take the form ◇ P. When all states have been visited with no trace
satisfying P, the search algorithm terminates with a 'no' output.
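The following minimal Python sketch shows the breadth-first search described above on a toy flattened state machine; the state and event names loosely mirror the faulty controller of Figure 5.5 and are illustrative rather than the book's exact model:

from collections import deque

# A toy flattened state machine; states pair the train status with the
# doors status. The 'trainStart' transition from ('stopped', 'open')
# deliberately lacks a doors-closed guard, as in Figure 5.5.
transitions = {
    ('stopped', 'closed'): [('trainStart', ('moving', 'closed')),
                            ('opening', ('stopped', 'open'))],
    ('stopped', 'open'): [('closing', ('stopped', 'closed')),
                          ('trainStart', ('moving', 'open'))],
    ('moving', 'closed'): [('trainStop', ('stopped', 'closed'))],
    ('moving', 'open'): [('trainStop', ('stopped', 'open'))],
}
initial = ('stopped', 'closed')

def bad(state):
    # Violation of the safety property 'never moving with doors open'.
    return state == ('moving', 'open')

def check_safety():
    # Breadth-first exploration of the reachability graph.
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if bad(state):
            return trace                      # counterexample event trace
        for event, next_state in transitions.get(state, []):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, trace + [event]))
    return None                               # property holds everywhere

print(check_safety())                         # ['opening', 'trainStart']

Breadth-first exploration guarantees that the returned counterexample trace is a shortest one, which eases diagnosis.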
The combinatorial explosion of states to visit is an obvious problem with this kind of approach.
As already noted in Section 4.3.7, a parallel state machine on N variables, each having M
possible values, has M^N states. Fortunately, researchers in model checking have
developed sophisticated techniques, based on theories of automata and data structures, in
order to optimize the search and reduce the space for storing visited states by representing
them implicitly. This makes it possible to explore a model with billions of states in reasonable
time and space.
Another issue is the length and comprehensibility of counterexample traces generated by
a model checker. For subtle bugs the trace can be long and provide little information on the
causes of the symptom. Many model checkers take a breadth-first search strategy to produce
the shortest violating traces. In the faulty model of Figure 5.5, the cause of the problem
revealed by the counterexample is easy to spot; the guard DoorsState ='closed' is missing on
the start transition. For non-toy models and long traces, the mining of error causes is likely to
be difficult, even if counterexample traces are the shortest ones.
There are multiple variants and refinements of the principles outlined here (Berard et al.,
2001). The best-known variants and refinements are implemented in the SPIN and SMV model
checkers.
In SPIN, the properties have to be formalized in Linear Temporal Logic (LTL). The
state machine model must be expressed in PROMELA, a language close to guarded com-
mands (Dijkstra, 1976). To address the state explosion problem, the visited states are maintained
in a tunable hash table; the bigger the table, the smaller the likelihood of missing a bad
state (Holzmann, 1997, 2003).
In SMV, the properties have to be formalized in Computation Tree Logic (CTL), a branching
temporal logic where histories have a tree structure with branching to alternative successor
states (see Section 4.4.2). This logic supports the formalization of reachability properties through
EF quantifiers. To address the state explosion problem, sets of visited states are represented
symbolically by binary decision diagrams (BDDs). Large sets of states can often be represented
by small BDDs (Clarke et al., 1999; Bryant, 1992).
The Alloy analyser fits the general scheme in Figure 5.4 but in a different specification
paradigm. The input model is specified in Alloy, a state-based specification language. The
property to verify is a 'claim' specified in Alloy as well. The model is first instantiated by
restricting the range of the model variables to a few values, in order to avoid state explosion.
The analyser then tries to satisfy the negation of the claim within that bounded model, using an
efficient SAT solver. The counterexample produced, if any, is a set of values for the variables,
within the restricted range, that satisfies the negated claim.
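The bounded-analysis idea can be illustrated with a toy model and claim; this sketch simply enumerates all instances within the scope rather than compiling to a SAT solver as Alloy actually does:

from itertools import chain, combinations

BOOKS = frozenset({'b1', 'b2'})               # restricted scope: two atoms

def powerset(s):
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def model(available, on_loan):
    # Toy model constraint: every copy is either available or on loan.
    return available | on_loan == BOOKS

def claim(available, on_loan):
    # Claim to verify: no copy is both available and on loan.
    return not (available & on_loan)

# Satisfying the model AND the negated claim amounts to searching for
# a counterexample within the bounded scope.
counterexamples = [(a, l) for a in powerset(BOOKS) for l in powerset(BOOKS)
                   if model(a, l) and not claim(a, l)]
print(counterexamples[:1])                    # e.g. ({'b1'}, {'b1', 'b2'})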
Model checking techniques have multiple strengths. Property verification is fully automated;
unlike theorem proving, no human intervention is required during verification. As the analysis
is exhaustive, flaws cannot be missed. For example, the bug shown as an example in this
section will be detected. The same bug was revealed by animation in the previous section,
but there we had to be lucky enough to decide to animate that trace. In fact, many model
checking environments include an animator to visualize the generated counterexample traces.
In practice, counterexamples prove to be the most helpful output of a model checker. They
help in debugging the specification and can subsequently be used as test data. These various
strengths explain the increasing popularity of model checking technology in industry.
On the limitation side, the state explosion problem still makes model checkers unusable for
the analysis of very large, complex models. Bounded model checkers address this limitation
quite effectively; they check models instantiated to a few instances. But then we lose the original
guarantee of not missing flaws. Much research work is also devoted to finding good abstractions
to enable the analysis of infinite state models with finite state verification techniques. Other
efforts focus on the explanation of complex counterexample traces to help find the causes of
the problem in the model.
Prop [S0],   {Prop} Op {Prop} for every modifier operation Op
⊢  Prop [s] for every reachable state s

In this inference rule, Prop [s] means 'the property Prop is satisfied in state s', S0 denotes the
initial state and the expression {P} Op {Q} means 'if operation Op is applied in a state satisfying
P, its application results in a state satisfying Q'. The rule encodes an induction principle over
runs. It says that a desired property can be inferred to hold in any state provided that it holds
in the initial state and is kept invariant along state transitions caused by any of the modifier
operations.
To see what a proof may look like, let us get back to the Z specification of portions of our
library system in Section 4.4.3. Suppose that we want to verify that the assertion

Available ∩ OnLoan = ∅

in the LibraryShelves schema does indeed hold in any state in view of the specification of
the modifier operations on this schema. In other words, we want to prove that the modifiers
as specified do preserve the invariance of this assertion. The above invariance rule allows us
to derive the invariant from the specification of modifiers as follows:
• Prop [S0] is trivially verified since the initialization schema InitLibraryShelves tells us that
Available = ∅ and OnLoan = ∅ (see Section 4.4.3).
• We then need to show that {Prop} RobustCheckOut {Prop}.
a. Since all exceptions UnauthorizedAgent, UnregisteredUser, UnknownCopy, UnavailableBook,
LoanLimitReached in the specification of RobustCheckOut are Ξ-operations, we just need
to focus on the CheckOut Δ-operation.
b. Assuming that CheckOut is applied in a state satisfying Prop, we use the post-conditions
specified in CheckOut to compute the term Available ∩ OnLoan in the final state of
CheckOut (see Section 4.4.3):
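A sketch of that computation, assuming post-conditions of the form Available' = Available \ {bc?} and OnLoan' = OnLoan ∪ {bc?} (the actual post-conditions appear in Section 4.4.3 and are not reproduced here):

Available' ∩ OnLoan'
= (Available \ {bc?}) ∩ (OnLoan ∪ {bc?})
= ((Available \ {bc?}) ∩ OnLoan) ∪ ((Available \ {bc?}) ∩ {bc?})
= ((Available ∩ OnLoan) \ {bc?}) ∪ ∅
= (∅ \ {bc?}) ∪ ∅          [using Prop: Available ∩ OnLoan = ∅]
= ∅

The assertion Prop is thus re-established in the final state of CheckOut.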
Let us now illustrate how an inconsistency can be derived. Suppose that the following
assertion is explicitly or implicitly specified in the CheckOut operation schema:
OnLoan' = OnLoan
(Such an error is not uncommon when state variables are declared, through a Ξ-schema import,
as being left unchanged.) From the invariant defining the OnLoan variable in the LibraryShelves
data schema, we know that:
Using an instantiation rule, we conclude that in the states after application of the operation
Checkout, we have in particular:
Getting back to the above wrong post-condition on OnLoan, we obtain by transitivity of equality:
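As a hedged sketch of how the contradiction can surface - assuming CheckOut also specifies the post-condition OnLoan' = OnLoan ∪ {bc?} together with the pre-condition bc? ∈ Available:

OnLoan = OnLoan' = OnLoan ∪ {bc?},   hence bc? ∈ OnLoan

whereas bc? ∈ Available and the invariant Available ∩ OnLoan = ∅ entail bc? ∉ OnLoan - a logical inconsistency.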
Requirements quality assurance is a major concern in view of the diversity of potential defects
in the requirements document (RD), their consequences and the cost of late repair. The RD
must be carefully analysed to detect defects and fix them - especially incomplete, inconsistent,
inadequate, ambiguous or unmeasurable RD items.
The techniques reviewed in this chapter vary in scope, applicability and cost effectiveness:
• Inspections and reviews can in principle detect any kind of defect in any kind of
specification format. Their cost can be controlled by a well-planned process and effective
checklists. This technique is less likely to uncover subtle errors. Tool support is fairly
limited.
• Queries on a requirements database can detect structural inconsistencies and omissions
in semi-formal specifications. The technique is fairly cheap as it can be fully automated
by easy-to-use tools. As queries address surface aspects of specifications only, they are
not likely to find subtle errors.
• Requirements animation requires a formal, executable specification of what we want to
animate. The main target defects are inadequacies, although missing items can also be
detected. Suggestive visualizations of the simulation allow domain experts and end-users
to be involved in the quality assurance process. Animators can point out subtle errors,
but only along the animation scenarios followed. The main cost is that of building the
specification to be animated.
• Formal verification can reveal ambiguous and unmeasurable RD items during specifica-
tion formalization, and omissions, inconsistencies and inadequacies during specification
analysis. They are supported by tools that can uncover subtle errors. However, they
are less widely applicable as they all require a formal specification of what we want to
analyse. The tools generally require experienced users to be available. Dedicated consis-
tency and completeness checks may sometimes offer a good cost-benefit compromise;
there is no need for building huge reachability graphs or using complex theorem proving
technology. Empirical evidence, however, suggests that model checkers and theorem
provers are champions at uncovering subtle, critical errors. In the former case, exhaustive
exploration guarantees that no defects are missed; in the latter case, deductive derivations
on declarative specifications are less exposed to state explosion problems.
Those complementary strengths and limitations lead to the conclusion that an ideal requirements
quality assurance process should integrate inspections and reviews on the entire RD, queries
on surface aspects of the conceptual, functional and behavioural facets of the system, and
animation and formal checking for in-depth analysis of critical aspects, including safety- and
security-related ones, if any.
This chapter did not cover all the approaches to requirements quality assurance. In
particular, natural language paraphrasing of a formal specification has appeared promising
in other areas (Swartout, 1983). Generating natural language sentences from a semi-formal
or formal specification, for checking by stakeholders, might prove effective for adequacy or
consistency checking (Dallianis, 1992; Gervasi & Zowghi, 2005).
There are other quality-related outcomes of requirements validation and verification that
we did not cover but that are worth mentioning (see the bibliographical notes at the end of
this chapter for some references):
• We can produce test data from interaction scenarios, R-net diagrams, decision tables or
animations. We can generate them automatically with model checkers, constraint solvers
applied to state-based or history-based specifications, or dedicated test data generators.
• We can also generate specification refinements towards implementations.
• Instead of verifying invariant properties, we can sometimes generate them algorithmically
to check the adequacy of the specification, to complete it or to prune the state space.
• Formal reuse of specifications can be supported by specification matching tools.
The semi-formal and formal techniques in this chapter assume that an operational specification
is fully available, such as a state machine specification or a structured system decomposition
into data and operations. This strong assumption does not hold in the earlier stages of the
RE process. The material available for early analysis is partial and made up of declarative
statements such as objectives and constraints mixed with scenario examples. We therefore
need complementary techniques for earlier, incremental checks on declarative specification
fragments. Parts II and III of the book will present a sample of such techniques. A goal-oriented
modelling framework will support a variety of semi-formal and formal techniques for earlier
checking of partial models, in particular:
• To check that refinements of objectives into sub-objectives are correct and complete.
• To check that operationalizations of objectives into specifications of operations are correct
and complete.
• To detect conflicts and divergences among objectives, requirements and assumptions (as
introduced in Section 3.1) and to resolve them according to various strategies.
As we will see there, formal analysis can be restricted to specific parts of declarative models
and applied only when and where needed.
• When the specification to be analysed is fully formal, a wide range of more sophisticated
checks can be performed. Beyond the standard language checks that compilers can
do, tools can automatically check certain forms of specification consistency and
completeness. Model checkers can algorithmically verify that the specification satisfies
some desired property. Theorem provers can deductively verify this by taking the
property as a candidate theorem.
• When the specification language allows expected functionalities and behaviours to
be locally defined as input-output relations, we can formally verify input-output
consistency by checking that the relation is a function, and input-output completeness
by checking that this function is total. When the relationship is represented in a tabular
format, these checks amount to simple checks along each table row for disjointness
and coverage.
• Model checking is an increasingly popular technique for verifying that a formally
specified model satisfies some desired property. If the property is not satisfied, a
counterexample is generated. In the most frequent case, the input model is a parallel
state machine, the property is formalized in a linear or tree temporal logic, and the
output counterexample is a state machine trace showing how the property can be
violated. The verification is performed by an exhaustive search for property violation
through a reachability graph. Large models with complex concurrent behaviours entail
a state explosion problem. A wide range of time/space optimization techniques is
aimed at addressing this problem.
• Theorem proving is another approach to property verification. The verified property
is formally derived from the specification by a sequence of applications of deductive
inference rules associated with the specification language. This approach can also
be used to show logical inconsistencies among specifications or to derive logical
consequences for adequacy checking. Theorem provers may succeed in verifications
where model checkers fail, but require the assistance of experienced users.
• An effective requirements quality assurance process for mission-critical systems should
ideally combine inspections and reviews of the entire RD, queries on surface aspects
of the conceptual, functional and behavioural facets of the system, animation-based
validation and formal verification of critical aspects.
As behaviour is the primary focus of animation, many efforts were devoted to animating
event-based specifications (Harel et al., 1990; Holzmann, 1997; Larsen et al., 1997). A
comprehensive comparison of tools emanating from this research can be found in Schmid
et al. (2000). The SCR, RSML and LSC animators support input-output interactions through
control panels (Heitmeyer et al., 1997, 1998a; Thompson et al., 1999; Harel and Marelly,
2003). The LTSA animator supports domain scenes as well (Magee et al., 2000; Magee
& Kramer, 2006). FAUST supports domain scenes visualizing simulations that are gener-
ated from operational versions of temporal logic specifications (Tran Van et al., 2004). A
more elaborate NextState function than the one described in this chapter is introduced
in Heimdahl and Leveson (1996). Other types of model may be animated as well; for
example, the tool described in Heymans and Dubois (1998) animates specifications of
the obligations and permissions of concurrent system agents.
Formal consistency and completeness checks based on the constraint that input-output
relations must be total functions are presented in greater detail in Heitmeyer et al. (1996)
and Heimdahl and Leveson (1996). In the SCR toolset, the checks are local on the tables
representing the relations. In the RSML toolset they are performed compositionally on
the entire system specification. With the latter toolset, several sources of dangerous
incompleteness and non-determinism were found in the specification of the TCAS air
traffic collision avoidance system (Heimdahl & Leveson, 1996).
The original idea and principles of model checking were independently developed
in Queille and Sifakis (1982) and Clarke et al. (1986). Model checkers were originally
conceived for hardware verification. They are now widely used in the semiconductor
industry, and are increasingly used for checking critical aspects of software systems.
Some of the best-known uses are reviewed in Clarke and Wing (1996). There have
been quite a few tutorials and books on model checking techniques, usually oriented
towards the SPIN automata-based approach (Holzmann, 1997, 2003) or the SMV symbolic
approach (McMillan, 1993; Clarke et al., 1999). Those techniques were extended to
support richer models such as timed models or hybrid models. A sample of model
checkers along this track includes KRONOS (Daws et al., 1994), UPPAAL (Larsen et al.,
1997) and LTSA (Magee & Kramer, 2006). A comprehensive presentation of techniques and
tools with comparisons will be found in Berard et al. (2001). Model checking techniques
and tools were also developed for RE languages such as SCR (Atlee, 1993) or integrated
into formal RE toolsets (Heitmeyer et al., 1998b). Some efforts were made to apply model
checking technology earlier in the RE lifecycle. For example, Fuxman and colleagues
extended the i* RE framework with a temporal logic assertion language in order to model
check early requirements specifications using the NuSMV model checker (Fuxman et al.,
2001; Cimatti et al., 2000).
Many tools for state-based specification languages provide a front end to a theorem
prover. For example, the Z/Eves front end to the Eves theorem prover supports formal
derivations of Z assertions. In particular, it derives pre-conditions and checks that
partial functions cannot be applied outside their domains (Saaltink, 1997). For algebraic
specification languages, flexible front ends to efficient term-rewriting systems are available
as well (Clavel et al., 1996). The higher-order PVS verification system is often used by
front ends as it allows specific formalisms to be defined as embeddings; language-specific
proofs can then be carried out interactively using the proof strategies and efficient decision
procedures provided (Owre et al., 1995). A SCR/PVS front end is described in Heitmeyer
et al. (1998a). STeP is another verification system for LTL and event-based specifications
that combines theorem proving and model checking facilities (Manna & The STeP Group,
1996).
Techniques for generating invariants from event-based specifications in languages
such as SCR or RSML are described in Bensalem et al. (1996), Park et al. (1998) and
Jeffords and Heitmeyer (1998). There have been numerous efforts to generate test cases
and oracles automatically from logic-based specifications, including Bernot et al. (1991),
Richardson et al. (1992), Roong-Ko and Frankl (1994), Weyuker et al. (1994) and Mandrioli
et al. (1995). Classic references on the refinement of a state-based specification towards an
implementation include Morgan (1990), Jones (1990) and Abrial (1996). Formal techniques
for specification reuse are described in Katz et al. (1987), Reubenstein and Waters (1991),
Zaremski and Wing (1997) and Massonet and van Lamsweerde (1997).
The benefits of combining multiple quality assurance techniques for finding different
types of defects in requirements for safety-critical systems are convincingly illustrated in
Modugno et al. (1997).
The world keeps moving - our target system too. After the system-to-be come
systems-to-be-next. The system objectives, conceptual structures, requirements
and assumptions that have been elicited, evaluated, specified and analysed may
need to be changed for a variety of reasons, including defects to be fixed; project fluctuations
in terms of priorities and constraints; better customer understanding of the
system's actual features, strengths and limitations; and a wide range of environmental
changes, including new or alternative ways of doing things, new business opportunities,
new or alternative technologies, organizational changes, new or alternative
regulations and so on.
Such changes may be required at various stages of the project: during requirements
engineering itself, as a result of requirements evaluation and analysis; during subsequent
development of the software-to-be, as design or implementation may reveal problematic
issues implied by the requirements; or after system deployment, as experience with the new
system is gained.
Requirements evolution raises a difficult information management problem. Large amounts
of information need to be versioned and maintained in a consistent state. Changes to the
requirements document (RD) must be propagated through other items that depend on the
changed items in order to maintain their mutual consistency. These include other RD items and
downstream product items such as prototypes, design specifications, architectural descriptions,
test data, source code, user manuals and project management information. Consistency main-
tenance requires the management of traceability links among items and propagating changes
along such links.
Chapter 1 introduced evolution-related RD qualities. Good structuring and modifiability
are aimed at making the required changes as local as possible, whereas traceability is aimed
at localizing those required changes easily (see Section 1.1.7). Undocumented traceability,
together with late, unanticipated and undocumented changes, may have quite severe
consequences in terms of maintenance cost and product quality. Requirements engineers
therefore need to prepare for change, from the very beginning of the project, and manage the
change process in a controlled way based on policies, techniques and tools.
Evolution is at the heart of the RE process as it triggers new cycles in the spiral process
introduced in Chapter 1 (see Figure 1.6). The process of anticipating, evaluating, agreeing
on and propagating changes to RD items is called requirements change management - or
requirements management for short.
This chapter offers an overview of the various issues and techniques available for require-
ments change management. Section 6.1 introduces the two dimensions of evolution together
with their causal factors; evolution over time yields system revisions whereas evolution across
product families yields system variants. We will then follow the successive stages of a dis-
ciplined requirements management process. Section 6.2 introduces change anticipation as
the first milestone for effective evolution support, and describes techniques for anticipating
changes. Section 6.3 introduces traceability management as another milestone in this process,
and reviews techniques for managing traceability for better evolution support. Section 6.4 then
discusses the various aspects of change control, from the handling of change requests to the
evaluation and consolidation of changes. Section 6.5 introduces a recent paradigm for dynamic
evolution where changes in environmental assumptions are monitored at system runtime for
dynamic adaptation to such changes.
As requirements evolution is highly intertwined with the earlier phases of the RE process,
this chapter will partly rely on material from previous chapters.
• A revision results from changes generally made to correct or improve the current version
of a single product.
• Variants result from changes made to adapt, restrict or extend a master version to
multiple classes of users or usage conditions. The variants share commonalities while
having specific differences.
Revisions result from evolution over time, whereas variants result from evolution across product
families (sometimes called product lines). Figure 6.1 helps visualize those two dimensions of
evolution. At any single point in time along the system lifetime, multiple variants may co-exist
at multiple places. Multiple revisions of the same item are expected to exist at different points
in time, but not at the same time point.

[Figure 6.1: the two dimensions of evolution - variants (for user classes A and B) along the space axis; revisions (A1, A2 and B1, B2) along the time axis.]
Consider our meeting scheduling system, for example. The current version of the RD might
evolve into a revised version or into multiple variants. A revision might fix a number of
requirements omissions and inconsistencies that have been detected, and include improved
features such as taking date preferences into account or notifying invited participants of the
meeting date by an SMS message to their mobile phone. Variants might include a contracted
version where meeting scheduling refers to meeting dates only; a version where both meeting
dates and locations are handled; a variant of the latter where participants with different
roles are distinguished; and an extended version providing configurable rule-based conflict
management facilities.
Revisions and variants define a two-dimensional space for defining product evolutions,
rather than two separate tracks. Variants may evolve into revisions, and a revision may give
rise to multiple variants. Requirements can also be deferred until subsequent product versions
due to prioritization in view of project phasing, user skills and experience, technological risk
and so on. (See Sections 3.4 and 6.4.2.)
• At RE time, requirements and assumptions that are felt to be too volatile may call
for alternative, more stable ones to reduce the evolution cost beforehand. When an
assumption or requirement is kept in spite of its volatility, it deserves more attention during
traceability management (see Section 6.3). We may also anticipate and record adequate
responses to anticipated changes; this might be much cheaper than rediscovering them
later on when the change occurs. Anticipated responses to likely changes further allow
us to support runtime evolution, where volatile assumptions are monitored at system
runtime and responses to changing assumptions are enacted on the fly (see Section 6.5).
• At software development time, the documentation of likely changes is essential for
designing architectures that remain stable in spite of those changes, for example through
their encapsulation or wrapping within dedicated, easily localizable components in the
architecture. The contextual information about volatile requirements and assumptions is
helpful for maintenance teams too.
For large, multiversion systems, change anticipation should ideally address the two dimensions
of evolution. We can do this by classifying a requirement or assumption as stable or volatile
from one system revision to the other, and as common or distinct from one system variant to
the other.
More specifically, we may associate levels of stability or commonality with statements,
as suggested in Section 4.2.1, or with sets of statements grouped into features. To enable
comparative analysis, we may transpose some of the principles introduced in Section 3.4 for
requirements prioritization:
• Each level should contain items of similar stability or commonality. The number of such
levels should be kept small.
• The characterization of levels should be qualitative rather than quantitative, and relative
rather than absolute, for example 'more stable than' rather than 'stable'.
Figure 6.2 suggests a feature ranking for the meeting scheduling system. Dedicated RD sections
may then highlight those requirements and assumptions that are more volatile, or that are
distinct from one variant to the other.
The elicitation techniques in Chapter 2 may help us determine adequate stability or
commonality levels for the items that we are eliciting. Stakeholders should be involved in this
assessment to ensure its adequacy.

[Figure 6.2: suggested ranking of the meeting scheduler's features by the 'more stable than' relationship.]

In addition, we may base our analysis of likely changes on
the following heuristic rules:
• Regroup within features cohesive sets of statements that share the same stability or
commonality level and address the same system objective. It makes no sense to mix within
the same change unit statements that are stable and statements that are highly volatile.
• To help identify the most stable features, ask yourself what useful subset of features should
be found in any contraction, extension or variant of the system. Other features not in
this subset should be classified as less stable. For example, any contraction, extension or
variant of the meeting scheduling system should include a date determination feature and
a participant notification feature whatever the type of meeting is - regular or episodic,
physical or videoconferencing and so on.
• Intentional and conceptual aspects are more stable than operational and factual ones.
High-level system objectives and domain-specific conceptual structures are more stable
than operational ways of doing things, user characteristics, assumptions about ways of
using the software or technology constraints. For example, the objective of informing
invited participants of the meeting date is more stable than the operational requirement
of sending them an SMS notification to achieve this.
• Functional aspects related to the core objectives of the system are more stable than
non-functional constraints for improvement or technology adaptation. For example, the
requirements for getting participant constraints are likely to be more stable than the
requirements for a visual input-output calendar for increased usability of the constraint
acquisition feature.
• Decisions among multiple options deserve special scrutiny. They may rely on incomplete
knowledge or on assumptions that may no longer be valid later on in the system lifecycle.
The requirements resulting from them are therefore likely to be more volatile:
a. In the frequent case of incomplete knowledge at RE time, changes may be required
later on as the missing knowledge becomes available. For example, we generally
don't know all the potential security threats to the system, and new classes of attacks
on similar systems may be revealed later on in the system lifecycle. We may also
formulate architectural or inter-operability requirements without necessarily knowing
all their implications for the development process; these implications might become
apparent at software implementation time. Requirements set up with such incomplete
knowledge are likely to be less stable.
b. Conflicts are another source of likely changes (see Section 3.1.3). When we explore
alternative resolutions to a detected conflict, we might select a resolution based on
conditions at RE time that are no longer valid later on. Requirements emerging from
such conflict resolution are thus likely to be less stable.
c. Risks are another source of potential changes (see Section 3.2). The likelihood of a
risk, assessed at RE time, may need to be revised later on based on new conditions
or better knowledge of the real nature of the risk. When we explore alternative
countermeasures to a particular risk, we might select a countermeasure that turns out
to be no longer appropriate later on. Requirements emerging from this countermeasure
are thus likely to be less stable.
d. There are often alternative ways of meeting a system objective through different
combinations of sub-objectives. (Part II will come back to this at length.) We might
choose one specific combination based on assumptions that might no longer be valid
subsequently. Requirements emerging from such a combination of sub-objectives are
thus likely to be less stable.
e. There might be alternative ways of assigning responsibilities among system compo-
nents. We generally choose specific responsibility assignments based on assumptions
about system components that might no longer be valid later on in the system lifecycle.
Responsibility assignments are another source of likely changes.
In all such cases it is worth documenting the reasons, conditions and assumptions underlying
the selection of one specific option, as they are subject to change, and documenting the alternative
options that have been identified. These alternatives might give us appropriate responses to
subsequent changes; we then don't need to rediscover them when the corresponding change occurs.
for reducing this cost. To conclude, Section 6.3.4 discusses cost-benefit trade-offs for effective
traceability management.
• Within the RD, an item may rely on other RD items. We may wish to retrieve the definition
of a concept involved in a requirement, for example, or the assumptions on which the
requirement relies. Such traceability is called horizontal traceability.
• An RD item may originate from upward items found in the RD, such as business
objectives or elicited items from interview transcripts or observation videotapes. It
may give rise to lower-level RD items or downward software lifecycle items. Such
traceability with upward and downward artefacts is called vertical traceability. Forward
vertical traceability is sometimes called downward traceability whereas backward vertical
traceability is sometimes called upward traceability.
Figure 6.3 helps visualize those basic notions. Note that some traceability links can be many to
many; a source item can have multiple targets and a target item can have multiple sources. Let
us consider some examples illustrating traceability paths in Figure 6.3.
[Figure 6.3 Traceability links: forward, backward, horizontal and vertical traceability, relating elicited material and RD items downward to architectural components and connectors, source code, test data and the user manual]
Section 1.3 ('Definitions, acronyms and abbreviations'). These are all examples of where
horizontal traceability is required for change management.
On the other hand, two security requirements in RD Section 3.6 might originate from
an interview where attacks on the system-as-is were reported. These requirements might
give rise to an Access Control module in the software architecture, to specific black-box
test data on this module and to the description of a user authentication procedure in
the user manual. Besides, distribution constraints reported in RD Section 3.4 ('Design
constraints') might result in the selection of a Publish-Subscribe architectural style. These
are all examples of where vertical traceability is required for change management.
• Example 2. Suppose now that we have specified the conceptual items involved in our
meeting scheduler by the entity-relationship diagram in Figure 4.5 (see Section 4.3.2).
The structuring of participant constraints in this diagram through excluded and preferred
dates, and the distinction between important and normal participants, will give rise
to specific requirements on how returned date preferences should be validated and
handled (horizontal traceability). These requirements will give rise to the corresponding
specifications of a Constraints Handler module in the architecture and the description of
a constraints submission procedure in the on-line user manual (vertical traceability).
The implications of traceability in forward, backward, horizontal and vertical directions are
important. Consider an RD item traceable along those directions:
• We can easily retrieve the context in which this item was created and changed, following
traceability links backwards, and answer questions such as: 'Why is this here? Where is it
coming from?' For any target item we can thereby identify the source items that explain it.
Likewise, we can retrieve the context in which the item is taken into account, following
traceability links forwards, and answer questions such as: 'What are the implications
of this? Where is this taken into account?' We can thereby identify any item that exists
because of the source item.
• As a consequence, we can easily localize the impact of creating, modifying or deleting
traceable items in order to assess the impact along horizontal and vertical traceability
chains.
Dependency link
Dependency This is the most general traceability link type. There is a Dependency link
between a target item B and a source item A if changing A may require changing B. We say
that A affects B, in the forward direction for traceability, or B depends on A, in the backward
direction (see Figure 6.5).
Dependency can be specialized in various ways. The more specialized the dependency, the
more specific the reason for it, the more precise the link, the easier its correct establishment,
and the more accurate its analysis for the multiple uses of traceability management.
As Figure 6.4 shows, there are dependencies among different versions of the RD (left
branch) and dependencies within a single version (right branch). Let us define the link types
along the left branch first. (Remember that a feature was defined as a change unit.)
Variant There is a Variant link between a target item B and a source item A if B has all the
features of A while having its own distinguishing features. We say that B is a variant of the
master version A (see Figure 6.6).
• Example. Consider an RD variant for our meeting scheduling system where participants
have different status. This variant will share with other variants all features from the master
RD version while having member status management and priority-based scheduling
among its distinguishing features.
Revision There is a Revision link between a target item B and a source item A if B overrides
certain features of A, adds new ones and/or removes others, while keeping all remaining
features. We say that B is a next version of the previous version A (see Figure 6.7).
• Example. A revision of the current RD version for the meeting scheduler might override
the current rule for optimal date determination by another rule taking date preferences
into account, and add the new feature of notifying the scheduled date by an SMS message
to participants, in addition to e-mail notification.
For better traceability among the linked items, variant and revision links are generally annotated
with configuration management information such as the date of and rationale for the creation of
the new version, its author and contributors, and its evaluation/approval status.
Along the right branch in Figure 6.4, two kinds of dependency are distinguished within a single
RD version.
Use There is a Use link between a target RD item B and a source RD item A if changing A
makes B become incomplete, inadequate, inconsistent or ambiguous. We say that A is used
by B, in the forward direction for traceability, or B uses A, in the backward direction (see
Figure 6.8).
Derivation There is a Derivation link between a target item B and a source item A if B is
built from A under the constraint that A must be met. We say that A is met by B, in the forward
direction for traceability, or B is derived from A, in the backward direction (see Figure 6.9).
Note that this definition does imply a dependency: changing A may require changing B,
since B was built under the constraint of meeting the old A, not the new one. What it means
for A to be met depends on the type of items being linked by Derivation links. In particular:
Example 1. The objective of 'anywhere anytime notification' for the meeting scheduler
might have emerged from concerns expressed by frequent travellers during interviews
(first Derivation link); this objective might be met by an SMS notification feature with
corresponding requirements (second Derivation link); these requirements might in
The Derivation link type is a vertical dependency whereas Use is a horizontal one. The real
difference between these two link types lies in satisfaction arguments. A Derivation link from
a source item A to a target item B calls for an argument, to be documented, that B contributes
to meeting A. No such argument is called for in the case of Use links.
meaning roughly 'the system goal G is satisfied whenever the operation specifications in OP
are satisfied'.
These additional types of satisfaction arguments will produce more derivational traceability
links for free; see Figure 6.10.
[Figure 6.11 Traceability model: RD item versions related through reflexive Variant (VariantOf 0..*, MasterVersion 0..1), Revision (NextTo 0..1, PreviousFrom 0..1), Use (Uses, UsedBy) and Derivation (MetBy 1..*, DerivedFrom 1..*) associations]
Note the different multiplicities on the left-hand side of the variant and revision relationships: a
master version may have multiple variants, whereas a single RD item version may have a single
revision at most.
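To make these link types concrete, the following Python sketch (ours, not part of the method) encodes the taxonomy of Figure 6.11 as a small class hierarchy; only Derivation carries the satisfaction argument discussed above.

    from dataclasses import dataclass

    @dataclass
    class Item:
        """A traceable RD item version, with a unique identifier."""
        identifier: str

    @dataclass
    class Dependency:
        """Most general link: changing `source` may require changing `target`."""
        source: Item    # the source 'affects' the target ...
        target: Item    # ... the target 'dependsOn' the source

    @dataclass
    class Variant(Dependency):
        """Target has all features of the master version plus its own ones.
        A master version may have multiple variants (0..*)."""

    @dataclass
    class Revision(Dependency):
        """Target overrides, adds and/or removes features of the previous
        version. A version has at most one next revision (0..1); such
        multiplicity checks are omitted here for brevity."""

    @dataclass
    class Use(Dependency):
        """Changing the source makes the target incomplete, inadequate,
        inconsistent or ambiguous (horizontal dependency)."""

    @dataclass
    class Derivation(Dependency):
        """Target built from the source under the constraint that the source
        must be met (vertical dependency). Unlike Use, a Derivation link
        calls for a documented satisfaction argument."""
        argument: str = ""   # why the target contributes to meeting the source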
Figure 6.12 illustrates a possible instantiation of the traceability model in Figure 6.11 to RD
items for our meeting scheduling system. It is based on examples previously introduced in this
section. (The MS acronym there stands for 'meeting scheduler'.)
The various types of traceability links defined in this section remain fairly generic. In
practice, they can be further specialized to the specifics of the organization, of the domain or
of the type of project, when this is felt to be useful (Ramesh & Jarke, 2001).
[Figure 6.12 A traceability graph for the meeting scheduler, instantiating the model in Figure 6.11: MS items connected through Revision, Uses and DerivedFrom links]
Answering these intertwined questions raises four issues to be settled: the granularity of a
link, its semantic richness, its accuracy and the overhead required for its management.
• Link granularity. What does a source or target item of a link really cover? An entire
section of the RD, a cohesive set of related requirements defining a feature or a single
assumption? The granularity of a traceability link is determined by the granularity of the
linked items.
• Semantic richness. Is the link intended to convey semantics, like the Derivation and Use
links, or is it a purely lexical link, like keyword-based indexing or tagging?
• Link accuracy. Does the established link correctly stick to its semantics, as defined in
the previous section? Is it focused enough for precise localization of dependencies? How
accurate are the conclusions that can be drawn by retrieving the link?
• Link overhead. How important is the extra effort required for establishing and maintaining
this link?
These four issues interact positively or negatively with each other. A finer-grained link
contributes to higher accuracy, for example in localizing the impact of a change, but also
to higher overhead; the finer the grain, the higher the number of links to be created and
maintained.
Deciding which RD items need to be linked and through which link types allows us to build a
traceability graph where nodes are traceable items and edges are labelled by the corresponding
link type (see Figure 6.12 for a possible visualization of such a graph). As previously mentioned,
each node in the graph must be assigned a unique identifier for unambiguous reference. The
traceability graph then needs to be recorded for later use.
For effective evolution support, a full traceability graph should cover inter-version, Use and
Derivation links among selected items at a sufficiently fine-grained level. The resulting graph
can be fairly large and complex to build. Section 6.3.3 will discuss techniques and tool support
for alleviating that task.
• Evolution support. The primary use is, of course, for consistency management during
requirements evolution. When a change is requested on an item B, the context of the
change is obtained by following dependency links backwards to get the source items A
on which B depends. The impact of the change is assessed by following all dependency
links forwards, horizontally and vertically, to get the items C depending on B, and so on
recursively. When the change is approved, it is propagated forwards along all those
dependency links. (A traversal along these lines is sketched in code after this list.)
• Rationale analysis. The reasons for an RD item may be obtained by following derivation
links upwards to obtain its sources and motivations. When a retrieved reason corresponds
to an objective to be met by the system, we can check whether the RD item is sufficient
for meeting it, and possibly discover other items that are missing for the objective to
be fully met. When no reason is found, we may question the relevance of this item.
Likewise, the reasons for an implementation feature may be obtained by following
derivation links upwards from this feature. When a requirement is reached, we can check
whether the feature is sufficient for meeting this requirement, and possibly discover
missing or alternative implementation features for meeting the same requirement. When
no requirement is found, the feature may either prove to be irrelevant 'gold plating'
or reveal requirements that have been overlooked (and that perhaps might give rise to
other implementation alternatives). Through rationale analysis we can thus find answers
to questions about RD items and downward software items such as: 'Why is this here?'
'Is this enough?' 'Is this relevant?'
• Coverage analysis. We can also assess whether, how and where an RD item is met, by
following derivation links downwards to other RD items and to architectural, implementa-
tion, test data, user manual and project management items. We can thereby find answers
to questions raised at RE time or during the subsequent software development phases,
such as: 'Is this concern, expressed during that interview, taken into account in the RD?
Where?' 'Where and how is this assumption taken into account in the requirements?' 'Is
this requirement taken into account somewhere in the design or implementation? Where?'
'Have all requirements been allocated?' 'Is this requirement exercised by an animation
sequence?' 'Are there test data to exercise this requirement?'
• Defect tracking. In the case of a problem being detected during requirements animation
or acceptance testing, we may follow derivation links upwards towards possible origins
of the problem, such as the inadequacy of some upward requirement. Such cause-effect
tracking is a pre-condition for effective, prompt repair.
• Compliance checking. When the traceable items include contractual clauses, regulations
or standards prescriptions, we may follow derivation links upwards towards them in
order to check or demonstrate that the RD meets them.
• Project tracking. When traceability chains include project management information about
tasks, resources and costs, we may follow dependency links to monitor progress, allocate
resources and control costs.
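By way of illustration, the following Python sketch (our own; the item names are hypothetical) stores a traceability graph as typed, directed links and follows them backwards for context retrieval and forwards, transitively, for impact assessment, as mentioned under evolution support above.

    from collections import defaultdict

    class TraceabilityGraph:
        """Nodes are item identifiers; links are read in the forward
        direction 'source affects target'."""
        def __init__(self):
            self.forward = defaultdict(list)    # source -> [(target, type)]
            self.backward = defaultdict(list)   # target -> [(source, type)]

        def add_link(self, source, target, link_type):
            self.forward[source].append((target, link_type))
            self.backward[target].append((source, link_type))

        def context_of(self, item):
            """Source items the item directly depends on ('Why is this here?')."""
            return [src for src, _ in self.backward[item]]

        def impact_of(self, item):
            """All items transitively depending on the item."""
            impacted, frontier = set(), [item]
            while frontier:
                current = frontier.pop()
                for target, _ in self.forward[current]:
                    if target not in impacted:
                        impacted.add(target)
                        frontier.append(target)
            return impacted

    g = TraceabilityGraph()
    g.add_link("interview-3", "Obj-Notification", "Derivation")
    g.add_link("Obj-Notification", "Req-SMS", "Derivation")
    g.add_link("Req-SMS", "SMSNotifier-module", "Derivation")
    print(g.context_of("Req-SMS"))          # ['Obj-Notification']
    print(g.impact_of("Obj-Notification"))  # {'Req-SMS', 'SMSNotifier-module'}

The traceability graph must then be kept up to date as such changes are enacted: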
• When a new RD item is created, we should consider integrating it into the traceability
graph with adequate new links.
• When a non-traceable RD item is modified, we should consider whether the modified
item should become traceable and, if so, integrate it into the traceability
graph.
• When a traceable RD item is deleted, we should delete it from the traceability graph
together with its incoming and outgoing links - after all consequences of this change
have been propagated.
• When a traceable RD item is modified, we should check all its incoming and outgoing
links and, for each of them, determine according to its semantics whether it should be
kept, deleted or modified.
• The presence of unconnected nodes, as a result of those manipulations, must be analysed
to determine whether they should be 'garbage-collected' or whether new links should
connect them.
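These bookkeeping rules can be packaged as graph-update operations. The following Python fragment is a minimal sketch of them, using plain adjacency maps; the function names are ours.

    # Traceability graph kept as two adjacency maps plus a node set:
    #   forward[source]  = [(target, link_type), ...]
    #   backward[target] = [(source, link_type), ...]

    def add_link(forward, backward, source, target, link_type):
        """New or newly traceable items are integrated through new links."""
        forward.setdefault(source, []).append((target, link_type))
        backward.setdefault(target, []).append((source, link_type))

    def delete_item(nodes, forward, backward, item):
        """A deleted item leaves the graph with all its incoming and outgoing
        links - to be called only after all consequences have been propagated."""
        for target, _ in forward.pop(item, []):
            backward[target] = [(s, l) for (s, l) in backward[target] if s != item]
        for source, _ in backward.pop(item, []):
            forward[source] = [(t, l) for (t, l) in forward[source] if t != item]
        nodes.discard(item)

    def links_to_review(forward, backward, item):
        """On modification of a traceable item, every incoming and outgoing
        link must be re-examined against its semantics."""
        return backward.get(item, []) + forward.get(item, [])

    def unconnected(nodes, forward, backward):
        """Unconnected nodes, to be analysed for 'garbage collection' or for
        reconnection through new links."""
        return {n for n in nodes if not forward.get(n) and not backward.get(n)}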
Cross-referencing
The first obvious possibility consists of configuring a standard text editor, spreadsheet or hyper-
text editor to support cross-referencing among the items we want to trace. This corresponds to
the good old 'see Section X.y' practice:
• The items to be traced are selected and assigned a unique name for cross-referencing.
• An indexing or tagging scheme is set up for the selected items to define which items are
lexically linked to which others.
• The available search or browsing facilities are configured to this scheme.
• The items are then retrieved simply by following cross-reference chains.
This technique is lightweight and readily available. It can accommodate any level of granularity.
It is, however, limited to a single link type that carries no semantics - namely, lexical reference.
The information conveyed by the indexing scheme is implicit and may be inaccurate. As
a result, the control and analysis of traceability information are very limited. The cost of
maintaining the indexing scheme may turn out to be fairly high too.
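A toy version of such a scheme takes a few lines of scripting. The 'see <item name>' referencing convention below is our own invention for illustration.

    import re

    # RD fragments indexed by a unique name; references follow an assumed
    # 'see <name>' convention.
    rd_items = {
        "R-12": "The scheduler shall notify participants of the date (see D-3).",
        "D-3":  "Notification: an e-mail or SMS message informing a participant.",
    }

    ref_pattern = re.compile(r"see\s+([A-Z]+-\d+)")

    # Build the cross-reference index: which item lexically refers to which.
    index = {name: ref_pattern.findall(text) for name, text in rd_items.items()}
    print(index)   # {'R-12': ['D-3'], 'D-3': []}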
Traceability matrices
Such matrices are often used in cross-referencing to represent the indexing scheme and to track
cross-references. They can, however, be used to represent in matrix form any traceability graph
built on a single relation - for example, the Dependency relation defined in Section 6.3.1.
Each row/column in the matrix is associated with the name of an item, for example
an objective, a requirement, an assumption, a domain property, a document section, an
architectural component, a set of test data and so on. The matrix element at row i and column j
has the value '1' (say) if item Ti is linked to item Tj, and '0' otherwise. Table 6.2 shows a very small
example of a traceability matrix.
Looking at a specific row we easily retrieve, from the source item in the first column, all
target items linked to it in the forward direction. Looking at a specific column we easily retrieve,
from the target item in the first row, all source items linked to it in the backward direction.
For example, suppose that the traceability matrix in Table 6.2 captures a Dependency
relation. The third row shows that item T3 affects items T1 and T5, whereas the third column
shows that item T3 depends on items T2 and T4. The traceability graph represented by Table 6.2
is shown in Figure 6.14.
Traceability matrices provide a simple representation for traceability graphs. They allow
for navigation in both forward and backward directions. They also support simple forms of
analysis of the traceability graph, such as the detection of undesirable cycles. For example,
the dependency cycle T1 → T4 → T3 → T1 in Figure 6.14 is easily detected through standard
algorithms applied to the matrix in Table 6.2.
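As an illustration, the dependency information read off Table 6.2 in the text (T3 affects T1 and T5, T3 depends on T2 and T4, and the cycle T1 → T4 → T3 → T1) can be encoded and analysed as follows; apart from those links, the sketch is our own.

    items = ["T1", "T2", "T3", "T4", "T5"]
    #         T1 T2 T3 T4 T5
    matrix = [[0, 0, 0, 1, 0],   # T1 affects T4
              [0, 0, 1, 0, 0],   # T2 affects T3
              [1, 0, 0, 0, 1],   # T3 affects T1 and T5
              [0, 0, 1, 0, 0],   # T4 affects T3
              [0, 0, 0, 0, 0]]   # T5 affects nothing

    def affects(i):        # forward reading of row i
        return [items[j] for j, cell in enumerate(matrix[i]) if cell]

    def depends_on(j):     # backward reading of column j
        return [items[i] for i in range(len(items)) if matrix[i][j]]

    print(affects(2))      # ['T1', 'T5']  (third row)
    print(depends_on(2))   # ['T2', 'T4']  (third column)

    def has_cycle():
        """Standard depth-first search for a cycle in the dependency graph."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = [WHITE] * len(items)
        def visit(i):
            colour[i] = GREY
            for j, cell in enumerate(matrix[i]):
                if cell and (colour[j] == GREY or
                             (colour[j] == WHITE and visit(j))):
                    return True
            colour[i] = BLACK
            return False
        return any(colour[i] == WHITE and visit(i) for i in range(len(items)))

    print(has_cycle())     # True: T1 -> T4 -> T3 -> T1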
For large projects with many items to be traced, such matrices may become unmanageable.
Filling in very large sparse matrices is error prone. A standard alternative against sparseness
is the equivalent, unidirectional list representation; see Table 6.3. We can, however, then no
longer navigate easily in the reverse direction.
A more serious limitation of traceability matrices and lists is their restriction to single
relations. We cannot represent and navigate through traceability graphs with multiple link
types.
Feature diagrams
Feature diagrams are another simple graphical representation dedicated to the Variant link
type (Kang et al., 1990). They allow us to capture multiple variants of a system family, with their
commonalities and variations, within a single graph. The variants are retrieved by navigation
through the graph with selection of corresponding features.
A node in a feature diagram represents a composite or atomic feature. The basic structuring
mechanism is feature decomposition. A composite feature can be AND-composed of multiple
sub-features that are mandatory or optional. It can also be OR-composed of alternative features
that are exclusive ('one of') or non-exclusive ('more of'). An atomic feature is not decomposed
any further.
A variant is retrieved from the feature diagram through a specific selection of atomic features
in the diagram that meets the constraints prescribed by the diagram.
Figure 6.15 shows a small, simplified feature diagram for our meeting scheduling system.
The diagram specifies the MeetingScheduling feature as an aggregation of four mandatory com-
posite features (MeetingInitiation, ConstraintsAcquisition, Planning and MeetingNotification),
as prescribed by the closed dot on top of each feature, plus an optional atomic feature (Rule-
BasedConflictResolution), as indicated by the open dot. The ConstraintsAcquisition feature is
either a byEmail or a byE-agenda feature, as prescribed by the open triangle joining the parent
feature to its child ones (exclusive OR). The MeetingNotification feature is a byEmail feature
or a bySMS feature or both, as indicated by the closed triangle joining the parent feature to
its children (non-exclusive OR). Note that there are three possible feature configurations for
MeetingNotification.
Figure 6.15 Feature diagram for variants of the meeting scheduling system
The diagram in Figure 6.15 captures a family of variants that share MeetingInitiation as
a common feature. Note how compact this notation is; the number of variants captured in
Figure 6.15 is equal to the number of possible feature combinations, that is, 2 (constraints
acquisition) × 3 (meeting notification) × 2 (with or without rule-based conflict resolution) = 12.
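This count is easily checked mechanically. The following sketch hard-codes the three choice points of Figure 6.15 and enumerates the admissible configurations; it is illustrative only.

    from itertools import product

    # Choice points of the feature diagram in Figure 6.15:
    constraints_acquisition = ["byEmail", "byE-agenda"]           # exclusive OR: 2
    meeting_notification = [("byEmail",), ("bySMS",),
                            ("byEmail", "bySMS")]                 # non-exclusive OR: 3
    conflict_resolution = [(), ("RuleBasedConflictResolution",)]  # optional: 2

    variants = list(product(constraints_acquisition,
                            meeting_notification,
                            conflict_resolution))
    print(len(variants))   # 2 * 3 * 2 = 12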
Traceability databases
For better scalability and automated support of multiple link types, we can store all traceability
information in a dedicated database and use database management facilities such as queries,
views and versioning to manage that information.
A traceability database tool often manipulates a hierarchical structure of projects, documen-
tation units within a project and data within a documentation unit. User-defined traceability
attributes can be attached to the units and to the data. These may include, among others, the
configuration management attributes mentioned before such as the date and the rationale for
creation of the documentation unit/data, the author and contributors, the evaluation/approval
status and so on. User-defined attributes can also encode dependencies among units and data
such as Use and Derivation links.
The facilities provided by traceability database tools generally include the following:
• Creation and modification of documentation units and data together with their user-
defined traceability attributes.
• Historical tracking of changes made to the units/data.
• Baselining of approved versions, for sharing among project members, until the next
approved change.
• Forward and backward navigation through traceable units and data.
• Coverage checking.
• Database view extraction, visualization of traceability chains and report generation.
• Interoperability with other software engineering tools.
Several commercial tools provide this functionality, including DOORS (Telelogic), RTM
(Chipware/Marconi) and RequisitePro (Rational/IBM). They are in a sense generic, allowing users
to customize the tool to their own traceability attributes and granularity. They scale up to
large projects and support forward, backward, horizontal and vertical tracing of requirements
and assumptions. However, the manual intervention required for customizing the tool and for
establishing the traceability graph may be difficult and error prone. One reason is the lack of
structuring of traceability information, to be provided as 'flat', user-defined attributes, and the
lack of guidance in feeding the tool with this unstructured information.
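As a rough indication of what such a traceability database might look like underneath, here is a minimal relational sketch using Python's standard sqlite3 module; the table and attribute names are our own and do not reflect any particular commercial tool.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE unit (              -- documentation unit within a project
        id        TEXT PRIMARY KEY,
        project   TEXT NOT NULL,
        author    TEXT,              -- user-defined traceability attributes
        created   TEXT,
        rationale TEXT,
        status    TEXT               -- evaluation/approval status
    );
    CREATE TABLE link (              -- user-defined dependencies among units
        source    TEXT REFERENCES unit(id),
        target    TEXT REFERENCES unit(id),
        link_type TEXT CHECK (link_type IN
                  ('Variant','Revision','Use','Derivation'))
    );
    """)
    db.execute("INSERT INTO unit VALUES ('Obj-Notify','MS','analyst',"
               "'2009-01-15','from traveller interviews','approved')")
    db.execute("INSERT INTO unit VALUES ('Req-SMS','MS','analyst',"
               "'2009-02-02','meets Obj-Notify','approved')")
    db.execute("INSERT INTO link VALUES ('Obj-Notify','Req-SMS','Derivation')")

    # Backward navigation: where does Req-SMS come from?
    for row in db.execute("SELECT source, link_type FROM link "
                          "WHERE target = 'Req-SMS'"):
        print(row)   # ('Obj-Notify', 'Derivation')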