
Lecture Notes in Computer Science

Commenced Publication in 1973


Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
University of Dortmund, Germany
Madhu Sudan
Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max-Planck Institute of Computer Science, Saarbruecken, Germany

5401

Theodore P. Baker   Alain Bui   Sébastien Tixeuil (Eds.)

Principles of
Distributed Systems
12th International Conference, OPODIS 2008
Luxor, Egypt, December 15-18, 2008
Proceedings


Volume Editors

Theodore P. Baker
Florida State University
Department of Computer Science
207A Love Building, Tallahassee, FL 32306-4530, USA
E-mail: [email protected]

Alain Bui
Université de Versailles-St-Quentin-en-Yvelines
Laboratoire PRiSM
45, avenue des Etats-Unis, 78035 Versailles Cedex, France
E-mail: [email protected]

Sébastien Tixeuil
LIP6 & INRIA Grand Large
Université Pierre et Marie Curie - Paris 6
104 avenue du Président Kennedy, 75016 Paris, France
E-mail: [email protected]

Library of Congress Control Number: 2008940868


CR Subject Classification (1998): C.2.4, C.1.4, C.2.1, D.1.3, D.4.2, E.1, H.2.4
LNCS Sublibrary: SL 1 Theoretical Computer Science and General Issues
ISSN     0302-9743
ISBN-10  3-540-92220-2 Springer Berlin Heidelberg New York
ISBN-13  978-3-540-92220-9 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2008
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12582457
06/3180
543210

Preface

This volume contains the 30 regular papers, the 11 short papers, and the abstracts of the two invited keynotes that were presented at the 12th International Conference on Principles of Distributed Systems (OPODIS), held during December 15-18, 2008 in Luxor, Egypt.

OPODIS is a yearly selective international forum for researchers and practitioners in the design and development of distributed systems.

This year, we received 102 submissions from 28 countries. Each submission was carefully reviewed by three to six Program Committee members with the help of external reviewers, with 30 regular papers and 11 short papers being selected. The overall quality of submissions was excellent, and there were many papers that had to be rejected because of organization constraints yet deserved to be published. The two invited keynotes dealt with hot topics in distributed systems: "The Next 700 BFT Protocols" by Rachid Guerraoui and "On Replication of Software Transactional Memories" by Luis Rodrigues.
On behalf of the Program Committee, we would like to thank all authors of submitted papers for their support. We also thank the members of the Steering Committee for their invaluable advice. We wish to express our appreciation to the Program Committee members and additional external reviewers for their tremendous effort and excellent reviews. We gratefully acknowledge the Organizing Committee members for their generous contribution to the success of the symposium. Special thanks go to Thibault Bernard for managing the conference publicity and technical organization. The paper submission and selection process was greatly eased by the EasyChair conference system (http://www.easychair.org). We wish to thank the EasyChair creators and maintainers for their commitment to the scientific community.

December 2008

Ted Baker
Sébastien Tixeuil
Alain Bui

Organization

OPODIS 2008 was organized by PRiSM (Université de Versailles Saint-Quentin-en-Yvelines) and LIP6 (Université Pierre et Marie Curie).

General Chair

Alain Bui    University of Versailles St-Quentin-en-Yvelines, France

Program Co-chairs

Theodore P. Baker    Florida State University, USA
Sébastien Tixeuil    University of Pierre and Marie Curie, France

Program Committee

Björn Andersson           Polytechnic Institute of Porto, Portugal
James Anderson            University of North Carolina, USA
Alan Burns                University of York, UK
Andrea Clementi           University of Rome, Italy
Liliana Cucu              INPL Nancy, France
Shlomi Dolev              Ben-Gurion University, Israel
Khaled El Fakih           American University of Sharjah, UAE
Pascal Felber             University of Neuchatel, Switzerland
Paola Flocchini           University of Ottawa, Canada
Gerhard Fohler            University of Kaiserslautern, Germany
Felix Freiling            University of Mannheim, Germany
Mohamed Gouda             University of Texas, USA
Fabiola Greve             UFBA, Brazil
Isabelle Guerin-Lassous   University of Lyon 1, France
Ted Herman                University of Iowa, USA
Anne-Marie Kermarrec      INRIA, France
Rastislav Kralovic        Comenius University, Slovakia
Emmanuelle Lebhar         CNRS/University of Paris 7, France
Jane W.S. Liu             Academia Sinica Taipei, Taiwan
Steve Liu                 Texas A&M University, USA
Toshimitsu Masuzawa       University of Osaka, Japan
Rolf H. Möhring           TU Berlin, Germany
Bernard Mans              Macquarie University, Australia
Maged Michael             IBM, USA
Mohamed Mosbah            University of Bordeaux 1, France

Marina Papatriantafilou   Chalmers University of Technology, Sweden
Boaz Patt-Shamir          Tel Aviv University, Israel
Raj Rajkumar              Carnegie Mellon University, USA
Sergio Rajsbaum           UNAM, Mexico
Andre Schiper             EPFL, Switzerland
Sam Toueg                 University of Toronto, Canada
Eduardo Tovar             Polytechnic Institute of Porto, Portugal
Koichi Wada               Nagoya Institute of Technology, Japan

Organizing Committee

Thibault Bernard    University of Reims Champagne-Ardenne, France
Celine Butelle      EPHE, France

Publicity Chair

Thibault Bernard    University of Reims Champagne-Ardenne, France

Steering Committee

Alain Bui           University of Versailles St-Quentin-en-Yvelines, France
Marc Bui            EPHE, France
Hacene Fouchal      University of Antilles-Guyane, France
Roberto Gomez       ITESM-CEM, Mexico
Nicola Santoro      Carleton University, Canada
Philippas Tsigas    Chalmers University of Technology, Sweden

Referees
H.B. Acharya
Amitanand Aiyer
Mario Alves
James Anderson
Bjorn Andersson
Hagit Attiya
Rida Bazzi
Muli Ben-Yehuda
Alysson Bessani
Gaurav Bhatia
Konstantinos Bletsas
Bjoern Brandenburg

Alan Burns
John Calandrino
Pierre Casteran
Daniel Cederman
Keren Censor
Jeremie Chalopin
Claude Chaudet
Yong Hoon Choi
Andrea Clementi
Reuven Cohen
Alex Cornejo
Roberto Cortinas

Pilu Crescenzi
Liliana Cucu
Shantanu Das
Emiliano De Cristofaro
Gianluca De Marco
Carole Delporte
UmaMaheswari Devi
Shlomi Dolev
Pu Duan
Partha Dutta
Khaled El-fakih
Yuval Emek

Hugues Fauconnier
Pascal Felber
Paola Flocchini
Gerhard Fohler
Pierre Fraignaud
Felix Freiling
Zhang Fu
Shelby Funk
Emanuele G. Fusco
Giorgos Georgiadis
Seth Gilbert
Emmanuel Godard
Joel Goossens
Mohamed Gouda
Maria Gradinariu
Potop-Butucaru
Vincent Gramoli
Fabiola Greve
Damas Gruska
Isabelle Guerin-Lassous
Phuong Ha Hoai
Ahmed Hadj Kacem
Elyes-Ben Hamida
Danny Hendler
Thomas Herault
Ted Herman
Daniel Hirschkoff
Akira Idoue
Nobuhiro Inuzuka
Taisuke Izumi
Tomoko Izumi
Katia Jaffres-Runser
Prasad Jayanti
Arshad Jhumka
Mohamed Jmaiel
Hirotsugu Kakugawa
Arvind Kandhalu
Yoshiaki Katayama
Branislav Katreniak
Anne-Marie Kermarrec
Ralf Klasing
Boris Koldehofe

Anis Koubaa
Darek Kowalski
Rastislav Kralovic
Evangelos Kranakis
Ioannis Krontiris
Petr Kuznetsov
Mikel Larrea
Erwan Le Merrer
Emmanuelle Lebhar
Hennadiy Leontyev
Xu Li
George Lima
Jane Liu
Steve Liu
Hong Lu
Victor Luchangco
Weiqin Ma
Bernard Mans
Soumaya Marzouk
Toshimitsu Masuzawa
Nicole Megow
Maged Michael
Luis Miguel Pinho
Rolf Möhring
Mohamed Mosbah
Heinrich Moser
Achour Mostefaoui
Junya Nakamura
Alfredo Navarra
Gen Nishikawa
Nicolas Nisse
Luis Nogueira
Koji Okamura
Fukuhito Ooshita
Marina Papatriantafilou
Dana Pardubska
Boaz Patt-Shamir
Andrzej Pelc
David Peleg
Nuno Pereira
Tomas Plachetka
Shashi Prabh

Giuseppe Prencipe
Shi Pu
Raj Rajkumar
Sergio Rajsbaum
Dror Rawitz
Tahiry Razafindralambo
Etienne Riviere
Gianluca Rossi
Anthony Rowe
Nicola Santoro
Gabriel Scalosub
Elad Schiller
Andre Schiper
Nicolas Schiper
Ramon Serna Oliver
Alexander Shvartsman
Riccardo Silvestri
Francoise Simonot-Lion
Alex Slivkins
Jason Smith
Kannan Srinathan
Sebastian Stiller
David Stotts
Weihua Sun
Hakan Sundell
Cheng-Chung Tan
Andreas Tielmann
Sam Toueg
Eduardo Tovar
Corentin Travers
Frederic Tronel
Remi Vannier
Jan Vitek
Roman Vitenberg
Koichi Wada
Timo Warns
Andreas Wiese
Yu Wu
Zhaoyan Xu
Hirozumi Yamaguchi
Yukiko Yamauchi
Keiichi Yasumoto

Table of Contents

Invited Talks

The Next 700 BFT Protocols (Abstract) . . . . . . . . . . . . . . . . . . . . . . . . . . . .   1
   Rachid Guerraoui

On Replication of Software Transactional Memories (Extended Abstract) . . . .   2
   Luis Rodrigues

Regular Papers

Write Markers for Probabilistic Quorum Systems . . . . . . . . . . . . . . . . . . . .   5
   Michael G. Merideth and Michael K. Reiter

Byzantine Consensus with Unknown Participants . . . . . . . . . . . . . . . . . . . .  22
   Eduardo A.P. Alchieri, Alysson Neves Bessani, Joni da Silva Fraga, and
   Fabiola Greve

With Finite Memory Consensus Is Easier Than Reliable Broadcast . . . . . . . .  41
   Carole Delporte-Gallet, Stephane Devismes, Hugues Fauconnier,
   Franck Petit, and Sam Toueg

Group Renaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  58
   Yehuda Afek, Iftah Gamzu, Irit Levy, Michael Merritt, and
   Gadi Taubenfeld

Global Static-Priority Preemptive Multiprocessor Scheduling with
Utilization Bound 38% . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  73
   Björn Andersson

Deadline Monotonic Scheduling on Uniform Multiprocessors . . . . . . . . . . . .  89
   Sanjoy Baruah and Joel Goossens

A Comparison of the M-PCP, D-PCP, and FMLP on LITMUS^RT . . . . . . . . . 105
   Björn B. Brandenburg and James H. Anderson

A Self-stabilizing Marching Algorithm for a Group of Oblivious
Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
   Yuichi Asahiro, Satoshi Fujita, Ichiro Suzuki, and
   Masafumi Yamashita

Fault-Tolerant Flocking in a k-Bounded Asynchronous System . . . . . . . . . . . 145
   Samia Souissi, Yan Yang, and Xavier Defago

Bounds for Deterministic Reliable Geocast in Mobile Ad-Hoc
Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
   Antonio Fernandez Anta and Alessia Milani

Degree 3 Suffices: A Large-Scale Overlay for P2P Networks . . . . . . . . . . . . . 184
   Marcin Bienkowski, Andre Brinkmann, and Miroslaw Korzeniowski

On the Time-Complexity of Robust and Amnesic Storage . . . . . . . . . . . . . . 197
   Dan Dobre, Matthias Majuntke, and Neeraj Suri

Graph Augmentation via Metric Embedding . . . . . . . . . . . . . . . . . . . . . . . . 217
   Emmanuelle Lebhar and Nicolas Schabanel

A Lock-Based STM Protocol That Satisfies Opacity and
Progressiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
   Damien Imbs and Michel Raynal

The 0-1-Exclusion Families of Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
   Eli Gafni

Interval Tree Clocks: A Logical Clock for Dynamic Systems . . . . . . . . . . . . . 259
   Paulo Sergio Almeida, Carlos Baquero, and Victor Fonte

Ordering-Based Semantics for Software Transactional Memory . . . . . . . . . . 275
   Michael F. Spear, Luke Dalessandro, Virendra J. Marathe, and
   Michael L. Scott

CQS-Pair: Cyclic Quorum System Pair for Wakeup Scheduling in
Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
   Shouwen Lai, Bo Zhang, Binoy Ravindran, and Hyeonjoong Cho

Impact of Information on the Complexity of Asynchronous Radio
Broadcasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
   Tiziana Calamoneri, Emanuele G. Fusco, and Andrzej Pelc

Distributed Approximation of Cellular Coverage . . . . . . . . . . . . . . . . . . . . . 331
   Boaz Patt-Shamir, Dror Rawitz, and Gabriel Scalosub

Fast Geometric Routing with Concurrent Face Traversal . . . . . . . . . . . . . . . 346
   Thomas Clouser, Mark Miyashita, and Mikhail Nesterenko

Optimal Deterministic Remote Clock Estimation in Real-Time
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
   Heinrich Moser and Ulrich Schmid

Power-Aware Real-Time Scheduling upon Dual CPU Type
Multiprocessor Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
   Joel Goossens, Dragomir Milojevic, and Vincent Nelis

Revising Distributed UNITY Programs Is NP-Complete . . . . . . . . . . . . . . . . 408
   Borzoo Bonakdarpour and Sandeep S. Kulkarni

On the Solvability of Anonymous Partial Grids Exploration by Mobile
Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
   Roberto Baldoni, Francois Bonnet, Alessia Milani, and
   Michel Raynal

Taking Advantage of Symmetries: Gathering of Asynchronous Oblivious
Robots on a Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
   Ralf Klasing, Adrian Kosowski, and Alfredo Navarra

Rendezvous of Mobile Agents When Tokens Fail Anytime . . . . . . . . . . . . . . 463
   Shantanu Das, Matus Mihalak, Rastislav Sramek, Elias Vicari, and
   Peter Widmayer

Solving Atomic Multicast When Groups Crash . . . . . . . . . . . . . . . . . . . . . . 481
   Nicolas Schiper and Fernando Pedone

A Self-stabilizing Approximation for the Minimum Connected
Dominating Set with Safe Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
   Sayaka Kamei and Hirotsugu Kakugawa

Leader Election in Extremely Unreliable Rings and Complete
Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
   Stefan Dobrev, Rastislav Kralovic, and Dana Pardubska

Short Papers

Toward a Theory of Input Acceptance for Transactional Memories . . . . . . . . 527
   Vincent Gramoli, Derin Harmanci, and Pascal Felber

Geo-registers: An Abstraction for Spatial-Based Distributed
Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
   Matthieu Roy, Francois Bonnet, Leonardo Querzoni, Silvia Bonomi,
   Marc-Olivier Killijian, and David Powell

Evaluating a Data Removal Strategy for Grid Environments Using
Colored Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
   Nikola Trcka, Wil van der Aalst, Carmen Bratosin, and
   Natalia Sidorova

Load-Balanced and Sybil-Resilient File Search in P2P Networks . . . . . . . . . . 542
   Hyeong S. Kim, Eunjin (EJ) Jung, and Heon Y. Yeom

Computing and Updating the Process Number in Trees . . . . . . . . . . . . . . . . 546
   David Coudert, Florian Huc, and Dorian Mazauric

Redundant Data Placement Strategies for Cluster Storage
Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
   Andre Brinkmann and Sascha Effert

An Unreliable Failure Detector for Unknown and Mobile Networks . . . . . . . . 555
   Pierre Sens, Luciana Arantes, Mathieu Bouillaguet,
   Veronique Simon, and Fabiola Greve

Efficient Large Almost Wait-Free Single-Writer Multireader Atomic
Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
   Andrew Lutomirski and Victor Luchangco

A Distributed Algorithm for Resource Clustering in Large Scale
Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
   Olivier Beaumont, Nicolas Bonichon, Philippe Duchon,
   Lionel Eyraud-Dubois, and Hubert Larcheveque

Reactive Smart Buffering Scheme for Seamless Handover in PMIPv6 . . . . . . 568
   Hyon-Young Choi, Kwang-Ryoul Kim, Hyo-Beom Lee, and
   Sung-Gi Min

Uniprocessor EDF Scheduling with Mode Change . . . . . . . . . . . . . . . . . . . . 572
   Björn Andersson

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579

The Next 700 BFT Protocols
(Invited Talk)

Rachid Guerraoui
EPFL LPD, Bat INR 310, Station 14, 1015 Lausanne, Switzerland

Byzantine fault-tolerant state machine replication (BFT) has reached a reasonable level of maturity as an appealing, software-based technique for building robust distributed services with commodity hardware. The current tendency, however, is to implement a new BFT protocol from scratch for each new application and network environment. This is notoriously difficult. Modern BFT protocols each require more than 20,000 lines of sophisticated C code, and proving their correctness involves an entire PhD. Maintaining and testing each new protocol seems just impossible.

This talk will present a candidate abstraction, named ABSTRACT (Abortable State Machine Replication), to remedy this situation. A BFT protocol is viewed as a, possibly dynamic, composition of instances of ABSTRACT, each instance developed and analyzed independently. A new effective BFT protocol can be developed by adding less than 10% of code to an existing one. Correctness proofs come within human reach, and even model checking techniques can be envisaged. To illustrate the ABSTRACT approach, we describe a new BFT protocol we name Aliph: the first of a hopefully long series of effective yet modular BFT protocols. The Aliph protocol has a peak throughput that outperforms those of all BFT protocols we know of by 300%, and a best-case latency that is less than 30% of that of state-of-the-art BFT protocols.

This is joint work with Dr. V. Quema (CNRS) and Dr. M. Vukolic (IBM).

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, p. 1, 2008.
© Springer-Verlag Berlin Heidelberg 2008


On Replication of Software Transactional Memories
(Invited Talk)

Luis Rodrigues
INESC-ID/IST

joint work with:
Paolo Romano and Nuno Carvalho
INESC-ID

Extended Abstract

Software Transactional Memory (STM) systems have garnered considerable interest of late due to the recent architectural trend that has led to the pervasive adoption of multi-core CPUs. STMs represent an attractive solution to spare programmers from the pitfalls of conventional explicit lock-based thread synchronization, leveraging concurrency-control concepts used for decades by the database community to simplify mainstream parallel programming [1].

As STM systems are beginning to penetrate into the realms of enterprise systems [2,3] and to be faced with the high-availability and scalability requirements proper of production environments, it is rather natural to foresee the emergence of replication solutions specifically tailored to enhance the dependability and the performance of STM systems. Also, since STM and Database Management Systems (DBMS) share the key notion of transaction, it might appear that state-of-the-art database replication schemes, e.g. [4,5,6,7], represent natural candidates to support STM replication as well.

In this talk, we will first contrast, from a replication-oriented perspective, the workload characteristics of two standard benchmarks for STM and DBMS, namely TPC-W [8] and STMBench7 [9]. This will allow us to uncover several pitfalls related to the adoption of conventional database replication techniques in the context of STM systems.

In the light of such analysis, we will then discuss promising research directions we are currently pursuing in order to develop high-performance replication strategies able to fit the unique characteristics of STM.

In particular, we will present one of our most recent results in this area, which not only tackles some key issues characterizing STM replication, but actually represents a valuable tool for the replication of generic services: the Weak Mutual Exclusion (WME) abstraction. Unlike the classical Mutual Exclusion problem (ME), which regulates the concurrent access to a single and indivisible shared resource, the WME abstraction ensures mutual exclusion in the access to a shared resource that appears as single and indivisible only at a logical level, while instead being physically replicated for both fault-tolerance and scalability purposes.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 2-4, 2008.
© Springer-Verlag Berlin Heidelberg 2008



Differently from ME, which is well known to be solvable only in the presence of very constraining synchrony assumptions [10] (essentially exclusively in synchronous systems), we will show that WME is solvable in an asynchronous system using an eventually perfect failure detector, ◇P, and prove that ◇P is actually the weakest failure detector for solving the WME problem. These results imply that, unlike ME, WME is solvable in partially synchronous systems (i.e., systems in which the bounds on communication latency and relative process speed either exist but are unknown, or are known but are only guaranteed to hold starting at some unknown time), which are widely recognized as a realistic model for large-scale distributed systems [11,12].

However, this is not the only element contributing to the pragmatic relevance of the WME abstraction. In fact, the reliance on the WME abstraction, as a means for regulating the concurrent access to a replicated resource, also provides the two following important practical benefits:

Robustness: pessimistic concurrency control is widely used in commercial off-the-shelf systems, e.g. DBMSs and operating systems, because of its robustness and predictability in the presence of conflict-intensive workloads. The WME abstraction lays a bridge between these proven contention-management techniques and replica control schemes. Analogously to centralized lock-based concurrency control, WME reveals itself particularly useful in the context of conflict-sensitive applications, such as STMs or interactive systems, where it may be preferable to bridle concurrency rather than incur the costs of application-level conflicts, such as transaction aborts or re-submission of user inputs.

Performance: the WME abstraction ensures that users issue operations on the replicated shared resource in a sequential manner. Interestingly, it has been shown that, in such a scenario, it is possible to sensibly boost the performance of lower-level abstractions [13,14], such as consensus or atomic broadcast, which are typically used as building blocks of modern replica control schemes and which often represent, as in typical STM workloads, the performance bottleneck of the whole system.

References

1. Adl-Tabatabai, A.R., Kozyrakis, C., Saha, B.: Unlocking concurrency. ACM Queue 4, 24-33 (2007)
2. Cachopo, J.: Development of Rich Domain Models with Atomic Actions. PhD thesis, Instituto Superior Tecnico/Universidade Tecnica de Lisboa (2007)
3. Carvalho, N., Cachopo, J., Rodrigues, L., Rito Silva, A.: Versioned transactional shared memory for the FenixEDU web application. In: Proc. of the Second Workshop on Dependable Distributed Data Management (in conjunction with Eurosys 2008), Glasgow, Scotland. ACM, New York (2008)
4. Agrawal, D., Alonso, G., Abbadi, A.E., Stanoi, I.: Exploiting atomic broadcast in replicated databases (extended abstract). In: Lengauer, C., Griebl, M., Gorlatch, S. (eds.) Euro-Par 1997. LNCS, vol. 1300, pp. 496-503. Springer, Heidelberg (1997)
5. Cecchet, E., Marguerite, J., Zwaenepoel, W.: C-JDBC: flexible database clustering middleware. In: Proc. of the USENIX Annual Technical Conference, Berkeley, CA, USA, p. 26. USENIX Association (2004)
6. Patino-Martinez, M., Jimenez-Peris, R., Kemme, B., Alonso, G.: Scalable replication in database clusters. In: Proc. of the 14th International Conference on Distributed Computing, London, UK, pp. 315-329. Springer, Heidelberg (2000)
7. Pedone, F., Guerraoui, R., Schiper, A.: The database state machine approach. Distributed and Parallel Databases 14, 71-98 (2003)
8. Transaction Processing Performance Council: TPC Benchmark W, Standard Specification, Version 1.8. Transaction Processing Performance Council (2002)
9. Guerraoui, R., Kapalka, M., Vitek, J.: STMBench7: a benchmark for software transactional memory. SIGOPS Oper. Syst. Rev. 41, 315-324 (2007)
10. Delporte-Gallet, C., Fauconnier, H., Guerraoui, R., Kouznetsov, P.: Mutual exclusion in asynchronous systems with failure detectors. J. Parallel Distrib. Comput. 65, 492-505 (2005)
11. Dwork, C., Lynch, N., Stockmeyer, L.: Consensus in the presence of partial synchrony. J. ACM 35, 288-323 (1988)
12. Cristian, F., Fetzer, C.: The timed asynchronous distributed system model. IEEE Transactions on Parallel and Distributed Systems 10, 642-657 (1999)
13. Brasileiro, F.V., Greve, F., Mostefaoui, A., Raynal, M.: Consensus in one communication step. In: Proc. of the International Conference on Parallel Computing Technologies, pp. 42-50 (2001)
14. Lamport, L.: Fast Paxos. Distributed Computing 19, 79-103 (2006)

Write Markers for Probabilistic Quorum Systems

Michael G. Merideth[1] and Michael K. Reiter[2]

[1] Carnegie Mellon University, Pittsburgh, PA, USA
[2] University of North Carolina, Chapel Hill, NC, USA

Abstract. Probabilistic quorum systems can tolerate a larger fraction of faults than can traditional (strict) quorum systems, while guaranteeing consistency with an arbitrarily high probability for a system with enough replicas. However, the masking and opaque types of probabilistic quorum systems are hampered in that their optimal load (a best-case measure of the work done by the busiest replica, and an indicator of scalability) is little better than that of strict quorum systems. In this paper we present a variant of probabilistic quorum systems that uses write markers in order to limit the extent to which Byzantine-faulty servers act together. Our masking and opaque probabilistic quorum systems have asymptotically better load than the bounds proven for previous masking and opaque quorum systems. Moreover, the new masking and opaque probabilistic quorum systems can tolerate an additional 24% and 17% of faulty replicas, respectively, compared with probabilistic quorum systems without write markers.

Introduction

Given a universe U of servers, a quorum system over U is a collection Q = {Q1, . . . , Qm} such that each Qi ⊆ U and

|Q ∩ Q′| > 0    (1)

for all Q, Q′ ∈ Q. Each Qi is called a quorum. The intersection property (1) makes quorums a useful primitive for coordinating actions in a distributed system. For example, if clients perform writes at a quorum of servers, then a client who reads from a quorum will observe the last written value. Because of their utility in such applications, quorums have a long history in distributed computing.
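The intersection property (1) is easy to check mechanically. As a small illustrative sketch (the helper name and the majority construction below are ours, not from the paper), a few lines of Python verify (1) for the classic majority quorum system:

```python
from itertools import combinations

def is_quorum_system(quorums):
    # Property (1): every pair of quorums shares at least one server.
    return all(q1 & q2 for q1, q2 in combinations(quorums, 2))

# Majority quorums over n = 5 servers: every set of 3 servers is a quorum,
# and any two such sets must overlap since 3 + 3 > 5.
U = range(5)
majority = [set(c) for c in combinations(U, 3)]
assert is_quorum_system(majority)

# Two disjoint server sets do not form a quorum system.
assert not is_quorum_system([{0, 1}, {3, 4}])
```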
In systems that may suffer Byzantine faults [1], the intersection property (1) is typically not adequate as a mechanism to enable consistent data access. Because (1) requires only that the intersection of quorums be non-empty, it could be that two quorums intersect only in a single server, for example. In a system in which up to b > 0 servers might suffer Byzantine faults, this single server might be faulty and, consequently, could fail to convey the last written value to a reader, for example.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 5-21, 2008.
© Springer-Verlag Berlin Heidelberg 2008



For this reason, Malkhi and Reiter [2] proposed various ways of strengthening the intersection property (1) so as to enable quorums to be used in Byzantine environments. For example, an alternative to (1) is

|Q ∩ Q′ \ B| > |Q′ ∩ B|    (2)

for all Q, Q′ ∈ Q, where B is the (unknown) set of all (up to b) servers that are faulty. In other words, the intersection of any two quorums contains more non-faulty servers than the faulty ones in either quorum. As such, the responses from these non-faulty servers will outnumber those from faulty ones. These quorum systems are called masking systems.
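Property (2) can be checked by brute force over every possible faulty set of size b. The sketch below is ours (function and variable names are illustrative); it shows why masking systems need larger quorums than (1) alone requires:

```python
from itertools import combinations

def is_masking(universe, quorums, b):
    # Property (2): for every candidate faulty set B of size b and every
    # ordered pair of quorums (Q, Q'), the non-faulty part of the
    # intersection must outnumber the faulty servers in Q'.
    for B in map(set, combinations(universe, b)):
        for Q in quorums:
            for Qp in quorums:
                if len((Q & Qp) - B) <= len(Qp & B):
                    return False
    return True

U = list(range(5))
# With n = 5 and b = 1, quorums of size 4 intersect in at least 3 servers,
# so at least 2 non-faulty servers outvote the single faulty one.
assert is_masking(U, [set(c) for c in combinations(U, 4)], b=1)
# Size-3 quorums can intersect in a single (possibly faulty) server.
assert not is_masking(U, [set(c) for c in combinations(U, 3)], b=1)
```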
Opaque quorum systems have an even more stringent requirement as an alternative to (1):

|Q ∩ Q′ \ B| > |(Q′ ∩ B) ∪ (Q′ \ Q)|    (3)

for all Q, Q′ ∈ Q. In other words, the number of correct servers in the intersection of Q and Q′ (i.e., |Q ∩ Q′ \ B|) exceeds the number of faulty servers in Q′ (i.e., |Q′ ∩ B|) together with the number of servers in Q′ but not in Q. The rationale for this property can be seen by considering the servers in Q′ but not in Q as outdated, in the sense that if Q was used to perform an update to the system, then those servers in Q′ \ Q are unaware of the update. As such, if the faulty servers in Q′ behave as the outdated ones do, their behavior (i.e., their responses) will dominate that from the correct servers in the intersection (Q ∩ Q′ \ B) unless (3) holds.
The increasingly stringent properties of Byzantine quorum systems come with costs in terms of the smallest system sizes that can be supported while tolerating a number b of faults [2]. This implies that a system with a fixed number of servers can tolerate fewer faults when the property is more stringent, as seen in Table 1, which refers to the quorums just discussed as strict. Table 1 also shows the negative impact on the ability of the system to disperse load amongst the replicas, as discussed next.
Naor and Wool [3] introduced the notion of an access strategy by which clients
select quorums to access. An access 
strategy p : Q [0, 1] is simply a probability distribution on quorums, i.e., QQ p(Q) = 1. Intuitively, when a client
accesses the system, it does so at a quorum selected randomly according to the
distribution p.
The formalization of an access strategy is useful as a tool for discussing the
load dispersing properties of quorums. The load [3] of a quorum system, L(Q), is
the probability with which the busiest server is accessed in a client access, under
the best possible access strategy p. As listed in Table 1, tight lower bounds
have been proven for the load of each type of strict Byzantine quorum system.
The load for opaque quorum systems is particularly unfortunate: systems that
utilize opaque quorum systems cannot effectively disperse processing load across
more servers (i.e., by increasing n) because the load is at least a constant. Such
Byzantine quorum systems are used by many modern Byzantine-fault-tolerant
protocols, e.g., [4,5,6,7,8,9] in order to tolerate the arbitrary failure of a subset
of their replicas. As such, circumventing the bounds is an important topic.
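For intuition, the load induced by any particular access strategy can be computed
directly; L(Q) is then the minimum of this quantity over all strategies. A small
sketch (Python; the helper name and example system are ours):

```python
from itertools import combinations

def induced_load(quorums, p):
    # Max over servers of the probability that a client access touches it;
    # the system load L(Q) is the minimum of this over all strategies p.
    servers = set().union(*quorums)
    return max(sum(prob for Q, prob in zip(quorums, p) if u in Q)
               for u in servers)

# Uniform strategy over the size-4 quorums of a 5-server system:
quorums = [set(c) for c in combinations(range(5), 4)]
uniform = [1 / len(quorums)] * len(quorums)
print(induced_load(quorums, uniform))  # ≈ 0.8: each server lies in 4 of 5 quorums
```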

Write Markers for Probabilistic Quorum Systems

One way to circumvent these bounds is with probabilistic quorum systems.


Probabilistic quorum systems relax the quorum intersection properties, asking
them to hold only with high probability. More specifically, they relax (2) or (3),
for example, to hold only with probability 1 − ε (for ε, a small constant), where
probabilities are taken with respect to the selection of quorums according to an
access strategy p [10,11]. This technique yields masking quorum constructions
tolerating b < n/2.62 and opaque quorum constructions tolerating b < n/3.15
as seen in Table 1. These bounds hold in the sense that for any ε > 0 there is
an n0 such that for all n > n0, the required intersection property ((2) or (3)
for masking and opaque quorum systems, respectively) holds with probability at
least 1 − ε. Unfortunately, probabilistic quorum systems alone do not materially
improve the load of Byzantine quorum systems.
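The relaxation can be visualized with a quick Monte Carlo experiment (a sketch
under the simplifying assumption that both quorums are drawn uniformly at
random; the function name and parameter values are ours):

```python
import random

def violation_rate(n, b, q, trials=2000, seed=1):
    # Estimate the probability that masking property (2) fails when both
    # quorums are uniformly random q-subsets of n servers, b of them faulty.
    rng = random.Random(seed)
    B = set(range(b))  # by symmetry, which servers are faulty is irrelevant
    fails = 0
    for _ in range(trials):
        Q1 = set(rng.sample(range(n), q))
        Q2 = set(rng.sample(range(n), q))
        if len((Q1 & Q2) - B) <= len(Q2 & B):
            fails += 1
    return fails / trials

print(violation_rate(100, 10, 60))  # essentially 0: property holds w.h.p.
print(violation_rate(100, 40, 50))  # near 1: too many faults for masking
```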
In this paper, we present an additional modification, write markers, that
improves on the bounds further. Intuitively, in each update access to a quorum of
servers, a write marker is placed at the accessed servers in order to evidence the
quorum used in that access. This write marker identifies the quorum used; as
such, faulty servers not in this quorum cannot respond to subsequent quorum
accesses as though they were.
As seen in Table 1, by using this method to constrain how faulty servers can
collaborate, we show that probabilistic masking quorum systems with load
O(1/√n) can be achieved, allowing the systems to disperse load independently
of the value of b. Further, probabilistic opaque quorum systems with load O(b/n)
can be achieved, breaking the constant lower bound on load for opaque systems.
Moreover, the resilience of probabilistic masking quorums can be improved an
additional 24% to b < n/2, and the resilience of probabilistic opaque quorum
systems can be improved an additional 17% to b < n/2.62.

Table 1. Improvements due to write markers (bold entries are properties of
particular constructions; others are lower bounds)

  Non-Byzantine:      load                  faults
    strict            Ω(1/√n) [3]           < n

  Masking:            load                  faults
    strict            Ω(√(b/n)) [2]         < n/4.00 [12]
    probabilistic     Ω(b/n) [10]           < n/2.62 [11]
    write markers     O(1/√n) [here]        < n/2.00 [here]

  Opaque:             load                  faults
    strict            1/2 [2]               < n/5.00 [2]
    probabilistic     unproven              < n/3.15 [11]
    write markers     O(b/n) [here]         < n/2.62 [here]
The probability of error in probabilistic quorums requires mechanisms to ensure
that accesses are performed according to the required access strategy p if
the clients cannot be trusted to do so. Therefore, we adapt one such mechanism,
the access-restriction protocol of probabilistic opaque quorum systems [11], to
accommodate write markers. Thus, as a side benefit, our implementation forces
faulty clients to follow the access strategy. With this, we provide a protocol to
implement write markers that tolerates Byzantine clients.

M.G. Merideth and M.K. Reiter

Our primary contributions are (i) the identification and analysis of the benefits
of write markers; and (ii) a proposed implementation of write markers that
handles the complexities of tolerating Byzantine clients. Our analysis yields the
following results:
Masking Quorums: We show that the use of write markers allows probabilistic
masking quorum systems to tolerate up to b < n/2 faults when quorums are of
size Ω(√n). Setting all quorums to size c√n for some constant c, we achieve a
load that is asymptotically optimal for any quorum system, i.e., c√n/n = O(1/√n) [3].
This represents an improvement in load and the number of faults that can
be tolerated. Probabilistic masking quorums without write markers can tolerate
up to b < n/2.62 faults [11] and achieve load no better than Ω(b/n) [10]. In
addition, the maximum number of faults that can be tolerated is tied to the size
of quorums [10]. Thus, without write markers, achieving optimal load requires
tolerating fewer faults. Strict masking quorum systems can tolerate (only) up to
b < n/4 faults [2] and can achieve load Θ(√(b/n)) [12].
Opaque Quorums: We show that the use of write markers allows probabilistic
opaque quorum systems to tolerate up to b < n/2.62 faults. We present a
construction with load O(b/n) when b = Ω(√n), thereby breaking the constant
lower bound of 1/2 on the load of strict opaque quorum systems [2]. Moreover,
if b = O(√n), we can set all quorums to size c√n for some constant c, in order
to achieve a load that is asymptotically optimal for any quorum system, i.e.,
c√n/n = O(1/√n) [3].
This represents an improvement in load and the number of faults that can
be tolerated. Probabilistic opaque quorum systems without write markers can
tolerate (only) up to b < n/3.15 faults [11]. Strict opaque quorum systems can
tolerate (only) up to b < n/5 faults [2]; these quorum systems can do no better
than constant load even if b = 0 [2].

2 Definitions and System Model

We assume a system with a set U of servers, |U| = n, and an arbitrary but
bounded number of clients. Clients and servers can fail arbitrarily (i.e., Byzantine
faults [1]). We assume that up to b servers can fail, and denote the set of
faulty servers by B, where B ⊆ U. Any number of clients can fail. Failures are
permanent. Clients and servers that do not fail are said to be non-faulty. We
allow that faulty clients and servers may collude, and so we assume that faulty
clients and servers all know the membership of B (although non-faulty clients
and servers do not). However, for our implementation of write markers, as is
typical for many Byzantine-fault-tolerant protocols (cf. [4,5,6,9]), we assume
that faulty clients and servers are computationally bounded such that they cannot
subvert standard cryptographic primitives such as digital signatures.


Communication. Write markers require no communication assumptions
beyond those of the probabilistic quorums for which they are used. For
completeness, we summarize the model of [11], which is common to prior works in
probabilistic [10] and signed [13] quorum systems: we assume that each non-faulty
client can successfully communicate with each non-faulty server with high
probability, and hence with all non-faulty servers with roughly equal probability.
This assumption is in place to ensure that the network does not significantly bias
a non-faulty client's interactions with servers either toward faulty servers or
toward different non-faulty servers than those with which another non-faulty client
can interact. Put another way, we treat a server that can be reliably reached by
none or only some non-faulty clients as a member of B.
Access set; access strategy; operation. We abstractly describe client operations
as either writes that alter the state of the service or reads that do not.
Informally, a non-faulty client performs a write to update the state of the service
such that its value (or a later one) will be observed with high probability by any
subsequent operation; a write thus successfully performed is called established
(we define established more precisely below). A non-faulty client performs a read
to obtain the value of the latest established write, where "latest" refers to the
value of the most recent write preceding this read in a linearization [14] of the
execution.
In the introduction, we discussed access strategies as probability distributions
on quorums used for operations. For the remainder of the paper, we follow [11]
in strictly generalizing the notion of access strategy to apply instead to access
sets from which quorums are chosen. An access set is a set of servers from
which the client selects a quorum. If the client is non-faulty, we assume that this
selection is done uniformly at random. We adopt the access strategy that all
access sets are chosen uniformly at random (even by faulty clients). In Section 4,
we adapt a protocol to support write markers from one in [11] that approximately
ensures this access strategy. Our analysis allows that access sets may be larger
than quorums, though if access sets and quorums are of the same size, then
our protocol effectively forces even faulty clients to select quorums uniformly at
random as discussed in the introduction. In our analysis, all access sets used for
reads and writes are of constant size ard and awt, respectively. All quorums used
for reads and writes are of constant size qrd and qwt, respectively.
Candidate; conflicting; error probability; established; participant;
qualified; vote. Each write yields a corresponding candidate at some number
of servers. A candidate is an abstraction used in part to ensure that two
distinct write operations are distinguishable from each other, even if the
corresponding data values are the same. A candidate is established once it is
accepted by all of the non-faulty servers in some write quorum of size qwt within
the write access set of size awt. In opaque quorum systems, property (3) anticipates
that different non-faulty servers each may hold a different candidate due to
concurrent writes. A candidate that is characterized by the property that a non-faulty
server would accept either it or a given established candidate, but not both, is
called a conflicting candidate. Two candidates may conflict because, e.g., they
both bear the same timestamp. In either masking or opaque quorum systems,
a faulty server may try to forge a conflicting candidate. No non-faulty server
accepts two candidates that conflict with each other.
A server can try to vote for some candidate (e.g., by responding to a read
operation) if the server is a participant in voting (i.e., if the server is a member
of the client's read access set). However, a server becomes qualified to vote for
a particular candidate only if the server is a member of the client's write access
set selected for the write operation for which it votes. Non-faulty clients wait for
responses from a read quorum of size qrd contained in the read access set of size
ard. An error is said to occur in a read operation when a non-faulty client fails
to observe the latest value or a faulty client obtains sufficiently many votes for
a conflicting value.¹ The error probability is the probability of this occurring.
Behavior of faulty clients. We assume that faulty clients seek to maximize
the error probability by following specific strategies [11]. This is a conservative
assumption; a client cannot increase, but may decrease, the probability of error
by failing to follow these strategies. At a high level, the strategies are as follows:
a faulty client, which may be completely restricted in its choices: (i) when
establishing a candidate, writes the candidate to as few non-faulty servers as possible
to minimize the probability that it is observed by a non-faulty client; and (ii)
writes a conflicting candidate to as many servers as will accept it (i.e., faulty
servers plus, in the case of an opaque quorum system, any non-faulty server that
has not accepted the established candidate) in order to maximize the probability
that it is observed.

3 Analysis of Write Markers

Intuitively, when a client submits a write, the candidate is associated with a
write marker. We require that the following three properties are guaranteed by
an implementation of write markers:
W1. Every candidate has a write marker that identifies the access set chosen
for the write;
W2. A verifiable write marker implies that the access set was selected uniformly
at random (i.e., according to the access strategy);
W3. Every non-faulty client can verify a write marker.
When considering a candidate, non-faulty clients and servers verify the
candidate's write marker. Because of this verification, no non-faulty node will accept
a vote for a candidate unless the issuing server is qualified to vote for the
candidate. Since each write access set is chosen uniformly at random (W2), the
faulty servers that can vote for a candidate, i.e., the faulty qualified servers, are
therefore a random subset of the faulty servers.
¹ Faulty clients may be able to affect the system with such votes in some protocols [11].


Thus, write markers remove the advantage enjoyed by faulty servers in strict
and traditional-probabilistic masking and opaque quorum systems, where any
faulty participant can vote for any candidate, and therefore faulty participants
can collude to have a conflicting, potentially fabricated candidate chosen instead
of an established candidate. This aspect of write markers is summarized in
Table 2, which shows the impact of write markers in terms of the abilities of
faulty and non-faulty servers to vote for a given candidate.
3.1 Consistency Constraints

Probabilistic quorum systems must satisfy constraints similar to those of strict
quorum systems (e.g., (2), (3)), but only with probability 1 − ε. As with strict
quorum systems, the purpose of these constraints is to guarantee that operations
can be observed consistently in subsequent operations by receiving enough votes.
First, the constraints must ensure in expectation that a non-faulty client
can observe the latest established candidate if such a candidate exists. Let
Qrd represent a read quorum chosen uniformly at random, i.e., a random
variable, from a read access set itself chosen uniformly at random. (Think
of this quorum as one used by a non-faulty client.) Let Qwt represent a
write quorum chosen by a potentially faulty client; Qwt must be chosen from
Awt, an access set chosen uniformly at random. (Think of Qwt as a quorum used
for an established candidate.) Then the threshold r number of votes necessary
to observe a value must be less than the expected number of non-faulty qualified
participants, which is

E[|(Qrd ∩ Qwt) \ B|].   (4)

Table 2. Ability of a server to vote for a given candidate, under traditional
quorums and under write markers

  Type of server                          Traditional   Write markers
  Non-faulty qualified participant            yes           yes
  Faulty qualified participant                yes           yes
  Non-faulty non-qualified participant        no            no
  Faulty non-qualified participant            yes           no

The use of write markers has no impact here on (4) because (Qrd ∩ Qwt) \ B
contains no faulty servers. However, write markers do enable us to set r smaller,
as the following shows.
Second, the constraints must ensure that a conflicting candidate (which is in
conflict with an established candidate as described in Section 2) is, in expectation,
not observed by any client (non-faulty or faulty). In general, it is important
for all clients to observe only established candidates so as to enable higher-level
protocols (e.g., [4]) that employ repair phases that may affect the state of the
system within a read [11]. Let Ard and Awt represent read and write access sets,
respectively, chosen uniformly at random. (Think of Awt as the access set used by
a faulty client for a conflicting candidate, and of Ard as the access set used by a
faulty client for a read operation. How faulty clients can be forced to choose
uniformly at random is described in Section 4.) We consider the cases for masking
and opaque quorums separately:


Probabilistic Masking Quorums. In a masking quorum system, (2) dictates that
only faulty servers may vote for a conflicting candidate. Using write markers, we
require that the faulty qualified participants alone cannot produce sufficient votes
for a candidate to be observed in expectation. Taking (4) into consideration, we
require:

E[|(Qrd ∩ Qwt) \ B|] > E[|(Ard ∩ Awt) ∩ B|].   (5)

Contrast this with (2) and with the consistency requirement for traditional
probabilistic masking quorum systems [10] (adapted to consider access sets), which
requires that the faulty participants (qualified or not) cannot produce sufficient
votes for a candidate to be observed in expectation:

E[|(Qrd ∩ Qwt) \ B|] > E[|Ard ∩ B|].   (6)

Intuitively, the intersection between access sets can be smaller with write markers
because the right-hand side of (5) is less than the right-hand side of (6) if
awt < n.
Probabilistic Opaque Quorums. With write markers, we have the benefit, described
above for probabilistic masking quorums, in terms of the number of
faulty participants that can vote for a candidate in expectation. However, as
shown in (3), opaque quorum systems must additionally consider the maximum
number of non-faulty qualified participants that vote for the same conflicting
candidate in expectation. As such, instead of (5), we have:

E[|(Qrd ∩ Qwt) \ B|] > E[|(Ard ∩ Awt) ∩ B|] + E[|((Ard ∩ Awt) \ B) \ Qwt|].   (7)

Contrast this with the consistency requirement for traditional probabilistic
opaque quorums [11]:

E[|(Qrd ∩ Qwt) \ B|] > E[|Ard ∩ B|] + E[|((Ard ∩ Awt) \ B) \ Qwt|].   (8)

Again, intuitively, the intersection between access sets can be smaller with write
markers because the right-hand side of (7) is less than the right-hand side of (8)
if awt < n.
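Under the simplifying assumption that every set involved is a uniformly random
subset (so each expectation follows from linearity of expectation), all of the
quantities above have simple closed forms. A sketch (Python; the function and
the parameter values are ours, chosen to illustrate the gap between (5) and (6)
and between (7) and (8)):

```python
def constraint_check(n, b, ard, awt, qrd, qwt):
    # Closed-form expectations when all sets are uniform random subsets.
    e_correct  = (n - b) * qrd * qwt / n**2              # LHS, as in (4)
    e_qual_bad = b * ard * awt / n**2                    # E|(Ard ∩ Awt) ∩ B|
    e_all_bad  = b * ard / n                             # E|Ard ∩ B|
    e_outdated = (n - b) * ard * awt * (n - qwt) / n**3  # E|((Ard ∩ Awt) \ B) \ Qwt|
    return {
        "masking, write markers (5)": e_correct > e_qual_bad,
        "masking, traditional (6)":   e_correct > e_all_bad,
        "opaque, write markers (7)":  e_correct > e_qual_bad + e_outdated,
        "opaque, traditional (8)":    e_correct > e_all_bad + e_outdated,
    }

# b = 45 of n = 100 is beyond the traditional masking bound (n/2.62)
# but within the write-marker bound (n/2):
print(constraint_check(100, 45, 70, 70, 70, 70))
```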
3.2 Implied Bounds

In this subsection, we are concerned with quorum systems for which we can
achieve error probability (as defined in Section 2) no greater than a given ε for
any n sufficiently large. For such quorum systems, there is an upper bound on b
in terms of n, akin to the bound for strict quorum systems.
Intuitively, the maximum value of b is limited by the relevant constraint (i.e.,
either (5) or (7)). Of primary interest are Theorem 1 and its corollaries, which
demonstrate the benefits of write markers for probabilistic masking quorum
systems, and Theorem 2 and its corollaries, which demonstrate the benefits of write
markers for probabilistic opaque quorum systems. They utilize Lemmas 1 and 2,
which together present basic requirements for the types of quorum systems with
which we are concerned. Due to space constraints, proofs of the lemmas and
theorems appear only in a companion technical report [15].
Define MinCorrect to be a random variable for the number of non-faulty servers
with the established candidate, i.e., MinCorrect = |(Qrd ∩ Qwt) \ B| as indicated
in (4).

Lemma 1. Let n − b = Ω(n). For all c > 0 there is a constant d > 1 such that
for all qrd, qwt where qrd qwt > dn and qrd qwt /n = ω(1), it is the case that
E[MinCorrect] > c for all n sufficiently large.
Let r be the threshold, discussed in Section 3.1, for the number of votes necessary
to observe a candidate. Define MaxConflicting to be a random variable for
the maximum number of servers that vote for a conflicting candidate. For example:
due to (5), in masking quorums with write markers, MaxConflicting =
|(Ard ∩ Awt) ∩ B|; and due to (7), in opaque quorums with write markers,
MaxConflicting = |(Ard ∩ Awt) ∩ B| + |((Ard ∩ Awt) \ B) \ Qwt|.

Lemma 2. Let the following hold:²

E[MinCorrect] − E[MaxConflicting] > 0,
E[MinCorrect] − E[MaxConflicting] = ω(√(E[MinCorrect])).

Then it is possible to set r such that

error probability → 0  as  E[MinCorrect] → ∞.

Here and below, a suitable setting of r is one between E[MinCorrect] and
E[MaxConflicting], inclusive. The remainder of the section is focused on
determining, for each type of probabilistic quorum system, the upper bound on b and
bounds on the load that Lemmas 1 and 2 imply.
Theorem 1. For all ε there is a constant d > 1 such that for all qrd, qwt where
qrd qwt > dn, qrd qwt /n = ω(1), and

b < (qrd qwt n) / (qrd awt + ard awt),

any such probabilistic masking quorum system employing write markers achieves
error probability no greater than ε given a suitable setting of r for all n
sufficiently large.
Corollary 1. Let ard = qrd and awt = qwt. For all ε there is a constant d > 1
such that for all qrd, qwt where qrd qwt > dn, qrd qwt /n = ω(1), and

b < n/2,

any such probabilistic masking quorum system employing write markers achieves
error probability no greater than ε given a suitable setting of r for all n
sufficiently large.
² ω is the little-oh analog of Ω, i.e., f(n) = ω(g(n)) if f(n)/g(n) → ∞ as n → ∞.


In other words, with write markers, the size of quorums does not impact the
maximum fraction of faults that can be tolerated when quorums are selected
uniformly at random (i.e., when ard = qrd and awt = qwt).

Corollary 2. Let ard = qrd, awt = qwt, and b < n/2. For all ε there is a
constant c > 1 such that if qrd = qwt = c√n, any such probabilistic masking
quorum system employing write markers achieves error probability no greater
than ε given a suitable setting of r for all n sufficiently large, and has load
c√n/n = O(1/√n).


Theorem 2. For all ε there is a constant d > 1 such that for all qrd, qwt where
qrd qwt > dn, qrd qwt /n = ω(1), and

b < n(ard awt² + ard qwt n + qrd qwt n − 2 ard awt n) / (awt (ard awt + qrd n)),

any such probabilistic opaque quorum system employing write markers achieves
error probability no greater than ε given a suitable setting of r for all n
sufficiently large.
Corollary 3. Let ard = qrd and awt = qwt. For all ε there is a constant d > 1
such that for all qrd, qwt where qrd qwt > dn, qrd qwt /n = ω(1), and

b < (qwt n) / (qwt + n),

any such probabilistic opaque quorum system employing write markers achieves
error probability no greater than ε given a suitable setting of r for all n
sufficiently large.

Comparing Corollary 3 with Corollary 1, we see that in the opaque quorum case
qwt cannot be set independently of b.
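The dependence is easy to see numerically (a sketch; the parameter values are
ours): Corollary 3's bound grows with qwt, and approaches n/2 only as write
quorums approach the whole system.

```python
def opaque_bound(n, qwt):
    # Corollary 3's upper bound on b when access sets equal quorums.
    return qwt * n / (qwt + n)

n = 1000
for qwt in (100, 500, 1000):
    print(qwt, round(opaque_bound(n, qwt)))  # 91, 333, 500
```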
Corollary 4. Let ard = qrd, awt = qwt, and b < (qwt n)/(qwt + n). For all ε
there is a constant d > 1 such that for all qrd, qwt where qrd qwt > dn and
qrd qwt /n = ω(1), any such probabilistic opaque quorum system employing write
markers achieves error probability no greater than ε given a suitable setting of r
for all n sufficiently large, and has load Θ(b/n).

Corollary 5. Let b = Ω(√n). For all ε there is a constant d > 1 such that
for all ard, awt, qrd, qwt where ard = awt = qrd = qwt = lb for a value l such
that c ≥ l > n/(n − b) for some constant c, (lb)² > dn and (lb)²/n = ω(1),
any such probabilistic opaque quorum system employing write markers achieves
error probability no greater than ε given a suitable setting of r for all n
sufficiently large, and has load O(b/n).


Corollary 6. Let ard = qrd and awt = qwt = n − b. For all ε there is a constant
d > 1 such that for all qrd, qwt where qrd qwt > dn, qrd qwt /n = ω(1), and

b < n/2.62,

any such probabilistic opaque quorum system employing write markers achieves
error probability no greater than ε given a suitable setting of r for all n
sufficiently large.

4 Implementation

Our implementation of write markers provides the behavior assumed in Section 3,
even with Byzantine clients. Specifically, it ensures properties W1–W3. (Though,
technically, it ensures W2 only approximately in the case of opaque quorum
systems, in which, as we explain below, a faulty server might be able to create
a conflicting candidate using a write marker for a stale, i.e., out-of-date, access
set, but to no advantage.)
Because clients may be faulty, we cannot rely on, e.g., digital signatures issued
by them to implement write markers. Instead, we adapt mechanisms of our
access-restriction protocol for probabilistic opaque quorum systems [11]. The
access-restriction protocol is designed to ensure that all clients follow the access
strategy. It already enables non-faulty servers to verify this before accepting a
write. And, since it is the only way of which we are aware for a probabilistic
quorum system to tolerate Byzantine clients when write markers are of benefit
(i.e., when the sizes of write access sets are restricted), its mechanisms are
appropriate.
The relevant parts of the preexisting protocol work as follows [11]. From a
preconfigured number of servers, a client obtains a verifiable recent value (VRV),
the value of which is unpredictable to clients and b or fewer servers prior to
its creation. This VRV is used to generate a pseudorandom sequence of access
sets. Since a VRV can be verified using only public information, both it and
the sequence of access sets it induces can be verified by clients and servers.
Non-faulty clients simply choose the next unused access set for each operation.³
However, a faulty client is motivated to maximize the probability of error. If the
use of the next access set in the sequence does not maximize the probability
of error given the current state of the system (i.e., the candidates accepted by
the servers), such a client may try to skip ahead some number of access sets.
Alternatively, such a client might try to wait to use the next access set until the
state of the system changes. If allowed to follow either strategy, such a client
would circumvent the access strategy because its choice of access set would not
be independent from the state of the system.
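The pseudorandom, publicly checkable sequence can be sketched as follows (a
simplification: an opaque byte string stands in for the real VRV, and each access
set is derived from a hash of it; none of these names come from [11]):

```python
import hashlib
import random

def access_set(vrv: bytes, index: int, n: int, a: int) -> list:
    # Derive the index-th access set of size a in the sequence induced by
    # the VRV. Anyone holding the (public) VRV can recompute it, which is
    # what lets servers and clients verify a claimed access set.
    seed = hashlib.sha256(vrv + index.to_bytes(8, "big")).digest()
    return sorted(random.Random(seed).sample(range(n), a))

vrv = b"stand-in for a verifiable recent value"
first, second = access_set(vrv, 0, 100, 30), access_set(vrv, 1, 100, 30)
assert access_set(vrv, 0, 100, 30) == first   # deterministic, hence verifiable
assert first != second                        # each index yields a fresh set
```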
Three mechanisms are used together to coerce a faulty client to follow the access
strategy. First, the client must perform exponentially increasing work in
expectation in order to use later access sets. As such, a client requires exponentially

³ Non-faulty clients should choose a new access set for each operation to ensure
independence from the decisions of faulty clients [11].



increasing time in expectation in order to choose a later access set. This is
implemented by requiring that the client solve a client puzzle [16] of the
appropriate difficulty. The solution to the puzzle is, in expectation, difficult
to find but easy to verify. Second, the VRV and sequence of access sets become
invalid as the non-faulty servers accept additional candidates, or as the system
otherwise progresses (e.g., as time passes).

Fig. 1. Read operation with write markers: messages and stages of verification
of access set (Changes in gray)

Non-faulty servers verify that an access set is still valid, i.e., not stale, before
accepting it. Thus, system progress forces the client to start its work anew, and,
as such, makes the work solving the puzzle for any unused access set wasted.
Finally, during the time that the client is working, the established candidate
propagates in the background to the non-faulty servers that are non-qualified
(cf. [17]). This decreases the window of vulnerability in which a given access set
in the sequence is useful for a conflicting write by making non-qualified servers
aware that (i) there is an established candidate (so that they will not accept a
conflicting candidate) and (ii) that the state of the system has progressed (so
that they will invalidate the current VRV if appropriate).
The impact of these three mechanisms is that a non-faulty server can be
confident that the choice of write access set adheres (at least approximately) to
the access strategy upon having verified that the access set is valid, current, and
is accompanied by an appropriate puzzle solution.
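A standard hash-preimage puzzle suffices for the first mechanism (a sketch;
[16] describes client puzzles generally, and the concrete construction here is
ours). Charging a difficulty proportional to the access-set index makes the
expected work grow exponentially with how far ahead a client skips:

```python
import hashlib
from itertools import count

def verify_puzzle(challenge: bytes, solution: int, difficulty: int) -> bool:
    # Cheap check: the hash must start with `difficulty` zero bits.
    digest = hashlib.sha256(challenge + solution.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    # Brute force: expected 2**difficulty hashes, so making difficulty
    # linear in the access-set index makes skipping ahead exponentially
    # expensive in expectation, while verification stays a single hash.
    return next(s for s in count() if verify_puzzle(challenge, s, difficulty))

s = solve_puzzle(b"vrv || access set 7", 12)   # ~4096 hashes in expectation
assert verify_puzzle(b"vrv || access set 7", s, 12)
```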
For write markers, we extend the protocol so that, as seen in Figure 1, clients
can also perform verification. This requires that information about the puzzle
solution and access set (including the VRV used to generate it) be returned by
the servers to clients. (As seen in Figure 2 and explained below, this information
varies across masking and opaque quorum systems.) In the preexisting
access-restriction protocol, this information is verified and discarded by each server.
For write markers, this information is instead stored by each server in the
verification stage as a write marker. It is sent along with the data value as part
of the candidate to the client during any read operation. If the server is non-faulty
(a fact of which a non-faulty client cannot be certain), the access set used for
the operation was indeed chosen according to the access strategy because the
server performed verification before accepting the candidate. However, because
the server may be faulty, the client performs verification as well; it verifies the
write marker and that the server is a member of the access set. This allows us
to guarantee points W1–W3. As such, faulty non-qualified servers are unable to
vote for the candidates for which qualified servers can vote.
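Concretely, the check a reading client performs on each returned candidate might
look like this (a self-contained sketch with hypothetical field names; the real
protocol also checks the VRV itself and the puzzle solution):

```python
import hashlib
import random

def derive_access_set(vrv: bytes, index: int, n: int, a: int) -> frozenset:
    # Recompute the index-th access set from the public VRV.
    seed = hashlib.sha256(vrv + index.to_bytes(8, "big")).digest()
    return frozenset(random.Random(seed).sample(range(n), a))

def verify_candidate(vrv: bytes, marker: dict, server_id: int, n: int, a: int) -> bool:
    # W1-W3 as seen by a client: the marker must name an access set that is
    # derivable from the public VRV (so it was chosen per the access strategy),
    # and the responding server must belong to it, i.e., be qualified to vote.
    expected = derive_access_set(vrv, marker["index"], n, a)
    return marker["access_set"] == expected and server_id in expected

vrv = b"stand-in VRV"
aset = derive_access_set(vrv, 2, 50, 10)
marker = {"index": 2, "access_set": aset}
qualified = next(iter(aset))
outsider = next(i for i in range(50) if i not in aset)
assert verify_candidate(vrv, marker, qualified, 50, 10)
assert not verify_candidate(vrv, marker, outsider, 50, 10)
```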


Figures 1, 2, 3, and 4 illustrate relevant pieces of the preexisting protocol and
our modifications for write markers in the context of read and write operations
in probabilistic masking and opaque quorum systems. The figures highlight that
the additions to the protocol for write markers involve saving the write markers
and returning them to clients so that clients can also verify them.

Fig. 2. Message types (Write marker emphasized with gray). Masking write:
access set, solution, data value; promise; certificate; status. Opaque write: (a)
access set, solution, data value; (b) status. Read: (i) query; (ii) data value, with
certificate (masking) or access set and solution (opaque).

The differences in the structure of the write marker for probabilistic opaque
and masking quorum systems mentioned above result in subtly different
guarantees. The remainder of the section discusses these details.

4.1 Probabilistic Opaque Quorums

As seen in Figure 2 (message ii), a write marker for a probabilistic opaque
quorum system consists of the write-access-set identifier (including the VRV)
and the solution to the puzzle that unlocks the use of this access set. Unlike
a non-faulty server that verifies the access set at the time of use, a non-faulty
client cannot verify that an access set was not already stale when the access set
was accepted by a faulty server. Initially, this may appear problematic because
it is clear that, given sufficient time, a faulty client will eventually be able to
solve the puzzle for its preferred access set to use for a conflicting write; this
access set may contain all of the servers in B. In addition, the faulty client can
delay the use of this access set because non-faulty clients will be unable to verify
whether it was already stale when it was used.
Fortunately, because non-faulty servers will not accept a stale candidate (i.e.,
a candidate accompanied by a stale access set), the fact that a stale access set
may be accepted by a faulty server does not impact the benefit of write markers
for opaque quorum systems. In general, consistency requires (7), i.e.,

E[|(Qrd ∩ Qwt) \ B|] > E[|(Ard ∩ Awt) ∩ B|] + E[|((Ard ∩ Awt) \ B) \ Qwt|].

However, only faulty servers will accept a stale candidate. Therefore, if the
candidate was stale when written to Awt, no non-faulty server would have accepted
it. Thus, in this case, the consistency constraint is equivalent to,

E[|(Qrd ∩ Qwt) \ B|] > E[|(Ard ∩ Awt) ∩ B|].


Verify Access Set

Choose Access Set

a
b
However, this is (6), the constraint
Client
on probabilistic masking quorum
S0
systems without write markers. In
S1
eect, a faulty client must either:
(i) use a recent access set that
S2
is therefore chosen approximately
S3
uniformly at random, and be lim
ited by (7); or (ii), use a stale acSn
cess set and be limited by (6). If
quorums are the sizes of access sets,
both inequalities have the same up- Fig. 3. Write operation in opaque quorum sysper bound on b (see [15]); other- tems: messages and stages of verication of
wise, a faulty client is disadvan- write marker (Changes in gray)
taged by using a stale access set
because a system that satises (6) can tolerate more faults than one that satises (7), and is therefore less likely to result in error (see [15]). Even if the access
set contains all of the faulty servers, i.e., B Awt , then this becomes,

E [|(Qrd Qwt ) \ B|] > E [|Ard B|] .
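For intuition about how these constraints compare, note that under a uniform access strategy the expectations reduce to simple products: by linearity of expectation, a given server lands in the intersection of two independently and uniformly chosen sets of sizes q_x and q_y with probability (q_x/n)(q_y/n). The following back-of-the-envelope sketch (the parameter values and function names are illustrative, not taken from [15]) evaluates both sides of a constraint of the shape of (6):

```python
from fractions import Fraction

def e_overlap_outside_b(qx, qy, n, b):
    # E[|(X ∩ Y) \ B|]: each of the n - b non-faulty servers is in X ∩ Y
    # with probability (qx/n) * (qy/n), for independent uniform choices.
    return Fraction((n - b) * qx * qy, n * n)

def e_overlap_inside_b(qx, qy, n, b):
    # E[|(X ∩ Y) ∩ B|]: each of the b faulty servers is in X ∩ Y
    # with the same probability.
    return Fraction(b * qx * qy, n * n)

# Hypothetical system: n servers, b faulty, quorum size q, access-set size a.
n, b, q, a = 100, 10, 60, 80

lhs = e_overlap_outside_b(q, q, n, b)   # E[|(Q_rd ∩ Q_wt) \ B|]
rhs = e_overlap_inside_b(a, a, n, b)    # E[|(A_rd ∩ A_wt) ∩ B|]
print(lhs > rhs)                        # consistency holds for these parameters
```

With these (arbitrary) numbers the left side is 32.4 and the right side 6.4, so the constraint holds; shrinking n − b or growing b eventually reverses the inequality, which is the sense in which each constraint yields an upper bound on b.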


4.2 Probabilistic Masking Quorums

Protocols for masking quorum systems involve an additional round of communication (an echo phase, c.f., [8], or broadcast phase, c.f., [18]) during write operations in order to tolerate Byzantine or concurrent clients. This round prevents non-faulty servers from accepting conflicting data values, as assumed by (2). In order to write a data value, a client must first obtain a write certificate (a quorum of replies that together attest that the non-faulty servers will accept no conflicting data value). In contrast to optimistic protocols that use opaque quorum systems, these protocols are pessimistic.
This additional round allows us to prevent clients from using stale access sets. Specifically, in the request to authorize a data value (message in Figure 2 and Figure 4), the client sends the access set identifier (including the VRV), the solution to the puzzle enabling use of this access set, and the data value. We require that the certificate come from servers in the access set that is chosen for the write operation.

[Fig. 4. Write operation in masking quorum systems: messages and stages of verification of write marker (changes in gray). The figure shows the client and servers S0, S1, ..., Sn, with stages for choosing and verifying the access set, collecting the certificate, and verifying the certificate.]

Write Markers for Probabilistic Quorum Systems

Each server verifies the VRV and that the puzzle solution enables use of the indicated access set before returning authorization (message in Figure 2 and Figure 4). The non-faulty servers that contribute to the certificate all implicitly agree that the access set is not stale, for otherwise they would not agree to the write. This certificate (sent to each server in message in Figure 2 and Figure 4) is stored along with the data value as a write marker. Thus, unlike in probabilistic opaque quorum systems, a verifiable write marker in a probabilistic masking quorum system implies that a stale access set was not used. The reading client verifies the certificate (returned in message ii in Figure 1 and Figure 2) before accepting a vote for a candidate. Because a writing client will be unable to obtain a certificate for a stale access set, votes for such a candidate will be rejected by reading clients. Therefore, the analysis in Section 3 applies without additional complications.
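The client puzzles used above follow Juels and Brainard [16], but no concrete construction is fixed in this passage. A minimal hash-based sketch, assuming a SHA-256 proof-of-work in which the seed would be derived from the VRV (the difficulty level and seed encoding are illustrative assumptions, not the paper's design):

```python
import hashlib
from itertools import count

def solve_puzzle(seed: bytes, difficulty_bits: int) -> int:
    # Brute-force a nonce whose SHA-256 digest (with the seed) lies below a
    # target with `difficulty_bits` leading zero bits; expected work is
    # about 2**difficulty_bits hash evaluations.
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_puzzle(seed: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification costs a single hash, the asymmetry exploited in [16].
    digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_puzzle(b"vrv-epoch-42", 12)   # seed stands in for VRV-derived data
assert verify_puzzle(b"vrv-epoch-42", nonce, 12)
```

Because solving is expensive and seed-dependent while verifying is cheap, servers can check the puzzle solution for the indicated access set at negligible cost before returning authorization.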

5 Additional Related Work

Probabilistic quorum systems were explored in the context of dynamic systems


with non-uniform access strategies by Abraham and Malkhi [19]. Recently, probabilistic quorum systems have been used in the context of security for wireless
sensor networks [20] as well as storage for mobile ad hoc networks [21]. Lee and
Welch make use of probabilistic quorum systems in randomized algorithms for
distributed read-write registers [22] and shared queue data structures [23].
Signed quorum systems, presented by Yu [13], also weaken the requirements of strict quorum systems but use different techniques. However, signed quorum systems have not been analyzed in the context of Byzantine faults, and so they are not presently affected by write markers.
Another implementation of write markers was introduced by Alvisi et al. [24] for purposes different from ours. We achieve the goals of (i) improving the load,
and (ii) increasing the maximum fraction of faults that the system can tolerate by
using write markers to prevent some faulty servers from colluding. In contrast to
this, Alvisi et al. use write markers in order to increase accuracy in estimating the
number of faults present in Byzantine quorum systems, and for identifying faulty
servers that consistently return incorrect results. Because the implementation of
Alvisi et al. does not prevent faulty servers from lying about the write quorums of
which they are members, it cannot be used directly for our purposes. In addition,
our implementation is designed to tolerate Byzantine clients, unlike theirs.

6 Conclusion

We have presented write markers, a way to improve the load of masking and
opaque quorum systems asymptotically. Moreover, our new masking and opaque
probabilistic quorum systems with write markers can tolerate an additional 24%
and 17% of faulty replicas, respectively, compared with the proven bounds of
probabilistic quorum systems without write markers. Write markers achieve this
by limiting the extent to which Byzantine-faulty servers may cooperate to provide incorrect values to clients. We have presented a proposed implementation

of write markers that is designed to be effective even while tolerating Byzantine-faulty clients and servers.

References

1. Lamport, L., Shostak, R., Pease, M.: The Byzantine generals problem. ACM Transactions on Programming Languages and Systems 4, 382–401 (1982)
2. Malkhi, D., Reiter, M.: Byzantine quorum systems. Distributed Computing 11, 203–213 (1998)
3. Naor, M., Wool, A.: The load, capacity, and availability of quorum systems. SIAM Journal on Computing 27, 423–447 (1998)
4. Abd-El-Malek, M., Ganger, G.R., Goodson, G.R., Reiter, M.K., Wylie, J.J.: Fault-scalable Byzantine fault-tolerant services. In: Symposium on Operating Systems Principles (2005)
5. Castro, M., Liskov, B.: Practical Byzantine fault tolerance. In: Symposium on Operating Systems Design and Implementation (1999)
6. Goodson, G.R., Wylie, J.J., Ganger, G.R., Reiter, M.K.: Efficient Byzantine-tolerant erasure-coded storage. In: International Conference on Dependable Systems and Networks (2004)
7. Kong, L., Manohar, D., Subbiah, A., Sun, M., Ahamad, M., Blough, D.: Agile store: Experience with quorum-based data replication techniques for adaptive Byzantine fault tolerance. In: IEEE Symposium on Reliable Distributed Systems, pp. 143–154 (2005)
8. Malkhi, D., Reiter, M.K.: An architecture for survivable coordination in large distributed systems. IEEE Transactions on Knowledge and Data Engineering 12, 187–202 (2000)
9. Martin, J.P., Alvisi, L.: Fast Byzantine consensus. IEEE Transactions on Dependable and Secure Computing 3, 202–215 (2006)
10. Malkhi, D., Reiter, M.K., Wool, A., Wright, R.N.: Probabilistic quorum systems. Information and Computation 170, 184–206 (2001)
11. Merideth, M.G., Reiter, M.K.: Probabilistic opaque quorum systems. In: International Symposium on Distributed Computing (2007)
12. Malkhi, D., Reiter, M.K., Wool, A.: The load and availability of Byzantine quorum systems. SIAM Journal of Computing 29, 1889–1906 (2000)
13. Yu, H.: Signed quorum systems. Distributed Computing 18, 307–323 (2006)
14. Herlihy, M., Wing, J.: Linearizability: A correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems 12, 463–492 (1990)
15. Merideth, M.G., Reiter, M.K.: Write markers for probabilistic quorum systems. Technical Report CMU-CS-07-165R, Computer Science Department, Carnegie Mellon University (2008)
16. Juels, A., Brainard, J.: Client puzzles: A cryptographic countermeasure against connection depletion attacks. In: Network and Distributed Systems Security Symposium, pp. 151–165 (1999)
17. Malkhi, D., Mansour, Y., Reiter, M.K.: Diffusion without false rumors: On propagating updates in a Byzantine environment. Theoretical Computer Science 299, 289–306 (2003)
18. Martin, J.P., Alvisi, L., Dahlin, M.: Minimal Byzantine storage. In: International Symposium on Distributed Computing (2002)
19. Abraham, I., Malkhi, D.: Probabilistic quorums for dynamic systems. Distributed Computing 18, 113–124 (2005)
20. Du, W., Deng, J., Han, Y.S., Varshney, P.K., Katz, J., Khalili, A.: A pairwise key predistribution scheme for wireless sensor networks. ACM Transactions on Information and System Security 8, 228–258 (2005)
21. Luo, J., Hubaux, J.P., Eugster, P.T.: PAN: providing reliable storage in mobile ad hoc networks with probabilistic quorum systems. In: International Symposium on Mobile Ad Hoc Networking and Computing, pp. 1–12 (2003)
22. Lee, H., Welch, J.L.: Applications of probabilistic quorums to iterative algorithms. In: International Conference on Distributed Computing Systems, pp. 21–30 (2001)
23. Lee, H., Welch, J.L.: Randomized shared queues applied to distributed optimization algorithms. In: International Symposium on Algorithms and Computation (2001)
24. Alvisi, L., Malkhi, D., Pierce, E., Reiter, M.K.: Fault detection for Byzantine quorum systems. IEEE Transactions on Parallel and Distributed Systems 12, 996–1007 (2001)

Byzantine Consensus with Unknown Participants


Eduardo A. P. Alchieri¹, Alysson Neves Bessani², Joni da Silva Fraga¹, and Fabíola Greve³

¹ Department of Automation and Systems, Federal University of Santa Catarina (UFSC), Florianópolis, SC, Brazil
[email protected], [email protected]
² Large-Scale Informatics Systems Laboratory, Faculty of Sciences, University of Lisbon, Lisbon, Portugal
[email protected]
³ Department of Computer Science, Federal University of Bahia (UFBA), Bahia, BA, Brazil
[email protected]

Abstract. Consensus is a fundamental building block used to solve many practical problems that appear on reliable distributed systems. In spite of the fact
that consensus has been widely studied in the context of classical networks, few studies have been conducted in order to solve it in the context of dynamic and
self-organizing systems characterized by unknown networks. While in a classical network the set of participants is static and known, in a scenario of unknown
networks, the set and number of participants are previously unknown. This work
goes one step further and studies the problem of Byzantine Fault-Tolerant Consensus with Unknown Participants, namely BFT-CUP. This new problem aims at
solving consensus in unknown networks with the additional requirement that participants in the system can behave maliciously. This paper presents a solution for
BFT-CUP that does not require digital signatures. The algorithms are shown to be
optimal in terms of synchrony and knowledge connectivity among participants in
the system.
Keywords: Consensus, Byzantine fault tolerance, Self-organizing systems.

1 Introduction
The consensus problem [1,2,3,4,5], and more generally the agreement problems, form
the basis of almost all solutions related to the development of reliable distributed systems. Through these protocols, participants are able to coordinate their actions in order
to maintain state consistency and ensure system progress. This problem has been extensively studied in classical networks, where the set of processes involved in a particular
computation is static and known by all participants in the system. Nonetheless, even in
these environments, the consensus problem has no deterministic solution in the presence of a single process crash when entities behave asynchronously [2].
T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 22–40, 2008.
© Springer-Verlag Berlin Heidelberg 2008


In self-organizing systems, such as wireless mobile ad-hoc networks, sensor networks and, in a different context, unstructured peer-to-peer (P2P) networks, solving consensus is even more difficult. In these environments, an initial knowledge about participants in the system is a strong assumption to be adopted, and the number of participants and their knowledge cannot be previously determined. These environments indeed define a new model of distributed systems which has essential differences with respect to the classical one. Thus, it brings new challenges to the specification and resolution of fundamental problems. In the case of consensus, the majority of existing protocols are not suitable for the new dynamic model because their computation model consists of a set of initially known nodes. The only notable exceptions are the works of Cavin et al. [6,7] and Greve et al. [8].
Cavin et al. [6,7] defined a new problem named FT-CUP (fault-tolerant consensus with unknown participants) which keeps the consensus definition but assumes that nodes are not aware of Π, the set of processes in the system. They identified necessary
and sufficient conditions in order to solve FT-CUP concerning knowledge about the
system composition and synchrony requirements regarding the failure detection. They
concluded that in order to solve FT-CUP in a scenario with the weakest knowledge connectivity, the strongest synchrony conditions are necessary, which are represented by failure detectors of the class P [4].
Greve and Tixeuil [8] show that there is in fact a trade-off between knowledge connectivity and synchrony for consensus in fault-prone unknown networks. They provide
an alternative solution for FT-CUP which requires minimal synchrony assumptions;
indeed, the same assumptions already identified to solve consensus in a classical environment, which are represented by failure detectors of the class S [4]. The approach followed on the design of their FT-CUP protocol is modular: Initially, algorithms
identify a set of participants in the network that share the same view of the system.
Subsequently, any classical consensus like for example, those initially designed for
traditional networks can be reused and executed by these participants.
Our work extends these results and studies the problem of Byzantine Fault-Tolerant Consensus with Unknown Participants (BFT-CUP). This new problem aims at solving CUP in unknown networks with the additional requirement that participants in
the system can behave maliciously [1]. The main contribution of the paper is then
the identification of necessary and sufficient conditions in order to solve BFT-CUP.
More specifically, an algorithm for solving BFT-CUP is presented for a scenario which
does not require the use of digital signatures (a major source of performance overhead on Byzantine fault-tolerant protocols [9]). Finally, we show that this algorithm
is optimal in terms of synchrony and knowledge connectivity requirements,
establishing then the necessary and sufficient conditions for BFT-CUP solvability in
this context.
The paper is organized in the following way. Section 2 presents our system model
and the concept of participant detectors, among other preliminary definitions used in
this paper. Section 3 describes a basic dissemination protocol used for process communication. BFT-CUP protocols and respective necessary and sufficient proofs are described in Section 4. Section 5 presents some comments about our protocol. Section 6
presents our final remarks.

2 Preliminaries
2.1 System Model
We consider a distributed system composed of a finite set of n processes (also called participants or nodes) drawn from a larger universe U. In a known network, Π and n are known to every participating process, while in an unknown network, a process i may only be aware of a subset Π_i ⊆ Π.
Processes are subject to Byzantine failures [1], i.e., they can deviate arbitrarily from
the algorithm they are specified to execute and work in collusion to corrupt the system
behavior. Processes that do not follow their algorithm in some way are said to be faulty.
A process that is not faulty is said to be correct. Despite the fact that a process does
not know all participants of the system, it does know the expected maximum number of processes that may fail, denoted by f. Moreover, we assume that all processes have a
unique id, and that it is infeasible for a faulty process to obtain additional ids to be able
to launch a sybil attack [10] against the system.
Processes communicate by sending and receiving messages through authenticated and reliable point-to-point channels established between known processes¹. Authenticity of messages disseminated to a not yet known node is verified through message channel redundancy, as explained in Section 3. A process i may only send a message directly to another process j if j ∈ Π_i, i.e., if i knows j. Of course, if i sends a message to j such that i ∉ Π_j, upon receipt of the message, j may add i to Π_j; i.e., j now knows i and becomes able to send messages to it. We assume the existence of an underlying routing layer resilient to Byzantine failures [11,12,13], in such a way that if j ∈ Π_i and there is sufficient network connectivity, then i can send a message reliably to j. For example,
[12] presents a secure multipath routing protocol that guarantees a proper communication between two processes provided that there is at least one path between these
processes that is not compromised, i.e., none of its processes or channels are faulty.
There are no assumptions on the relative speed of processes or on message transfer
delays, i.e., the system is asynchronous. However, the protocol presented in this paper
uses an underlying classical Byzantine consensus that could be implemented over an
eventually synchronous system [14] (e.g., Byzantine Paxos [9]) or over a completely
asynchronous system (e.g., using a randomized consensus protocol [5,15,16]). Thus,
our protocol requires the same level of synchrony required by the underlying classical
Byzantine consensus protocol.
2.2 Participant Detectors
To solve any nontrivial distributed problem, processes must somehow get a partial
knowledge about the others if some cooperation is expected. The participant detector oracle, namely PD, was proposed to handle this subset of known processes [6]. It
can be seen as a distributed oracle that provides hints about the participating processes
in the computation. Let i.PD be defined as the participant detector of a process i. When
¹ Without authenticated channels it is not possible to tolerate process misbehavior in an asynchronous system, since a single faulty process can play the roles of all other processes to some (victim) process.

queried by i, i.PD returns a subset of processes in Π with whom i can collaborate. Let i.PD(t) be the query of i at time t. The information provided by i.PD can evolve between queries, but must satisfy the following two properties:

Information Inclusion: The information returned by the participant detectors is non-decreasing over time, i.e., ∀i ∈ Π, ∀t ≤ t′: i.PD(t) ⊆ i.PD(t′);

Information Accuracy: The participant detectors do not make mistakes, i.e., ∀i ∈ Π, ∀t: i.PD(t) ⊆ Π.
Participant detectors provide an initial context about participants present in the system, from which it is possible to expand the knowledge about Π. Thus, the participant detector abstraction enriches the system with a knowledge connectivity graph. This graph is directed, since the knowledge provided by participant detectors is not necessarily bidirectional [6].

Definition 1. Knowledge Connectivity Graph: Let G_di = (V, E) be the directed graph representing the knowledge relation determined by the PD oracle. Then, V = Π and (i, j) ∈ E iff j ∈ i.PD, i.e., i knows j.

Definition 2. Undirected Knowledge Connectivity Graph: Let G = (V, E) be the undirected graph representing the knowledge relation determined by the PD oracle. Then, V = Π and (i, j) ∈ E iff j ∈ i.PD or i ∈ j.PD, i.e., i knows j or j knows i.
Based on the properties of the knowledge connectivity graph, some classes of participant detectors have been proposed to solve CUP [6] and FT-CUP [7,8]. Before defining how a participant detector encapsulates the knowledge of a system, let us define some graph notation. We say that a component G_c of G_di is k-strongly connected if for any pair (v_i, v_j) of nodes in G_c, v_i can reach v_j through k node-disjoint paths. A component G_s of G_di is a sink component when there is no path from a node in G_s to other nodes of G_di, except nodes in G_s itself. In this paper we use the weakest participant detector defined to solve FT-CUP, which is called k-OSR [8].
Definition 3. k-One Sink Reducibility (k-OSR) PD: The knowledge connectivity graph G_di, which represents the knowledge induced by PD, satisfies the following conditions:
1. the undirected knowledge connectivity graph G obtained from G_di is connected;
2. the directed acyclic graph obtained by reducing G_di to its k-strongly connected components has exactly one sink;
3. for any two k-strongly connected components G1 and G2, if there is a path from G1 to G2, then there are k node-disjoint paths from G1 to G2.
To better illustrate Definition 3, Figure 1 presents two graphs G_di induced by a k-OSR participant detector. Figures 1(a) and 1(b) show knowledge relations induced by participant detectors of the classes 2-OSR and 3-OSR, respectively. For example, in Figure 1(a), the value returned by 1.PD is the subset {2, 3} ⊆ Π.
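Condition 2 of Definition 3 can be checked mechanically by computing the strongly connected components of the knowledge graph and counting the sinks of its condensation. A sketch in Python (the graph encoding is an illustrative assumption; conditions 1 and 3, undirected connectivity and the k node-disjoint paths, are omitted for brevity):

```python
from collections import defaultdict

def sccs(nodes, edges):
    # Kosaraju's algorithm: record DFS finish order, then sweep the
    # reversed graph in reverse finish order; each sweep yields one SCC.
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    seen, order = set(), []
    def dfs1(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)
    comp = {}
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if v not in comp:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if u not in comp:
            dfs2(u, c)
            c += 1
    return comp

def sink_component_count(nodes, edges):
    # A component with an edge leaving it is not a sink of the condensation.
    comp = sccs(nodes, edges)
    has_out = {comp[u] for u, v in edges if comp[u] != comp[v]}
    return len(set(comp.values()) - has_out)

# Toy knowledge graph: components {1,2} and {3,4} both reach the sink {5,6}.
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 1), (3, 4), (4, 3), (5, 6), (6, 5), (2, 5), (4, 6)]
print(sink_component_count(nodes, edges))   # 1, so condition 2 holds
```

Removing the cross edges (2, 5) and (4, 6) would leave three sink components and violate the definition.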
[Fig. 1. Knowledge Connectivity Graphs Induced by k-OSR Participant Detectors: (a) a 2-OSR graph; (b) a 3-OSR graph. Each graph contains components A and B and a single sink component.]

In our algorithms, we assume that for each process i, its participant detector i.PD is queried exactly once at the beginning of the protocol execution. This can be implemented by caching the result of the first query to i.PD and returning that value in subsequent calls. This ensures that the partial view about the initial composition of the system is consistent for all nodes in the system, which defines a common knowledge connectivity graph G_di. Also, in this work we say that some participant p is a neighbor of another participant i iff p ∈ i.PD.
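The query-once discipline described above can be sketched as a thin caching wrapper (the class and oracle names below are illustrative, not from the paper):

```python
class CachedParticipantDetector:
    """Return the first query's answer on every subsequent query.

    This trivially preserves Information Inclusion (the returned view never
    changes) and Information Accuracy whenever the wrapped oracle is
    accurate. `query_oracle` stands in for the real i.PD and is an
    assumption of this sketch, not part of the paper's model.
    """

    def __init__(self, query_oracle):
        self._query = query_oracle
        self._cached = None

    def pd(self):
        if self._cached is None:            # first call: consult the oracle
            self._cached = frozenset(self._query())
        return self._cached                 # later calls: cached initial view

detector = CachedParticipantDetector(lambda: {2, 3})
assert detector.pd() == detector.pd() == frozenset({2, 3})
```

Freezing the first answer is what makes the induced graph G_di a single, common object that all of the later phases can reason about.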
2.3 The Consensus Problem
In a distributed system, the consensus problem consists of ensuring that all correct processes eventually decide the same value, previously proposed by some processes in the
system. Thus, each process i proposes a value vi and all correct processes decide on
some unique value v among the proposed values. Formally, consensus is defined by the
following properties [4]:

– Validity: if a correct process decides v, then v was proposed by some process;
– Agreement: no two correct processes decide differently;
– Termination: every correct process eventually decides some value²;
– Integrity: every correct process decides at most once.

The Byzantine Fault-Tolerant Consensus with Unknown Participants problem, namely BFT-CUP, proposes to solve consensus in unknown networks with the additional requirement that a bounded number of participants in the system can behave maliciously.

3 Reachable Reliable Broadcast


This section introduces a new primitive, namely reachable reliable broadcast, used by processes of the system to communicate. It is invoked by two basic operations:

– reachable send(m, p): through which the participant p sends the message m to all reachable participants from p. A participant q is reachable from another participant p if there is enough connectivity from p to q (see below). In this case, q is a receiver of messages disseminated by p.
– reachable deliver(m, p): invoked by the receiver to deliver a message m disseminated by the participant p.

² If a randomized protocol such as [5,15,17] is used as the underlying Byzantine consensus, termination is ensured only with probability 1.
This primitive should satisfy the following properties:

– Validity: If a correct participant p disseminates a message m, then m is eventually delivered by a correct participant reachable from p, or there is no correct participant reachable from p;
– Agreement: If a correct participant delivers some message m disseminated by a correct participant p, then all correct participants reachable from p eventually deliver m;
– Integrity: For any message m, every correct participant p delivers m only if m was previously disseminated by some participant p′; in this case, p is reachable from p′.
Notice that these properties establish a communication primitive with specification
similar to the usual reliable broadcast [4,5,15]. Nonetheless, the proposed primitive
ensures the delivery to all correct processes reachable in the system.
Implementation. The main idea of our implementation is that participants execute a flood of their messages to all reachable processes, which, in turn, will deliver these messages as soon as their authenticity has been proved. Assuming a k-OSR PD, a participant q is reachable from a participant p if there is enough connectivity in the knowledge graph, i.e., if there are at least 2f + 1 node-disjoint paths from p to q (k ≥ 2f + 1). This connectivity is necessary to ensure that all reachable processes will be able to receive and authenticate messages.
In our implementation, formally described in Algorithm 1, a process i disseminates a message m through the system by executing the procedure reachable send. In this procedure (line 6), i sends m to its neighbors (i.e., processes in i.PD); when m is received at some process p, p forwards m to its neighbors, and so on, until m arrives at all reachable participants (line 17). Moreover, p stores m together with the route traversed by m in a buffer (line 11). Also, p delivers m if it has received m through f + 1 node-disjoint paths (lines 13-14), i.e., the authenticity of m has been verified. Afterward, since m has been delivered, p removes it from the buffer of received messages (line 15). The function computeRoutes(m.message, i.received_msgs) computes the number of node-disjoint paths through which m.message has been received at participant i.
An important feature of this dissemination is that each message carries the accumulated route corresponding to the path traversed from the sender to some destination. A participant will process a received message only if the participant that is sending (or forwarding) this message appears at the end of the accumulated route (line 8). This solution is based on the approach used in [18], and it enforces that each participant appends itself at the end of the routing information in order to send or forward a message. Nonetheless, a malicious participant is able to modify the accumulated route (removing or adding participants) and to modify or block the message being propagated. Notice, however, that the connectivity of the knowledge graph (k ≥ 2f + 1) ensures that messages will be received at all reachable participants. Moreover, since a process delivers a message only

Algorithm 1. Dissemination algorithm executed at participant i.

constants:
1. f : int  // upper bound on the number of failures

variables:
2. i.received_msgs : set of ⟨message, route⟩ tuples  // set of received messages

message:
3. REACHABLE FLOODING:  // struct of this message
4.   message : value to flood  // value to be disseminated
5.   route : ordered list of nodes  // path traversed by message

** Initiator Only **
procedure: reachable send(message, sender)  // sender = i
6. ∀j ∈ i.PD, send REACHABLE FLOODING(message, sender) to j;

** All Nodes **
INIT:
7. i.received_msgs ← ∅;

upon receipt of REACHABLE FLOODING(m.message, m.route) from j
8.  if getLastElement(m.route) = j ∧ i ∉ m.route then
9.    append(m.route, i);
10.   initiator ← getFirstElement(m.route);
11.   i.received_msgs ← i.received_msgs ∪ {⟨m.message, m.route⟩};
12.   routes ← computeRoutes(m.message, i.received_msgs);
13.   if routes ≥ f + 1 then
14.     trigger reachable deliver(m.message, initiator);
15.     i.received_msgs ← i.received_msgs \ {⟨m.message, ∗⟩};
16.   end if
17.   ∀z ∈ i.PD \ {j}, send REACHABLE FLOODING(m.message, m.route) to z;
18. end if
after it has been received through f + 1 node-disjoint paths, it is able to verify its authenticity. These measures prevent the delivery of forged messages (generated by malicious participants), because their authenticity cannot be verified by correct processes.
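The function computeRoutes is left abstract above; counting the maximum number of node-disjoint routes exactly is harder, but a simple greedy pass already yields a sound lower bound, and delivering only when even this lower bound reaches f + 1 remains safe. A sketch (the route encoding and greedy strategy are illustrative assumptions):

```python
def compute_routes(routes):
    """Lower-bound the number of node-disjoint routes a message arrived on.

    Each route is an accumulated path [initiator, ..., last forwarder]; all
    routes for one message share their endpoints, so only interior nodes
    must be pairwise disjoint. Shorter routes are tried first, since they
    consume fewer interior nodes.
    """
    used, selected = set(), 0
    for route in sorted(routes, key=len):
        interior = set(route[1:-1])
        if interior & used:
            continue            # shares an interior node with a chosen route
        used |= interior
        selected += 1
    return selected

routes = [
    ["p", "a", "q"],
    ["p", "b", "c", "q"],
    ["p", "a", "d", "q"],   # reuses 'a', so it adds no independent evidence
]
print(compute_routes(routes))   # 2
```

With f = 1 the count 2 meets the f + 1 threshold, so the message would be delivered; a single malicious node on a shared interior position cannot inflate the count.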
An undesirable property of the proposed solution is that the same message, sent by some participant, could be delivered more than once by its receivers. This property does not affect the use of this protocol in our consensus protocol (Section 4); thus, we do not deal with this limitation of the algorithm. However, it can be easily solved by using buffers to store delivered messages, which must have unique identifiers.

Additionally, each receiver of a message disseminated by some participant p is able to send back a reply to p using some routing protocol resilient to Byzantine failures [11,12,13]. Our BFT-CUP protocol (Section 4) uses this algorithm to disseminate messages.
Sketch of Proof. The correctness of this protocol is based on the proof of the properties
defined for the reachable reliable broadcast.

Validity: By assumption, the connectivity of the system is k ≥ 2f + 1. Thus, according to Definition 3, there are at least 2f + 1 node-disjoint paths from the sender of a message m to the receivers (nodes that are reachable from the sender). Moreover, as validity is established over messages sent by correct participants (a correct sender), there are at least f + 1 node-disjoint paths formed only by correct participants, through which it is guaranteed that the same message m will reach the correct receivers. In this case, the predicate of line 8 will be true at least f + 1 times, and the authenticity of m can be verified through redundancy. This is done by the execution of lines 9–12, which are responsible for maintaining information regarding the different routes from which m has been received. Whenever the message authenticity is proved, i.e., m has been received through at least f + 1 different routes (line 13), the delivery of m is authorized by the invocation of reachable deliver (line 14).
Agreement: As the agreement is established over messages sent by correct participants,
this proof is identical to the validity proof.
Integrity: A message is delivered only after its reception through f + 1 node-disjoint paths (lines 13-14), which guarantees that the message is authentic, i.e., it was really sent by its sender. Thus, a malicious participant j is not able to forge that a message m was sent by a participant i, because the authenticity of m will not be proven: a receiver r will not be able to find f + 1 node-disjoint paths from i to r through which m has been received. Even with a collusion of up to f malicious participants, r will obtain at most f node-disjoint paths through which m was received from i (each of these f paths could contain one malicious participant).


4 BFT-CUP: Byzantine Consensus with Unknown Participants


This section presents our solution for BFT-CUP. Our protocol is based on the dissemination algorithm presented in Section 3, which, together with the underlying routing layer resilient to Byzantine failures, hides all details related to participant communication. Thereafter, as in [8], the consensus protocol with unknown participants is divided into three phases. In the first phase, called participants discovery (Section 4.1), each participant increases its knowledge about other processes in the system, discovering the maximum possible number of participants that are present in some computation. The second phase, called sink component determination (Section 4.2), defines which participants belong to the sink component of the knowledge graph induced by a k-OSR PD. Thus, each participant will be able to determine whether it belongs to the sink component or not. In the last phase (Section 4.3), members of the sink component execute a classical Byzantine fault-tolerant consensus and disseminate the decision value to other participants in the system. The number of participants in the sink component, namely n_sink, should be enough in order to execute a classical Byzantine fault-tolerant consensus. Usually n_sink ≥ 3f + 1, to run, for example, Byzantine Paxos [9,19].
4.1 Participants Discovery
The first step to solve consensus in a system with unknown participants is to provide
processes with the maximum possible knowledge about the system. Notice that, through

30

E.A.P. Alchieri et al.

its local participant detector, a process is able to get an initial knowledge about the
system that is not enough to solve BFT-CUP. Then, a process expands this knowledge
by executing the DISCOVERY protocol, presented in Algorithm 2. The main idea is
that each participant i broadcasts a message requesting information about neighbors of
each reachable participant, making a sort of breadth-first search in the knowledge graph.
At the end of the algorithm, i obtains the maximal set of reachable participants, which
represents the participants known by i (a partial view of the system).
The algorithm uses three sets:
1. i.known: set containing the identifiers of all processes known by i;
2. i.msg_pend: set containing the identifiers of processes that should send a message to i, i.e., for each j ∈ i.msg_pend, i should receive a message from j;
3. i.nei_pend: set containing the identifiers of processes that i knows but whose neighbors i does not yet fully know (i is still waiting for information about them), i.e., for each ⟨j, j.neighbor⟩ ∈ i.nei_pend, i knows j but does not know all neighbors of j.
In the initialization phase of the algorithm for participant i, the set i.known is updated to i itself plus its neighbors, returned by i.PD, and the set i.msg_pend to its neighbors (line 7). Moreover, a message requesting information about neighbors is disseminated to all participants reachable from i (line 8). When a participant p delivers this message, p sends back to i a reply indicating its neighbors (line 9).
Upon receipt of a reply at participant i, the set of known participants is updated, along with the set of pending neighbors3 and the set of pending messages (lines 10-12). The next step is to verify whether i has acquired knowledge about any new participant (lines 13-16). Thus, i gets to know another participant j if at least f + 1 other processes known by i reported to i that j is their neighbor (line 13). After this verification, the set of pending neighbors is updated (lines 17-21), according to the new participants discovered.
To determine whether there is still some participant to be discovered, i uses the sets i.msg_pend and i.nei_pend, which store the pendencies related to the replies received by i. The algorithm ends when at most f pendencies remain (lines 22-24). The intuition behind this condition is that if there are at most f pendencies at process i, then i has already discovered all processes reachable from it, because k ≥ 2f + 1. Thus, the algorithm ends by returning the set of participants discovered by i (line 23), which contains all participants (correct or faulty) reachable from it. Algorithm 2 satisfies the properties stated by Lemma 1.
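As an illustration, the bookkeeping performed on each reply (the f + 1 confirmation rule of lines 13-16, the cleanup of lines 17-21 and the termination test of lines 22-24) can be sketched in Python. This is a hypothetical sketch under our own state layout and naming, not the authors' implementation:

```python
def handle_set_neighbor(state, sender, neighbors, f):
    """Process one SET_NEIGHBOR reply; return the known set once <= f pendencies remain."""
    state["known"].add(sender)                       # lines 10-12
    state["nei_pend"][sender] = set(neighbors)
    state["msg_pend"].discard(sender)

    # Lines 13-16: a new participant j is trusted once more than f entries of
    # nei_pend name it, so at least one of the reporters must be correct.
    reports = {}
    for nbrs in state["nei_pend"].values():
        for j in nbrs:
            reports[j] = reports.get(j, 0) + 1
    for j, count in reports.items():
        if count > f and j not in state["known"]:
            state["known"].add(j)
            state["msg_pend"].add(j)

    # Lines 17-21: drop entries whose reported neighbors are all already known.
    for j in [p for p, nbrs in state["nei_pend"].items() if nbrs <= state["known"]]:
        del state["nei_pend"][j]

    # Lines 22-24: terminate when at most f pendencies remain.
    if len(state["nei_pend"]) + len(state["msg_pend"]) <= f:
        return state["known"]
    return None
```

With f = 1, a node is accepted only after two distinct known processes name it, so a single liar cannot forge a participant, matching the accuracy argument of Lemma 1.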
Lemma 1. Consider Gdi, a knowledge graph induced by a k-OSR PD. Let f < ⌈k/2⌉ be the number of nodes that may fail. Algorithm DISCOVERY executed by each correct participant p satisfies the following properties:
Termination: p terminates the execution of the algorithm and returns a set of known processes;
Accuracy: the algorithm returns the maximal set of processes reachable from p in Gdi.
3 If i reaches p, i also reaches all neighbours of p and should receive a reply to its initial dissemination (line 8) from all of them.

Byzantine Consensus with Unknown Participants

31

Algorithm 2. Algorithm DISCOVERY executed at participant i.

constant:
1. f : int // upper bound on the number of failures

variables:
2. i.known : set of nodes // set of known nodes
3. i.nei_pend : set of ⟨node, node.neighbor⟩ tuples // i does not know all neighbors of node
4. i.msg_pend : set of nodes // nodes from which i is waiting for messages (replies)

message:
5. SET_NEIGHBOR : // struct of the message SET_NEIGHBOR
6.   neighbor : set of nodes // neighbors of the node that is sending the message

** All Nodes **
INIT:
7. i.known ← {i} ∪ i.PD; i.nei_pend ← ∅; i.msg_pend ← i.PD;
8. reachable_send(GET_NEIGHBOR, i);

upon execution of reachable_deliver(GET_NEIGHBOR, sender)
9. send SET_NEIGHBOR(i.PD) to sender;

upon receipt of SET_NEIGHBOR(m.neighbor) from sender
10. i.known ← i.known ∪ {sender};
11. i.nei_pend ← i.nei_pend ∪ {⟨sender, m.neighbor⟩};
12. i.msg_pend ← i.msg_pend \ {sender};
13. if (∃ j : #⟨∗, j ∈ ∗.neighbor⟩ ∈ i.nei_pend > f) ∧ (j ∉ i.known) then
14.   i.known ← i.known ∪ {j};
15.   i.msg_pend ← i.msg_pend ∪ {j};
16. end if
17. for all ⟨j, j.neighbor⟩ ∈ i.nei_pend do
18.   if (∀ z ∈ j.neighbor : z ∈ i.known) then
19.     i.nei_pend ← i.nei_pend \ {⟨j, j.neighbor⟩};
20.   end if
21. end for
22. if (|i.nei_pend| + |i.msg_pend|) ≤ f then
23.   return i.known;
24. end if
Sketch of Proof. Termination: In the worst case, the algorithm ends when p receives replies from at least all correct reachable participants (line 22). By the dissemination protocol properties, even in the presence of f < ⌈k/2⌉ failures, all messages disseminated by p are delivered by their correct receivers (processes reachable from p). Thus, each correct participant reachable from p receives a request (line 8) and sends back a reply (line 9) that is received by p (lines 10-24). Then, as the set of participants is finite, it is guaranteed that p receives replies from at least all correct reachable participants and ends the algorithm by returning a set of known processes.
Accuracy: The algorithm only ends when at most f pendencies remain, which may be divided between processes that supplied information about neighbors that do not exist in the system (i.nei_pend) and processes from which p is still waiting for their messages/replies (i.msg_pend). Moreover, each participant z reachable from p is a neighbor of at least 2f + 1 other participants, because f < ⌈k/2⌉ (i.e., k ≥ 2f + 1). Now, we have to consider two cases:
- If z is malicious and does not send back a reply to p (line 9), then p computes messages (replies) from at least f + 1 correct neighbors of z, discovering z (lines 13-16).
- If z is correct, in the worst case, the message from z to p is delayed and f neighbors of z are malicious and do not inform p that z is in the system. However, as f < ⌈k/2⌉, there remain f + 1 correct neighbors of z in the system that inform p about the presence of z.
As the algorithm only ends when at most f pendencies remain, in both cases it is guaranteed that p ends only after discovering z, even if it first computes messages from the f malicious processes.

4.2 Sink Component Determination
The objective of this phase is to define which participants belong to the sink component of the knowledge graph induced by a k-OSR PD. More specifically, through Algorithm 3 (SINK), each participant is able to determine whether or not it is a member of the sink component. The idea behind this algorithm is that, after the execution of the DISCOVERY procedure, members of the sink component obtain the same partial view of the system, whereas in the other components, nodes have strictly more knowledge than in the sink, i.e., each node knows at least the members of the component to which it belongs and the members of the sink (see Definition 3).
In the initialization phase of the algorithm, participant i executes the DISCOVERY procedure in order to obtain its partial view of the system (line 8) and sends this view to all reachable/known participants (line 10). When this message is delivered by some participant j, j sends back an ack response to i if it has the same knowledge as i (i.e., j belongs to the same component as i). Otherwise, j sends back a nack response (lines 11-15).
Upon receipt of a reply (lines 16-27), i updates the set of processes that have already answered (line 16). Moreover, if the reply received is a nack, the set of processes that belong to other components (i.nacked) is updated (line 18), and if the number of processes that do not belong to the same component as i is greater than f (line 19), i concludes that it does not belong to the sink component (lines 20-21). This condition holds because the system has at least 3f + 1 processes in the sink, known by all participants, that have strictly less knowledge than processes not in the sink (Lemma 1). On the other hand, if i has received replies from all known processes, excluding f possibly faulty ones (line 24), and the number of processes that belong to other components is not greater than f, i concludes that it belongs to the sink component (lines 25-26). This condition holds because processes in the sink receive messages only from members of this component. Moreover, in both cases, a collusion of f malicious participants cannot lead a process to decide incorrectly. Lemma 2 states the properties satisfied by Algorithm 3.
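The two exit conditions just described can be condensed into one small predicate. The following Python fragment is a hypothetical sketch under our own naming, not the paper's code; it captures only the decision rule of lines 19-21 and 24-26:

```python
def sink_decision(known, responded, nacked, f):
    """Return True/False once sink membership is decided, or None while undecided.

    known:     the partial view obtained by DISCOVERY
    responded: processes that have answered (includes i itself)
    nacked:    responders whose partial view differs from i's
    """
    # Lines 19-21: f + 1 differing views cannot all come from malicious nodes,
    # so some correct process has strictly more knowledge: i is not in the sink.
    if len(nacked) >= f + 1:
        return False
    # Lines 24-26: replies from all known processes except at most f faulty
    # ones, with at most f nacks, mean i shares its view with its component.
    if len(responded) >= len(known) - f:
        return True
    return None
```

With f = 1 and five known processes, two nacks settle the answer to false, while four matching replies settle it to true; anything less leaves the node waiting.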

Algorithm 3. Algorithm SINK executed at participant i.

constant:
1. f : int // upper bound on the number of failures

variables:
2. i.known : set of nodes // set of known nodes
3. i.responded : set of nodes // set of nodes that have sent a reply to i
4. i.nacked : set of nodes // set of processes not in the same component as i
5. i.in_the_sink : boolean // is i in the sink?

message:
6. RESPONSE : // struct of the message RESPONSE
7.   ack/nack : boolean

** All Nodes **
INIT:
8. i.known ← DISCOVERY();
9. i.responded ← {i}; i.nacked ← ∅;
10. reachable_send(i.known, i);

upon execution of reachable_deliver(sender.known, sender)
11. if i.known = sender.known then
12.   send RESPONSE(ack) to sender;
13. else
14.   send RESPONSE(nack) to sender;
15. end if

upon receipt of RESPONSE(m) from sender
16. i.responded ← i.responded ∪ {sender};
17. if m.nack then
18.   i.nacked ← i.nacked ∪ {sender};
19.   if |i.nacked| ≥ f + 1 then
20.     i.in_the_sink ← false;
21.     return ⟨i.in_the_sink, i.known⟩;
22.   end if
23. end if
24. if |i.responded| ≥ |i.known| - f then
25.   i.in_the_sink ← true;
26.   return ⟨i.in_the_sink, i.known⟩;
27. end if
Lemma 2. Consider a k-OSR PD. Let f < ⌈k/2⌉ be the number of nodes that may fail. Algorithm SINK, executed by each correct participant p of a system that has at least 3f + 1 nodes in the sink component, satisfies the following properties:
Termination: p terminates the execution by deciding whether it belongs (true) or does not belong (false) to the sink;
Accuracy: p is in the unique k-strongly connected sink component iff algorithm SINK returns true.

Sketch of Proof. Termination: For each participant p, the algorithm returns in two cases: (i) when it receives f + 1 replies from processes that belong to other components (processes not in the sink, line 19); or (ii) when it receives replies from at least all correct known processes (processes in the sink, line 24). By the properties of the dissemination protocol, even in the presence of f < ⌈k/2⌉ failures, all messages disseminated by p are delivered by their receivers (processes reachable from p). Thus, each correct participant known by p (reachable from p) receives the request (line 10) and sends back a reply (lines 11-15) that is received by p (lines 16-27). Then, it is guaranteed that either (i) or (ii) always occurs.
Accuracy: By Lemma 1, after execution of the DISCOVERY algorithm, each correct participant discovers the maximal set of participants reachable from it. Then, by Lemma 1 and by the k-OSR PD properties, it is guaranteed that all correct processes that belong to the same component obtain the same partial view of the system. Thus, as members of the sink component receive replies only from members of this component, it is guaranteed that these participants end correctly (line 26). Moreover, as the sink has at least 3f + 1 nodes, members of other components know at least 2f + 1 correct members of the sink (Lemma 1). Then, before making a wrong decision, these members must compute at least f + 1 replies from correct members of the sink (which have strictly less knowledge, due to Lemma 1), which makes it possible for correct members not in the sink to end correctly (line 21).

4.3 Achieving Consensus
This is the last phase of the protocol for solving BFT-CUP. Here, the main idea is to make the members of the sink component execute a classical Byzantine consensus and send the decision value to the other participants of the system. The optimal resilience of algorithms that solve classical consensus is 3f + 1 [3,9]. Thus, at least 3f + 1 participants are necessary in the sink component.
Algorithm 4 (CONSENSUS) presents this protocol. In the initialization, each participant executes the SINK procedure (line 11) in order to get its partial view of the system and decide whether or not it belongs to the sink component. Depending on whether or not the node belongs to the sink, two distinct behaviors are possible:
1. Nodes in the sink execute a classical consensus (line 13) and send the decision value to the other participants (lines 18 and 20-24). By construction, all correct nodes in the sink component share the same partial view of the system (exactly the members of the sink, Lemma 1). Thus, these nodes know at least 2f + 1 correct members that belong to the sink component, which makes it possible to satisfy the properties of classical Byzantine consensus (Section 2.3);
2. Other nodes (in the remaining components) do not participate in the classical consensus. These nodes request the decision value from all known nodes, i.e., all reachable nodes, which includes all nodes in the sink (line 15). Each node decides for a value v only after it has received v from at least f + 1 other participants, ensuring that v is gathered from at least one correct participant (lines 25-31).
Theorem 1 shows that Algorithm 4 solves the BFT-CUP problem as defined in Section 2.3 with the stated participant detector and connectivity requirements.
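For nodes outside the sink, the decision rule of lines 25-31 reduces to waiting for f + 1 matching reports. A hypothetical Python sketch of that rule (names are ours, not the paper's):

```python
from collections import Counter

def try_decide(values, f):
    """values: set of (sender, value) pairs received so far, at most one per sender.

    Return the decision once some value has been reported by f + 1 distinct
    senders, which guarantees at least one report came from a correct process;
    return None while no value has enough support.
    """
    counts = Counter(v for _, v in values)
    for v, c in counts.items():
        if c >= f + 1:
            return v
    return None
```

Storing the pairs in a set mirrors line 26 of the algorithm: a malicious sender that repeats the same report is counted only once.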

Algorithm 4. Algorithm CONSENSUS executed at participant i.

constant:
1. f : int // upper bound on the number of failures

input:
2. i.initial : value // proposal value (input)

variables:
3. i.in_the_sink : boolean // is i in the sink?
4. i.known : set of nodes // partial view of i
5. i.decision : value // decision value
6. i.asked : set of nodes // nodes that have requested the decision value
7. i.values : set of ⟨node, value⟩ tuples // reported decisions

message:
8. SET_DECISION : // struct of the message SET_DECISION
9.   decision : value // the decided value

** All Nodes **
INIT: {Main Decision Task}
10. i.decision ← ⊥; i.values ← ∅; i.asked ← ∅;
11. ⟨i.in_the_sink, i.known⟩ ← SINK();
12. if i.in_the_sink then
13.   Consensus.propose(i.initial); // underlying Byzantine consensus with all p ∈ i.known
14. else
15.   reachable_send(GET_DECISION, i);
16. end if

** Node In Sink **
upon Consensus.decide(v)
17. i.decision ← v;
18. ∀ j ∈ i.asked, send SET_DECISION(i.decision) to j;
19. return i.decision;

upon execution of reachable_deliver(GET_DECISION, sender)
20. if i.decision = ⊥ then
21.   i.asked ← i.asked ∪ {sender};
22. else
23.   send SET_DECISION(i.decision) to sender;
24. end if

** Node Not In Sink **
upon receipt of SET_DECISION(m.decision) from sender
25. if i.decision = ⊥ then
26.   i.values ← i.values ∪ {⟨sender, m.decision⟩};
27.   if #⟨∗, m.decision⟩ ∈ i.values ≥ f + 1 then
28.     i.decision ← m.decision;
29.     return i.decision;
30.   end if
31. end if

Theorem 1. Consider a classical Byzantine consensus protocol. Algorithm CONSENSUS solves BFT-CUP in spite of f < ⌈k/2⌉ failures, if a k-OSR PD is used and assuming at least 3f + 1 participants in the sink.
Sketch of Proof. In this proof we have to consider two cases:
Processes in the sink: All correct participants in the sink component determine that they belong to the sink (Lemma 2) (line 12) and start the execution of an underlying classical Byzantine consensus algorithm (line 13). Then, as the sink has at least 2f + 1 correct nodes, it is guaranteed that all properties of classical consensus will be met, i.e., validity, integrity, agreement and termination. Thus, nodes in the sink obtain the decision value (line 17), send this value to the other participants (line 18) and return the decided value to the application (line 19), ensuring termination. Whenever a process in the sink receives a request for the decision from another process (lines 20-24), it will send the value if it has already decided (line 23); otherwise, it will store the sender's identity in order to send the decision value later (line 18), after the consensus has been achieved.
Processes not in the sink: Processes not in the sink request the decision value from all participants in the sink (line 15). Notice that if there is enough connectivity (k ≥ 2f + 1), nodes in the sink are reachable from any node of the system. Moreover, by the properties of the reachable reliable broadcast, all correct participants in the sink will receive the requests sent by correct participants not in the sink, even in the presence of f < ⌈k/2⌉ failures (lines 20-24). Thus, as there are at least 2f + 1 correct participants in the sink able to send back replies to these requests (lines 18, 23), it is guaranteed that nodes not in the sink will receive at least f + 1 messages with the same decision value (lines 25-31) and the predicate of line 27 will become true, allowing the process to terminate and return the decided value (line 28). Moreover, a collusion of up to f malicious participants cannot lead a process to decide for incorrect values (line 27), thus guaranteeing agreement. Integrity is ensured through the verification of the predicate on line 25, by which each correct participant decides only once. Notice that validity is ensured through the underlying classical Byzantine consensus protocol, i.e., the decided value is a value proposed by nodes in the sink. This proves that a k-OSR PD is sufficient to solve BFT-CUP.

4.4 Necessity of k-OSR Participant Detector to Solve BFT-CUP
Using a k-OSR PD, our protocol requires a degree of connectivity k ≥ 2f + 1 to solve BFT-CUP. Theorem 2 states that a participant detector of this class and this connectivity degree are necessary to solve BFT-CUP.
Theorem 2. A participant detector PD ∈ k-OSR is necessary to solve BFT-CUP in spite of f < ⌈k/2⌉ failures.
Sketch of Proof. This proof is based on the same arguments used to prove the necessity of OSR (One Sink Reducibility) for solving CUP [6]. Assume by contradiction that there is an algorithm which solves BFT-CUP with a PD ∉ k-OSR. Let Gdi be the knowledge graph induced by PD; then two scenarios are possible: (i) there are fewer than k node-disjoint paths connecting a participant p in Gdi; or (ii) the directed acyclic graph obtained by reduction of Gdi to its k-strongly connected components has at least two sinks. We consider each of these scenarios in turn.
In the first scenario, let at most 2f node-disjoint paths connect p in Gdi. Then, the simple crash failure of f neighbors of p makes it impossible for a participant i (with p reachable from i) to discover p, because only f processes are able to inform i about the presence of p in the system. In fact, i is not able to determine whether p really exists, i.e., it is not guaranteed that i has received this information from a correct process. Then, the partial view obtained by i will be inconsistent, which makes it impossible to solve BFT-CUP. Thus, we reach a contradiction.
In the second scenario, let G1 and G2 be two of the sink components and consider that participants in G1 have proposal value v and participants in G2 have value w, with v ≠ w. By the Termination property of consensus, processes in G1 and G2 must eventually decide. Let us assume that the first process in G1 that decides, say p, does so at time t1, and the first process in G2 that decides, say q, does so at time t2. Delay all messages sent to G1 and G2 such that they are received after max{t1, t2}. Since the processes in a sink component are unaware of the existence of other participants, p decides v and q decides w, violating the Agreement property of consensus and thus reaching a contradiction. □

5 Discussion
This section presents some comments about the protocol presented in this paper.
5.1 Digital Signatures
It is worth noticing that the lower bound required to solve BFT-CUP in terms of connectivity and resilience is k ≥ 2f + 1, and it holds even if digital signatures are used. By using digital signatures, it is possible to exchange messages among participants as long as there is at least one path formed only by correct processes (k ≥ f + 1). However, even with digital signatures, a connectivity of k ≥ 2f + 1 is still required in order to discover the participants properly (first phase of the protocol). In fact, if k < 2f + 1, a malicious participant can lead a correct participant p not to discover every node reachable from it, which makes it impossible to use this protocol to solve BFT-CUP (the partial view of p will be inconsistent).
For example, Figure 2 presents a knowledge connectivity graph induced by a 2-OSR PD (k = 2) in which the system does not tolerate any fault (to tolerate f = 1, k ≥ 3 is required). Now, consider that process 2 is malicious and that process 1 is starting the DISCOVERY phase. Then, process 2 could inform process 1 that it only knows process 3. At this point, process 1 will stop the search, because it is only waiting for a message from process 3, i.e., the number of pendencies is less than or equal to f. Thus, process 1 obtains the wrong partial view {1, 2, 3} of the system.
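This attack can be replayed concretely against the termination test of Algorithm 2 (line 22). The trace below is our own toy reconstruction of the Figure 2 scenario (the assumption that process 1's detector returns {2, 3} is ours), not code from the paper:

```python
f = 1
known = {1, 2, 3}        # process 1 plus the neighbors returned by its PD (assumed {2, 3})
msg_pend = {2, 3}        # replies still expected from processes 2 and 3
nei_pend = {}

# Malicious process 2 replies, claiming its only neighbor is process 3.
msg_pend.discard(2)
nei_pend[2] = {3}

# Lines 17-21: 2's (fake) neighbor set is entirely known, so its entry is dropped.
nei_pend = {j: n for j, n in nei_pend.items() if not n <= known}

# Lines 22-24: only one pendency (the reply from 3) remains, so with f = 1
# the search stops and process 1 returns the wrong partial view {1, 2, 3}.
assert len(nei_pend) + len(msg_pend) <= f
```

With k ≥ 2f + 1 = 3, process 2 alone could not force this: the honest neighbors of the undiscovered nodes would keep the pendency count above f until their replies arrived.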
Fig. 2. 2-OSR knowledge connectivity graph with process 2 faulty

5.2 Protocol Limitations
The model used in this study, as well as in all solutions for FT-CUP [7,8], supports mobility of nodes, but it is not strong enough to tolerate arbitrary churn (arrivals and departures of processes) during protocol executions. This happens because, after the relations of knowledge have been established (first phase of the protocol), new participants will be considered only in future executions of consensus.
In the current algorithms, process departures can be treated as failures. Nonetheless, this is not the optimal approach, since our protocols tolerate Byzantine faults while the behaviour of a departing process resembles a simple crash failure. An alternative approach consists in specifying an additional parameter d to indicate the number of supported departures, separating departures from malicious faults. In this way, the degree of connectivity in the knowledge graph should be k ≥ 2f + d + 1 to support up to f malicious faults and up to d departures. Moreover, even with departures, the sink component should remain with enough participants to execute a classical consensus, i.e., nsink ≥ 3f + 2d + 1, following the same reasoning as [19].
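The arithmetic of this paragraph is easy to mechanize. The helper below is a hypothetical utility (our own naming, not part of the protocol) that computes the requirements for given f and d:

```python
def requirements(f, d=0):
    """Connectivity and sink size needed to tolerate f Byzantine faults and
    d departures, following Section 5.2 (k >= 2f + d + 1, n_sink >= 3f + 2d + 1)."""
    return {"k": 2 * f + d + 1, "n_sink": 3 * f + 2 * d + 1}
```

For instance, f = 1 with no departures yields k ≥ 3 and nsink ≥ 4, matching the bounds used in Sections 4.3 and 5.1.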
5.3 Other Participant Detectors
Although k-OSR PD is the weakest participant detector defined to solve FT-CUP, there are other (stronger) participant detectors able to solve BFT-CUP [6,8]:
FCO (Full Connectivity PD): the knowledge connectivity graph Gdi = (V, E) induced by the PD oracle is such that for all p, q ∈ V, we have (p, q) ∈ E.
k-SCO (k-Strong Connectivity PD): the knowledge connectivity graph Gdi = (V, E) induced by the PD oracle is k-strongly connected.
Notice that a characteristic common to all participant detectors able to solve BFT-CUP (except for the FCO PD, which is fully connected) is the degree of connectivity k, which enables the protocol to work properly even in the presence of failures. Using these participant detectors (FCO or k-SCO), the partial view obtained by each process in the system contains exactly all processes in the system (first phase of the protocol). Thereafter, the consensus problem is trivially solved using a classical Byzantine consensus protocol, since all processes have the same (complete) view of the system.

6 Final Remarks
Most of the studies about consensus found in the literature consider a static known
set of participants in the system (e.g., [1,3,4,5,17,19]). Recently, some works which

Table 1. Comparing solutions for the consensus with unknown participants problem

Approach              | Failure model    | Participant detector | Participants in the sink | Connectivity between components      | Synchrony model
CUP [6]               | without failures | OSR                  | 1                        | OSR                                  | asynchronous
FT-CUP [7]            | crash            | OSR                  | f + 1                    | OSR + safe crash pattern             | asynchronous + ♦P
FT-CUP [8]            | crash            | k-OSR                | 2f + 1                   | k node-disjoint paths                | asynchronous + ♦S
BFT-CUP (this paper)  | Byzantine        | k-OSR                | 3f + 1                   | k node-disjoint paths (k ≥ 2f + 1)   | same as the underlying consensus protocol

deal with a partial knowledge about the system composition have been proposed. The works of [6,7,8] are worth noticing. They propose solutions and study the conditions under which consensus can be solved whenever the set of participants is unknown and the system is asynchronous. The work presented herein extends these previous results and presents an algorithm for solving FT-CUP in a system prone to Byzantine failures. It shows that to solve Byzantine FT-CUP in an environment with few synchrony requirements, it is necessary to enrich the system with a greater degree of knowledge connectivity among its participants. The main result of this work is to show that it is possible to solve Byzantine FT-CUP with the same class of participant detectors (k-OSR) and the same synchrony requirements (♦S) necessary to solve FT-CUP in a system prone to crash failures [8]. As a side effect, a Byzantine fault-tolerant dissemination primitive, namely reachable reliable broadcast, has been defined and implemented, and it can be used in other protocols for unknown networks.
Table 1 summarizes and compares the known results regarding consensus solvability with unknown participants.

Acknowledgements
Eduardo Alchieri is supported by a CAPES/Brazil grant. Joni Fraga and Fabíola Greve are supported by CNPq/Brazil grants. This work was partially supported by the EC, through project IST-2004-27513 (CRUTIAL), by the FCT, through the Multiannual (LaSIGE) and the CMU-Portugal Programmes, and by CAPES/GRICES (project TISD).

References
1. Lamport, L., Shostak, R., Pease, M.: The Byzantine generals problem. ACM Transactions on Programming Languages and Systems 4(3), 382-401 (1982)
2. Fischer, M.J., Lynch, N.A., Paterson, M.S.: Impossibility of distributed consensus with one faulty process. Journal of the ACM 32(2), 374-382 (1985)
3. Toueg, S.: Randomized Byzantine agreements. In: Proceedings of the 3rd Annual ACM Symposium on Principles of Distributed Computing, pp. 163-178 (1984)
4. Chandra, T.D., Toueg, S.: Unreliable failure detectors for reliable distributed systems. Journal of the ACM 43(2), 225-267 (1996)
5. Correia, M., Neves, N.F., Veríssimo, P.: From consensus to atomic broadcast: Time-free Byzantine-resistant protocols without signatures. The Computer Journal 49(1) (2006)
6. Cavin, D., Sasson, Y., Schiper, A.: Consensus with unknown participants or fundamental self-organization. In: Nikolaidis, I., Barbeau, M., Kranakis, E. (eds.) ADHOC-NOW 2004. LNCS, vol. 3158, pp. 135-148. Springer, Heidelberg (2004)
7. Cavin, D., Sasson, Y., Schiper, A.: Reaching agreement with unknown participants in mobile self-organized networks in spite of process crashes. Technical Report IC/2005/026, EPFL - LSR (2005)
8. Greve, F.G.P., Tixeuil, S.: Knowledge connectivity vs. synchrony requirements for fault-tolerant agreement in unknown networks. In: Proceedings of the International Conference on Dependable Systems and Networks - DSN, pp. 82-91 (2007)
9. Castro, M., Liskov, B.: Practical Byzantine fault-tolerance and proactive recovery. ACM Transactions on Computer Systems 20(4), 398-461 (2002)
10. Douceur, J.: The Sybil attack. In: Proceedings of the 1st International Workshop on Peer-to-Peer Systems (2002)
11. Awerbuch, B., Holmer, D., Nita-Rotaru, C., Rubens, H.: An on-demand secure routing protocol resilient to Byzantine failures. In: Proceedings of the 1st ACM Workshop on Wireless Security - WiSE, pp. 21-30. ACM, New York (2002)
12. Kotzanikolaou, P., Mavropodi, R., Douligeris, C.: Secure multipath routing for mobile ad hoc networks. In: Wireless On-demand Network Systems and Services - WONS, pp. 89-96 (2005)
13. Papadimitratos, P., Haas, Z.: Secure routing for mobile ad hoc networks. In: Proceedings of the SCS Communication Networks and Distributed Systems Modeling and Simulation Conference - CNDS (2002)
14. Dwork, C., Lynch, N.A., Stockmeyer, L.: Consensus in the presence of partial synchrony. Journal of the ACM 35(2), 288-322 (1988)
15. Bracha, G.: An asynchronous ⌊(n - 1)/3⌋-resilient consensus protocol. In: Proceedings of the 3rd ACM Symposium on Principles of Distributed Computing, pp. 154-162 (1984)
16. Ben-Or, M.: Another advantage of free choice: Completely asynchronous agreement protocols (extended abstract). In: Proceedings of the 2nd Annual ACM Symposium on Principles of Distributed Computing, pp. 27-30 (1983)
17. Friedman, R., Mostefaoui, A., Raynal, M.: Simple and efficient oracle-based consensus protocols for asynchronous Byzantine systems. IEEE Transactions on Dependable and Secure Computing 2(1), 46-56 (2005)
18. Dolev, D.: The Byzantine generals strike again. Journal of Algorithms 3(1), 14-30 (1982)
19. Martin, J.P., Alvisi, L.: Fast Byzantine consensus. IEEE Transactions on Dependable and Secure Computing 3(3), 202-215 (2006)

With Finite Memory Consensus Is Easier Than Reliable Broadcast

Carole Delporte-Gallet1, Stéphane Devismes2, Hugues Fauconnier1, Franck Petit3,*, and Sam Toueg4

1 LIAFA, Université D. Diderot, Paris, France
{cd,hf}@liafa.jussieu.fr
2 VERIMAG, Université Joseph Fourier, Grenoble, France
[email protected]
3 INRIA/LIP Laboratory, Univ. of Lyon/ENS Lyon, Lyon, France
[email protected]
4 Department of Computer Science, University of Toronto, Toronto, Canada
[email protected]

Abstract. We consider asynchronous distributed systems with message losses and process crashes. We study the impact of finite process memory on the solution to consensus, repeated consensus and reliable broadcast. With finite process memory, we show that in some sense consensus is easier to solve than reliable broadcast, and that reliable broadcast is as difficult to solve as repeated consensus: more precisely, with finite memory, consensus can be solved with failure detector S, and P⁻ (a variant of the perfect failure detector which is stronger than S) is necessary and sufficient to solve reliable broadcast and repeated consensus.

Introduction

Designing fault-tolerant protocols for asynchronous systems is highly desirable but also highly complex. Some classical agreement problems such as consensus and reliable broadcast are well-known tools for solving more sophisticated tasks in faulty environments (e.g., [1,2]). Roughly speaking, with consensus processes must reach a common decision on their inputs, and with reliable broadcast processes must deliver the same set of messages.
It is well known that consensus cannot be solved in asynchronous systems with failures [3], and several mechanisms were introduced to circumvent this impossibility result: randomization [4], partial synchrony [5,6] and (unreliable) failure detectors [7].
Informally, a failure detector is a distributed oracle that gives (possibly incorrect) hints about process crashes. Each process can access a local failure detector module that monitors the processes of the system and maintains a list of processes that are suspected of having crashed.

* This work was initiated while Franck Petit was with MIS Lab., Université of Picardie, France. Research partially supported by Région Picardie, Proj. APREDY.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 41-57, 2008.
© Springer-Verlag Berlin Heidelberg 2008


42

C. Delporte-Gallet et al.

Several classes of failure detectors have been introduced, e.g., P, S, , etc.


Failure detectors classes can be compared by reduction algorithms, so for any
given problem P , a natural question is What is the weakest failure detector
(class) that can solve P ? . This question has been extensively studied for several problems in systems with innite process memory (e.g., uniform and nonuniform versions of consensus [8,9,10], non-blocking atomic commit [11], uniform
reliable broadcast [12,13], implementing an atomic register in a message-passing
system [11], mutual exclusion [14], boosting obstruction-freedom [15], set consensus [16,17], etc.). This question, however, has not been as extensively studied
in the context of systems with nite process memory.
In this paper, we consider systems where processes have finite memory, processes can crash, and links can lose messages (more precisely, links are fair lossy and FIFO¹). Such environments can be found in many systems. For example, in sensor networks, sensors are typically equipped with small memories, they can crash when their batteries run out, and they can experience message losses if they use wireless communication.
In such systems, we consider (the uniform versions of) reliable broadcast, consensus and repeated consensus. Our contribution is threefold: First, we establish that the weakest failure detector for reliable broadcast is P⁻, a failure detector that is almost as powerful as the perfect failure detector P. Next, we show that consensus can be solved using failure detector S. Finally, we prove that P⁻ is the weakest failure detector for repeated consensus. Since S is strictly weaker than P⁻, in some precise sense these results imply that, in the systems that we consider here, consensus is easier to solve than reliable broadcast, and reliable broadcast is as difficult to solve as repeated consensus.
The above results are somewhat surprising because, when processes have infinite memory, reliable broadcast is easier to solve than consensus², and repeated consensus is not more difficult to solve than consensus.
Roadmap. The rest of the paper is organized as follows: In the next section, we present the model considered in this paper. In Section 4, we show that in case of process memory limitation and possibility of crashes, P⁻ is necessary and sufficient to solve reliable broadcast. In Section 5, we show that consensus can be solved using a failure detector of type S in our systems. In Section 6, we show that P⁻ is necessary and sufficient to solve repeated consensus in this context.
For space considerations, all the proofs are omitted; see the technical report for details ([20], http://hal.archives-ouvertes.fr/hal-00325470/fr/).

1 The FIFO assumption is necessary because, from the results in [18], if lossy links are not FIFO, reliable broadcast requires unbounded message headers.
2 With infinite memory and fair lossy links, (uniform) reliable broadcast can be solved using Θ [19], and Θ is strictly weaker than (Ω, Σ), which is necessary to solve consensus.

With Finite Memory Consensus Is Easier Than Reliable Broadcast

43

2 Model

Distributed System. A system consists of a set Π = {p1, ..., pn} of processes. We consider asynchronous distributed systems where each process can communicate with each other process through directed links.³ By asynchronous, we mean that there is no bound on message delay, clock drift, or process execution rate.
A process has a local memory, a local sequential and deterministic algorithm, and input/output capabilities. In this paper we consider systems of processes having either a finite or an infinite memory. In the sequel, we denote such systems by F and I, respectively.
We consider links with unbounded capacities. We assume that the messages sent from p to q are distinguishable, i.e., if necessary, the messages can be numbered with a non-negative integer. These numbers are used for notational purposes only, and are unknown to the processes. Every link satisfies integrity, i.e., if a message m from p is received by q, then m is received by q at most once, and only if p previously sent m to q. Links are also unreliable and fair. Unreliable means that messages can be lost. Fairness means that for each message m, if process p sends m infinitely often to process q and if q tries infinitely often to receive a message from p, then q receives m infinitely often from p. Each link is FIFO, i.e., messages are received in the same order as they were sent.
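A toy simulation can make the fairness and FIFO conditions concrete. The class below is an invented illustration (deterministic loss of every other send stands in for arbitrary loss):

```python
from collections import deque

# Illustrative fair-lossy FIFO link: every odd-numbered send is lost, so a
# message that is retransmitted forever is received infinitely often, and
# the messages that do get through are delivered in send order.
class FairLossyFifoLink:
    def __init__(self):
        self.queue = deque()
        self.sends = 0

    def send(self, m):
        self.sends += 1
        if self.sends % 2 == 0:    # deterministic "loss" of odd-numbered sends
            self.queue.append(m)   # FIFO: kept messages stay in send order

    def receive(self):
        return self.queue.popleft() if self.queue else None

link = FairLossyFifoLink()
for m in ["a", "a", "b", "b"]:     # the sender retransmits each message twice
    link.send(m)
assert link.receive() == "a"       # one copy of "a" got through first
assert link.receive() == "b"       # then one copy of "b": FIFO order
```

Retransmission is exactly what the paper's algorithms do to cope with such links, which is why bounded message headers become the central difficulty.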
To simplify the presentation, we assume the existence of a discrete global clock. This is merely a fictional device: the processes do not have access to it. We take the range T of the clock's ticks to be the set of natural numbers.
Failures and Failure Patterns. Every process can fail by permanently crashing, in which case it definitively stops executing its local algorithm. A failure pattern F is a function from T to 2^Π, where F(t) denotes the set of processes that have crashed through time t. Once crashed, a process never recovers, i.e., ∀t : F(t) ⊆ F(t+1). We define crashed(F) = ∪_{t∈T} F(t) and correct(F) = Π \ crashed(F). If p ∈ crashed(F) we say that p crashes in F (or simply is crashed when it is clear from the context), and if p ∈ correct(F) we say that p is correct in F (or simply correct when it is clear from the context). An environment is a set of failure patterns. We do not restrict the number of crashes here, and we consider as environment E the set of all failure patterns.
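The derived sets can be read off directly from a failure pattern; in the sketch below (the example pattern is invented, and a finite horizon stands in for the infinite time range T):

```python
PROCESSES = {"p1", "p2", "p3"}

def F(t):
    # Example failure pattern: p2 has crashed by time 3, nobody else crashes.
    # Monotonicity holds: F(t) is a subset of F(t+1) for every t.
    return {"p2"} if t >= 3 else set()

def crashed(F, horizon):
    # crashed(F) is the union of F(t) over all t; a finite horizon
    # approximates the infinite range T in this sketch.
    out = set()
    for t in range(horizon):
        out |= F(t)
    return out

def correct(F, horizon):
    return PROCESSES - crashed(F, horizon)

assert crashed(F, 10) == {"p2"}
assert correct(F, 10) == {"p1", "p3"}
```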
Failure Detectors. A failure detector [7] is a local module that outputs a set of processes that are currently suspected of having crashed. A failure detector history H is a function from Π × T to 2^Π. H(p,t) is the value of the failure detector module of process p at time t. If q ∈ H(p,t), we say that p suspects q at time t in H. We omit references to H when it is obvious from the context. Formally, failure detector D is a function that maps each failure pattern F to a set of failure detector histories D(F).
A failure detector can be defined in terms of two abstract properties: Completeness and Accuracy [7]. Let us recall one type of completeness and two types of accuracy.
3 We assume that each process knows the set of processes that are in the system; some papers related to failure detectors do not make this assumption, e.g., [21,22,23].

44

C. Delporte-Gallet et al.

Definition 1 (Strong Completeness). Eventually every process that crashes is permanently suspected by every correct process. Formally, D satisfies strong completeness if: ∀F ∈ E, ∀H ∈ D(F), ∃t ∈ T, ∀p ∈ crashed(F), ∀q ∈ correct(F), ∀t′ ≥ t : p ∈ H(q, t′)

Definition 2 (Strong Accuracy). No process is suspected before it crashes. Formally, D satisfies strong accuracy if: ∀F ∈ E, ∀H ∈ D(F), ∀t ∈ T, ∀p, q ∈ Π \ F(t) : p ∉ H(q, t)

Definition 3 (Weak Accuracy). Some correct process is never suspected. Formally, D satisfies weak accuracy if: ∀F ∈ E, ∀H ∈ D(F), ∃p ∈ correct(F), ∀t ∈ T, ∀q ∈ Π : p ∉ H(q, t)

We introduce a last type of accuracy:

Definition 4 (Almost Strong Accuracy). No correct process is suspected. Formally, D satisfies almost strong accuracy if: ∀F ∈ E, ∀H ∈ D(F), ∀t ∈ T, ∀p ∈ correct(F), ∀q ∈ Π : p ∉ H(q, t)

This definition was the definition of strong accuracy in [24].
For all these aforementioned properties, we can assume, without loss of generality, that when a process is suspected it remains suspected forever.
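On a finite prefix of a history, properties of this kind can be checked mechanically. The sketch below is illustrative only (invented names, with a finite horizon standing in for infinite time); it tests strong completeness and almost strong accuracy on a small example history:

```python
# H[(q, t)] is the suspect set output by q's failure detector module at
# time t. Real definitions quantify over infinite time; a horizon
# approximates "eventually" and "never" in this sketch.

def almost_strong_accuracy(H, correct_set, horizon, processes):
    # No correct process is ever suspected by anyone.
    return all(p not in H[(q, t)]
               for t in range(horizon)
               for q in processes
               for p in correct_set)

def strong_completeness(H, crashed_set, correct_set, horizon):
    # Eventually every crashed process is permanently suspected by every
    # correct process -- approximated here by the last observed time.
    return all(p in H[(q, horizon - 1)]
               for p in crashed_set for q in correct_set)

procs = ["p1", "p2"]
# Example: p2 crashed, p1 is correct; p1 suspects p2 from time 1 onward.
H = {("p1", 0): set(), ("p1", 1): {"p2"}, ("p1", 2): {"p2"},
     ("p2", 0): set(), ("p2", 1): set(), ("p2", 2): set()}
assert almost_strong_accuracy(H, {"p1"}, 3, procs)
assert strong_completeness(H, {"p2"}, {"p1"}, 3)
```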
We now recall the definitions of the perfect and the strong failure detectors [7], and we introduce our almost perfect failure detector:

Definition 5 (Perfect). A failure detector is said to be perfect if it satisfies the strong completeness and the strong accuracy properties. This failure detector is denoted by P.

Definition 6 (Almost Perfect). A failure detector is said to be almost perfect if it satisfies the strong completeness and the almost strong accuracy properties. This failure detector is denoted by P⁻.

Note that P⁻ was given as the definition of the perfect failure detector in the very first paper on unreliable failure detectors [24]. In fact, a failure detector in P⁻ can suspect faulty processes before they crash and be unrealistic according to the definition in [25].

Definition 7 (Strong). A failure detector is said to be strong if it satisfies the strong completeness and the weak accuracy properties. This failure detector is denoted by S.
Algorithms, Runs, and Specifications. A distributed algorithm is a collection of n sequential and deterministic algorithms, one for each process in Π. Computations of distributed algorithm A proceed in atomic steps.
In a step, a process p executes each of the following actions at most once: p tries to receive a message from another process, p queries its failure detector module, p modifies its (local) state, and p sends a message to another process.

With Finite Memory Consensus Is Easier Than Reliable Broadcast

45

A run of Algorithm A using a failure detector D is a tuple ⟨F, H_D, init, E, T⟩ where F is a failure pattern, H_D ∈ D(F) is a history of failure detector D for the failure pattern F, init is an initial configuration of A, E is an infinite sequence of steps of A, and T is a list of increasing time values indicating when each step in E occurred. A run must satisfy certain well-formedness and fairness properties. In particular:
1. E is applicable to init.
2. A process cannot take steps after it crashes.
3. When a process takes a step and queries its failure detector module, it gets the current value output by its local failure detector module.
4. Every correct process takes an infinite number of local steps in E.
5. Any message sent is eventually received or lost.
A problem P is defined by a set of properties that runs must satisfy. An algorithm A solves a problem P using a failure detector D if and only if all the runs of A using D satisfy the properties required by P.
A failure detector D is said to be weaker than another failure detector D′ (denoted D ⪯ D′) if there is an algorithm that uses only D′ to emulate the output of D for every failure pattern. If D is weaker than D′ but D′ is not weaker than D, we say that D is strictly weaker than D′ (denoted D ≺ D′).
From [7] and our definition of P⁻, we get:

Proposition 1. S ≺ P⁻ ≺ P

The weakest [8] failure detector D to solve a given problem is a failure detector D that is sufficient to solve the problem and that is also necessary to solve the problem, i.e., D is weaker than any failure detector that solves the problem.

Notations. In the sequel, v_p denotes the value of the variable v at process p. Finally, a datum in a message can be replaced by − when this value has no impact on the reasoning.

3 Problem Specifications

Reliable Broadcast. The reliable broadcast [26] is defined with two primitives: BROADCAST(m) and DELIVER(m). Informally, any reliable broadcast algorithm guarantees that after a process p invokes BROADCAST(m), every correct process eventually executes DELIVER(m). In the formal definition below, we denote by sender(m) the process that invokes BROADCAST(m).

Specification 1 (Reliable Broadcast). A run R satisfies the specification Reliable Broadcast if and only if the following three requirements are satisfied in R:
Validity: If a correct process invokes BROADCAST(m), then it eventually executes DELIVER(m).

46

C. Delporte-Gallet et al.

(Uniform) Agreement: If a process executes DELIVER(m), then all other correct processes eventually execute DELIVER(m).
Integrity: For every message m, every process executes DELIVER(m) at most once, and only if sender(m) previously invoked BROADCAST(m).

Consensus. In the consensus problem, all correct processes propose a value and must reach a unanimous and irrevocable decision on some value that is chosen among the proposed values. We define the consensus problem in terms of two primitives, PROPOSE(v) and DECIDE(u). When a process executes PROPOSE(v), we say that it proposes v; similarly, when a process executes DECIDE(u), we say that it decides u.

Specification 2 (Consensus). A run R satisfies the specification Consensus if and only if the following three requirements are satisfied in R:
(Uniform) Agreement: No two processes decide differently.
Termination: Every correct process eventually decides some value.
Validity: If a process decides v, then v was proposed by some process.

Repeated Consensus. We now define repeated consensus. Each correct process has as input an infinite sequence of proposed values, and outputs an infinite sequence of decision values such that:
1. Two correct processes have the same output. (The output of a faulty process is a prefix of this output.)
2. The ith value of the output is the ith value of the input of some process.
We define the repeated consensus in terms of two primitives, R-PROPOSE(v) and R-DECIDE(u). When a process executes the ith R-PROPOSE(v), v is the ith value of its input (we say that it proposes v for the ith consensus); similarly, when a process executes the ith R-DECIDE(u), u is the ith value of its output (we say that it decides u for the ith consensus).
Specification 3 (Repeated Consensus). A run R satisfies the specification Repeated Consensus if and only if the following three requirements are satisfied in R:
Agreement: If u and v are the outputs of two processes, then u is a prefix of v or v is a prefix of u.
Termination: Every correct process has an infinite output.
Validity: If the ith value of the output of a process is v, then v is the ith value of the input of some process.
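The agreement requirement above is a prefix relation between output sequences; a small illustrative check makes it precise:

```python
def prefix_related(u, v):
    # True iff u is a prefix of v or v is a prefix of u.
    shorter, longer = (u, v) if len(u) <= len(v) else (v, u)
    return longer[:len(shorter)] == shorter

assert prefix_related([1, 2], [1, 2, 3])      # a faulty process stopped early
assert not prefix_related([1, 2], [1, 3, 3])  # disagreement on 2nd consensus
```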

4 Reliable Broadcast in F

In this section, we show that P⁻ is the weakest failure detector to solve reliable broadcast in F.

With Finite Memory Consensus Is Easier Than Reliable Broadcast

47

P⁻ is Necessary. To show that P⁻ is necessary to solve reliable broadcast, the following lemma is central to the proof:

Lemma 1. Let A be an algorithm solving Reliable Broadcast in F with a failure detector D. There exists an integer k such that for every process p and every correct process q, for every run R of A where process p BROADCASTs and DELIVERs k messages, at least one message from q has been received by some process.
Assume now that there exists an algorithm A that implements reliable broadcast in F using the failure detector D. To show our result we have to give an algorithm that uses only D to emulate the output of P⁻ for every failure pattern.
Actually, we give an algorithm A(p,q) (Figure 1) where a given process p monitors a given process q. This algorithm uses one instance of A with D. Note that all processes except q participate in this algorithm following the code of A. In this algorithm, Output_q is equal to either {q} (q is faulty) or ∅ (q is correct).
The algorithm A(p,q) works as follows: p tries to BROADCAST k messages; all processes execute the code of the algorithm A using D, except q, which does nothing. If p DELIVERs k messages, it sets Output_q to {q} and never changes the value of Output_q again. By Lemma 1, if q is correct, p cannot DELIVER k messages and so it never sets Output_q to {q}. If q is faulty and p is correct, then, as A solves reliable broadcast, p has to DELIVER k messages and so p sets Output_q to {q}.⁴
To emulate P⁻, each process p uses algorithm A(p,q) for every process q. As D is a failure detector, it can be used for each instance. The output of P⁻ at p (variable Output) is then the union of Output_q for every process q.
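The emulation step is just a union over the per-target instances; as a sketch with invented data:

```python
# Output_q produced by the instances A(p,q), one per monitored process q:
# the empty set means "q looks correct", {q} means "q was detected faulty".
output_q = {"q1": set(), "q2": {"q2"}, "q3": {"q3"}}

# The emulated output of P⁻ at p is the union of all Output_q values.
Output = set().union(*output_q.values())
assert Output == {"q2", "q3"}
```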
1:  /* Code for process p */
2:  begin
3:     Output_q ← ∅
4:     for i = 1 to k do
5:        BROADCAST(m)          /* using A with D */
6:        wait for DELIVER(m)
7:     end for
8:     Output_q ← {q}
9:  end
10: /* Code for process q */
11: begin
12: end
13: /* Code for every process in Π \ {p, q} */
14: begin
15:    execute the code of A with D for these messages
16: end
Fig. 1. A(p,q)

Theorem 1. P⁻ is necessary to solve Reliable Broadcast in F.

P⁻ is Sufficient. In Algorithm B (Figure 2), every process uses a failure detector module of type P⁻ and a finite memory. Theorem 2 shows that Algorithm B solves reliable broadcast in F and directly implies that P⁻ is sufficient to solve reliable broadcast in F (Corollary 1).

4 If q is faulty and p is faulty, the property of the failure detector is trivially ensured.

48

C. Delporte-Gallet et al.

/* Code for every process p */
1:
2:  variables:
3:     Flag[1 . . . n][1 . . . n] ∈ {0,1}^(n²); ∀(i,j) ∈ Π², Flag[i][j] is initialized to 0
4:     FD: failure detector of type P⁻
5:     Mes[1 . . . n]: array of data messages; ∀i ∈ Π, Mes[i] is initialized to ⊥
6:  function:
7:     MesToBrd(): returns a message or ⊥
8:  begin
9:     repeat forever
10:       if Mes[p] = ⊥ then
11:          Mes[p] ← MesToBrd()
12:          if Mes[p] ≠ ⊥ then
13:             Flag[p][p] ← (Flag[p][p] + 1) mod 2
14:          end if
15:       end if
16:       for all i ∈ Π \ FD do
17:          for all j ∈ Π \ (FD ∪ {p,i}), Flag[i][p] ≠ Flag[i][j] do
18:             if (receive ⟨i-ACK, F⟩ from j) ∧ (F = Flag[i][p]) then
19:                Flag[i][j] ← F
20:             else
21:                send ⟨i-BRD, Mes[i], Flag[i][p]⟩ to j
22:             end if
23:          end for
24:          if (Mes[i] ≠ ⊥) ∧ (∀q ∈ Π \ FD, Flag[i][i] = Flag[i][q]) then
25:             DELIVER(Mes[i]); Mes[i] ← ⊥
26:          end if
27:       end for
28:       for all i ∈ (Π \ FD) \ {p} do
29:          for all j ∈ Π \ (FD ∪ {p}) do
30:             if (receive ⟨i-BRD, m, F⟩ from j) then
31:                if (∀q ∈ Π \ FD, Flag[i][q] = Flag[i][i]) ∧ (F ≠ Flag[i][p]) then
32:                   Mes[i] ← m; Flag[i][p] ← F
33:                end if
34:                if i = j then
35:                   Flag[i][i] ← F
36:                end if
37:                if (i ≠ j) ∨ (∀q ∈ Π \ FD, Flag[i][q] = Flag[i][i]) then
38:                   send ⟨i-ACK, Flag[i][p]⟩ to j
39:                end if
40:             end if
41:          end for
42:       end for
43:    end repeat
44: end

Fig. 2. Algorithm B

In Algorithm B, each process p executes broadcasts sequentially: p starts a new broadcast only after the termination of the previous one. To that goal, any process p initializes Mes[p] to ⊥. Then, p periodically checks whether an external application has invoked BROADCAST(m). In this case, MesToBrd() returns the message to broadcast, say m. When this event occurs, Mes[p] is set to m and the broadcast procedure starts. Mes[p] is set to ⊥ at the end of the broadcast, p checks again, and so on.
Algorithm B has to deal with two types of faults: process crashes and message losses.
- Dealing with process crashes. Every process uses a failure detector of type P⁻ to detect the process crashes. Note that, as mentioned in Section 2,

we assume that when a process is suspected by some process it remains suspected forever.
Assume that a process p broadcasts the message m: p sends a broadcast message (p-BRD) with the datum m to every process it believes to be correct. In Algorithm B, p executes DELIVER(m) only after all other processes it does not suspect have received m. To that goal, we use acknowledgment mechanisms. When p has received an acknowledgment for m (p-ACK) from every other process it does not suspect, p executes DELIVER(m) and the broadcast of m terminates (i.e., Mes[p] is set to ⊥).
To ensure the agreement property, we must guarantee that if p crashes but another process q has already executed DELIVER(m), then any correct process eventually executes DELIVER(m). To that goal, any process can execute DELIVER(m) only after all other processes it does not suspect, except p, have received m. Once again, we use acknowledgment mechanisms to that end: q also broadcasts m to every other process it does not suspect except p (this induces that a process can now receive m from a process different from p) until receiving an acknowledgment for m from all these processes, and the broadcast message from p if q does not suspect it.
To prevent m from still being broadcast when p broadcasts the next message, we synchronize the system as follows: any process acknowledges m to p only after it has received an acknowledgment for m from every other process it does not suspect except p. By contrast, if a process i receives a message broadcast by p (p-BRD) from a process j ≠ p, i directly acknowledges the message to j.
- Dealing with message losses. The broadcast messages have to be periodically retransmitted until they are acknowledged. To that goal, any process q stores the last broadcast message from p in its variable Mes_q[p] (initialized to ⊥). However, some copies of previously received messages can now be in transit at any time in the network. So, each process must be able to distinguish whether a message it receives is a copy of a previously received message or a new, say valid, one. To circumvent this problem, we use the traditional alternating-bit mechanism [27,28]: a flag value (0 or 1) is stored in every message, and a two-dimensional array, noted Flag[1 . . . n][1 . . . n], allows us to distinguish whether the messages are valid or not. Initially, any process sets Flag[i][j] to 0 for all pairs of processes (i,j). In the code of process p, the value Flag_p[p][p] is used to mark every p-BRD message sent by p. In the code of every process q ≠ p, Flag_q[p][q] is equal to the flag value of the last valid p-BRD message q receives (not necessarily from p). For every other process q′, Flag_q[p][q′] is equal to the flag value of the last valid message concerning p's broadcast that q receives from q′.
At the beginning of any broadcast at p, p increments Flag_p[p][p] modulo 2. The broadcast terminates at p when, for every other process q that p does not suspect, Flag_p[p][q] = Flag_p[p][p], Flag_p[p][q] being set to Flag_p[p][p] only when p has received a valid acknowledgement from q, i.e., an acknowledgment marked with the value Flag_p[p][p].
Upon receiving a p-BRD message marked with the value F, a process q ≠ p detects that it is a new valid message broadcast by p (but not necessarily


sent by p) if, for every non-suspected process j, (Flag_q[p][j] = Flag_q[p][p]) and (F ≠ Flag_q[p][q]). In this case, q sets Mes_q[p] to m and sets Flag_q[p][q] to F. From this point on, q periodically sends ⟨p-BRD, Mes_q[p], Flag_q[p][q]⟩ to any other process it does not suspect except p until receiving a valid acknowledgment (i.e., an acknowledgment marked with the value Flag_q[p][q]) from all these processes. For any non-suspected process j different from p and q, Flag_q[p][j] is set to Flag_q[p][q] when q has received an acknowledgment marked with the value Flag_q[p][q] from j. Finally, Flag_q[p][p] is set to Flag_q[p][q] when q has received the broadcast message from p (marked with the value Flag_q[p][q]). Hence, q can execute DELIVER(Mes[p]) when (Mes[p] ≠ ⊥) and (∀j ∈ Π \ FD, Flag_q[p][j] = Flag_q[p][p]) because (1) it has received a valid broadcast message from p if p was not suspected, and (2) it has the guarantee that any non-suspected process different from p receives m in a valid message. To ensure that q executes DELIVER(Mes[p]) at most once, q just has to set Mes[p] to ⊥ afterwards.
It is important to note that q acknowledges the valid p-BRD messages it receives from p only when the predicate (∀j ∈ Π \ FD, Flag_q[p][j] = Flag_q[p][p]) holds. However, to guarantee liveness, q acknowledges any p-BRD message that it receives from any other process. Every p-ACK message sent by q is marked with the value Flag_q[p][q].
Finally, p stops its current broadcast when the following condition holds: (Mes_p[p] ≠ ⊥) ∧ (∀q ∈ Π \ FD, Flag_p[p][p] = Flag_p[p][q]), i.e., every non-suspected process has acknowledged Mes_p[p]. In this case, p sets Mes[p] to ⊥.
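The alternating-bit mechanism can be illustrated with a single sender/receiver pair over one link. The classes below are a simplified sketch with invented names, showing why a 1-bit flag suffices to reject stale duplicates:

```python
# Simplified alternating-bit pair (illustrative only, not Algorithm B):
# each packet carries a flag bit; a duplicate of an already-delivered
# message carries a stale flag and is ignored, so bounded (1-bit) message
# headers suffice despite retransmissions over a lossy FIFO link.
class AbSender:
    def __init__(self):
        self.flag = 0

    def packet(self, m):
        return (m, self.flag)   # retransmitted until acknowledged

    def ack(self, f):
        if f == self.flag:      # valid ack: flip the flag, move on
            self.flag ^= 1

class AbReceiver:
    def __init__(self):
        self.expected = 0
        self.delivered = []

    def receive(self, pkt):
        m, f = pkt
        if f == self.expected:  # new valid message: deliver it
            self.delivered.append(m)
            self.expected ^= 1
        return f                # the ack echoes the packet's flag

s, r = AbSender(), AbReceiver()
# "m1" arrives twice (a retransmitted duplicate), then is acknowledged:
for pkt in [s.packet("m1"), s.packet("m1")]:
    f = r.receive(pkt)
s.ack(f)
r.receive(s.packet("m2"))       # next broadcast carries the flipped flag
assert r.delivered == ["m1", "m2"]
```

Algorithm B runs this idea n² times in parallel, one flag per (broadcaster, peer) pair, which is exactly what the array Flag[1 . . . n][1 . . . n] stores.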
Theorem 2. Algorithm B is a Reliable Broadcast algorithm in F with P⁻.

Corollary 1. P⁻ is sufficient for solving Reliable Broadcast in F.

5 Consensus in F

In this section, we show that we can solve consensus in system F with a failure detector that is strictly weaker than the failure detector necessary to solve reliable broadcast and repeated consensus. We solve consensus with the strong failure detector S. S is not the weakest failure detector to solve consensus whatever the number of crashes, but it is strictly weaker than P⁻ and thus enough to show our results.
We customize the algorithm of Chandra and Toueg [7], which works in an asynchronous message-passing system with reliable links augmented with a strong failure detector (S), to our model.
In this algorithm, called CS in the following (Figure 3), the processes execute n asynchronous rounds. First, processes execute n − 1 asynchronous rounds (r denotes the current round number) during which they broadcast and relay their proposed values. Each process p waits until it receives a round r message from every other non-suspected process (n.b., as mentioned in Section 2, we assume that when a process is suspected it remains suspected forever) before proceeding

to round r + 1. Then, processes execute a last asynchronous round during which they eliminate some proposed values. Again, each process p waits until it receives a round n message from every other process it does not suspect. By the end of these n rounds, correct processes agree on a vector based on the proposed values of all processes. The ith element of this vector either contains the proposed value of process i or ⊥. This vector contains the proposed value of at least one process: a process that is never suspected by any process. Correct processes decide the first nontrivial component of this vector.
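The round structure can be summarized by two small operations on the vector V (⊥ modeled as None); this sketch is illustrative, not the paper's exact code:

```python
BOT = None  # stands for the "no value" marker ⊥

def merge(V, W):
    # Adopt any component learned from another process's vector.
    return [v if v is not BOT else w for v, w in zip(V, W)]

def decide(V):
    # Decide the first component of V different from ⊥.
    return next(v for v in V if v is not BOT)

V = [BOT, "b", BOT]            # this process already knows p2's proposal
V = merge(V, ["a", "b", BOT])  # a round message carries p1's proposal
assert V == ["a", "b", BOT]
assert decide(V) == "a"
```

Since all correct processes end the n rounds with the same vector, applying the same deterministic `decide` rule yields agreement.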
To customize this algorithm to our model, we have to ensure the progress of each process: While a process has not ended the asynchronous round r, it must be able to retransmit all the messages⁵ that it has previously sent in order to allow other processes to progress despite the link failures. As we use a strong failure detector and unreliable but fair links, it is possible that one process has decided while the other ones still wait for messages of asynchronous rounds. When a process has terminated the n asynchronous rounds, it uses a special Decide message to allow other processes to progress.
In the algorithm, we first use a function consensus. This function takes the proposed value as parameter, returns the decision value, and uses a failure detector. Then, at processes' request, we propagate the Decide message.
Theorem 3 shows that Algorithm CS solves consensus in F and directly implies that S is sufficient to solve the consensus problem in F (Corollary 2).

Theorem 3. Algorithm CS is a Consensus algorithm in F with S.

Corollary 2. S is sufficient for solving Consensus in F.

6 Repeated Consensus in F

We show in this section that P⁻ is the weakest failure detector to solve the repeated consensus problem in F.
P⁻ is Necessary. The proof is similar to the one in Section 4, and here the following lemma is central to the proof:

Lemma 2. Let A be an algorithm solving Repeated Consensus in F with a failure detector D. There exists an integer k such that for every process p and every correct process q, for every run R of A where process p R-PROPOSEs and R-DECIDEs k times, at least one message from q has been received by some process.

Assume that there exists an algorithm A that implements Repeated Consensus in F using the failure detector D. To show our result we have to give an algorithm that uses only D to emulate the output of P⁻ for every failure pattern.
In fact, we give an algorithm A_q (Figure 4) where processes monitor a given process q. This algorithm uses one instance of A with D. Note that all processes except q participate in this algorithm following the code of A. In this algorithm, Output_q is equal to either {q} (q is crashed) or ∅ (q is correct).
5 Notice that they are in finite number.


/* Code for process p */
1:
2:  function consensus(v) with the failure detector fd
3:     variables:
4:        Flag[1 . . . n] ∈ {true, false}^n; ∀i ∈ Π, Flag[i] is initialized to false
5:        V[1 . . . n]: array of propositions; ∀i ∈ Π, V[i] is initialized to ⊥
6:        Mes[1 . . . n]: array of arrays of propositions; ∀i ∈ Π, Mes[i] is initialized to ⊥
7:        r: integer; r is initialized to 1
8:     begin
9:        V[p] ← v               /* the proposed value */
10:       Mes[1] ← V
11:       while (r ≤ n) do
12:          send ⟨R-r, Mes[r]⟩ to every process, except p
13:          for all i ∈ Π \ (fd ∪ {p}), Flag[i] = false do
14:             if (receive ⟨R-r, W⟩ from i) then
15:                Flag[i] ← true
16:                if r < n then
17:                   if V[i] = ⊥ then
18:                      V[i] ← W[i]; Mes[r + 1][i] ← W[i]
19:                   end if
20:                else
21:                   if V[i] = ⊥ then
22:                      V[i] ← W[i]
23:                   end if
24:                end if
25:             end if
26:          end for
27:          for all i ∈ Π \ {p} do
28:             if (receive ⟨R-x, W⟩ from i), x < r then
29:                send ⟨R-x, Mes[x]⟩ to i
30:             end if
31:          end for
32:          if ∀q ∈ Π \ (fd ∪ {p}), Flag[q] = true then
33:             if r < n then
34:                for all i ∈ Π do
35:                   Flag[i] ← false
36:                end for
37:             end if
38:             if r = n − 1 then
39:                Mes[n] ← V
40:             end if
41:             r ← r + 1
42:          end if
43:          for all i ∈ Π \ {p} do
44:             if (receive ⟨Decide, d⟩ from i) then
45:                return(d)
46:             end if
47:          end for
48:       end while
49:       d ← the first component of V different from ⊥; return(d)
50:    end
51: end function
52: procedure PROPOSE(v)
53:    variables: u: integer; FD: failure detector of type S
54:    begin
55:       u ← consensus(v) with FD
56:       DECIDE(u)
57:       repeat forever
58:          for all j ∈ Π \ {p}, x ∈ {1, ..., n} do
59:             if (receive ⟨R-x, W⟩ from j) then
60:                send ⟨Decide, u⟩ to j
61:             end if
62:          end for
63:       end repeat
64:    end
65: end procedure

Fig. 3. Algorithm CS, Consensus with S



The algorithm A_q works as follows: processes try to R-DECIDE k times; all processes execute the code of the algorithm A using D, except q, which does nothing. If p R-DECIDEs k times, it sets Output_q to {q} and never changes the value of Output_q again.
By Lemma 2, if q is correct, p cannot decide k times and so it never sets Output_q to {q}. If q is faulty and p is correct⁶, then, as A solves Repeated Consensus, p has to R-DECIDE k times and so p sets Output_q to {q}.
To emulate P⁻, each process p uses Algorithm A_q for every process q. As D is a failure detector, it can be used for each instance. The output of P⁻ at p (variable Output) is then the union of Output_q for every process q.
1:  /* Code for process p ∈ Π \ {q} */
2:  begin
3:     Output_q ← ∅
4:     for i = 1 to k do
5:        R-PROPOSE(v)           /* using A with D */
6:        wait for R-DECIDE(v)
7:     end for
8:     Output_q ← {q}
9:  end
10: /* Code for process q */
11: begin
12: end

Fig. 4. A_q

Theorem 4. P⁻ is necessary to solve the Repeated Consensus problem in F.

P⁻ is Sufficient. In this section, we show that P⁻ is sufficient to solve repeated consensus in F. To that goal, we consider an algorithm called Algorithm RCP (Figures 5 and 6). In this algorithm, any process uses a failure detector module of type P⁻ (again, we assume that once a process is suspected by some process, it remains suspected forever) and a finite memory. Theorem 5 shows that Algorithm RCP solves repeated consensus in F and directly implies that P⁻ is sufficient to solve repeated consensus in F (Corollary 3).
We assume that each correct process has an infinite sequence of inputs and, when it terminates R-PROPOSE(v), where v is the ith value of its input, it executes R-PROPOSE(w), where w is the (i + 1)th value of its input.
When a process executes R-PROPOSE(v), it first executes a consensus in which it proposes v. The decision of this consensus is then output. Then, processes have to prevent the messages of two consecutive consensus instances from being mixed up. We construct a synchronization barrier. Without message loss, and with a perfect failure detector, it would be sufficient that each process waits for a Decide message from every process trusted by its failure detector module. By the FIFO property, no message ⟨R-x, −⟩ sent before this Decide message can be received in the next consensus.
6 If q is faulty and p is faulty, the property of the failure detector is trivially ensured.


/* Code for process p */
1:
2:  function consensus(v) with the failure detector fd
3:     variables:
4:        Flag[1 . . . n] ∈ {true, false}^n; ∀i ∈ Π, Flag[i] is initialized to false
5:        V[1 . . . n]: array of propositions; ∀i ∈ Π, V[i] is initialized to ⊥
6:        Mes[1 . . . n]: array of arrays of propositions; ∀i ∈ Π, Mes[i] is initialized to ⊥
7:        r: integer; r is initialized to 1
8:     begin
9:        V[p] ← v /* the proposed value */; Mes[1] ← V
10:       while (r ≤ n) do
11:          send ⟨R-r, Mes[r]⟩ to every process, except {p} ∪ fd
12:          for all i ∈ Π \ (fd ∪ {p}), Flag[i] = false do
13:             if (receive ⟨R-r, W⟩ from i) then
14:                Flag[i] ← true
15:                if r < n then
16:                   if V[i] = ⊥ then
17:                      V[i] ← W[i]; Mes[r + 1][i] ← W[i]
18:                   end if
19:                else
20:                   if V[i] = ⊥ then
21:                      V[i] ← W[i]
22:                   end if
23:                end if
24:             end if
25:          end for
26:          if ∀q ∈ Π \ (fd ∪ {p}), Flag[q] = true then
27:             if r < n then
28:                for all i ∈ Π do
29:                   Flag[i] ← false
30:                end for
31:             end if
32:             if r = n − 1 then
33:                Mes[n] ← V
34:             end if
35:             r ← r + 1
36:          end if
37:          for all i ∈ Π \ (fd ∪ {p}) do
38:             if (receive ⟨Decide, d⟩ from i) then
39:                return(d)
40:             end if
41:          end for
42:       end while
43:       d ← the first component of V different from ⊥; return(d)
44:    end
45: end function

Fig. 5. Algorithm RCP, Repeated Consensus with P⁻. Part 1: function consensus().

To deal with message loss, the synchronization barrier is obtained by two asynchronous rounds. In the first asynchronous round, each process sends a ⟨Decide⟩ message and waits to receive a ⟨Decide⟩ message or a ⟨Start⟩ message from every other process it does not suspect. In the second one, each process sends a ⟨Start⟩ message and waits to receive a ⟨Start⟩ message or an ⟨R-x, −⟩ message. Due to message loss, it is possible that a process moves to its second round even though some processes have not yet received its ⟨Decide⟩ message, but it cannot finish the second round before every correct process has finished the first one.

Since a faulty process can be suspected before it crashes (due to the quality of ◇P), other processes may stop waiting for a faulty process even though it is still alive. To prevent such a process from disturbing the round, once a process p suspects a process q, p stops waiting for messages from q and stops sending messages to q.
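Under strong simplifying assumptions (no crashes, no suspicions, and reliable FIFO queues, so each message is sent once instead of being retransmitted), the two rounds can be sketched with threads. The function run_barrier and its event log are our own illustration, not part of the paper's model; the recorded order shows the key property that no process finishes the second round before every process has finished the first.

```python
import threading, queue

def run_barrier(n):
    boxes = [queue.Queue() for _ in range(n)]   # reliable FIFO "links"
    events, lock = [], threading.Lock()

    def proc(p):
        others = set(range(n)) - {p}
        for q in others:                        # round 1: announce Decide
            boxes[q].put(('Decide', p))
        flags, starts_seen = set(), set()
        while flags != others:                  # wait for Decide or Start
            tag, frm = boxes[p].get()
            if tag == 'Start':
                starts_seen.add(frm)            # remember early Starts
            flags.add(frm)
        with lock:
            events.append(('round1', p))
        for q in others:                        # round 2: announce Start
            boxes[q].put(('Start', p))
        flags = set(starts_seen)
        while flags != others:                  # wait for Start only
            tag, frm = boxes[p].get()
            if tag == 'Start':
                flags.add(frm)
        with lock:
            events.append(('round2', p))

    threads = [threading.Thread(target=proc, args=(p,)) for p in range(n)]
    for t in threads: t.start()
    for t in threads: t.join()
    return events

events = run_barrier(4)
# every round-1 completion precedes every round-2 completion
assert max(i for i, e in enumerate(events) if e[0] == 'round1') < \
       min(i for i, e in enumerate(events) if e[0] == 'round2')
```

A Start received during round 1 is counted for both rounds, mirroring the "Decide or Start" condition of Figure 6; with lossy links the sends would have to be repeated inside the wait loops, as in the real algorithm.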

With Finite Memory Consensus Is Easier Than Reliable Broadcast    55

/* Code for process p */

 1: variables:
 2:   FD: failure detector of type ◇P
 3: procedure R-PROPOSED(v)
 4:   variables:
 5:     FlagR[1 . . . n] ∈ {true, false}ⁿ; ∀i, FlagR[i] is initialized to false
 6:     stop: boolean; stop is initialized to false
 7:     u: integer
 8:   begin
 9:     u ← consensus(v) with FD
10:     R-DECIDE(u)
11:     repeat
12:       send ⟨Decide, u⟩ to every process, except {p} ∪ FD
13:       for all i ∈ Π \ (FD ∪ {p}), FlagR[i] = false do
14:         if (receive ⟨Decide, u⟩ from i) ∨ (receive ⟨Start⟩ from i) then
15:           FlagR[i] ← true
16:         end if
17:       end for
18:       if ∀q ∈ Π \ (FD ∪ {p}), FlagR[q] = true then
19:         stop ← true
20:       end if
21:     until stop
22:     for all i do
23:       FlagR[i] ← false
24:     end for
25:     stop ← false
26:     repeat
27:       send ⟨Start⟩ to every process, except {p} ∪ FD
28:       for all i ∈ Π \ (FD ∪ {p}), FlagR[i] = false do
29:         if (receive ⟨Start⟩ from i) ∨ (receive ⟨R-1, W⟩ from i) then
30:           FlagR[i] ← true
31:         end if
32:       end for
33:       if ∀q ∈ Π \ (FD ∪ {p}), FlagR[q] = true then
34:         stop ← true
35:       end if
36:     until stop
37:   end
38: end procedure

Fig. 6. Algorithm RCP, Repeated Consensus with ◇P. Part 2.

Note also that if the consensus function is executed with ◇P, then there is no need to send ⟨R-x, −⟩ again in rounds r > x. We have rewritten the consensus function to take these facts into account, but the behaviour remains the same.

Theorem 5. Algorithm RCP (Figures 5 and 6) is a Repeated Consensus algorithm in F with ◇P.

Corollary 3. ◇P is sufficient for solving Repeated Consensus in F.
Contrary to these results in system F, in system I we have the same weakest failure detector for both the consensus problem and the repeated consensus problem:

Proposition 2. In system I, if there is an algorithm A with failure detector D solving Consensus, then there exists an algorithm solving Repeated Consensus with D.

References
1. Guerraoui, R., Schiper, A.: The generic consensus service. IEEE Transactions on Software Engineering 27(1), 29–41 (2001)
2. Gafni, E., Lamport, L.: Disk paxos. Distributed Computing 16(1), 1–20 (2003)
3. Fischer, M.J., Lynch, N.A., Paterson, M.: Impossibility of distributed consensus with one faulty process. Journal of the ACM 32(2), 374–382 (1985)
4. Chor, B., Coan, B.A.: A simple and efficient randomized byzantine agreement algorithm. IEEE Trans. Software Eng. 11(6), 531–539 (1985)
5. Dolev, D., Dwork, C., Stockmeyer, L.J.: On the minimal synchronism needed for distributed consensus. Journal of the ACM 34(1), 77–97 (1987)
6. Dwork, C., Lynch, N.A., Stockmeyer, L.J.: Consensus in the presence of partial synchrony. Journal of the ACM 35(2), 288–323 (1988)
7. Chandra, T.D., Toueg, S.: Unreliable failure detectors for reliable distributed systems. Journal of the ACM 43(2), 225–267 (1996)
8. Chandra, T.D., Hadzilacos, V., Toueg, S.: The weakest failure detector for solving consensus. Journal of the ACM 43(4), 685–722 (1996)
9. Delporte-Gallet, C., Fauconnier, H., Guerraoui, R.: Shared memory vs message passing. Technical report, LPD-REPORT-2003-001 (2003)
10. Eisler, J., Hadzilacos, V., Toueg, S.: The weakest failure detector to solve nonuniform consensus. Distributed Computing 19(4), 335–359 (2007)
11. Delporte-Gallet, C., Fauconnier, H., Guerraoui, R., Hadzilacos, V., Kouznetsov, P., Toueg, S.: The weakest failure detectors to solve certain fundamental problems in distributed computing. In: Twenty-Third Annual ACM Symposium on Principles of Distributed Computing (PODC 2004), pp. 338–346 (2004)
12. Aguilera, M.K., Toueg, S., Deianov, B.: Revisiting the weakest failure detector for uniform reliable broadcast. In: Jayanti, P. (ed.) DISC 1999. LNCS, vol. 1693, pp. 13–33. Springer, Heidelberg (1999)
13. Halpern, J.Y., Ricciardi, A.: A knowledge-theoretic analysis of uniform distributed coordination and failure detectors. In: Eighteenth Annual ACM Symposium on Principles of Distributed Computing (PODC 1999), pp. 73–82 (1999)
14. Delporte-Gallet, C., Fauconnier, H., Guerraoui, R., Kouznetsov, P.: Mutual exclusion in asynchronous systems with failure detectors. Journal of Parallel and Distributed Computing 65(4), 492–505 (2005)
15. Guerraoui, R., Kapalka, M., Kouznetsov, P.: The weakest failure detectors to boost obstruction-freedom. In: Dolev, S. (ed.) DISC 2006. LNCS, vol. 4167, pp. 399–412. Springer, Heidelberg (2006)
16. Raynal, M., Travers, C.: In search of the holy grail: Looking for the weakest failure detector for wait-free set agreement. In: Shvartsman, M.M.A.A. (ed.) OPODIS 2006. LNCS, vol. 4305, pp. 3–19. Springer, Heidelberg (2006)
17. Zielinski, P.: Anti-omega: the weakest failure detector for set agreement. Technical Report UCAM-CL-TR-694, Computer Laboratory, University of Cambridge, Cambridge, UK (July 2007)
18. Lynch, N.A., Mansour, Y., Fekete, A.: Data link layer: Two impossibility results. In: Symposium on Principles of Distributed Computing, pp. 149–170 (1988)
19. Bazzi, R.A., Neiger, G.: Simulating crash failures with many faulty processors (extended abstract). In: Segall, A., Zaks, S. (eds.) WDAG 1992. LNCS, vol. 647, pp. 166–184. Springer, Heidelberg (1992)
20. Delporte-Gallet, C., Devismes, S., Fauconnier, H., Petit, F., Toueg, S.: With finite memory consensus is easier than reliable broadcast. Technical Report hal-00325470, HAL (October 2008)

21. Cavin, D., Sasson, Y., Schiper, A.: Consensus with unknown participants or fundamental self-organization. In: Nikolaidis, I., Barbeau, M., Kranakis, E. (eds.) ADHOC-NOW 2004. LNCS, vol. 3158, pp. 135–148. Springer, Heidelberg (2004)
22. Greve, F., Tixeuil, S.: Knowledge connectivity vs. synchrony requirements for fault-tolerant agreement in unknown networks. In: DSN, pp. 82–91. IEEE Computer Society, Los Alamitos (2007)
23. Fernández, A., Jimenez, E., Raynal, M.: Eventual leader election with weak assumptions on initial knowledge, communication reliability, and synchrony. In: DSN, pp. 166–178. IEEE Computer Society, Los Alamitos (2006)
24. Chandra, T.D., Toueg, S.: Unreliable failure detectors for asynchronous systems (preliminary version). In: 10th Annual ACM Symposium on Principles of Distributed Computing (PODC 1991), pp. 325–340 (1991)
25. Delporte-Gallet, C., Fauconnier, H., Guerraoui, R.: A realistic look at failure detectors. In: DSN, pp. 345–353. IEEE Computer Society, Los Alamitos (2002)
26. Hadzilacos, V., Toueg, S.: A modular approach to fault-tolerant broadcasts and related problems. Technical Report TR 94-1425, Department of Computer Science, Cornell University (1994)
27. Bartlett, K.A., Scantlebury, R.A., Wilkinson, P.T.: A note on reliable full-duplex transmission over half-duplex links. Journal of the ACM 12, 260–261 (1969)
28. Stenning, V.: A data transfer protocol. Computer Networks 1, 99–110 (1976)

Group Renaming

Yehuda Afek¹, Iftah Gamzu¹,*, Irit Levy¹, Michael Merritt², and Gadi Taubenfeld³

¹ School of Computer Science, Tel-Aviv University, Tel-Aviv 69978, Israel
  {afek,iftgam,levyirit}@tau.ac.il
² AT&T Labs, 180 Park Ave., Florham Park, NJ 07932, USA
  [email protected]
³ The Interdisciplinary Center, P.O. Box 167, Herzliya 46150, Israel
  [email protected]
Abstract. We study the group renaming task, which is a natural generalization of the renaming task. An instance of this task consists of n processors, partitioned into m groups, each of at most g processors. Each processor knows the name of its group, which is in {1, . . . , M}. The task of each processor is to choose a new name for its group such that processors from different groups choose different new names from {1, . . . , ℓ}, where ℓ < M. We consider two variants of the problem: a tight variant, in which processors of the same group must choose the same new group name, and a loose variant, in which processors from the same group may choose different names. Our findings can be briefly summarized as follows:
1. We present an algorithm that solves the tight variant of the problem with ℓ = 2m − 1 in a system consisting of g-consensus objects and atomic read/write registers. In addition, we prove that it is impossible to solve this problem in a system having only (g − 1)-consensus objects and atomic read/write registers.
2. We devise an algorithm for the loose variant of the problem that only uses atomic read/write registers, and has ℓ = 3n − √n − 1. The algorithm also guarantees that the number of different new group names chosen by processors from the same group is at most min{g, 2m, 2√n}. Furthermore, we consider the special case when the groups are uniform in size and show that our algorithm is self-adjusting to have ℓ = m(m + 1)/2, when m < √n, and ℓ = 3n/2 + m − √n/2 − 1, otherwise.
1 Introduction

1.1 The Group Renaming Problem

We investigate the group renaming task, which generalizes the well-known renaming task [3]. In the original renaming task, each processor starts with a unique identifier taken from a large domain, and the goal of each processor is to select a new unique identifier from a smaller range. Such an identifier can be used, for example,

* Supported by the Binational Science Foundation, by the Israel Science Foundation, and by the European Commission under the Integrated Project QAP funded by the IST directorate as Contract Number 015848.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 58–72, 2008.
© Springer-Verlag Berlin Heidelberg 2008

to mark a memory slot in which the processor may publish information in its possession. In the group renaming task, groups of processors may hold some information which they would like to publish, preferably using a common memory slot for each group. An additional motivation for studying the group version of the problem is to further our understanding of the inherent difficulties in solving tasks with respect to groups [10].
More formally, an instance of the group renaming task consists of n processors partitioned into m groups, each of which consists of at most g processors. Each processor has a group name taken from some large name space [M] = {1, . . . , M}, representing the group that the processor affiliates with. In addition, every processor has a unique identifier taken from [N]. The objective of each processor is to choose a new group name from [ℓ], where ℓ < M. The collection of new group names selected by the processors must satisfy the uniqueness property, meaning that any two processors from different groups choose distinct new group names. We consider two variants of the problem:
– a tight variant, in which, in addition to satisfying the uniqueness property, processors of the same group must choose the same new group name (this requirement is called the consistency property), and
– a loose variant, in which processors from the same group may choose different names, rather than a single one, as long as no two processors from different groups choose the same new name.
1.2 Summary of Results

We present a wait-free algorithm that solves the tight variant of the problem with ℓ = 2m − 1 in a system equipped with g-consensus objects and atomic read/write registers. This algorithm extends the upper bound result of Attiya et al. [3] for g = 1. On the lower bound side, we show that there is no wait-free implementation of tight group renaming in a system equipped with (g − 1)-consensus objects and atomic read/write registers. In particular, this result implies that there is no wait-free implementation of tight group renaming using only atomic read/write registers for g ≥ 2.

We then restrict our attention to shared memory systems which support only atomic read/write registers and study the loose variant. We develop a self-adjusting algorithm, namely, an algorithm that achieves distinctive performance guarantees conditioned on the number of groups and processors. In the worst case, this algorithm has ℓ = 3n − √n − 1, while guaranteeing that the number of different new group names chosen by processors from the same group is at most min{g, 2m, 2√n}. It seems worthy to note that the algorithm is built around a filtering technique that overcomes scenarios in which both the size of the maximal group and the number of groups are large, i.e., g = Ω(n) and m = Ω(n). Essentially, such a scenario arises when there are Ω(n) groups containing only a few members and a few groups containing Ω(n) members.

We also consider the special case when the groups are uniform in size, and refine the analysis of our loose group renaming algorithm. Notably, we demonstrate that ℓ = m(m + 1)/2, when m < √n, and ℓ = 3n/2 + m − √n/2 − 1, otherwise. This last result settles, to some extent, an open question posed by Gafni [10].

60    Y. Afek et al.
1.3 Related Work

Group solvability was first introduced and investigated in [10]. The renaming problem was first solved for message-passing systems [3], and then for shared memory systems using atomic registers [6]. Both these papers present one-shot algorithms (i.e., solutions that can be used only once). In [8] the first long-lived renaming algorithm was presented: the ℓ-assignment algorithm of [8] can be used as an optimal long-lived (2n − 1)-renaming algorithm with exponential step complexity. Several of the many papers on renaming using atomic registers are [1,2,4,11,14,15]. Other references are mentioned later in the paper.
2 Model and Definitions

Our model of computation consists of an asynchronous collection of n processors communicating via shared objects. Each object has a type, which defines the set of operations that the object supports. Each object also has a sequential specification that specifies how the object behaves when these operations are applied sequentially. Asynchrony means that there are no assumptions on the relative speeds of the processors.

Various systems differ in the level of atomicity that is supported. Atomic (or indivisible) operations are defined as operations whose execution is not interfered with by other concurrent activities. This definition of atomicity is too restrictive, and it is safe to relax it by assuming that processors can try to access the object at the same time; however, although operations of concurrent processors may overlap, each operation should appear to take effect instantaneously. In particular, operations that do not overlap should take effect in their real-time order. This type of correctness requirement is called linearizability [13].
We will always assume that the system supports atomic registers, which are shared objects that support atomic read and write operations. In addition, the system may also support forms of atomicity which are stronger than atomic reads and writes. One specific atomic object that will play an important role in our investigation is the consensus object. A consensus object o supports one operation, o.propose(v), satisfying:
1. Agreement. In any run, the o.propose() operation returns the same value, called the consensus value, to every processor that invokes it.
2. Validity. In any run, if the consensus value is v, then some processor invoked o.propose(v).
Throughout the paper, we will use the notation g-consensus to denote a consensus object for g processors.
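For intuition only, the sequential specification of a consensus object can be mimicked with a lock: the first propose() fixes the consensus value (agreement), and the value returned was proposed by someone (validity). This sketch is ours, and because it relies on a lock it is not wait-free, so it is not a substitute for a real g-consensus object in the paper's model.

```python
import threading

class Consensus:
    """Lock-based sketch of the sequential specification of consensus."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
        self._decided = False

    def propose(self, v):
        with self._lock:
            if not self._decided:           # first proposal wins
                self._value, self._decided = v, True
            return self._value              # everyone learns the same value

c = Consensus()
assert c.propose(4) == 4    # first proposer's value becomes the consensus value
assert c.propose(9) == 4    # later proposals return the same value
```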
An object is wait-free if it guarantees that every processor is always able to complete
its pending operation in a finite number of its own steps regardless of the execution
speed of other processors (does not admit starvation). Similarly, an implementation or
an algorithm is wait-free, if every processor makes a decision within a finite number of
its own steps. We will focus only on wait-free objects, implementations or algorithms.
Next, we define two notions for measuring the relative computational power of shared
objects.

– The consensus number of an object of type o is the largest n for which it is possible to implement an n-consensus object in a wait-free manner, using any number of objects of type o and any number of atomic registers. If no largest n exists, the consensus number of o is infinite.
– The consensus hierarchy (also called the wait-free hierarchy) is an infinite hierarchy of objects such that the objects at level i of the hierarchy are exactly those objects with consensus number i.

It has been shown in [12] that in the consensus hierarchy, for any positive i, in a system with i processors: (1) no object at level less than i together with atomic registers can implement any object at level i; and (2) each object at level i together with atomic registers can implement any object at level i or at a lower level, in a system with i processors. Classifying objects by their consensus numbers is a powerful technique for understanding the relative power of shared objects.
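As a small illustration of fact (2) at level 2 (our own example, not taken from the paper): a test-and-set bit, here simulated with a lock, together with two single-writer registers solves consensus for two processes. The winner of the test-and-set decides its own value; the loser adopts the winner's published value.

```python
import threading

class TestAndSet:
    """Simulated test-and-set bit (a level-2 object in the hierarchy)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._set = False
    def test_and_set(self):
        with self._lock:
            old, self._set = self._set, True
            return old

def two_consensus():
    reg = [None, None]            # reg[i]: value published by process i
    tas = TestAndSet()
    def propose(i, v):
        reg[i] = v                # publish own value first
        if not tas.test_and_set():
            return v              # winner decides its own value
        return reg[1 - i]         # loser adopts the winner's value
    return propose

propose = two_consensus()
a = propose(0, 'x')
b = propose(1, 'y')
assert a == b == 'x'              # sequential run: the first proposer wins
```

The order "publish, then test-and-set" matters: it guarantees that by the time the loser reads reg[1 − i], the winner's value is already there.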
Finally, for simplicity, when referring to the group renaming problem, we will assume that m, the number of groups, is greater than or equal to 2.

3 Tight Group Renaming

3.1 An Upper Bound

In what follows, we present a wait-free algorithm that solves tight group renaming using g-consensus objects and atomic registers. Essentially, we prove the following theorem.

Theorem 1. For any g ≥ 1, there is a wait-free implementation of tight group renaming with ℓ = 2m − 1 in a system consisting of g-consensus objects and atomic registers.

Corollary 1. The consensus number of tight group renaming is at most g.

Our implementation, i.e., Algorithm 1, is inspired by the renaming algorithm of Attiya et al. [3], which achieves an optimal new name space size of 2n − 1. In this renaming algorithm, each processor iteratively picks some name and suggests it as its new name until an agreement on the collection of new names is reached. The communication between the processors is done using an atomic snapshot object. Our algorithm deviates from this scheme by adding an agreement step between processors of the same group, implemented using g-consensus objects. Intuitively, this agreement step ensures that all the processors of any group will follow the decisions made by the fastest processor in the group. Consequently, the selection of the new group names can be determined between the representatives of the groups, i.e., the fastest processors. This enables us to obtain the claimed new name space size of 2m − 1. It is worthy to note that the fastest processor of some group may change over time, and hence our agreement step implements a "follow the (current) group leader" strategy. We believe that this concept may be of independent interest. Note that the group name of processor i is designated by GIDi, and the overall number of iterations executed is marked by I.
We now turn to establish Theorem 1. Essentially, this is achieved by demonstrating that Algorithm 1 maintains the consistency and uniqueness properties (Lemmas 2 and 3), that it has ℓ = 2m − 1 (Lemma 4), and that it terminates after a finite number of steps (Lemma 5). Let us denote the value of p written to the snapshot array (see line 4) in some iteration as the proposal value of the underlying processor in that iteration.

Algorithm 1. Tight group renaming algorithm: code for processor i ∈ [N].

In shared memory:
  SS[1, . . . , N] array of swmr registers, initially ⊥.
  HIS[1, . . . , N][1, . . . , I][1, . . . , N] array of swmr registers, initially ⊥.
  CON[1, . . . , M][1, . . . , I] array of g-consensus objects.
 1: p ← 1
 2: k ← 1
 3: while true do
 4:   SS[i] ← ⟨GIDi, p, k⟩
 5:   HIS[i][k][1, . . . , N] ← Snapshot(SS)
      ▷ Agree on w, the winner of group GIDi in iteration k, and import its snapshot:
 6:   w ← CON[GIDi][k].Compete(i)
 7:   (⟨GID1, p1, k1⟩, . . . , ⟨GIDN, pN, kN⟩) ← HIS[w][k][1, . . . , N]
      ▷ Check if pw, the proposal of w, can be chosen as the new name of group GIDi:
 8:   P ← {pj : j ∈ [N] has GIDj ≠ GIDw and kj = max{kq : q ∈ [N], GIDq = GIDj}}
 9:   if pw ∈ P then
10:     r ← the rank of GIDw in {GIDj ≠ ⊥ : j ∈ [N]}
11:     p ← the r-th integer not in P
12:   else return pw
13:   end if
14:   k ← k + 1
15: end while
Lemma 1. The proposal values of processors from the same group are identical in any iteration.

Proof. Consider some group. One can easily verify that the processors of that group, and in fact all the processors, have an identical proposal value of 1 in the first iteration. Thus, let us consider some iteration k > 1 and prove that all these processors have an identical proposal value. Essentially, this is done by claiming that all the processors update their value of p in the preceding iteration in an identical manner. For this purpose, notice that all the processors compete for the same g-consensus object in that iteration, and then import the same snapshot of the processor that won this consensus (see lines 6–7). Consequently, they execute the code in lines 8–13 in an identical manner. In particular, this guarantees that the update of p in line 11 is done exactly alike.


Lemma 2. All the processors of the same group choose an identical new group name.

Proof. The proof of this lemma follows the same line of argumentation presented in the proof of Lemma 1. Again, the key observation is that in each iteration, all the processors of some group compete for the same g-consensus object, and then import the same snapshot. Since the decisions made by the processors in lines 8–13 are solely based on this snapshot, it follows that they are identical. In particular, this ensures that once a processor chooses a new group name, all the other processors will follow its lead and choose the same name.

Lemma 3. No two processors of different groups choose the same new group name.

Proof. Recall that we know, by Lemma 2, that all the processors of the same group choose an identical new group name. Hence, it is sufficient to prove that no two groups select the same new name. Assume, by way of contradiction, that this is not the case, namely, that there are two distinct groups G and G′ that select the same new group name p*. Let k and k′ be the iteration numbers in which the decisions on the new names of G and G′ are made, and let w ∈ G and w′ ∈ G′ be the corresponding processors that won the g-consensus objects in those iterations. Now, consider the snapshot (⟨GID1, p1, k1⟩, . . . , ⟨GIDN, pN, kN⟩) taken by w in its k-th iteration. One can easily validate that pw = p*, since w writes its proposed value before taking a snapshot. Similarly, it is clear that pw′ = p* in the snapshot (⟨GID′1, p′1, k′1⟩, . . . , ⟨GID′N, p′N, k′N⟩) taken by w′ in its k′-th iteration. By the linearizability property of the atomic snapshot object, and without loss of generality, we may assume that the snapshot of w was taken before the snapshot of w′. Consequently, w′ must have captured the proposal value of w in its snapshot, i.e., p′w = p*. This implies that p* appeared in the set P of w′. However, this violates the fact that w′ reached the decision step in line 12, a contradiction. ⊓⊔
Lemma 4. All the new group names are from the range [ℓ], where ℓ = 2m − 1.

Proof. In what follows, we prove that the proposal value of any processor in any iteration is in the range [ℓ]. Clearly, this proves the lemma, as the chosen name of any group is a proposal value of some processor. Consider some processor. It is clear that its first-iteration proposal value is in the range [ℓ]. Thus, let us consider some iteration k > 1 and prove that its proposal value is at most 2m − 1. Essentially, this is done by bounding the value of p calculated in line 11 of the preceding iteration. For this purpose, we first claim that the set P consists of at most m − 1 values. Notice that P holds the proposal values of processors from at most m − 1 groups. Furthermore, observe that for each of those groups, it holds the proposal values of processors having the same maximal iteration counter. This implies, in conjunction with Lemma 1, that for each of those groups, the proposal values of the corresponding processors are identical. Consequently, P consists of at most m − 1 distinct values. Now, one can easily verify that the rank of every group calculated in line 10 is at most m. Therefore, the new value of p is no more than 2m − 1.
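The quantity computed in line 11 of Algorithm 1, "the r-th integer not in P", can be checked numerically; rth_free is our helper name, not the paper's. The second assertion exercises the worst case of Lemma 4, where |P| = m − 1 and the rank r is at most m, so the chosen value never exceeds 2m − 1.

```python
def rth_free(r, P):
    """Return the r-th positive integer that does not belong to the set P."""
    x = 0
    while r:
        x += 1
        if x not in P:
            r -= 1
    return x

assert rth_free(3, {1, 4}) == 5          # free integers are 2, 3, 5, ...
assert all(rth_free(r, set(range(1, m))) <= 2 * m - 1
           for m in range(2, 8)          # worst case: P = {1, ..., m-1}
           for r in range(1, m + 1))
```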


Lemma 5. Any processor either takes a finite number of steps or chooses a new group name.

Proof. The proof of this lemma is a natural generalization of the termination proof of the renaming algorithm (see, e.g., [5, Sec. 16.3]). Thus, we defer it to the final version of the paper.


3.2 An Impossibility Result

In Appendix A.1, we provide an FLP-style proof of the following theorem.

Theorem 2. For any g ≥ 2, it is impossible to wait-free implement tight group renaming in a system having (g − 1)-consensus objects and atomic registers.

In particular, Theorem 2 implies that there is no wait-free implementation of tight group renaming, even when g = 2, using only atomic registers.

4 Loose Group Renaming

In this section, we restrict our attention to shared memory systems which support only atomic registers. By Theorem 2, we know that it is impossible to solve the tight group renaming problem unless we relax our goal. Accordingly, we consider a variant in which processors from the same group may choose different new group names, as long as the uniqueness property is maintained. The objective in this case is to minimize both the inner scope size, which is the upper bound on the number of new group names selected by processors from the same group, and the outer scope size, which is the size of the range of new group names. We use the notation (α, β)-group renaming algorithm to designate an algorithm yielding an inner scope of α and an outer scope of β.
4.1 The Non-uniform Case

In the following we consider the task when group sizes are not uniform. We present a group renaming algorithm having a worst-case inner scope size of min{g, 2m, 2√n} and a worst-case outer scope size of 3n − √n − 1. The algorithm is self-adjusting with respect to the input properties. Namely, it achieves better performance guarantees conditioned on the number of groups and processors. It seems worthy to emphasize that the performance guarantees of our algorithm are based not only on g and m, but also on √n, which is crucial in several cases.

The algorithm is built upon a consolidation of two algorithms, denoted as Algorithm 2 and Algorithm 3. Both algorithms are adaptations of previously known renaming methods for groups (see, e.g., [10]). Algorithm 2, which efficiently handles small-sized groups, is a (g, n + m − 1)-group renaming algorithm, while Algorithm 3, which efficiently attends to a small number of groups, is a (min{m, g}, m(m + 1)/2)-group renaming algorithm.

Theorem 3. Algorithm 2 is a wait-free (g, n + m − 1)-group renaming algorithm.

Proof. The algorithm is very similar to the original renaming algorithm of Attiya et al. [3]. Whereas there processors select a new name by computing the rank of their original large id among the ids of participating processors, here processors consider the rank of their original group name among the already published (participating) original group names. One can prove that Algorithm 2 maintains the uniqueness property and terminates after a finite number of steps by applying nearly identical arguments to those used in the analogous proofs of the underlying renaming method (see, e.g., [5, Sec. 16.3]). Therefore, we only focus on analyzing the size of the resulting new name-spaces. The inner scope size of the algorithm is trivially g, since there are at most g processors in any group. We turn to bound the outer scope size. This is done by demonstrating that the proposal value pi of any processor i in any iteration is at most n + m − 1. Clearly, pi satisfies this requirement in the first iteration, as its value is 1. Hence, let us consider some other iteration and bound its proposal value. This is accomplished by bounding the value of pi calculated in line 7 of the preceding iteration. For this purpose, notice that the rank of every group calculated in line 6 is at most m. Furthermore, there are at most n − 1 values proposed by other processors. Thus, the new value of pi is at most n + m − 1.


Algorithm 2. Code for processor i ∈ [N].

In shared memory: SS[1, . . . , N] array of swmr registers, initially ⊥.
 1: pi ← 1
 2: while true do
 3:   SS[i] ← ⟨GIDi, pi⟩
 4:   (⟨GID1, p1⟩, . . . , ⟨GIDN, pN⟩) ← Snapshot(SS)
 5:   if pi = pj for some j ∈ [N] having GIDj ≠ GIDi then
 6:     r ← the rank of GIDi in {GIDk ≠ ⊥ : k ∈ [N]}
 7:     pi ← the r-th integer not in {pk ≠ ⊥ : k ∈ [N] \ {i}}
 8:   else return pi
 9:   end if
10: end while
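A sequential round-robin simulation of Algorithm 2 (our own sketch; the snapshot is trivially atomic here because each simulated step runs to completion) illustrates the uniqueness property across groups and the n + m − 1 outer scope bound on a small instance.

```python
import itertools

def simulate(gids):
    """Round-robin simulation of Algorithm 2; gids[i] is i's group name."""
    N = len(gids)
    SS = [None] * N                       # (gid, proposal) per processor
    p = [1] * N
    done = [False] * N
    while not all(done):
        for i in range(N):
            if done[i]:
                continue
            SS[i] = (gids[i], p[i])       # line 3: publish proposal
            snap = list(SS)               # line 4: atomic snapshot
            clash = any(s and s[0] != gids[i] and s[1] == p[i] for s in snap)
            if not clash:
                done[i] = True            # line 8: return p[i]
                continue
            groups = sorted({s[0] for s in snap if s})
            r = groups.index(gids[i]) + 1 # line 6: rank of own group
            taken = {s[1] for j, s in enumerate(snap) if s and j != i}
            x, k = 0, r                   # line 7: r-th integer not taken
            while k:
                x += 1
                if x not in taken:
                    k -= 1
            p[i] = x
    return p

gids = [10, 10, 20, 30]                   # n = 4 processors, m = 3 groups
names = simulate(gids)
for i, j in itertools.combinations(range(4), 2):
    if gids[i] != gids[j]:
        assert names[i] != names[j]       # uniqueness across groups
assert max(names) <= 4 + 3 - 1            # outer scope <= n + m - 1
```

A real execution would interleave the three sub-steps of each iteration arbitrarily; the round-robin schedule is only one of the possible runs.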

Theorem 4. Algorithm 3 is a wait-free (min{m, g}, m(m + 1)/2)-group renaming algorithm.

Proof. In this algorithm each processor records its participation by publishing its id and its original group name. Each processor then takes a snapshot of the memory and returns as its new group name the size of the snapshot it obtained, concatenated with the rank of its group id among the group ids recorded in the snapshot. One can prove that Algorithm 3 supports the uniqueness property by applying nearly identical arguments to those used in the corresponding proof of the underlying renaming method (see, e.g., [7, Sec. 6]). Moreover, it is clear that the algorithm terminates after a finite number of steps. Thus, we only focus on analyzing the performance properties of the algorithm. We begin with the inner scope size. In particular, we prove a bound of m, noting that a bound of g is trivial since there are at most g processors in any group. Consider the case that two processors of the same group obtain the same number of observable groups m̂ in line 3. We argue that they also choose the same new group name. For this purpose, notice that the set of GIDs that reside in SS may only grow during any execution sequence. Hence, if two processors have an identical m̂ then their snapshots hold the same set of GIDs. Consequently, if those processors are of the same group then their group rank calculated in line 4 is also the same, and therefore the new names they select are identical. This implies that the number of new group names selected by processors from the same group is bounded by the maximal value of m̂, which is clearly never greater than m. We continue by bounding the outer scope size. As already noted, m̂ ≤ m, and the rank of every group is at most m. Thus, the maximal group name is no more than m(m − 1)/2 + m.



Algorithm 3. Code for processor i ∈ [N].

In shared memory: SS[1, . . . , N] array of swmr registers, initially ⊥.
1: SS[i] ← GIDi
2: (GID1, . . . , GIDN) ← Snapshot(SS)
3: m̂ ← the number of distinct GIDs in {GIDj ≠ ⊥ : j ∈ [N]}
4: r ← the rank of GIDi in {GIDj ≠ ⊥ : j ∈ [N]}
5: return m̂(m̂ − 1)/2 + r
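The return value in line 5 encodes the pair (m̂, r), with 1 ≤ r ≤ m̂, injectively: the names produced for consecutive values of m̂ occupy disjoint ranges of the triangular numbers, and the largest name for m̂ ≤ m is m(m + 1)/2. A quick numeric check (our own, with new_name as the helper name):

```python
def new_name(m_hat, r):
    """Name chosen in line 5 of Algorithm 3 for observed pair (m_hat, r)."""
    assert 1 <= r <= m_hat
    return m_hat * (m_hat - 1) // 2 + r

# All pairs with m_hat up to 5 map to pairwise distinct names 1..15:
names = {new_name(mh, r) for mh in range(1, 6) for r in range(1, mh + 1)}
assert len(names) == 5 * 6 // 2        # 15 pairs, 15 distinct names
assert max(names) == 5 * 6 // 2        # outer scope m(m+1)/2 with m = 5
```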

We are now ready to present our self-adjusting loose group renaming algorithm. The algorithm has its roots in the natural approach that applies the best response with respect to the instance under consideration. For example, it is easy to see that Algorithm 3 outperforms Algorithm 2 with respect to the inner scope size, for any instance. In addition, one can verify that when m < √n, Algorithm 3 has an outer scope size of at most n/2 − √n/2, whereas Algorithm 2 has an outer scope size of at least n. Hence, given an instance having m < √n, the best response would be to execute Algorithm 3.

Unfortunately, a straightforward application of this approach has several difficulties. One immediate difficulty concerns the implementation, since none of the processors has prior knowledge of the real values of m or g. Our algorithm bypasses this difficulty by maintaining an estimation of these parameters using an atomic snapshot object. Another difficulty concerns performance issues. Specifically, both algorithms have poor inner scope size guarantees for instances which simultaneously satisfy g = Ω(n) and m = Ω(n). One concrete example having g = n/2 and m = n/2 + 1 consists of a single group having n/2 members and n/2 singleton groups. In this case, both algorithms have an inner scope size guarantee of n/2. We overcome this difficulty by sensibly combining the algorithms, thereby yielding an inner scope size guarantee of 2√n for these hard cases. The key observation utilized in this context is that if there are many groups then most of them must be small. Consequently, by filtering out the small-sized groups, we are left with a small number of large groups that we can handle efficiently. Note that Algorithm 4 employs Algorithm 3 as a sub-procedure in two cases (see lines 6 and 12). It is assumed that the shared memory space used by each application of the algorithm is distinct.
Theorem 5. Algorithm 4 is a group renaming algorithm having a worst-case inner scope size of min{g, 2m, 2√n} and a worst-case outer scope size of 3n + √n - 1.
Proof. We begin by establishing the correctness of the algorithm. For this purpose, we demonstrate that it maintains the uniqueness property and terminates after a finite number of steps. One can easily validate that the termination property holds since both Algorithm 2 and Algorithm 3 terminate after a finite number of steps. It is also easy to verify that the uniqueness property is maintained. This follows by recalling that both Algorithm 2 and Algorithm 3 maintain the uniqueness property, and noticing that each case of the if statement (see lines 5-14) utilizes a distinct set of new names. To be precise, one should observe that any processor that executes Algorithm 3 in line 6 is assigned a new name in the range {1, . . . , n/2 + √n/2}, any processor that executes

Group Renaming

67

Algorithm 2 in line 9 is assigned a new name in the range {n/2 + √n/2 + 1, . . . , 5n/2 + √n/2 - 1}, and any processor that executes Algorithm 3 in line 12 is assigned a new name whose value is at least 5n/2 + √n/2. The first claim results by the outer scope properties of Algorithm 3 and the fact that processors from fewer than √n groups may execute this algorithm. The second argument follows by the outer scope properties of Algorithm 2, combined with the observation that m ≤ n, and the fact that the value of the name returned by the algorithm is increased by n/2 + √n/2 in line 10. Finally, the last claim holds since Algorithm 3 is guaranteed to attain a positive-valued integer name, and the value of this name is increased by 5n/2 + √n/2 - 1 in line 13.
Algorithm 4. Adjusting group renaming algorithm: code for processor i ∈ [N].

In shared memory: SS[1, . . . , N] array of swmr registers, initially ⊥.
1: SS[i] ← GIDi
2: (GID1, . . . , GIDN) ← Snapshot(SS)
3: m̂ ← the number of distinct GIDs in {GIDj ≠ ⊥ : j ∈ [N]}
4: ĝ ← the number of processors j ∈ [N] having GIDj = GIDi
5: if m̂ < √n then
6:    x ← the outcome of Algorithm 3 (using shared memory SS1[1, . . . , N])
7:    return x
8: else if ĝ ≤ √n then
9:    x ← the outcome of Algorithm 2 (using shared memory SS2[1, . . . , N])
10:   return x + n/2 + √n/2
11: else
12:   x ← the outcome of Algorithm 3 (using shared memory SS3[1, . . . , N])
13:   return x + 5n/2 + √n/2 - 1
14: end if
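The dispatch of lines 5-14 can be sketched as follows. This is a sketch under assumptions: `alg2_name` and `alg3_name` are hypothetical stand-ins for the outcomes of Algorithms 2 and 3 on their private memory areas, and the offsets mirror the shifts of lines 10 and 13, which we read as n/2 + √n/2 and 5n/2 + √n/2 - 1.

```python
import math

def algorithm4_name(n, m_hat, g_hat, alg2_name, alg3_name):
    """Sketch of the case dispatch in lines 5-14.

    n bounds the number of participating processors; m_hat and g_hat
    are the snapshot-based estimates of lines 3-4; alg2_name and
    alg3_name stand in for the sub-algorithm outcomes.
    """
    sqrt_n = math.sqrt(n)
    if m_hat < sqrt_n:                               # line 5: few groups seen
        return alg3_name                             # lines 6-7
    elif g_hat <= sqrt_n:                            # line 8: small own group
        return alg2_name + n / 2 + sqrt_n / 2        # lines 9-10, shifted range
    else:                                            # large group among many
        return alg3_name + 5 * n / 2 + sqrt_n / 2 - 1   # lines 12-13
```

The three returned ranges are disjoint, which is what the uniqueness argument in the proof relies on.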

We now turn to establish the performance properties of the algorithm. We demonstrate that it is self-adjusting and has the following (inner scope, outer scope) properties:

    ( min{m, g}, m(m + 1)/2 )           if m < √n
    ( g, 3n/2 + m + √n/2 - 1 )          if m ≥ √n and g ≤ √n
    ( min{g, 2√n}, 3n + √n - 1 )        if m ≥ √n and g > √n

Case I: m < √n. The estimation values always satisfy m̂ ≤ m. Therefore, all the processors execute Algorithm 3 in line 6. The properties of Algorithm 3 guarantee that the inner scope size is min{m, g} and the outer scope size is m(m + 1)/2. Take notice that min{m, g} ≤ min{g, 2m, 2√n} and m(m + 1)/2 ≤ 3n + √n - 1 since m < √n. Thus, the performance properties of the algorithm in this case support the worst case analysis.

Case II: m ≥ √n and g ≤ √n. The estimation values never exceed their real values, namely, m̂ ≤ m and ĝ ≤ g. Consequently, some processors may execute Algorithm 3 in line 6 and some may execute Algorithm 2 in line 9, depending on the concrete execution sequence. The inner scope size guarantee is trivially satisfied since there are at most g processors in each group. Furthermore, one can establish the outer scope size guarantee by simply summing the size of the name space that may be used by Algorithm 3, which is n/2 + √n/2, with the size of the name space that may be used by Algorithm 2, which is n + m - 1. Notice that g ≤ min{g, 2m, 2√n} since g ≤ √n ≤ m, and 3n/2 + m + √n/2 - 1 ≤ 3n + √n - 1 as m ≤ n. Hence, the performance properties of the algorithm in this case support the worst case analysis.

Case III: m ≥ √n and g > √n. Every processor may execute any of the algorithms, depending on the concrete execution sequence. The first observation one should make is that no more than √n new names may be collectively assigned to processors of the same group by Algorithm 3 in line 6 and Algorithm 2 in line 9. Moreover, one should notice that any processor that executes Algorithm 3 in line 12 is part of a group of size greater than √n. Consequently, processors from fewer than √n groups may execute it. This implies, in conjunction with the properties of Algorithm 3, that no more than √n new names may be assigned to each group, and at most n/2 + √n/2 names are assigned by this algorithm. Putting everything together, we attain that the inner scope size is min{g, 2√n} and the outer scope size is 3n + √n - 1. It is easy to see that min{g, 2√n} ≤ min{g, 2m, 2√n} since m ≥ √n, and thus the performance properties of the algorithm in this case also support the worst case analysis.


4.2 The Uniform Case
In what follows, we study the problem when the groups are guaranteed to be uniform in size. We refine the analysis of Algorithm 4 by establishing that it is a loose group renaming algorithm having a worst-case inner scope size of min{m, g}, and an outer scope size of 3n/2 + m + √n/2 - 1. Note that min{m, g} ≤ √n in this case. In particular, we demonstrate that the algorithm is self-adjusting and has the following (inner scope, outer scope) properties:

    ( min{m, g}, m(m + 1)/2 )           if m < √n
    ( g, 3n/2 + m + √n/2 - 1 )          if m ≥ √n

This result settles, to some extent, an open question posed by Gafni [10], which called for a self-adjusting group renaming algorithm that requires at most m(m + 1)/2 names on one extreme, and no more than 2n - 1 names on the other.

The key observation required to establish this refinement is that n = m · g when the groups are uniform in size. Consequently, either m < √n or g ≤ √n. Since the estimation values that each processor sees cannot exceed the corresponding real values, no processor can ever reach the second execution of Algorithm 3 in line 12. Now, the proof of the performance properties follows the same line of argumentation presented in the proof of Theorem 5.
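The dichotomy behind this observation is elementary: since n = m · g, we have min(m, g) ≤ √n, so whenever m ≥ √n it must be that g = n/m ≤ √n. A brute-force check confirms it:

```python
import math

# With uniform groups n = m * g, so min(m, g) <= sqrt(n): whenever
# m >= sqrt(n), necessarily g = n / m <= sqrt(n).
for m in range(1, 50):
    for g in range(1, 50):
        n = m * g
        assert m < math.sqrt(n) or g <= math.sqrt(n)
```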

5 Discussion
This paper has considered and investigated the tight and loose variants of the group renaming problem. Below we discuss a few ways in which our results can be extended. An immediate open question is whether a g-consensus task can be constructed from group renaming tasks for groups of size g, in a system with g processes. Another question is to design an adaptive group renaming algorithm in which a processor is assigned a new group name from the range 1 through k, where k is a constant multiple of the contention (i.e., the number of different active groups) that the processor experiences. We have considered only one-shot tasks (i.e., solutions that can be used only once); it would be interesting to design long-lived group renaming algorithms. We have focused in this work mainly on reducing the new name space as much as possible; it would be interesting to construct algorithms also with low space and time (step) complexities. Finally, the k-set consensus task, a generalization of the consensus task, enables each processor that starts with an input value from some domain to choose some participating processor's input as its output, such that all processors together may choose no more than k distinct output values. It is interesting to find out what type of group renaming task, if any, can be implemented using k-set consensus tasks and registers.

References
1. Afek, Y., Attiya, H., Fouren, A., Stupp, G., Touitou, D.: Long-lived renaming made adaptive. In: Proc. 18th ACM Symp. on Principles of Distributed Computing, pp. 91-103 (May 1999)
2. Afek, Y., Stupp, G., Touitou, D.: Long lived adaptive splitter and applications. Distributed Computing 15(2), 67-86 (2002)
3. Attiya, H., Bar-Noy, A., Dolev, D., Peleg, D., Reischuk, R.: Renaming in an asynchronous environment. J. ACM 37(3), 524-548 (1990)
4. Attiya, H., Fouren, A.: Algorithms adapting to point contention. Journal of the ACM 50(4), 444-468 (2003)
5. Attiya, H., Welch, J.: Distributed Computing: Fundamentals, Simulations and Advanced Topics. John Wiley Interscience, Chichester (2004)
6. Bar-Noy, A., Dolev, D.: Shared memory versus message-passing in an asynchronous distributed environment: a case study. In: Proc. 8th ACM Symp. on Principles of Distributed Computing, pp. 307-318 (1989)
7. Bar-Noy, A., Dolev, D.: A partial equivalence between shared-memory and message-passing in an asynchronous fail-stop distributed environment. Mathematical Systems Theory 26(1), 21-39 (1993)
8. Burns, J., Peterson, G.: The ambiguity of choosing. In: Proc. 8th ACM Symp. on Principles of Distributed Computing, pp. 145-158 (August 1989)
9. Fischer, M.J., Lynch, N.A., Paterson, M.: Impossibility of distributed consensus with one faulty process. J. ACM 32(2), 374-382 (1985)
10. Gafni, E.: Group-solvability. In: Proceedings 18th International Conference on Distributed Computing, pp. 30-40 (2004)
11. Gafni, E., Merritt, M., Taubenfeld, G.: The concurrency hierarchy, and algorithms for unbounded concurrency. In: Proc. 20th ACM Symp. on Principles of Distributed Computing, pp. 161-169 (August 2001)
12. Herlihy, M.: Wait-free synchronization. ACM Transactions on Programming Languages and Systems 13(1), 124-149 (1991)
13. Herlihy, M.P., Wing, J.M.: Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems 12(3), 463-492 (1990)
14. Inoue, M., Umetani, S., Masuzawa, T., Fujiwara, H.: Adaptive long-lived O(k^2)-renaming with O(k^2) steps. In: Welch, J.L. (ed.) DISC 2001. LNCS, vol. 2180, pp. 123-135. Springer, Heidelberg (2001)
15. Moir, M., Anderson, J.H.: Wait-free algorithms for fast, long-lived renaming. Science of Computer Programming 25(1), 1-39 (1995)


A Tight Group Renaming


A.1 An Impossibility Result
In what follows, we establish the proof of Theorem 2. Our impossibility proof follows
the high level FLP-approach employed in the context of the consensus problem (see,
e.g., [12,9]). Namely, we assume the existence of a tight group renaming algorithm,
and then derive a contradiction by constructing a sequential execution in which the
algorithm fails, either because it is inconsistent, or since it runs forever. Prior to delving
into technicalities, we introduce some terminology.
The decision value of a processor is the new group name selected by that processor.
Analogously, the decision value of a group is the new group name selected by all processors of that group. An algorithm state is multivalent with respect to group G if the
decision value of G is not yet fixed, namely, the current execution can be extended to
yield different decision values of G. Otherwise, it is univalent. In particular, an x-valent
state with respect to G is a univalent state with respect to G yielding a decision value of
x. A decision step with respect to G is an execution step that carries the algorithm from
a multivalent state with respect to G to a univalent state with respect to G. A processor is active with respect to an algorithm state if its decision value is still not fixed. An algorithm state is critical with respect to G if it is multivalent with respect to G and any step of any active processor is a decision step with respect to G.
Lemma 6. Every group renaming algorithm admits an input instance whose initial
algorithm state is multivalent with respect to a maximal size group.
Proof. We begin by establishing that every group renaming algorithm admits an input instance whose initial algorithm state is multivalent with respect to some group. Consider some group renaming algorithm, and assume by contradiction that the initial algorithm state is univalent with respect to all groups for every input instance. We argue that all processors implement some function f : [M] → [ℓ] for computing their new group name, where [ℓ] denotes the new group name space. For this purpose, consider some processor whose group name is a ∈ [M]. Notice that this processor may be scheduled to execute a solo run. Let us assume that its decision value in this case is x ∈ [ℓ]. Since the initial algorithm state is univalent with respect to the group of that processor, it follows that in any execution this processor must decide x, regardless of the other groups, their names, and their scheduling. The above-mentioned argument follows by recalling that all processors execute the same algorithm, and noticing that a could have been any initial group name. Now, recall that M > ℓ. This implies that there are at least two group names a1, a2 ∈ [M] such that f(a1) = f(a2). Correspondingly, there are input instances in which two processors from two different groups decide on the same new group name, violating the uniqueness property.
We now turn to prove that every group renaming algorithm admits an input instance whose initial algorithm state is multivalent with respect to a maximal size group. Consider some group renaming algorithm, and suppose its initial algorithm state is multivalent with respect to group G. Namely, there are two execution sequences σ1, σ2 that lead to different decision values of G. Now, if G is maximal in size then we are done. Otherwise, consider the input instance obtained by adding processors to G until it becomes maximal in size. Notice that the execution sequences σ1 and σ2 are valid with respect to the new input instance. In addition, observe that each processor must decide on the same value as in the former instance. This follows by the assumption that none of the processors has prior knowledge about the other processors and groups, and thus each processor cannot distinguish between the two instances. Hence, the initial algorithm state is also multivalent with respect to G in this new instance.


Lemma 7. Every group renaming algorithm admits an input instance for which a critical state with respect to a maximal size group may be reached.
Proof. We prove that every group renaming algorithm which admits an input instance whose initial algorithm state is multivalent with respect to some group may reach a critical state with respect to that group. Notice that having this claim proved, the lemma follows as a consequence of Lemma 6. Consider some group renaming algorithm, and suppose its initial algorithm state is multivalent with respect to group G. Consider the following sequential execution, starting from this state. Initially, some arbitrary processor executes until it reaches a state where its next operation leaves the algorithm in a univalent state with respect to G, or until it terminates and decides on a new group name. Note that the latter case can only happen if the underlying processor is not affiliated with G. Also note that the processor must eventually reach one of the above-mentioned states since the algorithm is wait-free and cannot run forever. Later on, another arbitrary processor executes until it reaches a similar state, and so on. This sequential execution continues until reaching a state in which any step of any active processor is a decision step with respect to G. Again, since the algorithm cannot run forever, it must eventually reach such a state, which is, by definition, critical.


We are now ready to prove the impossibility result.
Proof of Theorem 2. Assume that there is a group renaming algorithm implemented
from atomic registers and r-consensus objects, where r < g. We derive a contradiction
by constructing an infinite sequential execution that keeps such algorithm in a multivalent state with respect to some maximal size group. By Lemma 7, we know that there
is an input instance and a corresponding execution of the algorithm that leads to a critical state s with respect to some group G of size g. Keep in mind that there are at least
g active processors in this critical state since, in particular, all the processors of G are
active. Let p and q be two active processors in the critical state which respectively carry
the algorithm into an x-valent and a y-valent states with respect to G, where x and y
are distinct. We now consider four cases, depending on the nature of the decision steps
taken by the processors:
Case I: One of the processors reads a register. Let us assume without loss of generality that this processor is p. Let s′ be the algorithm state reached if p's read step is immediately followed by q's step, and let s″ be the algorithm state following q's step alone. Notice that s′ and s″ differ only in the internal state of p. Hence, any processor p′ ∈ G, other than p, cannot distinguish between these states. Thus, if it executes a solo run, it must decide on the same value. However, an impossibility follows since s′ is x-valent with respect to G whereas s″ is y-valent. This case is schematically described in Figure 1(a).
Case II: Both processors write to the same register. Let s′ be the algorithm state reached if p's write step is immediately followed by q's write step, and let s″ be the algorithm state following q's write step alone. Observe that in the former scenario q overwrites the value written by p. Hence, s′ and s″ differ only in the internal state of p. Therefore, any processor p′ ∈ G, other than p, cannot distinguish between these states. The impossibility follows identically to Case I.
Case III: Each processor writes to or competes for a distinct register or consensus
object. In what follows, we prove impossibility for the scenario in which both processors write to different registers, noting that impossibility for other scenarios can be
easily established using nearly identical arguments. The algorithm state that results if p's write step is immediately followed by q's write step is identical to the state which results if the write steps occur in the opposite order. This is clearly impossible as one state is x-valent and the other is y-valent. This case is schematically illustrated in Figure 1(b).
Case IV: All active processors compete for the same consensus object. As mentioned above, there are at least g active processors in the critical state. Additionally, we assumed that the algorithm uses r-consensus objects, where r < g. This implies that the underlying consensus object is accessed by more processors than its capacity allows, which is illegal.

Fig. 1. The decision steps cases: (a) one of the processors reads a register; (b) the processors write to distinct registers

Global Static-Priority Preemptive Multiprocessor Scheduling with Utilization Bound 38%

Björn Andersson

IPP Hurray Research Group,
Polytechnic Institute of Porto, Portugal

Abstract. Consider the problem of scheduling real-time tasks on a multiprocessor with the goal of meeting deadlines. Tasks arrive sporadically
and have implicit deadlines, that is, the deadline of a task is equal
to its minimum inter-arrival time. Consider this problem to be solved
with global static-priority scheduling. We present a priority-assignment
scheme with the property that if at most 38% of the processing capacity
is requested then all deadlines are met.

1 Introduction

Consider the problem of preemptively scheduling n sporadically arriving tasks on m ≥ 2 identical processors. A task τi is uniquely indexed in the range 1..n and a processor likewise in the range 1..m. A task τi generates a (potentially infinite) sequence of jobs. The arrival times of these jobs cannot be controlled by the scheduling algorithm and are a priori unknown. We assume that the arrival time between two successive jobs by the same task τi is at least Ti. Every job by τi requires at most Ci time units of execution over the next Ti time units after its arrival. We assume that Ti and Ci are real numbers and 0 ≤ Ci ≤ Ti. A processor executes at most one job at a time and a job is not permitted to execute on multiple processors simultaneously. The utilization is defined as Us = (1/m) Σ_{i=1..n} Ci/Ti. The utilization bound UB_A of an algorithm A is the maximum number such that all tasks meet their deadlines when scheduled by A, if Us ≤ UB_A.
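The utilization Us follows directly from this definition. A minimal sketch (tasks are given as (Ci, Ti) pairs; names are ours):

```python
def utilization(tasks, m):
    """Us = (1/m) * sum(Ci / Ti) over all tasks, given as (Ci, Ti) pairs."""
    return sum(c / t for (c, t) in tasks) / m

# Three tasks on m = 2 processors: Us = (1/2) * (1/4 + 1/5 + 2/10) = 0.325.
tasks = [(1.0, 4.0), (1.0, 5.0), (2.0, 10.0)]
us = utilization(tasks, 2)
```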
Static-priority scheduling is a specific class of algorithms where each task is assigned a priority, a number which remains unchanged during the operation of the system. At every moment, the highest-priority task is selected for execution among the tasks that are ready to execute and have remaining execution. Static-priority scheduling is simple to implement in operating systems and it can be implemented efficiently. Therefore, it is implemented in virtually all real-time operating systems and many desktop operating systems support it as well, accessible through system calls specified according to the POSIX standard [1]. Because of these reasons, a comprehensive toolbox (see [2, 3]) of results (priority-assignment schemes, schedulability analysis algorithms, etc.) has been developed
T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 73-88, 2008.
© Springer-Verlag Berlin Heidelberg 2008


74

B. Andersson

for static-priority scheduling on a single processor. The success story of static-priority scheduling on a single processor started with the development of the rate-monotonic (RM) priority-assignment scheme [4]. It assigns task τj a higher priority than task τi if Tj < Ti. RM is an optimal priority-assignment scheme, meaning that for every task set, it holds that if there is an assignment of priorities that causes deadlines to be met, then deadlines are met as well when RM is used. It is also known [4] that UB_RM = 0.69 for the case that m = 1. This result is important because it gives designers an intuitive idea of how much a processor can be utilized without missing a deadline.
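For m = 1, the cited 0.69 figure is the large-n limit of the Liu and Layland bound n(2^{1/n} - 1) from [4], which decreases toward ln 2 ≈ 0.693, as a few evaluations confirm:

```python
import math

# Liu and Layland's bound n * (2**(1/n) - 1) decreases toward ln 2.
for n in (1, 2, 5, 1000):
    print(n, round(n * (2 ** (1 / n) - 1), 3))
print(round(math.log(2), 3))   # 0.693
```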
Multiprocessor scheduling algorithms are often categorized as partitioned or global. Global scheduling stores tasks which have arrived but not finished execution in one queue, shared by all processors. At any moment, the m highest-priority tasks among those are selected for execution on the m processors. In contrast, partitioned scheduling algorithms partition the task set such that all tasks in a partition are assigned to the same processor. Tasks may not migrate from one processor to another. The multiprocessor scheduling problem is thus transformed into many uniprocessor scheduling problems.
Real-time scheduling on a multiprocessor is much less developed than real-time scheduling on a single processor. And this applies to static-priority scheduling as well. In particular, it is known that it is impossible to design a partitioned algorithm with UB > 0.5 [5]. It is also known that for global static-priority scheduling, RM is not optimal. In fact, global RM can miss a deadline although Us approaches zero [6]. For a long time, the research community dismissed global static-priority scheduling for this reason. But later, it was realized that other priority-assignment schemes (not necessarily RM) can be used for global static-priority scheduling, and the research community developed such schemes. Many priority-assignment schemes and analysis techniques for global static-priority scheduling are available (see for example [7, 8, 9, 10]) but so far, only two priority-assignment schemes, RM-US(m/(3m - 2)) [11] and RM-US(x) [12], have known (and non-zero) utilization bounds. These two algorithms categorize a task τi as heavy or light. A task is said to be heavy if Ci/Ti exceeds a certain threshold number and a task is said to be light otherwise. Heavy tasks are assigned the highest priority and the light tasks are assigned a lower priority; the relative priority order among light tasks is given by RM. It was shown that among the algorithms that separate heavy and light tasks and use RM for light tasks, no algorithm can achieve a utilization bound greater than 0.374 [12]. And in fact, the current state-of-the-art offers no algorithm with utilization bound greater than 0.374.
In this paper, we present a new priority-assignment scheme SM-US(2/(3 + √5)). It categorizes tasks as heavy and light and assigns the highest priority to heavy tasks. The relative priority order of light tasks is given by slack-monotonic (SM) though, meaning that task τj is assigned higher priority than task τi if Tj - Cj < Ti - Ci. We prove that the utilization bound of SM-US(2/(3 + √5)) is 2/(3 + √5), which is approximately 0.382.
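A sketch of the classification and ordering that SM-US(2/(3 + √5)) induces (function names are illustrative, not from the paper; how ties among heavy tasks are broken is left unspecified here):

```python
import math

THRESHOLD = 2 / (3 + math.sqrt(5))   # ≈ 0.382

def is_heavy(c, t):
    """SM-US classifies a task as heavy when Ci/Ti exceeds the threshold."""
    return c / t > THRESHOLD

def priority_key(c, t):
    """Heavy tasks get the highest priority; light tasks are ordered by
    slack Ti - Ci (smaller slack means higher priority)."""
    return (0, 0.0) if is_heavy(c, t) else (1, t - c)

# Lowest key first = highest priority first.
tasks = [(1.0, 10.0), (2.0, 4.0), (1.0, 3.0)]
order = sorted(tasks, key=lambda ct: priority_key(*ct))
```

Here (2, 4) is heavy (utilization 0.5), while (1, 3) precedes (1, 10) among the light tasks because its slack 2 is smaller than 9.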


We consider this result to be significant because (i) the new algorithm SM-US(2/(3 + √5)) breaks free from the performance limitations of the RM-US framework, (ii) the utilization bound of SM-US(2/(3 + √5)) is higher than the utilization bound of the previously-known best algorithm in global static-priority scheduling and (iii) the utilization bound of SM-US(2/(3 + √5)) is reasonably close to the limit √2 - 1 ≈ 0.41 which is known (from Theorem 8 in [13]) to be an upper bound on the utilization bound of every global static-priority scheduling algorithm which assigns a priority to a task τi as a function only of Ti and Ci.

Section 2 gives a background on the subject, presenting the main ideas behind algorithms that achieve a utilization bound greater than zero. It also presents results that we will use, in particular (i) lemmas expressing inequalities, (ii) a lemma from previous research on the amount of execution performed and (iii) a new schedulability test. Section 3 presents the new algorithm SM-US(2/(3 + √5)) and proves its utilization bound using the schedulability test in Section 2. Conclusions are given in Section 4.

2 Background

2.1 Understanding Global Static-Priority Scheduling

The inventor of RM observed [14] that


Few of the results obtained for a single processor generalize directly to
the multiple processor case; bringing in additional processors adds a new
dimension to the scheduling problem. The simple fact that a task can
use only one processor even when several processors are free at the same
time adds a surprising amount of difficulty to the scheduling of multiple
processors.
Example 1 gives a good illustration of this.
Example 1. [From [6]]. Consider a task set with n = m + 1 tasks to be scheduled on m processors. The tasks are characterized as ∀i ∈ {1, 2, . . . , m}: Ti = 1, Ci = 2ε and Tm+1 = 1 + ε, Cm+1 = 1. If we assign priorities according to RM then τm+1 is given the lowest priority and when all tasks arrive simultaneously then τm+1 misses a deadline. Letting ε → 0 and m → ∞ gives us a task set with Us → 0 and it misses a deadline.
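A quick computation of Us for Example 1, assuming Ci = 2ε for the first m tasks (our reading of the garbled parameters, matching the standard Dhall-effect construction):

```python
# Us for Example 1: m tasks with (Ci, Ti) = (2*eps, 1) plus one task
# with (1, 1 + eps); as eps -> 0 and m -> infinity, Us -> 0 even though
# RM misses a deadline on this task set.
def example1_utilization(m, eps):
    return (m * 2 * eps + 1 / (1 + eps)) / m

us_small = example1_utilization(10 ** 6, 1e-6)   # close to zero
```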
Based on Example 1, one can see that better performance can be achieved by giving high priority to tasks with high Ci/Ti. And in fact this is what the algorithms RM-US(m/(3m - 2)) [11] and RM-US(x) [12] do. The algorithm RM-US(x) [12] computes the value of x and its utilization bound is x. The value of x depends on the number of processors; it is given as (1 - y)/(m · (1 + y)) + ln(1 + y) = (1 - y)/(1 + y) = x. Solving it for m → ∞ gives us that y = 0.454 and x = 0.375. One can see that m → ∞ gives us the least value of x. Hence the utilization bound of RM-US(0.375) is 0.375. And there is no other choice of x which gives a higher utilization bound. Example 2 illustrates this.
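In the limit m → ∞ the first term vanishes, so the defining equation reduces to ln(1 + y) = (1 - y)/(1 + y), and the quoted constants can be recovered numerically (a sketch; the bisection bounds are ours):

```python
import math

# Root of ln(1 + y) = (1 - y)/(1 + y); then x = (1 - y)/(1 + y).
def f(y):
    return math.log(1 + y) - (1 - y) / (1 + y)

lo, hi = 0.0, 1.0            # f(0) < 0 < f(1), and f is increasing
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

y = (lo + hi) / 2            # ≈ 0.454...
x = (1 - y) / (1 + y)        # ≈ 0.375
```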


Fig. 1. An example of a task set where RM-US(0.375) performs poorly. All tasks arrive at time 0. Tasks τ1, τ2, . . . , τm are assigned the highest priority and execute on the m processors during [0, δ). Then the tasks τm+1, τm+2, . . . , τ2m execute on the m processors during [δ, 2δ). The other groups of tasks execute in an analogous manner. Task τn executes then until time 1. Then the groups of tasks arrive again. The task set meets its deadlines but an arbitrarily small increase in execution times causes a deadline miss.

Example 2. [Partially taken from [12]]. Figure 1 illustrates the example. Consider n = m · q + 1 tasks to be scheduled on m processors, where q is a positive integer. The task τn is characterized by Tn = 1 + y and Cn = 1 - y. The tasks with index i ∈ {1, 2, . . . , n - 1} are organized into groups, where each group comprises m tasks. One group is the tasks with index i ∈ {1, 2, . . . , m}. Another group is the tasks with index i ∈ {m + 1, m + 2, . . . , 2m} and so on. The r:th group comprises the tasks with index i ∈ {r · m + 1, r · m + 2, . . . , r · m + m}. All tasks belonging to the same group have the same Ti and Ci. Clearly there are q groups. The tasks in the r:th group have the parameters Ti = 1 + r · δ and Ci = δ, where δ is selected such that y = q · δ. Hence, specifying m and y gives us the task set. By letting y = 0.454 and m → ∞ we have a task set where all tasks are light. The resulting task set is depicted in Figure 1. Also, all tasks meet their deadlines but an arbitrarily small increase in execution time of τn causes


it to miss a deadline. That is, RM-US(0.375) misses a deadline at a utilization just slightly higher than 0.375.
One can see that had the light tasks in Example 2 been assigned priorities such that Tj - Cj < Ti - Ci implies that τj has higher priority than τi, then deadlines would have been met. In fact, we will use this idea when we design the new algorithm in Section 3.
2.2 Results We Will Use

Lemmas 1-4 state four simple inequalities that we will find useful; their proofs are available in the Appendix.
Lemma 1. Let m denote a positive integer. Consider ui to be a real number such that 0 ≤ ui < 2/(3 + √5) and consider S to denote a set of non-negative real numbers uj such that

    (Σ_{j∈S} uj) + ui ≤ (2/(3 + √5)) · m    (1)

then it follows that

    (1/m) · (Σ_{j∈S} (2 - ui) · uj) + ui ≤ 1    (2)
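The constant C = 2/(3 + √5) is exactly the positive root of 3C - C² = 1, which is what makes the conclusion tight. A randomized spot-check, under our reading of the garbled display (premise: (Σ uj) + ui ≤ C·m; conclusion: (1/m)·Σ (2 - ui)·uj + ui ≤ 1):

```python
import random

C = 2 / (3 + 5 ** 0.5)   # 2/(3 + sqrt(5)) ≈ 0.382, satisfies 3C - C^2 = 1

random.seed(0)
for _ in range(10000):
    m = random.randint(1, 8)
    ui = random.uniform(0, C * 0.999)
    us = [random.uniform(0, 1) for _ in range(random.randint(0, 10))]
    budget = C * m - ui                    # room left by premise (1)
    s = sum(us)
    if s > budget > 0:
        us = [u * budget / s for u in us]  # rescale so (1) holds
    assert sum((2 - ui) * u for u in us) / m + ui <= 1 + 1e-9
```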

Lemma 2. Consider two non-negative real numbers uj and ui such that 0 ≤ uj < 1 and 0 ≤ ui < 1. For those numbers, it holds that:

    uj · (1 - ui)/(1 - uj) + (1 - uj · (1 - ui)/(1 - uj)) · uj ≤ (2 - ui) · uj    (3)

Lemma 3. Consider two non-negative real numbers uj and ui such that 0 ≤ uj < 1 and 0 ≤ ui < 1. And two non-negative real numbers Tj and Ti such that

    Tj · (1 - uj) ≤ Ti · (1 - ui)    (4)

For those numbers, it holds that:

    uj · (Tj/Ti) + (1 - uj · (Tj/Ti)) · uj ≤ uj · (1 - ui)/(1 - uj) + (1 - uj · (1 - ui)/(1 - uj)) · uj    (5)

Lemma 4. Consider two integers Tj and Cj such that 0 ≤ Cj ≤ Tj. For every t > 0 it holds that:

    ⌊t/Tj⌋ · Cj + min(t - ⌊t/Tj⌋ · Tj, Cj) ≤ Cj + (t - Cj) · (Cj/Tj)    (6)
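Inequality (6) can be spot-checked numerically. This sketch uses our reading of the garbled display: the left side is the demand-style term ⌊t/Tj⌋·Cj + min(t - ⌊t/Tj⌋·Tj, Cj) and the right side is the linear bound Cj + (t - Cj)·Cj/Tj.

```python
import random
from math import floor

def demand(t, c, tq):
    """Left side of (6): full periods plus a partial contribution."""
    k = floor(t / tq)
    return k * c + min(t - k * tq, c)

def bound(t, c, tq):
    """Right side of (6): the linear upper envelope."""
    return c + (t - c) * c / tq

random.seed(1)
for _ in range(10000):
    tq = random.randint(1, 20)     # Tj, a positive integer
    c = random.randint(0, tq)      # Cj with 0 <= Cj <= Tj
    t = random.uniform(0.01, 100.0)
    assert demand(t, c, tq) <= bound(t, c, tq) + 1e-9
```

Equality holds whenever t = k·Tj + Cj, which is where the envelope touches the demand curve.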

Predictable scheduling. Ha and Liu [15] have studied real-time scheduling of


jobs on a multiprocessor; a job is characterized by its arrival time, its deadline,
its minimum execution time and its maximum execution time. The execution
time of a job is unknown but it is no less than its minimum execution time and
no greater than its maximum execution time. A scheduling algorithm A is said
to be predictable if for every set J of jobs it holds that:

78

B. Andersson

if scheduling all jobs by A with execution times equal to their maximum execution times causes deadlines to be met, then scheduling all jobs by A with execution times being at least their minimum execution times and at most their maximum execution times also causes deadlines to be met.
Intuitively, the notion of predictability means that we only need to analyze the
case when all jobs execute according to their maximum execution time. Ha and
Liu also found that global static-priority scheduling of jobs on a multiprocessor
is predictable. Our paper deals with tasks that generate jobs with a certain
constraint (given by the minimum inter-arrival time, Ti ). But since our model
is a special case of the model used by Ha and Liu, it also follows that global
static-priority scheduling with our model is predictable as well.
The notion of active. We let active(t, τi) be true if at time t there is a
job of τi which has arrived no later than t and has a deadline no earlier than
t; otherwise active(t, τi) is false. Observe that a task τi may release a job such that
at time t this job has no remaining execution but its deadline is greater than t.
Because of our notion of active, this task τi is active at time t. Note that with our
notion of active, a periodically arriving task is active all the time after its first
arrival. Because we study sporadically arriving tasks, there may be moments
when a task is not active, though. The notion of gap measures that.
The notion of gap. We let gap([t0, t1), τi) denote the amount of time during
[t0, t1) where active(t, τi) is false.
Optimal algorithm. Consider a task τi and a time interval of duration Δ such
that the task τi is active during the entire time interval. Let OPT denote an
algorithm which executes task τi for (Ci/Ti) · Δ time units during every such time
interval of duration Δ, where Δ is arbitrarily small.
Work-conserving. We say that a scheduling algorithm is work-conserving if it
holds for every t that: if there are at least k tasks with unfinished execution at
time t, then at least min(k, m) processors are busy at time t. In particular, we note that
global static-priority scheduling is work-conserving.
Execution. Let t0 denote a time such that no tasks have arrived before t0.
Let W(A, τ, [t0, t1)) denote the amount of execution performed by tasks in τ
during [t0, t1) when scheduled by algorithm A. Phillips et al. [16] studied the
amount of execution performed by a work-conserving algorithm. They found
that the amount of execution in a time interval performed by a work-conserving
algorithm is at least as much as the amount of execution performed by any other
algorithm, assuming that the work-conserving algorithm is given processors that
are (2m − 1)/m times faster. Previous research [11] in real-time computing has
used this result by comparing the amount of execution performed by global
static-priority scheduling against the algorithm OPT, but that work considered
only the model of periodically arriving tasks. That result can be extended in a
straightforward manner to the model we use in this paper (the sporadic model)
though, as expressed by Lemma 5.

Global Static-Priority Preemptive Multiprocessor Scheduling with UB 38%

79

Lemma 5. Let G denote an algorithm with global static-priority scheduling. If

    \forall j: \frac{C_j}{T_j} \le \frac{m}{2m-1}    (7)

and

    \sum_j \frac{C_j}{T_j} \le m \cdot \frac{m}{2m-1}    (8)

then

    W(G, \tau, [t_0, t_1)) \ge \sum_{\tau_j \in \tau} \bigl(t_1 - t_0 - gap([t_0, t_1), \tau_j)\bigr) \cdot \frac{C_j}{T_j}    (9)

Proof. From Equation 7 and Equation 8 it follows that the task set can be
scheduled to meet deadlines by OPT on a multiprocessor with m processors of
speed m/(2m − 1). The amount of execution performed by OPT during [t0, t1) is then given by the
right-hand side of Equation 9. The result by Phillips et al. gives us that
algorithm G performs at least as much execution during [t0, t1). Hence Equation 9 is true,
and the lemma follows.
Schedulability analysis. Let t0 denote a time such that no tasks arrive before
t0. Let us consider a time interval that begins at time t0; let [t0, t2) denote this
time interval. We obtain that the amount of execution performed by the task
set during [t0, t2) is at most:

    \sum_{\tau_j \in hp(i)} \Bigl\lfloor \frac{t_2 - t_0 - gap([t_0,t_2),\tau_j)}{T_j} \Bigr\rfloor \cdot C_j + \min\Bigl(t_2 - t_0 - gap([t_0,t_2),\tau_j) - \Bigl\lfloor \frac{t_2 - t_0 - gap([t_0,t_2),\tau_j)}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr)    (10)

From Lemma 5 we obtain that the amount of execution performed by the task
set during [t0, t1) is at least:

    \sum_{\tau_j \in hp(i)} \bigl(t_1 - t_0 - gap([t_0,t_1),\tau_j)\bigr) \cdot \frac{C_j}{T_j}    (11)

Let us consider the case that a deadline was missed. Let us consider the earliest
time when a deadline was missed. Let t1 denote the arrival time of the job that
missed this deadline and let τi denote the task that generated this job. Let hp(i)
denote the set of tasks with higher priority than τi. Let t2 denote the deadline
that was missed; that is, t2 = t1 + Ti. Applying Equation 8 and Equation 9 on
hp(i) gives us that the amount of execution by hp(i) during [t1, t2) is at most:

    \sum_{\tau_j \in hp(i)} \Bigl\lfloor \frac{t_2 - t_0 - gap([t_0,t_2),\tau_j)}{T_j} \Bigr\rfloor \cdot C_j + \min\Bigl(t_2 - t_0 - gap([t_0,t_2),\tau_j) - \Bigl\lfloor \frac{t_2 - t_0 - gap([t_0,t_2),\tau_j)}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr)
    - \sum_{\tau_j \in hp(i)} \bigl(t_1 - t_0 - gap([t_0,t_1),\tau_j)\bigr) \cdot \frac{C_j}{T_j}    (12)


Using t2 = t1 + Ti and rewriting gives us that the amount of execution by
hp(i) during [t1, t2) is at most:

    \sum_{\tau_j \in hp(i)} \Bigl\lfloor \frac{T_i + t_1 - t_0 - gap([t_0,t_1),\tau_j) - gap([t_1,t_2),\tau_j)}{T_j} \Bigr\rfloor \cdot C_j
    + \min\Bigl(T_i + t_1 - t_0 - gap([t_0,t_1),\tau_j) - gap([t_1,t_2),\tau_j) - \Bigl\lfloor \frac{T_i + t_1 - t_0 - gap([t_0,t_1),\tau_j) - gap([t_1,t_2),\tau_j)}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr)
    - \sum_{\tau_j \in hp(i)} \bigl(t_1 - t_0 - gap([t_0,t_1),\tau_j)\bigr) \cdot \frac{C_j}{T_j}    (13)
Applying Lemma 4 on Equation 13 gives us that the amount of execution by
hp(i) during [t1, t2) is at most:

    \sum_{\tau_j \in hp(i)} C_j + \bigl(T_i + t_1 - t_0 - gap([t_0,t_1),\tau_j) - gap([t_1,t_2),\tau_j) - C_j\bigr) \cdot \frac{C_j}{T_j}
    - \sum_{\tau_j \in hp(i)} \bigl(t_1 - t_0 - gap([t_0,t_1),\tau_j)\bigr) \cdot \frac{C_j}{T_j}    (14)

Simplifying Equation 14 gives us that the amount of execution by hp(i) during
[t1, t2) is at most:

    \sum_{\tau_j \in hp(i)} C_j + \bigl(T_i - gap([t_1,t_2),\tau_j) - C_j\bigr) \cdot \frac{C_j}{T_j}    (15)

Relaxing gives that the amount of execution by tasks in hp(i) during [t1, t2)
is at most:

    \sum_{\tau_j \in hp(i)} C_j + (T_i - C_j) \cdot \frac{C_j}{T_j}    (16)

From Equation 16 it follows that the amount of time during [t1, t2)
where all processors are busy executing tasks in hp(i) is at most:

    \frac{1}{m} \sum_{\tau_j \in hp(i)} C_j + (T_i - C_j) \cdot \frac{C_j}{T_j}    (17)

Lemma 6. Consider global static-priority scheduling. Consider a task τi. If all
tasks in hp(i) meet their deadlines and

    \forall j \in hp(i): \frac{C_j}{T_j} \le \frac{m}{2m-1}    (18)

and

    \frac{C_i}{T_i} \le \frac{m}{2m-1}    (19)

and

    \Bigl(\sum_{j \in hp(i)} \frac{C_j}{T_j}\Bigr) + \frac{C_i}{T_i} \le m \cdot \frac{m}{2m-1}    (20)

and

    \frac{1}{m} \sum_{\tau_j \in hp(i)} \Bigl(C_j + (T_i - C_j) \cdot \frac{C_j}{T_j}\Bigr) + C_i \le T_i    (21)

then all deadlines of τi are met.

Proof. Follows from the discussion above.
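The conditions of Lemma 6 are directly checkable. Below is a small sketch (the helper name is ours, not the paper's) that evaluates Inequalities 18–21 for a task τi against its higher-priority set hp(i), with tasks given as hypothetical (C, T) pairs:

```python
def lemma6_holds(ci, ti, hp, m):
    """Check Inequalities (18)-(21) for tau_i, where hp = [(Cj, Tj), ...]."""
    bound = m / (2 * m - 1)
    if any(cj / tj > bound for cj, tj in hp):                  # (18)
        return False
    if ci / ti > bound:                                        # (19)
        return False
    if sum(cj / tj for cj, tj in hp) + ci / ti > m * bound:    # (20)
        return False
    # (21): the time all m processors can be busy with hp(i) work,
    # plus tau_i's own execution, must fit within T_i.
    interference = sum(cj + (ti - cj) * cj / tj for cj, tj in hp) / m
    return interference + ci <= ti

# Hypothetical example: two higher-priority tasks on m = 2 processors.
print(lemma6_holds(ci=1, ti=4, hp=[(1, 4), (1, 4)], m=2))  # -> True
```

Note that the test is only sufficient: returning False does not prove a deadline miss.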

The New Algorithm

Section 3.1 presents Slack-monotonic (SM) scheduling and analyzes its performance for restricted task sets (called light tasks). This restriction is then removed
in Section 3.2; the new algorithm is presented and its utilization bound is proven.
3.1 Light Tasks

We say that a task τi is light if C_i/T_i \le 2/(3+\sqrt{5}). We let Slack-Monotonic (SM) denote
a priority-assignment scheme which assigns priorities such that task τj is assigned
higher priority than task τi if T_j - C_j < T_i - C_i.
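Slack-Monotonic ordering can be sketched in a few lines: sort tasks by their slack T − C, smallest slack first. The helper below is ours; ties are broken arbitrarily, as the definition leaves tie-breaking unspecified.

```python
def sm_priority_order(tasks):
    """tasks: list of (C, T) pairs. Returns the list ordered from highest
    to lowest Slack-Monotonic priority, i.e., by increasing slack T - C."""
    return sorted(tasks, key=lambda ct: ct[1] - ct[0])

# Hypothetical task set: slacks are 2, 1 and 4, so (3, 4) gets top priority.
print(sm_priority_order([(1, 3), (3, 4), (2, 6)]))  # -> [(3, 4), (1, 3), (2, 6)]
```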

Lemma 7. Consider global static-priority scheduling with SM. Consider a task
τi. If all tasks in hp(i) meet their deadlines and

    \forall j \in hp(i): \frac{C_j}{T_j} \le \frac{2}{3+\sqrt{5}}    (22)

and

    \frac{C_i}{T_i} \le \frac{2}{3+\sqrt{5}}    (23)

and

    \Bigl(\sum_{j \in hp(i)} \frac{C_j}{T_j}\Bigr) + \frac{C_i}{T_i} \le \frac{2}{3+\sqrt{5}} \cdot m    (24)

then all deadlines of τi are met.


Proof. The Inequalities 22,23 and 24 imply that Inequalities 18,19 and 20 are
true. Applying Lemma 1 on Inequalities 24 gives us:

1
Ci Cj
Ci
(
(2
)
)+
1
m
Ti
Tj
Ti
jhp(i)

(25)


Applying Lemma 2 on Inequality 25 gives us:

    \frac{1}{m} \sum_{j \in hp(i)} \Bigl( \frac{C_j}{T_j} \cdot \frac{1 - C_i/T_i}{1 - C_j/T_j} + \Bigl(1 - \frac{C_j}{T_j} \cdot \frac{1 - C_i/T_i}{1 - C_j/T_j}\Bigr) \cdot \frac{C_j}{T_j} \Bigr) + \frac{C_i}{T_i} \le 1    (26)

From the fact that SM is used we obtain that

    \forall j \in hp(i): T_j - C_j < T_i - C_i    (27)

Considering Inequality 26 and Inequality 27 and Lemma 3 gives us:

    \frac{1}{m} \sum_{j \in hp(i)} \Bigl( \frac{C_j}{T_j} \cdot \frac{T_j}{T_i} + \Bigl(1 - \frac{C_j}{T_j} \cdot \frac{T_j}{T_i}\Bigr) \cdot \frac{C_j}{T_j} \Bigr) + \frac{C_i}{T_i} \le 1    (28)

Multiplying both the left-hand side and the right-hand side of Inequality 28
by T_i and rewriting yields:

    \frac{1}{m} \sum_{j \in hp(i)} \Bigl( C_j + (T_i - C_j) \cdot \frac{C_j}{T_j} \Bigr) + C_i \le T_i    (29)

Using Inequality 29 and Lemma 6 gives us that all deadlines of τi are met.
This states the lemma.
Lemma 8. Consider global static-priority scheduling with SM. If it holds for
the task set that

    \forall j: \frac{C_j}{T_j} \le \frac{2}{3+\sqrt{5}}    (30)

and

    \sum_j \frac{C_j}{T_j} \le \frac{2}{3+\sqrt{5}} \cdot m    (31)

then all deadlines are met.


Proof. Follows from Lemma 7.
3.2 Light and Heavy Tasks

We say that a task is heavy if it is not light. We let the algorithm SM-US(2/(3+\sqrt{5}))
denote a priority-assignment scheme which assigns the highest priority to
heavy tasks and assigns a lower priority to light tasks; the priority order between
light tasks is given by SM.
Theorem 1. Consider global static-priority scheduling with SM-US(2/(3+\sqrt{5})).
If it holds for the task set that

    \forall j: \frac{C_j}{T_j} \le 1    (32)

and

    \sum_j \frac{C_j}{T_j} \le \frac{2}{3+\sqrt{5}} \cdot m    (33)

then all deadlines are met.


Proof. The proof is by contradiction. If the lemma was false then it follows that
there is a task set such that Inequality 32 and
Inequality 33 are true and when
this task set was scheduled by SM-US(2/(3 + 5)) a deadline was missed. Let
f ailed failed denote this task set and let m denote the number of processors.
Let k denote the number of heavy tasks. Because of Inequality 33 it follows that
k m. Also, because of Lemma 8 is follows that k 1.
Let f ailed2 denote a set which is constructed from f ailed as follows. For
every light task in f ailed there is a light task in f ailed2 and their Ti and Ci are
the same. For every heavy task in f ailed there is a heavy task in f ailed2 and
its Ti is the same. For the heavy tasks in f ailed2 it holds that Ci = Ti . From
Inequality 33 it follows that

Cj
2
(m k)
(34)

T
3+ 5
j
light( f ailed )
j

where light( f ailed ) denotes the set of light tasks in f ailed . Since the light tasks
are the same in f ailed and f ailed2 it clearly follows that

Cj
2
(m k)
(35)

Tj
3+ 5
light( f ailed2 )
j

f ailed2
If the task
would meet all deadlines when scheduled by SM set
US(2/(3 + 5)) then it would follow (from the fact that global static-priority
scheduling is predictable) that alldeadlines would have been met when f ailed
was scheduled by SM-US(2/(3 + 5)). Hence it follows that at least one deadline was missed by f ailed2 . And since there are at most k m 1 heavy tasks
it follows that no deadline miss occurs for the heavy tasks. Hence it must have
been that a deadline miss occurred from a light task in f ailed2 . But the scheduling of the light tasks in f ailed2 is identical to what is would have been if we
deleted the heavy tasks in f ailed2 and deleted the k processors. That is, we
have that scheduling the light tasks on m k processor causes a deadline miss.
But Inequality 35 and Lemma 8 gives that no deadline miss occurs. This is a
contradiction. Hence the theorem is correct.
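Theorem 1 yields a simple admission test: individual utilizations at most 1 and total utilization at most 2/(3+\sqrt{5}) · m ≈ 0.382 · m. The sketch below combines that test with the SM-US priority assignment; the function names and the example task set are ours, not the paper's.

```python
import math

THRESHOLD = 2 / (3 + math.sqrt(5))  # ~0.38197, the light/heavy cutoff

def smus_schedulable(tasks, m):
    """Sufficient test of Theorem 1; tasks is a list of (C, T) pairs."""
    utils = [c / t for c, t in tasks]
    return all(u <= 1 for u in utils) and sum(utils) <= THRESHOLD * m

def smus_priority_order(tasks):
    """Heavy tasks (utilization above the threshold) first, then light
    tasks in Slack-Monotonic order (smallest slack T - C first)."""
    heavy = [ct for ct in tasks if ct[0] / ct[1] > THRESHOLD]
    light = [ct for ct in tasks if ct[0] / ct[1] <= THRESHOLD]
    return heavy + sorted(light, key=lambda ct: ct[1] - ct[0])

# Hypothetical example: m = 4 processors, total utilization 1.4 <= 0.382 * 4.
tasks = [(9, 10), (1, 10), (1, 5), (2, 10)]
print(smus_schedulable(tasks, m=4))  # -> True
print(smus_priority_order(tasks))    # heavy (9, 10) first, then by slack
```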

Conclusions

We have presented a new priority-assignment scheme, SM-US(2/(3+\sqrt{5})), for
global static-priority multiprocessor scheduling and proven that its utilization
bound is 2/(3+\sqrt{5}), which is approximately 0.382. We leave open the question
whether it is possible to achieve a utilization bound of \sqrt{2} - 1 with global
static-priority scheduling.


Acknowledgements
This work was partially funded by the Portuguese Science and Technology Foundation (Fundação para a Ciência e a Tecnologia, FCT) and the ARTIST2 Network of Excellence on Embedded Systems Design.

References
[1] Gallmeister, B.: POSIX.4 Programmer's Guide: Programming for the Real World. O'Reilly Media, Sebastopol (1995)
[2] Sha, L., Rajkumar, R., Sathaye, S.: Generalized Rate-Monotonic Scheduling Theory: A Framework for Developing Real-Time Systems. Proceedings of the IEEE 82, 68–82 (1994)
[3] Tindell, K.W.: An Extensible Approach for Analysing Fixed Priority Hard Real-Time Tasks. Technical Report YCS 189, Department of Computer Science, University of York, UK (1992)
[4] Liu, C.L., Layland, J.W.: Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the ACM 20, 46–61 (1973)
[5] Oh, D., Baker, T.P.: Utilization Bounds for N-Processor Rate Monotone Scheduling with Static Processor Assignment. Real-Time Systems 15, 183–192 (1998)
[6] Dhall, S., Liu, C.: On a real-time scheduling problem. Operations Research 26, 127–140 (1978)
[7] Baker, T.P.: An Analysis of Fixed-Priority Schedulability on a Multiprocessor. Real-Time Systems 32, 49–71 (2006)
[8] Bertogna, M., Cirinei, M.: Response-Time Analysis for Globally Scheduled Symmetric Multiprocessor Platforms. In: IEEE Real-Time Systems Symposium, Tucson, Arizona (2007)
[9] Bertogna, M., Cirinei, M., Lipari, G.: New Schedulability Tests for Real-Time Task Sets Scheduled by Deadline Monotonic on Multiprocessors. In: 9th International Conference on Principles of Distributed Systems, Pisa, Italy (2005)
[10] Cucu, L.: Optimal priority assignment for periodic tasks on unrelated processors. In: Euromicro Conference on Real-Time Systems (ECRTS 2008), WIP session, Prague, Czech Republic (2008)
[11] Andersson, B., Baruah, S., Jonsson, J.: Static-Priority Scheduling on Multiprocessors. In: IEEE Real-Time Systems Symposium, London, UK (2001)
[12] Lundberg, L.: Analyzing Fixed-Priority Global Multiprocessor Scheduling. In: Eighth IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2002) (2002)
[13] Andersson, B., Jonsson, J.: The utilization bounds of partitioned and pfair static-priority scheduling on multiprocessors are 50%. In: Euromicro Conference on Real-Time Systems, Porto, Portugal (2003)
[14] Liu, C.L.: Scheduling algorithms for multiprocessors in a hard real-time environment. JPL Space Programs Summary 37-60, 28–31 (1969)
[15] Ha, R., Liu, J.W.S.: Validating timing constraints in multiprocessor and distributed real-time systems. In: Proceedings of the 14th International Conference on Distributed Computing Systems, Poznan, Poland (1994)
[16] Phillips, C.A., Stein, C., Torng, E., Wein, J.: Optimal time-critical scheduling via resource augmentation. In: ACM Symposium on Theory of Computing, El Paso, Texas, United States (1997)


Appendix

Lemma 1. Let m denote a positive integer. Consider u_i to be a real number
such that 0 \le u_i < \frac{2}{3+\sqrt{5}} and consider S to denote a set of non-negative real
numbers u_j such that

    \Bigl(\sum_{j \in S} u_j\Bigr) + u_i \le \frac{2}{3+\sqrt{5}} \cdot m    (36)

then it follows that

    \frac{1}{m}\Bigl(\sum_{j \in S} (2 - u_i) \cdot u_j\Bigr) + u_i \le 1    (37)

Proof. Let us define f as:

    f = (2 - u_i) \cdot \frac{2}{3+\sqrt{5}} \cdot m + m \cdot u_i - m - u_i + \frac{2}{3+\sqrt{5}}    (38)

We have:

    \frac{\partial f}{\partial u_i} = -\frac{2}{3+\sqrt{5}} \cdot m + m - 1 > 0    (39)

From Inequality 39 and the constraint u_i \le \frac{2}{3+\sqrt{5}} we obtain that f is no
greater than its value for u_i = \frac{2}{3+\sqrt{5}}. And we have f(u_i = \frac{2}{3+\sqrt{5}}) = 0. This
gives us:

    f = (2 - u_i) \cdot \frac{2}{3+\sqrt{5}} \cdot m + m \cdot u_i - m - u_i + \frac{2}{3+\sqrt{5}} \le 0    (40)

Applying Inequality 40 to Inequality 36 and rewriting yields:

    (2 - u_i) \cdot \Bigl(\Bigl(\sum_{j \in S} u_j\Bigr) + u_i\Bigr) + m \cdot u_i \le m + u_i - \frac{2}{3+\sqrt{5}}    (41)

Rearranging terms in Inequality 41 gives us:

    \frac{1}{m} \sum_{j \in S} (2 - u_i) \cdot u_j + u_i + \frac{(2 - u_i) \cdot u_i - u_i + \frac{2}{3+\sqrt{5}}}{m} \le 1    (42)

Recall that u_i \le \frac{2}{3+\sqrt{5}}. Clearly this gives us 2 - u_i \ge 1. And hence the last term
in the left-hand side of Inequality 42 is non-negative. This gives us:

    \frac{1}{m} \sum_{j \in S} (2 - u_i) \cdot u_j + u_i \le 1    (43)

And this states the lemma. Hence the lemma is correct.


Lemma 2. Consider two non-negative real numbers u_j and u_i such that 0 \le
u_j < 1 and 0 \le u_i < 1. For those numbers, it holds that:

    u_j \cdot \frac{1-u_i}{1-u_j} + \Bigl(1 - u_j \cdot \frac{1-u_i}{1-u_j}\Bigr) \cdot u_j \le (2 - u_i) \cdot u_j    (44)


Proof. The proof is by contradiction. Suppose that the lemma is false. Then we
have:

    u_j \cdot \frac{1-u_i}{1-u_j} + \Bigl(1 - u_j \cdot \frac{1-u_i}{1-u_j}\Bigr) \cdot u_j > (2 - u_i) \cdot u_j    (45)

Let us explore the following cases.

1. u_i = 0 and u_j = 0
Applying this case on Inequality 45 gives us:

    0 > 0    (46)

which is a contradiction. (end of Case 1)

2. u_i = 0 and u_j > 0
Applying this case on Inequality 45 gives us:

    u_j \cdot \frac{1}{1-u_j} + \Bigl(1 - u_j \cdot \frac{1}{1-u_j}\Bigr) \cdot u_j > 2 \cdot u_j    (47)

Since u_j > 0 we can divide Inequality 47 by u_j and this gives us:

    \frac{1}{1-u_j} + 1 - u_j \cdot \frac{1}{1-u_j} > 2    (48)

Rewriting Inequality 48 yields:

    \frac{1}{1-u_j} \cdot (1 - u_j) > 1    (49)

which is a contradiction. (end of Case 2)

3. u_i > 0 and u_j = 0
Applying this case on Inequality 45 gives us:

    0 > 0    (50)

which is a contradiction. (end of Case 3)

4. u_i > 0 and u_j > 0
Since u_j > 0 we can divide Inequality 45 by u_j and this gives us:

    \frac{1-u_i}{1-u_j} + \Bigl(1 - u_j \cdot \frac{1-u_i}{1-u_j}\Bigr) > 2 - u_i    (51)

Rewriting Inequality 51 yields:

    \frac{1-u_i}{1-u_j} - u_j \cdot \frac{1-u_i}{1-u_j} > 1 - u_i    (52)

Further rewriting yields:

    \frac{1}{1-u_j} - u_j \cdot \frac{1}{1-u_j} > 1    (53)

Further rewriting yields:

    1 > 1    (54)

which is a contradiction. (end of Case 4)
Since a contradiction occurs in every case, the assumption that the lemma is false must
be wrong. Hence the lemma is correct.


Lemma 3. Consider two non-negative real numbers u_j and u_i such that 0 \le
u_j < 1 and 0 \le u_i < 1. And two non-negative real numbers T_j and T_i such that

    T_j \cdot (1 - u_j) \le T_i \cdot (1 - u_i)    (55)

For those numbers, it holds that:

    u_j \cdot \frac{T_j}{T_i} + \Bigl(1 - u_j \cdot \frac{T_j}{T_i}\Bigr) \cdot u_j \le u_j \cdot \frac{1-u_i}{1-u_j} + \Bigl(1 - u_j \cdot \frac{1-u_i}{1-u_j}\Bigr) \cdot u_j    (56)

Proof. Rewriting Inequality 55 yields:

    \frac{T_j}{T_i} \le \frac{1-u_i}{1-u_j}    (57)

Let q_{i,j} denote the left-hand side of Inequality 57. There are two occurrences
of q_{i,j} in the left-hand side of Inequality 56. Also observe that the left-hand side
of Inequality 56 is increasing with increasing q_{i,j}. For this reason, combining
Inequality 57 and the left-hand side of Inequality 56 gives us that the lemma is
true.
Lemma 4. Consider two integers T_j and C_j such that 0 \le C_j \le T_j. For every
t > 0 it holds that:

    \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot C_j + \min\Bigl(t - \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr) \le C_j + (t - C_j) \cdot \frac{C_j}{T_j}    (58)

Proof. The proof is by contradiction. Suppose that the lemma is false. Then
there is a t > 0 such that:

    \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot C_j + \min\Bigl(t - \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr) > C_j + (t - C_j) \cdot \frac{C_j}{T_j}    (59)

Let us consider two cases:

1. t - \lfloor t/T_j \rfloor \cdot T_j \le C_j
Let ε be defined as: ε = C_j - (t - \lfloor t/T_j \rfloor \cdot T_j). Let us increase t by ε. Then
the left-hand side of Inequality 59 increases by ε and the right-hand side
increases by (C_j/T_j) \cdot ε. Since C_j/T_j \le 1 it follows that Inequality 59 is still
true. That is:

    \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot C_j + \min\Bigl(t - \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr) > C_j + (t - C_j) \cdot \frac{C_j}{T_j}    (60)

Repeating this argument gives us that t - \lfloor t/T_j \rfloor \cdot T_j = C_j. Applying it on
Inequality 60 yields:

    \frac{t - C_j}{T_j} \cdot C_j + C_j > C_j + (t - C_j) \cdot \frac{C_j}{T_j}    (61)

Rewriting Inequality 61 gives us:

    (t - C_j) \cdot \frac{C_j}{T_j} > (t - C_j) \cdot \frac{C_j}{T_j}    (62)

which is impossible. (end of Case 1)


2. t - \lfloor t/T_j \rfloor \cdot T_j \ge C_j
Let ε be defined as: ε = (t - \lfloor t/T_j \rfloor \cdot T_j) - C_j. Let us decrease t by ε.
Then the left-hand side of Inequality 59 is unchanged and the right-hand
side decreases by (C_j/T_j) \cdot ε. Since 0 \le C_j/T_j it follows that Inequality 59 is
still true. That is:

    \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot C_j + \min\Bigl(t - \Bigl\lfloor \frac{t}{T_j} \Bigr\rfloor \cdot T_j,\; C_j\Bigr) > C_j + (t - C_j) \cdot \frac{C_j}{T_j}    (63)

Repeating this argument gives us that t - \lfloor t/T_j \rfloor \cdot T_j = C_j. Applying it on
Inequality 63 and applying similar rewriting as in Inequality 61 and Inequality 62 yields:

    (t - C_j) \cdot \frac{C_j}{T_j} > (t - C_j) \cdot \frac{C_j}{T_j}    (64)

which is impossible. (end of Case 2)
We can see that regardless of which case occurs a contradiction occurs and
hence the lemma is correct.

Deadline Monotonic Scheduling on Uniform Multiprocessors

Sanjoy Baruah^1 and Joel Goossens^2

1 University of North Carolina at Chapel Hill, NC, USA
[email protected]
2 Université Libre de Bruxelles, Brussels, Belgium
[email protected]

Abstract. The scheduling of sporadic task systems upon uniform multiprocessor platforms using the global Deadline Monotonic algorithm is studied. A sufficient schedulability test is presented and proved correct. It
is shown that this test offers non-trivial quantitative guarantees, in the
form of a processor speedup bound.

Introduction

A multiprocessor computer platform is comprised of several processors. A platform in which all the processors have the same capabilities is referred to as an
identical multiprocessor, while those in which different processors have different
capabilities are called heterogeneous multiprocessors. Heterogeneous multiprocessors may be further classified into uniform and unrelated multiprocessors.
The only difference between the different processors in a uniform multiprocessor
is the rate at which they can execute work: each processor is characterized by a
speed or computing capacity parameter s, and any job executing on the processor
for t time units completes t · s units of execution. In unrelated multiprocessors,
on the other hand, the amount of execution completed by a particular job executing on a given processor depends upon the identities of both the job and the
processor.
A real-time system is often modeled as a finite collection of independent recurrent tasks, each of which generates a potentially infinite sequence of jobs. Every
job is characterized by an arrival time, an execution requirement, and a deadline,
and it is required that a job completes execution between its arrival time and its
deadline. Different formal models for recurring tasks place different restrictions
on the values of the parameters of jobs generated by each task. One of the more
commonly used formal models is the sporadic task model [1, 2]. Each recurrent
task τi in this model is characterized by three parameters: τi = (Ci, Di, Ti),
with the interpretation that τi may generate an infinite sequence of jobs with
successive jobs arriving at least Ti time units apart, each with an execution

★ Supported in part by NSF Grant Nos. CNS-0834270, CNS-0834132, CCF-0541056, and CCR-0615197, ARO Grant No. W911NF-06-1-0425, and funding from IBM and the Intel Corporation.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 89–104, 2008.
© Springer-Verlag Berlin Heidelberg 2008



requirement at most Ci and a deadline Di time units after its arrival time. A
sporadic task system is comprised of a finite collection of such sporadic tasks.
Sporadic task systems in which each task is required to have its relative deadline
and period parameters the same (Di = Ti for all i) are called implicit-deadline
task systems, and ones in which each task is required to have its relative deadline
be no larger than its period parameter (Di \le Ti for all i) are called constrained-deadline task systems. A task system that is not constrained-deadline is said to
be an arbitrary-deadline task system.
Several results have been obtained over the past decade concerning the
scheduling of implicit-deadline systems on identical [3, 4, 5, 6, 7, 8, 9] and on uniform [10, 11, 12, 13, 14, 15, 16, 17] multiprocessors, of constrained-deadline systems
on identical multiprocessors [18, 19, 20, 21, 22, 23, 24], and of arbitrary-deadline
systems on identical multiprocessors [25, 26]. This paper seeks to extend this
body of work by addressing the scheduling of constrained and arbitrary-deadline
sporadic task systems upon uniform multiprocessors. We assume that the platform is fully preemptive: an executing job may be interrupted at any instant in
time and have its execution resumed later with no cost or penalty. We study the
behavior of the well-known and very widely used Deadline Monotonic scheduling
algorithm [27] when scheduling systems of sporadic tasks upon such preemptive
platforms. We will refer to Deadline Monotonic scheduling with global interprocessor migration as global dm (or simply dm).
Contributions. We obtain a new test (to our knowledge, the first such
test) for determining whether a given constrained or arbitrary-deadline sporadic task system is guaranteed to meet all deadlines upon a specified uniform
multiprocessor platform, when scheduled using dm. This test is derived by applying techniques that have previously been used for the schedulability analysis
of constrained-deadline task systems on uniform multiprocessors when scheduled
using edf [28], and by integrating techniques used for schedulability analysis of
sporadic arbitrary-deadline systems on identical multiprocessors using dm [25].
Organization. The remainder of this paper is organized as follows. In Sect. 2
we formally define the sporadic task model and uniform multiprocessor platforms, and provide some additional useful definitions, notation, and terminology
concerning sporadic tasks and uniform multiprocessors. We also specify how
global dm is to be implemented upon uniform multiprocessors. In Sect. 3 we derive, and prove the correctness of, a schedulability
test for determining whether a given sporadic task system is dm-schedulable on a
specified uniform multiprocessor platform. In Sect. 4 we provide a quantitative
characterization of the efficacy of this new schedulability test in terms of the
resource augmentation metric.

Task and Platform Model

1. Sporadic task systems. A sporadic task τi = (Ci, Di, Ti) is characterized by
a worst-case execution requirement Ci, a (relative) deadline Di, and a minimum


inter-arrival separation parameter Ti, also referred to as the period of the task.
Such a sporadic task generates a potentially infinite (legal) sequence of jobs,
with successive job-arrivals separated by at least Ti time units. Each job has a
worst-case execution requirement equal to Ci and a deadline that occurs Di time
units after its arrival time. We refer to the interval, of size Di, between such a
job's arrival instant and deadline as its scheduling window. We assume a fully
preemptive execution model: any executing job may be interrupted at any instant
in time, and its execution resumed later with no cost or penalty. A sporadic task
system is comprised of a finite number of such sporadic tasks. Let τ denote a
system of such sporadic tasks: τ = {τ1, τ2, . . . , τn}, with τi = (Ci, Di, Ti) for all i,
1 \le i \le n. Without loss of generality, we assume that tasks are indexed in non-decreasing order of their relative deadline parameters: Di \le Di+1 (\forall i \in [1, n-1]).
We find it convenient to define some properties and parameters for individual
sporadic tasks, and for sporadic task systems.
Utilization: The utilization u_i of a task τi is the ratio C_i/T_i of its execution
requirement to its period. The total utilization u_{sum}(τ) and the largest utilization u_{max}(τ) of a task system are defined as follows:

    u_{sum}(\tau) \stackrel{\mathrm{def}}{=} \sum_{\tau_i \in \tau} u_i; \qquad u_{max}(\tau) \stackrel{\mathrm{def}}{=} \max_{\tau_i \in \tau}(u_i).

Density: The density δ_i of a task τi is the ratio C_i/\min(D_i, T_i) of its execution
requirement to the smaller of its relative deadline and its period. The total
density δ_{sum}(τ) of a task system is defined as follows:

    \delta_{sum}(\tau) \stackrel{\mathrm{def}}{=} \sum_{\tau_i \in \tau} \delta_i.

For each k, 1 \le k \le n, δ_{max}(k) denotes the largest density from among the
tasks τ1, τ2, . . . , τk:

    \delta_{max}(k) \stackrel{\mathrm{def}}{=} \max_{i=1}^{k}(\delta_i).

DBF: For any interval length t, the demand bound function dbf(τi, t) of a
sporadic task τi bounds the maximum cumulative execution requirement by
jobs of τi that both arrive in, and have deadlines within, any interval of
length t. It has been shown [2] that

    dbf(\tau_i, t) = \max\Bigl(0, \Bigl(\Bigl\lfloor \frac{t - D_i}{T_i} \Bigr\rfloor + 1\Bigr) \cdot C_i\Bigr).

Load: A load parameter, based upon the dbf function, may be defined for any
sporadic task system as follows:

    load(k) \stackrel{\mathrm{def}}{=} \max_{t > 0}\Bigl(\frac{\sum_{i=1}^{k} dbf(\tau_i, t)}{t}\Bigr).
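The dbf and load definitions translate directly into code. The sketch below (ours, not from the paper) computes dbf exactly and approximates load(k) by sampling t at the points D_i + q·T_i within a bounded horizon, since dbf only changes value at such points; the horizon parameter and task values are assumptions for illustration.

```python
import math

def dbf(task, t):
    # dbf(tau_i, t) = max(0, (floor((t - D)/T) + 1) * C) for task = (C, D, T).
    c, d, p = task
    return max(0, (math.floor((t - d) / p) + 1) * c)

def load(tasks, k, horizon=100):
    # Approximate load(k): sample t at the step points D_i + q*T_i of the
    # dbf functions of the k highest-priority tasks, and take the maximum.
    points = sorted({d + q * p for _, d, p in tasks[:k] for q in range(horizon)})
    return max(sum(dbf(tau, t) for tau in tasks[:k]) / t for t in points)

# Hypothetical tasks (C, D, T); the first is constrained-deadline (D < T).
tasks = [(1, 2, 4), (2, 5, 5)]
print(load(tasks, k=2))  # -> 0.7 (attained at t = 10)
```

The sampling is exact up to the chosen horizon because the ratio sum-of-dbf over t can only peak immediately after a dbf step.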


Computing dbf (and thereby, load) will turn out to be a critical component
of the schedulability analysis test proposed in this paper; hence, it is important
that dbf be efficiently computable if this schedulability test is to be efficiently
implementable as claimed. Fortunately, computing dbf is a well-studied subject,
and algorithms are known for computing dbf exactly [2, 29], or approximately
to any arbitrary degree of accuracy [30, 31, 32].
The following lemma relates the density of a task to its dbf:

Lemma 1 ([25]). For all tasks τi and for all t \ge 0,

    t \cdot \delta_i \ge dbf(\tau_i, t).
In constrained task systems (those in which Di \le Ti for all i) a job becomes eligible to execute upon arrival, and remains eligible until it completes execution¹.
In systems with Di > Ti for some tasks τi, we require that at most one job of
each task be eligible to execute at each time instant. We assume that jobs of
the same task are considered in first-come first-served order; hence, a job only
becomes eligible to execute after both these conditions are satisfied: (i) it has
arrived, and (ii) all previous jobs generated by the same task that generated it
have completed execution. This gives rise to the notion of an active task: briefly,
a task is active at some instant if it has some eligible job awaiting execution at
that instant. More formally,

Definition 1 (active task). A task is said to be active in a given schedule at
a time-instant t if some job of the task is eligible to execute at time-instant t.
That is, (i) t \ge the greater of the job's arrival time and the completion time of
the previous job of the same task, and (ii) the job has not completed execution
prior to time-instant t.


2. Uniform multiprocessors. A uniform multiprocessor π = (s1, s2, . . . , sm) is
comprised of m > 1 processors, with the ith processor characterized by speed or
computing capacity si. The interpretation is that a job executing on the ith processor for a duration of t units of time completes t · si units of execution. Without
loss of generality, we assume that the speeds are specified in non-increasing order:
si \ge si+1 for all i. We will also use the following notation:

    S_i(\pi) \stackrel{\mathrm{def}}{=} \sum_{j=1}^{i} s_j.    (1)

That is, Si(π) denotes the sum of the computing capacities of the i fastest
processors in π (and Sm(π) hence denotes the total computing capacity of π).
An additional parameter that turns out to be useful in describing the properties of a uniform multiprocessor is the lambda parameter [12, 10]:

    \lambda(\pi) \stackrel{\mathrm{def}}{=} \max_{i=1}^{m}\Bigl(\frac{\sum_{j=i+1}^{m} s_j}{s_i}\Bigr).    (2)
¹ Or its deadline has elapsed, in which case the system is deemed to have failed.

(Figure: a time line marking ta and td, the interval D = td − ta, the relative deadline Dk, and the deadline miss at td.)

Fig. 1. Notation. A job of task τk arrives at ta. Task τk is not active immediately prior
to ta, and is continually active over [ta, td).

This parameter ranges in value between 0 and (m − 1) for an m-processor uniform
multiprocessor platform, with a value of (m − 1) corresponding to the degenerate
case when all the processors are of the same speed (i.e., the platform is an
identical multiprocessor).
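Both platform parameters are one-liners in code. The sketch below (function names are ours) computes S_i(π) and λ(π) for a speed vector given, as assumed above, in non-increasing order:

```python
def S(pi, i):
    # S_i(pi): total capacity of the i fastest processors (pi sorted non-increasing).
    return sum(pi[:i])

def lam(pi):
    # lambda(pi) = max over i of (total speed of processors slower than i) / s_i.
    m = len(pi)
    return max(sum(pi[i + 1:]) / pi[i] for i in range(m))

identical = [1, 1, 1, 1]
print(lam(identical))  # -> 3.0, i.e., m - 1 for an identical multiprocessor
print(lam([4, 2, 1]))  # -> 0.75
```

A fast-dominated platform (one very fast processor) drives λ(π) toward 0, which is the regime where the schedulability bounds derived later are most favorable.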
3. Deadline Monotonic scheduling. Priority-driven scheduling algorithms operate on uniform multiprocessors as follows: at each instant in time they assign
a priority to each job that is awaiting execution, and favor for execution the
jobs with the greatest priorities. Specifically, (i) no processor is idled while there
is an active job awaiting execution; (ii) when there are fewer active jobs than
processors, the jobs execute on the fastest processors and the slowest ones are
idled; and (iii) greater-priority jobs execute on the faster processors. The Deadline Monotonic (dm) scheduling algorithm [33] is a priority-driven scheduling
algorithm that assigns priorities to tasks according to their (relative) deadlines:
the smaller the deadline, the greater the priority.
With respect to a specified platform, a given sporadic task system is said to
be feasible if there exists a schedule meeting all deadlines for every collection of
jobs that may be generated by the task system. A given sporadic task system is
said to be (global) dm-schedulable if dm meets all deadlines for every collection
of jobs that may be generated by the task system.

A dm Schedulability Test for Sporadic Task Systems

We now derive (Theorem 1) a sufficient condition for determining whether a
sporadic task system τ is dm-schedulable upon a specified uniform multiprocessor platform π. This sufficient schedulability condition is in terms of the load
and maximum-density parameters (the load(k)'s and the δmax(k)'s defined
above) of the task system, and the total computing capacity and the lambda
parameter (Sm(π) and λ(π) defined above) of the platform.
Our strategy for deriving, and proving the correctness of, our sufficient schedulability condition is the following: for any legal sequence of job requests of task
system τ on which dm misses a deadline, we obtain a necessary condition for
that deadline miss to occur by bounding from above the total amount of execution that the dm schedule needs (but fails) to complete before the deadline miss.
Negating this condition yields a sufficient condition for global-dm schedulability.


Consider any legal sequence of job requests of task system τ on which dm misses a deadline. Suppose that a job of task τk is the one to first miss a deadline, and that this deadline miss occurs at time-instant td (see Fig. 1).

Discard from the legal sequence of job requests all jobs of tasks with priority lower than τk's, and consider the dm schedule of the remaining (legal) sequence of job requests. Since lower-priority jobs have no effect on the scheduling of higher-priority ones under preemptive dm, a deadline miss of τk occurs at time-instant td (and this is the earliest deadline miss) in this new dm schedule as well. We will focus henceforth on this new dm schedule.

Let ta denote the earliest time-instant prior to td such that τk is continuously active² over the interval [ta, td]. It must be the case that ta is the arrival time of some job of τk, since τk is, by definition, not active just prior to ta and becomes active at ta.

It must also be the case that ta ≤ td − Dk. This follows from the observation that the job of τk that misses its deadline at td arrives at td − Dk. If Dk < Tk, then ta is equal to this arrival time of the job of τk that misses its deadline at td. If Dk ≥ Tk, however, ta may be the arrival-time of an earlier job of τk. Let D =def td − ta.

Let C denote the cumulative execution requirement of all jobs of τk that arrive ≥ ta and have deadlines ≤ td. By definition of dbf and Lemma 1, we have

    C ≤ dbf(τk, td − ta) ≤ δk · (td − ta) .    (3)
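For readers who want to experiment with these quantities, the dbf of a sporadic task admits the standard closed form dbf(τi, t) = max(0, ⌊(t − Di)/Ti⌋ + 1)·Ci (this closed form comes from the sporadic-task literature; Lemma 1, which bounds dbf(τi, t) by δi·t, is stated earlier in the paper and not reproduced in this excerpt). A minimal sketch, with function names of our choosing:

```python
from math import floor

def dbf(C, D, T, t):
    """Demand bound function of a sporadic task (C, D, T): maximum
    cumulative execution requirement of jobs that both arrive in and
    have their deadlines within any interval of length t."""
    if t < D:
        return 0
    return (floor((t - D) / T) + 1) * C

def density(C, D, T):
    """Task density delta_i = C_i / min(D_i, T_i)."""
    return C / min(D, T)

# Spot-check of Lemma 1's bound dbf(tau_i, t) <= delta_i * t:
C, D, T = 2, 5, 7
assert all(dbf(C, D, T, t) <= density(C, D, T) * t for t in range(200))
```

The bound of Lemma 1 is tight at t = D for a task with D ≤ T, as the spot-check illustrates.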

We introduce some notation now. For any time-instant t ≤ td:

– Let W(t) denote the cumulative execution requirement of all the jobs in this legal sequence of job requests, minus the total amount of execution completed by the dm schedule prior to time-instant t.
– Let Ω(t) denote W(t) normalized by the interval length: Ω(t) =def W(t)/(td − t).
– Let Iℓ denote the total duration over [ta, td) for which exactly ℓ processors are busy in this dm schedule, 0 ≤ ℓ ≤ m. (We note that I0 is necessarily zero, since τk's job does not complete by its deadline.)

Observe that the amount of execution that τk's jobs receive over [ta, td) is at least Σ_{ℓ=1}^{m−1} sℓ·Iℓ, since τk's jobs must be executing at any time-instant when some processor is idle; therefore

    C > Σ_{ℓ=1}^{m−1} sℓ·Iℓ .    (4)

Since Sm(π)·D − Σ_{ℓ=1}^{m−1} (Sm(π) − Sℓ(π))·Iℓ denotes the total amount of execution completed over [ta, td), and this is not enough for τk's jobs to complete the execution requirement C before td, we have the following relationship:

² See Definition 1 to recall the definition of an active task.

Deadline Monotonic Scheduling on Uniform Multiprocessors

    W(ta) > Sm(π)·D − Σ_{ℓ=1}^{m−1} (Sm(π) − Sℓ(π))·Iℓ
          = Sm(π)·D − Σ_{ℓ=1}^{m−1} ((Sm(π) − Sℓ(π))/sℓ)·sℓ·Iℓ
          ≥ Sm(π)·D − Σ_{ℓ=1}^{m−1} λ(π)·sℓ·Iℓ
          = Sm(π)·D − λ(π)·Σ_{ℓ=1}^{m−1} sℓ·Iℓ .    (5)

From (5) and (4) above, we conclude that

    W(ta) > Sm(π)·D − λ(π)·C
          ≥ Sm(π)·D − λ(π)·δk·D
    ⇒ Ω(ta) > Sm(π) − λ(π)·δk
    ⇒ Ω(ta) > Sm(π) − λ(π)·δmax(k) .

Let

    μk =def Sm(π) − λ(π)·δmax(k) ;    (6)

observe that the value of μk depends upon the parameters of both the task system τ being scheduled and the uniform multiprocessor platform π = (s1, s2, ..., sm) upon which it is scheduled.

Let to denote the smallest value of t ≤ ta such that Ω(t) ≥ μk, and let Δ =def td − to (see Fig. 1).
By definition, W(to) denotes the amount of work that the dm schedule needs (but fails) to execute over [to, td). This work in W(to) arises from two sources: those jobs that arrived at or after to, and those that arrived prior to to but have not completed execution in the dm schedule by time-instant to. We will refer to jobs arriving prior to to that need execution over [to, td) as carry-in jobs.

We wish to obtain an upper bound on the total contribution of all the carry-in jobs to the W(to) term. We achieve this in two steps: we first bound the number of tasks that may have carry-in jobs (Lemma 2), and then we bound the amount of work that all the carry-in jobs of any one such task may contribute to W(to) (Lemma 3).
Lemma 2. The number of tasks that have carry-in jobs is strictly bounded from above by

    νk =def max{ℓ : Sℓ(π) < μk} .    (7)

Proof. Let ε denote an arbitrarily small positive number. By definition of the instant to, Ω(to − ε) < μk while Ω(to) ≥ μk. It must therefore be the case that strictly less than μk·ε work was executed over [to − ε, to); i.e., the total computing capacity of all the busy processors over [to − ε, to) is < μk. And since


Fig. 2. Example: defining ti for a task τi with Di > Ti. Three jobs of τi are shown. Task τi is not active prior to the arrival of the first of these 3 jobs, and the first job completes execution only after the next job arrives. This second job does not complete execution prior to to. Thus, the task is continuously active after the arrival of the first job shown, and ti is hence set equal to the arrival time of this job.

μk < Sm(π) (as can be seen from (6) above), it follows that some processor was idled over [to − ε, to), implying that all jobs active at this time would have been executing. This allows us to conclude that there are strictly fewer than νk tasks with carry-in jobs. ∎
Lemma 3. The total remaining execution requirement of all the carry-in jobs of each task τi (that has carry-in jobs at time-instant to) is < Δ·δmax(k).

Proof. Let us consider some task τi (i < k) that has a carry-in job. Let ti < to denote the earliest time-instant such that τi is active throughout the interval [ti, to]. Observe that ti is necessarily the arrival time of some job of τi. If Di < Ti, then ti is the arrival time of the (sole) carry-in job of τi. If Di ≥ Ti, however, ti may be the arrival-time of a job that is not a carry-in job (see Fig. 2).

Let φi =def to − ti (see Fig. 2). All the carry-in jobs of τi have their arrival-times and their deadlines within the (φi + Δ)-sized interval [ti, td), and consequently their cumulative execution requirement is ≤ dbf(τi, φi + Δ); in what follows, we will quantify how much of this must have been completed prior to to (and hence cannot contribute to the carry-in). We thus obtain an upper bound on the total work that all the carry-in jobs of τi contribute to W(to), as the difference between dbf(τi, φi + Δ) and the amount of execution received by τi over [ti, to).
By definition of to, it must be the case that Ω(ti) < μk. That is,

    W(ti) < μk·(Δ + φi) .    (8)

On the other hand, Ω(to) ≥ μk, meaning that

    W(to) ≥ μk·Δ .    (9)

Let Ci denote the amount of execution received by τi's carry-in jobs over the duration [ti, to); the difference dbf(τi, φi + Δ) − Ci thus denotes an upper bound on the amount of carry-in execution. Let Jℓ denote the total duration over [ti, to) for which exactly ℓ processors are busy in this dm schedule, 0 ≤ ℓ ≤ m. Observe that the amount of execution that τi's carry-in jobs receive over [ti, to) is at least


Σ_{ℓ=1}^{m−1} sℓ·Jℓ, since τi's job must be executing on one of the processors during any instant when some processor is idle; therefore

    Ci ≥ Σ_{ℓ=1}^{m−1} sℓ·Jℓ .    (10)



Since Sm(π)·φi − Σ_{ℓ=1}^{m−1} Jℓ·(Sm(π) − Sℓ(π)) denotes the total amount of execution completed over [ti, to), the difference W(ti) − W(to) (the amount of execution completed over [ti, to)) is given by

    W(ti) − W(to) = Sm(π)·φi − Σ_{ℓ=1}^{m−1} (Sm(π) − Sℓ(π))·Jℓ
    ⇒ μk·(Δ + φi) − μk·Δ > Sm(π)·φi − Σ_{ℓ=1}^{m−1} (Sm(π) − Sℓ(π))·Jℓ    (by (8) and (9))
    ⇒ μk·φi > Sm(π)·φi − Σ_{ℓ=1}^{m−1} ((Sm(π) − Sℓ(π))/sℓ)·sℓ·Jℓ
    ⇒ μk·φi > Sm(π)·φi − Σ_{ℓ=1}^{m−1} λ(π)·sℓ·Jℓ
    ⇒ μk·φi > Sm(π)·φi − λ(π)·Σ_{ℓ=1}^{m−1} sℓ·Jℓ
    ⇒ μk·φi > Sm(π)·φi − λ(π)·Ci .

Substituting for μk (Equation (6) above), we have

    (Sm(π) − λ(π)·δmax(k))·φi > Sm(π)·φi − λ(π)·Ci  ⇒  Ci > δmax(k)·φi .    (11)

Inequality (11) is important: it tells us that task τi must have already completed a significant amount of its execution before time-instant to. More specifically, the remaining work of all carry-in jobs of τi that contributes to W(to) is

    dbf(τi, φi + Δ) − Ci < δi·(φi + Δ) − φi·δmax(k)    (from Lemma 1)
                         ≤ (φi + Δ)·δmax(k) − φi·δmax(k) = Δ·δmax(k) ,

as claimed in this lemma. ∎
Based upon Lemmas 2 and 3 we obtain our desired result, a sufficient schedulability condition for global dm:


Theorem 1. Sporadic task system τ is global-dm schedulable upon a platform π comprised of m uniform processors, provided that for all k, 1 ≤ k ≤ n,

    2·load(k) + νk·δmax(k) ≤ μk ,    (12)

where μk and νk are as defined in (6) and (7) respectively.


Proof. The proof is by contradiction: we obtain necessary conditions for the scenario above (in which τk's job misses its deadline at td) to occur. Negating these conditions yields a sufficient condition for global-dm schedulability.

Let us bound the total amount of execution that contributes to W(to).

– First, there are the carry-in jobs: by Lemmas 3 and 2, there are at most νk distinct tasks with carry-in jobs, with the total carry-in work for all the jobs of each task bounded from above by Δ·δmax(k) units of work. Therefore their total contribution to W(to) is bounded from above by νk·Δ·δmax(k).
– All other jobs that contribute to W(to) arrive within the Δ-sized interval [to, td), and hence have their deadlines within [to, td + Dk), since their relative deadlines are all ≤ Dk. Their total execution requirement is therefore bounded from above by (Δ + Dk)·load(k).

We consequently obtain the following bound on W(to):

    W(to) < (Δ + Dk)·load(k) + νk·Δ·δmax(k) .    (13)

Since, by the definition of to, it is required that Ω(to) be at least as large as μk, we must have

    (1 + Dk/Δ)·load(k) + νk·δmax(k) > μk

as a necessary condition for dm to miss a deadline; equivalently, the negation of this condition is sufficient to ensure dm-schedulability:

    (1 + Dk/Δ)·load(k) + νk·δmax(k) ≤ μk
    ⟸ 2·load(k) + νk·δmax(k) ≤ μk    (since Dk ≤ Δ)

which is as claimed in the theorem. ∎
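Condition (12) can be checked mechanically once load(k), δmax(k), Sm(π), and λ(π) are in hand. The sketch below is ours, not the authors': it approximates load(k) by sampling the demand ratio at absolute deadlines up to a finite horizon (exact computation of load is the subject of [30,32]), so it is illustrative rather than exact.

```python
def dbf(C, D, T, t):
    # Standard sporadic demand bound function.
    return 0 if t < D else ((t - D) // T + 1) * C

def load(tasks, k, horizon=1000):
    # Approximate load(k): max over sampled t of sum_{i<=k} dbf(tau_i, t) / t.
    best = 0.0
    for (_, D, T) in tasks[:k]:
        t = D
        while t <= horizon:
            demand = sum(dbf(Ci, Di, Ti, t) for (Ci, Di, Ti) in tasks[:k])
            best = max(best, demand / t)
            t += T
    return best

def dm_schedulable(tasks, speeds):
    """Sufficient test of Theorem 1: for every k,
    2*load(k) + nu_k * delta_max(k) <= mu_k."""
    tasks = sorted(tasks, key=lambda tau: tau[1])   # dm: smaller D first
    speeds = sorted(speeds, reverse=True)
    Sm = sum(speeds)
    lam = max((Sm - sum(speeds[:l])) / speeds[l - 1]
              for l in range(1, len(speeds)))
    for k in range(1, len(tasks) + 1):
        dmax = max(C / min(D, T) for (C, D, T) in tasks[:k])
        mu = Sm - lam * dmax
        nu, prefix = 0, 0.0
        for ell, s in enumerate(speeds, start=1):
            prefix += s
            if prefix < mu:
                nu = ell
        if 2 * load(tasks, k) + nu * dmax > mu:
            return False
    return True

# Two light tasks easily pass on pi = (4, 4, 2, 2); ten heavy ones do not.
print(dm_schedulable([(1, 10, 10), (1, 10, 10)], [4, 4, 2, 2]))   # True
print(dm_schedulable([(5, 10, 10)] * 10, [4, 4, 2, 2]))           # False
```

Because the sampled load can only underestimate the true load, a "schedulable" answer from this sketch is only as trustworthy as the horizon chosen; the exact algorithms cited above avoid this caveat.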

4 A Speedup Bound

In this section, we provide a quantitative evaluation of the effectiveness of the sufficient schedulability condition of Theorem 1. There are several approaches to quantifying the "goodness" or effectiveness of different scheduling algorithms and schedulability tests. One relatively recent approach is centered on processor speedup bounds. A sufficient schedulability test is said to have a processor speedup bound of c if


– any task system deemed schedulable by the test is guaranteed to actually be so; and
– for any task system that is not deemed schedulable by the test, it is the case that the task system is actually not schedulable upon a platform in which each processor is 1/c times as fast.

Intuitively speaking, a processor speedup bound of c for a sufficient schedulability test implies that the inexactness of the test penalizes its user by at most a speedup factor of c when compared to an exact test. The smaller the processor speedup bound, the better the sufficient schedulability test: a processor speedup bound of 1 would mean that the test is in fact an exact one.
We introduce some notation now. For any uniform multiprocessor platform π = (s1, s2, ..., sm) and any positive real number x, let πx denote the uniform multiprocessor platform comprised of the same number of processors as π, with the ith processor having a speed of si·x.

The following two lemmas relate dm-schedulability on a uniform multiprocessor platform π, as validated by the test of Theorem 1, with feasibility on platform πx.
Lemma 4. Any sporadic task system τ that is feasible upon a uniform multiprocessor platform πx must satisfy

    δmax(k) ≤ s1·x  and  load(k) ≤ Sm(π)·x    (14)

for all k, 1 ≤ k ≤ n.

Proof. Suppose that task system τ is feasible upon πx. To prove that δmax(k) ≤ x·s1, consider each task τi separately:

– In order to be able to meet all deadlines of τi if τi generates jobs exactly Ti time units apart, it is necessary that Ci/Ti ≤ x·s1.
– Since any individual job of τi can receive at most Di·x·s1 units of execution by its deadline, we must have Ci ≤ Di·x·s1; i.e., Ci/Di ≤ x·s1.

Putting both conditions together, we get Ci/min(Ti, Di) ≤ x·s1. Taken over all the tasks τ1, τ2, ..., τk, this observation yields the condition that δmax(k) ≤ x·s1.

To prove that load(k) ≤ Sm(π)·x, recall the definition of load(k) from Sect. 1. Let t* denote some value of t which defines load(k):

    t* =def argmax_t ( (Σ_{i=1}^{k} dbf(τi, t)) / t ) .

Suppose that all tasks in {τ1, τ2, ..., τk} generate a job at time-instant zero, and each task τi generates subsequent jobs exactly Ti time units apart. These jobs demand Σ_{i=1}^{k} dbf(τi, t*) = load(k)·t* units of execution over the interval [0, t*), while the total amount of execution that is available over [0, t*) on this platform is equal to Sm(π)·x·t*; hence, it is necessary that load(k) ≤ Sm(π)·x if all deadlines are to be met. ∎


Lemma 5. Any sporadic task system τ that is feasible upon a multiprocessor platform πx is determined to be global-dm schedulable on π by the dm-schedulability test of Theorem 1, provided

    x ≤ ( Sm(π)·s1 + 2·Sm(π)·sm + λ(π)·s1·sm
          − ( Sm²(π)·s1² + 4·Sm²(π)·s1·sm + 2·λ(π)·Sm(π)·s1²·sm + 4·Sm²(π)·sm²
              + 4·λ(π)·Sm(π)·s1·sm² + λ²(π)·s1²·sm² − 4·λ(π)·s1²·Sm(π)·sm )^{1/2}
        ) / ( 2·λ(π)·s1² ) .    (15)
Proof. Suppose that τ is feasible upon a platform πx. From Lemma 4, it must be the case that load(k) ≤ Sm(π)·x and δmax(k) ≤ s1·x for all k. For τ to be determined to be dm-schedulable upon π by the test of Theorem 1, it is sufficient that for all k, 1 ≤ k ≤ n:

    load(k) ≤ (1/2)·(μk − νk·δmax(k))
    ⟸ load(k) ≤ (1/2)·(μk − (⌈μk/sm⌉ − 1)·δmax(k))    (since (⌈μk/sm⌉ − 1) ≥ νk, by Lemma 2)
    ⟸ load(k) ≤ (1/2)·(μk − (μk/sm)·δmax(k))    (since ⌈y⌉ ≤ y + 1 for all y)
    ≡ load(k) ≤ (1/2)·μk·(1 − δmax(k)/sm)
    ≡ load(k) ≤ (1/2)·(Sm(π) − λ(π)·δmax(k))·(1 − δmax(k)/sm)    (by (6))
    ⟸ Sm(π)·x ≤ (1/2)·(Sm(π) − λ(π)·s1·x)·(1 − (s1·x)/sm)    (by Lemma 4)
    ≡ Sm(π)·x ≤ (1/2)·(Sm(π) − Sm(π)·s1·x/sm − λ(π)·s1·x + λ(π)·s1²·x²/sm)
    ≡ 0 ≤ λ(π)·s1²·x² − (Sm(π)·(s1 + 2·sm) + λ(π)·s1·sm)·x + Sm(π)·sm .

Solving for x using standard techniques for the solution of quadratic inequalities yields (15). ∎
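Numerically, the bound (15) is the smaller root of the quadratic on the last line above. A sketch (names ours; at least two processors assumed, and the returned value is 1/x, the factor by which the test's inexactness can be compensated by faster processors):

```python
from math import sqrt

def speedup_factor(speeds):
    """Smaller root x of
        lam*s1^2*x^2 - (Sm*(s1 + 2*sm) + lam*s1*sm)*x + Sm*sm = 0,
    returned as 1/x.  Speeds may be given in any order."""
    s = sorted(speeds, reverse=True)
    s1, sm, Sm = s[0], s[-1], sum(s)
    lam = max((Sm - sum(s[:l])) / s[l - 1] for l in range(1, len(s)))
    a = lam * s1 * s1
    b = Sm * (s1 + 2 * sm) + lam * s1 * sm
    c = Sm * sm
    x = (b - sqrt(b * b - 4 * a * c)) / (2 * a)
    return 1.0 / x

# For an identical platform (s1 = ... = sm = 1, lam = m - 1, Sm = m),
# the discriminant b*b - 4*a*c equals 12m^2 - 4m + 1, matching footnote 3.
print(round(speedup_factor([1, 1, 1, 1]), 3))   # 3.538
```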
A processor-speedup bound for the dm-schedulability test of Theorem 1 immediately follows from Lemma 5:


Table 1. Speedup bound for various uniform platforms

heterogeneity    m      Sm(π)   λ(π)     s1   sm   speedup
H1/2             4      12      2        4    2    4.59
H1/2             20     60      14       4    2    4.84
H1/2             100    300     74       4    2    4.89
H1/2             1000   3000    749      4    2    4.90
H1/4             4      15      0.875    8    1    10.42
H1/4             20     75      8.345    8    1    10.81
H1/4             100    375     45.875   8    1    10.89
H1/4             1000   3750    467.75   8    1    10.91

Theorem 2. The dm-schedulability test of Theorem 1 has a processor speedup bound equal to the reciprocal of the value of the right-hand side of (15).
bound of the value of the right side of (15).



4.1 Analysis of the Speedup Bound

We first observe that the processor speedup bound of Theorem 2 generalizes previously-obtained bounds for identical multiprocessors. It may be verified that by setting λ(π) = (m − 1), s1 = s2 = ... = sm = 1, and Sm(π) = m, Theorem 1 reduces to the dm-schedulability test for identical multiprocessors, and Theorem 2 reduces³ to the speedup bound for identical multiprocessors, derived in [25]. It consequently follows that our result here is a generalization of the identical-multiprocessor dm test and speedup bound from [25].
Evaluation by simulation experiments. Equation (15) expresses the processor speedup bound as a function of the following platform parameters: λ(π), s1, sm, and Sm(π). In order to get a more intuitive feel for the bounds, we computed the speedup bound for various uniform multiprocessor platforms. Due to the large number of parameters, we restricted our study to platforms with four distinct processor speeds (8, 4, 2, 1) and two kinds of heterogeneity: 25% of each kind of processor speed, and 50% of processor speed 4 plus 50% of processor speed 2 (labeled H1/4 and H1/2 in Table 1, respectively).

Table 1 gives the speedup bound for the various uniform platforms considered in this study. As seen from this table, the bound increases with increasing heterogeneity, and increases with increasing size of the platform (for a given heterogeneity).
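The Sm(π) and λ(π) columns of Table 1 follow directly from their definitions; for instance (helper name ours):

```python
def platform_params(speeds):
    """Total capacity Sm(pi) and lambda parameter
    lambda(pi) = max over ell < m of (Sm(pi) - S_ell(pi)) / s_ell,
    with speeds taken in non-increasing order."""
    s = sorted(speeds, reverse=True)
    Sm = sum(s)
    lam = max((Sm - sum(s[:l])) / s[l - 1] for l in range(1, len(s)))
    return Sm, lam

# H1/2 with m = 4: two processors of speed 4 and two of speed 2.
print(platform_params([4, 4, 2, 2]))      # (12, 2.0)

# H1/4 with m = 4: one processor each of speeds 8, 4, 2, 1.
print(platform_params([8, 4, 2, 1]))      # (15, 0.875)
```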

5 Conclusions

Most research on multiprocessor real-time scheduling has focused on the simplest model: systems of implicit-deadline tasks that are scheduled on identical

³ Notice that the discriminant presented in [25] is actually 12m² − 4m + 1.


multiprocessors. More recent research has attempted to generalize this work in two different directions, by either generalizing the task model (to constrained-deadline and arbitrary-deadline sporadic task systems) or by generalizing the processor model (to uniform and unrelated multiprocessors).

Very recently [28], efforts have been made to generalize along both the task-model and the processor axes, by considering the scheduling of sporadic task systems upon uniform multiprocessors. However, the only scheduling algorithm that was considered in [28] is Earliest Deadline First (EDF). In this work, we have applied the techniques from [28] to dm scheduling. We have obtained a new schedulability test for the global dm scheduling of sporadic task systems upon preemptive uniform multiprocessor platforms. This test characterizes a task system by its load and δmax parameters, and a platform by its total computing capacity and its λ parameter. We have also obtained a characterization of the effectiveness of this schedulability test in terms of its processor speedup factor.

References
1. Mok, A.K.: Fundamental Design Problems of Distributed Systems for the Hard-Real-Time Environment. PhD thesis, Laboratory for Computer Science, Massachusetts Institute of Technology. Available as Technical Report No. MIT/LCS/TR-297 (1983)
2. Baruah, S., Mok, A., Rosier, L.: Preemptively scheduling hard-real-time sporadic tasks on one processor. In: Proceedings of the 11th Real-Time Systems Symposium, Orlando, Florida, pp. 182–190. IEEE Computer Society Press, Los Alamitos (1990)
3. Baruah, S., Cohen, N., Plaxton, G., Varvel, D.: Proportionate progress: A notion of fairness in resource allocation. Algorithmica 15(6), 600–625 (1996)
4. Oh, D.I., Baker, T.P.: Utilization bounds for N-processor rate monotone scheduling with static processor assignment. Real-Time Systems: The International Journal of Time-Critical Computing 15, 183–192 (1998)
5. Lopez, J.M., Garcia, M., Diaz, J.L., Garcia, D.F.: Worst-case utilization bound for EDF scheduling in real-time multiprocessor systems. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Stockholm, Sweden, pp. 25–34. IEEE Computer Society Press, Los Alamitos (2000)
6. Andersson, B., Jonsson, J.: Fixed-priority preemptive multiprocessor scheduling: To partition or not to partition. In: Proceedings of the International Conference on Real-Time Computing Systems and Applications, Cheju Island, South Korea, pp. 337–346. IEEE Computer Society Press, Los Alamitos (2000)
7. Andersson, B., Baruah, S., Jonsson, J.: Static-priority scheduling on multiprocessors. In: Proceedings of the IEEE Real-Time Systems Symposium, pp. 193–202. IEEE Computer Society Press, Los Alamitos (2001)
8. Goossens, J., Funk, S., Baruah, S.: Priority-driven scheduling of periodic task systems on multiprocessors. Real-Time Systems 25(2–3), 187–205 (2003)
9. Lopez, J.M., Diaz, J.L., Garcia, D.F.: Utilization bounds for EDF scheduling on real-time multiprocessor systems. Real-Time Systems: The International Journal of Time-Critical Computing 28(1), 39–68 (2004)
10. Funk, S.H.: EDF Scheduling on Heterogeneous Multiprocessors. PhD thesis, Department of Computer Science, The University of North Carolina at Chapel Hill (2004)


11. Baruah, S.: Scheduling periodic tasks on uniform processors. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Stockholm, Sweden, pp. 7–14 (June 2000)
12. Funk, S., Goossens, J., Baruah, S.: On-line scheduling on uniform multiprocessors. In: Proceedings of the IEEE Real-Time Systems Symposium, pp. 183–192. IEEE Computer Society Press, Los Alamitos (2001)
13. Baruah, S., Goossens, J.: Rate-monotonic scheduling on uniform multiprocessors. IEEE Transactions on Computers 52(7), 966–970 (2003)
14. Funk, S., Baruah, S.: Task assignment on uniform heterogeneous multiprocessors. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Palma de Mallorca, Balearic Islands, Spain, pp. 219–226. IEEE Computer Society Press, Los Alamitos (2005)
15. Darera, V.N., Jenkins, L.: Utilization bounds for RM scheduling on uniform multiprocessors. In: RTCSA 2006: Proceedings of the 12th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Washington, DC, USA, pp. 315–321. IEEE Computer Society, Los Alamitos (2006)
16. Andersson, B., Tovar, E.: Competitive analysis of partitioned scheduling on uniform multiprocessors. In: Proceedings of the Workshop on Parallel and Distributed Real-Time Systems, Long Beach, CA (March 2007)
17. Andersson, B., Tovar, E.: Competitive analysis of static-priority scheduling on uniform multiprocessors. In: Proceedings of the IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Daegu, Korea. IEEE Computer Society Press, Los Alamitos (2007)
18. Baker, T.: Multiprocessor EDF and deadline monotonic schedulability analysis. In: Proceedings of the IEEE Real-Time Systems Symposium, pp. 120–129. IEEE Computer Society Press, Los Alamitos (2003)
19. Baker, T.P.: An analysis of EDF schedulability on a multiprocessor. IEEE Transactions on Parallel and Distributed Systems 16(8), 760–768 (2005)
20. Bertogna, M., Cirinei, M., Lipari, G.: Improved schedulability analysis of EDF on multiprocessor platforms. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Palma de Mallorca, Balearic Islands, Spain, pp. 209–218. IEEE Computer Society Press, Los Alamitos (2005)
21. Bertogna, M., Cirinei, M., Lipari, G.: New schedulability tests for real-time task sets scheduled by deadline monotonic on multiprocessors. In: Proceedings of the 9th International Conference on Principles of Distributed Systems, Pisa, Italy. IEEE Computer Society Press, Los Alamitos (2005)
22. Cirinei, M., Baker, T.P.: EDZL scheduling analysis. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Pisa, Italy. IEEE Computer Society Press, Los Alamitos (2007)
23. Fisher, N.: The Multiprocessor Real-Time Scheduling of General Task Systems. PhD thesis, Department of Computer Science, The University of North Carolina at Chapel Hill (2007)
24. Baruah, S., Baker, T.: Schedulability analysis of global EDF. Real-Time Systems (to appear, 2008)
25. Baruah, S., Fisher, N.: Global deadline-monotonic scheduling of arbitrary-deadline sporadic task systems. In: Tovar, E., Tsigas, P., Fouchal, H. (eds.) OPODIS 2007. LNCS, vol. 4878, pp. 204–216. Springer, Heidelberg (2007)
26. Baruah, S., Baker, T.: Global EDF schedulability analysis of arbitrary sporadic task systems. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Prague, Czech Republic. IEEE Computer Society Press, Los Alamitos (2008)


27. Leung, J., Whitehead, J.: On the complexity of fixed-priority scheduling of periodic, real-time tasks. Performance Evaluation 2, 237–250 (1982)
28. Baruah, S., Goossens, J.: The EDF scheduling of sporadic task systems on uniform multiprocessors. Technical report, University of North Carolina at Chapel Hill (2008)
29. Ripoll, I., Crespo, A., Mok, A.K.: Improvement in feasibility testing for real-time tasks. Real-Time Systems: The International Journal of Time-Critical Computing 11, 19–39 (1996)
30. Baker, T.P., Fisher, N., Baruah, S.: Algorithms for determining the load of a sporadic task system. Technical Report TR-051201, Department of Computer Science, Florida State University (2005)
31. Fisher, N., Baruah, S., Baker, T.: The partitioned scheduling of sporadic tasks according to static priorities. In: Proceedings of the EuroMicro Conference on Real-Time Systems, Dresden, Germany. IEEE Computer Society Press, Los Alamitos (2006)
32. Fisher, N., Baker, T., Baruah, S.: Algorithms for determining the demand-based load of a sporadic task system. In: Proceedings of the International Conference on Real-Time Computing Systems and Applications, Sydney, Australia. IEEE Computer Society Press, Los Alamitos (2006)
33. Liu, C., Layland, J.: Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the ACM 20(1), 46–61 (1973)

A Comparison of the M-PCP, D-PCP, and FMLP on LITMUS^RT

Björn B. Brandenburg and James H. Anderson

The University of North Carolina at Chapel Hill
Dept. of Computer Science
Chapel Hill, NC 27599-3175 USA
{bbb,anderson}@cs.unc.edu

Abstract. This paper presents a performance comparison of three multiprocessor real-time locking protocols: the multiprocessor priority ceiling protocol (M-PCP), the distributed priority ceiling protocol (D-PCP), and the flexible multiprocessor locking protocol (FMLP). In the FMLP, blocking is implemented via either suspending or spinning, while in the M-PCP and D-PCP, all blocking is by suspending. The presented comparison was conducted using a UNC-produced Linux extension called LITMUS^RT. In this comparison, schedulability experiments were conducted in which runtime overheads as measured on LITMUS^RT were used. In these experiments, the spin-based FMLP variant always exhibited the best performance, and the M-PCP and D-PCP almost always exhibited poor performance. These results call into question the practical viability of the M-PCP and D-PCP, which have been the de facto standard for real-time multiprocessor locking for the last 20 years.

1 Introduction

With the continued push towards multicore architectures by most (if not all) major chip manufacturers [19,26], the computing industry is facing a paradigm shift: in the near future, multiprocessors will be the norm. Current off-the-shelf systems now routinely contain chips with two, four, and even eight cores, and chips with up to 80 cores are envisioned within a decade [26]. Not surprisingly, with multicore platforms becoming so widespread, real-time applications are already being deployed on them. For example, systems processing time-sensitive business transactions have been realized by Azul Systems on top of the highly-parallel Vega2 platform, which consists of up to 768 cores [4].

Motivated by these developments, research on multiprocessor real-time scheduling has intensified in recent years (see [13] for a survey). Thus far, however, few proposed approaches have actually been implemented in operating systems and evaluated under real-world conditions. To help bridge the gap between algorithmic research and real-world systems, our group recently developed LITMUS^RT, a multiprocessor real-time extension of Linux [8,11,12]. Our choice of Linux as a development platform was influenced by recent efforts to introduce real-time-oriented features in stock Linux (see, for example, [1]). As Linux evolves, it could

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 105–124, 2008.
© Springer-Verlag Berlin Heidelberg 2008



undoubtedly benefit from recent algorithmic advances in real-time scheduling-related research.

LITMUS^RT has been used in several scheduling-related performance studies [5,8,12]. In addition, a study was conducted to compare synchronization alternatives under global and partitioned earliest-deadline-first (EDF) scheduling [11]. This study was partially motivated by the relative lack (compared to scheduling) of research on real-time multiprocessor synchronization. It focused more broadly on comparing suspension- and spin-based locking on the basis of schedulability. Spin-based locking was shown to be the better choice.
Focus of this paper. In this paper, we present follow-up work to the latter study that focuses on systems where partitioned, static-priority (P-SP) scheduling is used. This is an important category of systems, as both partitioning and static priorities tend to be favored by practitioners. Moreover, the earliest and most influential work on multiprocessor real-time synchronization was directed at such systems. This work resulted in two now-classic locking protocols: the multiprocessor priority ceiling protocol (M-PCP) and the distributed priority ceiling protocol (D-PCP) [23]. While these protocols are probably the most widely known (and taught) locking protocols for multiprocessor real-time applications, they were developed at a time (over 20 years ago) when such applications were deemed to be mostly of academic interest only. With the advent of multicore technologies, this is clearly no longer the case. Motivated by this, we take a new look at these protocols herein with the goal of assessing their practical viability. We also examine the subject of our prior EDF-based study, the flexible multiprocessor locking protocol (FMLP) [6,9,11]. We seek to assess the effectiveness of these protocols in managing memory-resident resources on P-SP-scheduled shared-memory multiprocessors.
Tested protocols. The M-PCP, D-PCP, and FMLP function very differently. In both the M-PCP and D-PCP, blocked tasks are suspended, i.e., such a task relinquishes its assigned processor. The main difference between these two protocols is that, in the D-PCP, resources are assigned to processors, while in the M-PCP, such an assignment is not made.¹ In the D-PCP, a task accesses a resource via an RPC-like invocation of an agent on the resource's processor that performs the access. In the M-PCP, a global semaphore protocol is used instead. In both protocols, requests for global resources (i.e., resources accessed by tasks on multiple processors) cannot appear in nested request sequences. Invocations on such resources are ordered by priority and execute at elevated priority levels so that they complete more quickly. In contrast to these two protocols, the FMLP orders requests on a FIFO basis, allows arbitrary request nesting (with one slight restriction, described later), and is agnostic regarding whether blocking is via spinning (busy-waiting) or suspension. While spinning wastes processor time, in our prior work on EDF-scheduled systems [11], we found that its use almost always results in better schedulability than suspending. This is because

¹ Because the D-PCP assigns resources to processors, it can potentially be used in loosely-coupled distributed systems; hence its name.


it can be difficult to predict which scheduler events may affect a task while it is suspended, so the needed analysis tends to be pessimistic. (Each of the protocols considered here is described more fully later.)
Methodology and results. The main contribution of this paper is an assessment of the performance of the three protocols described above in terms of P-SP schedulability. Our methodology in conducting this assessment is similar to that used in our earlier work on EDF-scheduled systems [11]. The performance of any synchronization protocol will depend on runtime overheads, such as preemption costs, scheduling costs, and costs associated with performing various system calls. We determined these costs by analyzing trace data collected while running various workloads under LITMUS^RT (which, of course, first required implementing each synchronization protocol in LITMUS^RT). We then used these costs in schedulability experiments involving randomly-generated task systems. In these experiments, a wide range of task-set parameters was considered (though only a subset of our data is presented herein, due to space limitations). In each experiment, schedulability was checked for each scheme using a demand-based schedulability test [15], augmented to account for runtime overheads. In these experiments, we found that the spin-based FMLP variant always exhibited the best performance (usually, by a wide margin), and the M-PCP and D-PCP almost always exhibited poor performance. These results reinforce our earlier finding that spin-based locking is preferable to suspension-based locking under EDF scheduling [11]. They also call into question the practical viability of the M-PCP and D-PCP.
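The specific demand-based test of [15] and its overhead accounting are not reproduced in this excerpt. As a reminder of the kind of per-processor analysis that P-SP schedulability tests build on, classic static-priority response-time iteration for a single processor can be sketched as follows (a textbook sketch of ours, not the augmented test actually used in the experiments):

```python
from math import ceil

def response_time(tasks, i, limit=10**6):
    """Worst-case response time of task i (index 0 = highest priority)
    under preemptive static-priority scheduling on one processor.
    Each task is a (cost, period) pair; deadline <= period is assumed.
    Returns None if the iteration exceeds `limit` (unschedulable)."""
    C, _ = tasks[i]
    R = C
    while R <= limit:
        # Fixed-point iteration: own cost plus higher-priority interference.
        nxt = C + sum(ceil(R / Tj) * Cj for (Cj, Tj) in tasks[:i])
        if nxt == R:
            return R
        R = nxt
    return None

# Example: three tasks indexed by rate-monotonic priority.
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(3)])   # [1, 3, 10]
```

Blocking terms introduced by a locking protocol would be added to each task's interference bound before running such an iteration.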
Organization. In the next two sections, we discuss needed background and the
results of our experiments. In an appendix, we describe how runtime overheads
were obtained.

2 Background

We consider the scheduling of a system of sporadic tasks, denoted T1, ..., TN, on m processors. The jth job (or invocation) of task Ti is denoted Ti^j. Such a job Ti^j becomes available for execution at its release time, r(Ti^j). Each task Ti is specified by its worst-case (per-job) execution cost, e(Ti), and its period, p(Ti). The job Ti^j should complete execution by its absolute deadline, r(Ti^j) + p(Ti). The spacing between job releases must satisfy r(Ti^{j+1}) ≥ r(Ti^j) + p(Ti). Task Ti's utilization reflects the processor share that it requires and is given by e(Ti)/p(Ti).

In this paper, we consider only partitioned static-priority (P-SP) scheduling, wherein each task is statically assigned to a processor and each processor is scheduled independently using a static-priority uniprocessor algorithm. A well-known example of such an algorithm is the rate-monotonic (RM) algorithm, which gives higher priority to tasks with smaller periods. In general, we assume that tasks are indexed from 1 to N by decreasing priority, i.e., a lower index implies higher priority. We refer to Ti's index i as its base priority. A job is scheduled using its effective priority, which can sometimes exceed its base
108

B.B. Brandenburg and J.H. Anderson

priority under certain resource-sharing policies (e.g., priority inheritance may


raise a jobs eective priority). After its release, a job Tij is said to be pending
until it completes. While it is pending, Tij is either runnable or suspended. A
suspended job cannot be scheduled. When a job transitions from suspended
to runnable (runnable to suspended), it is said to resume (suspend ). While
runnable, a job is either preemptable or non-preemptable. A newly-released or resuming job can preempt a scheduled lower-priority job only if it is
preemptable.
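The distinction between base and effective priorities can be sketched as follows (hypothetical helper names; the effective-priority override shown is a simplification of the boosting rules described later):

```python
from collections import namedtuple

# Minimal task record for this sketch (name and period only).
Task = namedtuple("Task", "name p")

def rm_index(tasks):
    """Assign base priorities 1..n in rate-monotonic order: a smaller
    period yields a smaller index, i.e., a higher base priority."""
    ordered = sorted(tasks, key=lambda t: t.p)
    return {t.name: i + 1 for i, t in enumerate(ordered)}

def pick_next(runnable, base, boosted):
    """Schedule the runnable job with the best (numerically smallest)
    effective priority. `boosted` maps a job to an effective-priority
    override (e.g., 0 while it holds a global resource); all other jobs
    run at their base priority."""
    return min(runnable, key=lambda j: boosted.get(j, base[j]))
```

For instance, a job whose effective priority is boosted to 0 is chosen ahead of any job running at its base priority.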
Resources. When a job Tij requires a resource ℓ, it issues a request R for ℓ. R is satisfied as soon as Tij holds ℓ, and completes when Tij releases ℓ. |R| denotes the maximum time that Tij will hold ℓ. Tij becomes blocked on ℓ if R cannot be satisfied immediately. (A resource can be held by at most one job at a time.) A resource ℓ is local to a processor p if all jobs requesting ℓ execute on p, and global otherwise.

If Tij issues another request R' before R is complete, then R' is nested within R. In such cases, |R| includes the cost of blocking due to requests nested in R. Some synchronization protocols disallow nesting. If allowed, nesting is proper, i.e., R' must complete no later than R completes. An outermost request is not nested within any other request. Inset (b) of Fig. 1 illustrates the different phases of a resource request. In this and later figures, the legend shown in inset (a) of Fig. 1 is assumed.
Resource sharing introduces a number of problems that can endanger temporal correctness. Priority inversion occurs when a high-priority job Thi cannot proceed due to a lower-priority job Tlj either being non-preemptable or holding a resource requested by Thi. Thi is said to be blocked by Tlj. Another source of delay is remote blocking, which occurs when a global resource requested by a job is already in use on another processor.

In each of the synchronization protocols considered in this paper, local resources can be managed by using simpler uniprocessor locking protocols, such as the priority ceiling protocol [25] or stack resource policy [3]. Due to space constraints, we do not consider such functionality further, but instead focus our attention on global resources, as they are more difficult to support and have the greatest impact on performance. We explain below how such resources are handled by considering each of the D-PCP, M-PCP, and FMLP in turn. It is not possible to delve into every detail of each protocol given the space available. For such details, we refer the reader to [6,9,11,21].
The D-PCP and M-PCP. The D-PCP implements global resources by providing local agents that act on behalf of requesting jobs. A local agent Aqi, located on remote processor q where jobs of Ti request resources, carries out requests on behalf of Ti on processor q. Instead of accessing a global resource ℓ on remote processor q directly, a job Tij submits a request R to Aqi and suspends. Tij resumes when Aqi has completed R. To expedite requests, Aqi executes with an effective priority higher than that of any normal task (see [16,21] for details). However, agents of lower-priority tasks can still be preempted by agents of higher-priority
A Comparison of the M-PCP, D-PCP, and FMLP on LITMUSRT

Fig. 1. (a) Legend. (b) Phases of a resource request. Tij issues R1 and blocks since R1 is not immediately satisfied. Tij holds the resource associated with R1 for |R1| time units, which includes blocking incurred due to nested requests.

tasks. When accessing global resources residing on Ti's assigned processor, Tij serves as its own agent.
The M-PCP relies on shared memory to support global resources. In contrast to the D-PCP, global resources are not assigned to any particular processor but are accessed directly. Local agents are thus not required since jobs execute requests themselves on their assigned processors. Competing requests are satisfied in order of job priority. When a request is not satisfied immediately, the requesting job suspends until its request is satisfied. Under the M-PCP, jobs holding global resources execute with an effective priority higher than that of any normal task.

The D-PCP and M-PCP avoid deadlock by prohibiting the nesting of global resource requests: a global request R cannot be nested within another request (local or global) and no other request (local or global) may be nested within R.
Example. Fig. 2 depicts global schedules for four jobs (T11, ..., T41) sharing two resources (ℓ1, ℓ2) on two processors. Inset (a) shows resource sharing under the D-PCP. Both resources reside on processor 1. Thus, two agents (A12, A14) are also assigned to processor 1 in order to act on behalf of T2 and T4 on processor 2. A14 becomes active at time 2 when T41 requests ℓ1. However, since T31 already holds ℓ1, A14 is blocked. Similarly, A12 becomes active and blocks at time 4. When T31 releases ℓ1, A12 gains access next because it is the highest-priority active agent on processor 1. Note that, even though the highest-priority job T11 is released at time 2, it is not scheduled until time 7 because agents and resource-holding jobs have an effective priority that exceeds the base priority of T11. A12 becomes active at time 9 since T21 requests ℓ2. However, T11 is accessing ℓ1 at the time, and thus has an effective priority that exceeds A12's priority. Therefore, A12 is not scheduled until time 10.

Inset (b) shows the same scenario under the M-PCP. In this case, T21 and T41 access global resources directly instead of via agents. T41 suspends at time 2 since T31 already holds ℓ1. Similarly, T21 suspends at time 4 until it holds ℓ1 one time unit later. Meanwhile, on processor 1, T11 is scheduled at time 5 after T31 returns to normal priority and also requests ℓ1 at time 6. Since resource requests are satisfied in priority order, T11's request has precedence over T41's request, which was issued much earlier at time 2. Thus, T41 must wait until time 8 to access ℓ1.

Fig. 2. Example schedules of four tasks sharing two global resources. (a) D-PCP schedule. (b) M-PCP schedule. (c) FMLP schedule (ℓ1, ℓ2 are long). (d) FMLP schedule (ℓ1, ℓ2 are short).

Note that T41 preempts T21 when it resumes at time 8 since it is holding a global resource.

The FMLP. The FMLP is considered to be flexible for several reasons: it can be used under either partitioned or global scheduling, with either static or dynamic task priorities, and it is agnostic regarding whether blocking is via spinning or suspension. Regarding the latter, resources are categorized as either short or long. Short resources are accessed using queue locks (a type of spin lock) [2,14,18] and long resources are accessed via a semaphore protocol. Whether a resource should be considered short or long is user-defined, but requests for long resources may not be contained within requests for short resources. To date, we have implemented FMLP variants for both partitioned and global EDF and P-SP scheduling (the focus of the description given here).
Deadlock avoidance. The FMLP uses a very simple deadlock-avoidance mechanism that was motivated by trace data we collected involving the behavior of
actual real-time applications [7]. This data (which is summarized later) suggests


that nesting, which is required to cause a deadlock, is somewhat rare; thus, complex deadlock-avoidance mechanisms are of questionable utility. In the FMLP, deadlock is prevented by grouping resources and allowing only one job to access resources in any given group at any time. Two resources are in the same group iff they are of the same type (short or long) and requests for one may be nested within those of the other. A group lock is associated with each resource group; before a job can access a resource, it must first acquire its corresponding group lock. All blocking incurred by a job occurs when it attempts to acquire the group lock associated with a resource request that is outermost with respect to either short or long resources.² We let G(ℓ) denote the group that contains resource ℓ.
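Group formation can be viewed as computing connected components of the "may nest" relation among same-type resources. The sketch below is a hedged illustration of that idea (names are ours; the FMLP specification [9] defines groups formally):

```python
def resource_groups(resources, same_type, may_nest_pairs):
    """Partition resources into FMLP-style groups via union-find.

    resources: iterable of resource ids.
    same_type: dict mapping each id to 'short' or 'long'.
    may_nest_pairs: pairs (a, b) whose requests may be nested in one another.
    """
    parent = {r: r for r in resources}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path halving
            r = parent[r]
        return r

    for a, b in may_nest_pairs:
        # groups never mix short and long resources
        if same_type[a] == same_type[b]:
            parent[find(a)] = find(b)

    groups = {}
    for r in resources:
        groups.setdefault(find(r), set()).add(r)
    return list(groups.values())
```

A resource that nests with nothing ends up in a singleton group, so its group lock behaves like an ordinary per-resource lock.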
We now explain how resource requests are handled in the FMLP. This process is illustrated in Fig. 3.


Fig. 3. Phases of short and long resource requests

Short requests. If R is short and outermost, then Tij becomes non-preemptable and attempts to acquire the queue lock protecting G(ℓ). In a queue lock, blocked processes busy-wait in FIFO order.³ R is satisfied once Tij holds ℓ's group lock. When R completes, Tij releases the group lock and leaves its non-preemptive section.
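The FIFO service order of a queue lock can be modeled with a ticket-lock-style sketch (a single-threaded model for illustration only; the actual LITMUSRT implementation uses atomic operations and per-processor spinning):

```python
class TicketLock:
    """Model of a FIFO queue (ticket) lock: acquiring jobs draw tickets
    and are served strictly in ticket order, which bounds a job's blocking
    by the number of earlier-queued requests. A real queue lock draws
    tickets with an atomic fetch-and-increment; this sketch only
    demonstrates the service order."""

    def __init__(self):
        self.next_ticket = 0
        self.now_serving = 0

    def draw_ticket(self):
        t = self.next_ticket
        self.next_ticket += 1
        return t

    def can_enter(self, ticket):
        # a blocked job busy-waits by repeatedly checking this predicate
        return self.now_serving == ticket

    def release(self):
        self.now_serving += 1
```

With two contenders, the job that drew the earlier ticket enters first; the other spins until `release` advances `now_serving`.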
Long requests. If R is long and outermost, then Tij attempts to acquire the semaphore protecting G(ℓ). Under a semaphore lock, blocked jobs are added to a FIFO queue and suspend. As soon as R is satisfied (i.e., Tij holds ℓ's group lock), Tij resumes (if it suspended) and enters a non-preemptive section (which

² A short resource request nested within a long resource request but no short resource request is considered outermost.
³ The desirability of FIFO-based real-time multiprocessor locking protocols has been noted by others [17], but to our knowledge, the FMLP is the first such protocol to be implemented in a real OS.


becomes effective as soon as Tij is scheduled). When R completes, Tij releases the group lock and becomes preemptive.

Priority boost. If R is long and outermost, then Tij's priority is boosted when R is satisfied (i.e., Tij is scheduled with effective priority 0). This allows it to preempt jobs executing preemptively at base priority. If two or more priority-boosted jobs are ready, then they are scheduled in the order in which their priorities were boosted (FIFO).
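The long-resource rules (a FIFO suspension queue plus priority boosting in boost order) can be modeled as follows (an illustrative sketch, not the kernel implementation; names are ours):

```python
from collections import deque

class LongResourceGroupLock:
    """Model of the FMLP long-resource protocol: jobs that cannot acquire
    the group lock suspend on a FIFO queue; the holder is priority-boosted
    while its request is satisfied, and ties among boosted jobs break in
    boost (FIFO) order."""

    def __init__(self):
        self.holder = None
        self.waiting = deque()   # suspended jobs, FIFO
        self.boost_order = []    # currently boosted jobs, in boost order

    def request(self, job):
        if self.holder is None:
            self.holder = job
            self.boost_order.append(job)   # boosted once satisfied
            return "satisfied"
        self.waiting.append(job)           # job suspends
        return "suspended"

    def release(self):
        done = self.holder
        self.boost_order.remove(done)
        self.holder = self.waiting.popleft() if self.waiting else None
        if self.holder is not None:
            self.boost_order.append(self.holder)  # next job resumes, boosted
        return self.holder
```

Replaying the example of Fig. 2(c) with this model: if T31 holds the lock and T41 then T21 suspend on it, releasing hands the lock to T41 first, reflecting FIFO rather than priority order.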
Example. Insets (c) and (d) of Fig. 2 depict FMLP schedules for the same scenario previously considered in the context of the D-PCP and M-PCP. In (c), ℓ1 and ℓ2 are classified as long resources. As before, T31 requests ℓ1 first and forces the jobs on processor 2 to suspend (T41 at time 2 and T21 at time 4). In contrast to both the D-PCP and M-PCP, contending requests are satisfied in FIFO order. Thus, when T31 releases ℓ1 at time 5, T41's request is satisfied before that of T21. Similarly, T11's request for ℓ1 is only satisfied after T21 completes its request at time 7. Note that, since jobs suspend when blocked on a long resource, T31 can be scheduled for one time unit at time 6 when T11 blocks on ℓ1.

Inset (d) depicts the schedule that results when both ℓ1 and ℓ2 are short. The main difference from the schedule depicted in (c) is that jobs busy-wait non-preemptively when blocked on a short resource. Thus, when T21 is released at time 3, it cannot be scheduled until time 6 since T41 executes non-preemptively from time 2 until time 6. Similarly, T41 cannot be scheduled at time 7 when T21 blocks on ℓ2 because T21 does not suspend. Note that, due to the waste of processing time caused by busy-waiting, the last job only finishes at time 15. Under suspension-based synchronization methods, the last job finishes at either time 13 (M-PCP and FMLP for long resources) or 14 (D-PCP).

3 Experiments

In our study, we sought to assess the practical viability of the aforementioned synchronization protocols. To do so, we determined the schedulability of randomly-generated task sets under each scheme. (A task system is schedulable if it can be verified via some test that no task will ever miss a deadline.)

Task parameters were generated (similar to the approach previously used in [11]) as follows. Task utilizations were distributed uniformly over [0.001, 0.1]. To cover a wide range of timing constraints, we considered four ranges of periods: (i) [3ms-33ms], (ii) [10ms-100ms], (iii) [33ms-100ms], and (iv) [100ms-1000ms]. Task execution costs (excluding resource requests) were calculated based on utilizations and periods. Periods were defined to be integral, but execution costs may be non-integral. All time-related values used in our experiments were defined assuming a target platform like that used in obtaining overhead values. As explained in the appendix, this system has four 2.7 GHz processors.

Given our focus on partitioned scheduling, task sets were obtained by first generating tasks for each processor individually, until either a per-processor utilization cap Ū was reached or 30 tasks were generated, and then generating


resource requests. By eliminating the need to partition task sets, we prevent the effects of bin-packing heuristics from skewing our results. All generated task sets were determined to be schedulable before blocking was taken into account.
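Under the stated assumptions, the per-processor generation step might look like this (a sketch; parameter names are ours, and any distributional detail beyond those stated in the text is an assumption):

```python
import random

def generate_tasks(util_cap, period_range, max_tasks=30, rng=random):
    """Generate one processor's tasks: utilizations uniform in
    [0.001, 0.1], integral periods uniform over period_range; stop when
    the utilization cap would be exceeded or max_tasks exist. Execution
    costs (possibly non-integral) follow from e = u * p."""
    lo, hi = period_range
    tasks, total_u = [], 0.0
    while len(tasks) < max_tasks:
        u = rng.uniform(0.001, 0.1)
        if total_u + u > util_cap:
            break
        p = rng.randint(lo, hi)      # integral period
        tasks.append((u * p, p))     # (execution cost e, period p)
        total_u += u
    return tasks
```

Running this once per processor (with no partitioning step needed afterward) mirrors the generation procedure described above.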
Resource sharing. Each task was configured to issue between 0 and K resource requests. The access cost of each request (excluding synchronization overheads) was chosen uniformly from [0.1μs, L]. K ranged from 0 to 9 and L from 0.5μs to 15.5μs. The latter range was chosen based on locking trends observed in a prior study of locking patterns in the Linux kernel, two video players, and an interactive 3D video game (see [7] for details). Although Linux is not a real-time system, its locking behavior should be similar to that of many complex systems, including real-time systems, where great care is taken to make critical sections short and efficient. The video players and the video game need to ensure that both visual and audio content are presented to the user in a timely manner, and thus are representative of the locking behavior of a class of soft real-time applications. The trace data we collected in analyzing these applications suggests that, with respect to both semaphores and spin locks, critical sections tend to be short (usually, just a few microseconds on a modern processor) and nested lock requests are somewhat rare (typically only 1% to 30% of all requests, depending on the application, with nesting levels deeper than two being very rare).

The total number of generated tasks N was used to determine the number of resources according to the formula ⌈KN/(αm)⌉, where the sharing degree α was chosen from {0.5, 1, 2, 4}. Under the D-PCP, resources were assigned to processors in a round-robin manner to distribute the load evenly. Nested resource requests were not considered since they are not supported by the M-PCP and D-PCP and also because allowing nesting has a similar effect on schedulability under the FMLP as increasing the maximum critical-section length.
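The resource count and the D-PCP's round-robin assignment can be sketched as follows. Note that the sharing-degree symbol α is garbled in our source, so reading the formula as ⌈KN/(αm)⌉ is a hedged reconstruction, chosen because it matches the reported trend that a higher sharing degree yields fewer resources:

```python
import math

def resource_count(K, N, m, alpha):
    """Number of shared resources, read as ceil(K*N / (alpha*m)):
    K requests per task, N tasks, m processors, sharing degree alpha.
    A larger alpha means more tasks per resource, hence fewer resources."""
    return math.ceil(K * N / (alpha * m))

def round_robin_assign(num_resources, m):
    """D-PCP: assign resources to processors round-robin to spread load."""
    return {r: r % m for r in range(num_resources)}
```

For example, with K = 2 requests per task, N = 16 tasks, m = 4 processors, and α = 2, this yields 4 resources, one per processor after round-robin assignment.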
Finally, task execution costs and request durations were inflated to account for system overheads (such as context-switching costs) and synchronization overheads (such as the cost of invoking synchronization-related system calls). The methodology for doing this is explained in the appendix.

Schedulability. After a task set was generated, the worst-case blocking delay of each task was determined by using methods described in [20,21] (M-PCP), [16,21] (D-PCP), and [9] (FMLP). Finally, we determined whether a task set was schedulable after accounting for overheads and blocking delay with a demand-based [15] schedulability test.
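The flavor of such a test can be illustrated with standard uniprocessor time-demand analysis augmented by a per-task blocking term (a sketch in the spirit of the demand-based test cited above; the study's actual test also charges protocol-specific overheads):

```python
import math

def response_time(i, tasks, blocking):
    """Iterative time-demand analysis for task i on one processor under
    static priorities. tasks is a list of (e, p) pairs sorted by
    decreasing priority; blocking[i] is task i's worst-case blocking.
    Returns the worst-case response time, or None if the deadline
    (= period) cannot be met."""
    e_i, p_i = tasks[i]
    w = e_i + blocking[i]
    while True:
        # demand: own cost, blocking, and higher-priority interference
        demand = e_i + blocking[i] + sum(
            math.ceil(w / p_j) * e_j for e_j, p_j in tasks[:i])
        if demand > p_i:
            return None
        if demand == w:
            return w          # fixed point reached
        w = demand

def schedulable(tasks, blocking):
    return all(response_time(i, tasks, blocking) is not None
               for i in range(len(tasks)))
```

A task set that passes with zero blocking can fail once blocking terms are added, which is exactly the effect the synchronization protocols are compared on.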
A note on the period enforcer. When a job suspends, it defers part of its execution to a later instant, which can cause a lower-priority job to experience deferral blocking. In checking schedulability, this source of blocking must be accounted for. In [21], it is claimed that deferral blocking can be eliminated by using a technique called the period enforcer. In this paper, we do not consider the use of the period enforcer, for a number of reasons. First, the period enforcer has not been described in published work (nor is a complete description available online). Thus, we were unable to verify its correctness⁴ and were unable to obtain sufficient information to enable an implementation in LITMUSRT (which obviously is a prerequisite for obtaining realistic overhead measurements). Second, from our understanding, it requires a task to be split into subtasks whenever it requests a resource. Such subtasks are eligible for execution at different times based on the resource-usage history of prior (sub-)jobs. We do not consider it feasible to efficiently maintain a sufficiently complete resource-usage history in-kernel at runtime. (Indeed, to the best of our knowledge, the period enforcer has never been implemented in any real OS.) Third, all tested suspension-based synchronization protocols are affected by deferral blocking to the same extent. Thus, even if it were possible to avoid deferral blocking altogether, the relative performance of the algorithms is unlikely to differ significantly from our findings.
3.1 Performance on a Four-Processor System

We conducted schedulability experiments assuming four to 16 processors. In all cases, we used overhead values obtained from our four-processor test platform. (This is perhaps one limitation of our study. In reality, overheads on larger platforms might be higher, e.g., due to greater bus contention or different caching behavior. We decided to simply use our four-processor overheads in all cases rather than guessing as to what overheads would be appropriate on larger systems.) In this subsection, we discuss experiments conducted to address three questions: When (if ever) does either FMLP variant perform worse than either PCP variant? When (if ever) is blocking by suspending a viable alternative to blocking by spinning? What parameters affect the performance of the tested algorithms most? In these experiments, a four-processor system was assumed; larger systems are considered in the next subsection.

Generated task systems. To answer the questions above, we conducted extensive experiments covering a large range of possible task systems. We varied (i) L in 40 steps over its range ([0.5μs, 15.5μs]), (ii) K in steps of one over its range ([0, 9]), and (iii) Ū in 40 steps over its range ([0.1, 0.5]), while keeping (in each case) all other task-set generation parameters constant so that schedulability could be determined as a function of L, K, and Ū. In particular, we conducted experiments (i)-(iii) for constant assignments from all combinations of α ∈ {0.5, 1, 2, 4}, Ū ∈ {0.15, 0.3, 0.45}, L ∈ {3μs, 9μs, 15μs}, K ∈ {2, 5, 9}, and the four task period ranges defined earlier. For each sampling point, we generated (and tested for schedulability under each algorithm) 1,000 task sets, for a total of 13,140,000 task sets.
Trends. It is clearly not feasible to present all 432 resulting graphs. However, the results show clear trends. We begin by making some general observations concerning these trends. Below, we consider a few specific graphs that support these observations.

⁴ In fact, we have confirmed that some existing scheduling analysis (e.g., [21]) that uses the period enforcer is flawed [22]. Interestingly, in her now-standard textbook on the subject of real-time systems, Liu does not assume the presence of the period enforcer in her analysis of the D-PCP [16].
In all tested scenarios, suspending was never preferable to spinning. In fact, in the vast majority of the tested scenarios, every generated task set was schedulable under spinning (the short FMLP variant). In contrast, many scenarios could not be scheduled under any of the suspension-based methods. The only time that suspending was ever a viable alternative was in scenarios with a small number of resources (i.e., small K, low Ū, high α) and relatively lax timing constraints (long, homogeneous periods). Since the short FMLP variant is clearly the best choice (from a schedulability point of view), we mostly focus our attention on the suspension-based protocols in the discussion that follows.

Overall, the long FMLP variant exhibited the best performance among suspension-based algorithms, especially in low-sharing-degree scenarios. For α = 0.5, the long FMLP variant always exhibited better performance than both the M-PCP and D-PCP. For α = 1, the long FMLP variant performed best in 101 of 108 tested scenarios. In contrast, the M-PCP was never the preferable choice for any α. Our results show that the D-PCP hits a "sweet spot" (which we discuss in greater detail below) when K = 2, Ū ≤ 0.3, and α ≥ 2; it even outperformed the long FMLP variant in some of these scenarios (but never the short variant). However, the D-PCP's performance quickly diminished outside this narrow sweet spot. Further, even in the cases where the D-PCP exhibited the best performance among the suspension-based protocols, schedulability was very low. The M-PCP often outperformed the D-PCP; however, in all such cases, the long FMLP variant performed better (and sometimes significantly so).

The observed behavior of the D-PCP reveals a significant difference with respect to the M-PCP and FMLP. Whereas the performance of the latter two is mostly determined by the task count and tightness of timing constraints, the D-PCP's performance closely depends on the number of resources: whenever the number of resources does not exceed the number of processors significantly, the D-PCP does comparatively well. Since (under our task-set generation method) the number of resources depends directly on both K and α (and indirectly on Ū, which determines how many tasks are generated), this explains the observed sweet spot. The D-PCP's insensitivity to total task count can be traced back to its distributed nature: under the D-PCP, a job can only be delayed by events on its local processor and on remote processors where it requests resources. In contrast, under the M-PCP and FMLP, a job can be delayed transitively by events on all processors where jobs reside with which the job shares a resource.
Example graphs. Insets (a)-(f) of Fig. 4 and (a)-(c) of Fig. 5 display nine selected graphs for the four-processor case that illustrate the above trends. These insets are discussed next.

Fig. 4 (a)-(c). The left column of graphs in Fig. 4 shows schedulability as a function of L for K = 9. The case depicted in inset (a), where Ū = 0.3 and p(Ti) ∈ [33, 100], shows how both FMLP variants significantly outperform both the M-PCP and D-PCP in low-sharing-degree scenarios. Note how even the long

variant achieves almost perfect schedulability. In contrast, the D-PCP fails to schedule any task set, while schedulability under the M-PCP hovers around 0.75. Inset (b), where Ū = 0.3, p(Ti) ∈ [10, 100], and α = 1, presents a more challenging situation: the wider range of periods and a higher sharing degree reduce the schedulability of the FMLP (long) and the M-PCP significantly. Surprisingly, the performance of the D-PCP actually improves marginally (since, compared to inset (a), there are fewer resources). However, it is not a viable alternative in this scenario. Finally, inset (c) depicts a scenario where all suspension-based protocols fail due to tight timing constraints. Note that schedulability is largely independent of L; this is due to the fact that overheads outweigh critical-section lengths in practice.

Fig. 4 (d)-(f). The right column of graphs in Fig. 4 shows schedulability as a function of K for L = 9μs, p(Ti) ∈ [10, 100] (insets (d) and (e)) and p(Ti) ∈ [3, 33] (inset (f)), and Ū equal to 0.3 (inset (d)), 0.45 (inset (e)), and 0.15 (inset (f)). The graphs show that K has a significant influence on schedulability. Inset (d) illustrates the superiority of both FMLP variants in low-sharing-degree scenarios (α = 0.5): the long variant exhibits a slight performance drop for K ≥ 6, whereas the PCP variants are only viable alternatives for K < 6. Inset (e) depicts the same scenario with Ū increased to 0.45. The increase in the number of tasks (and thus resources too) causes schedulability under all suspension-based protocols to drop off quickly. However, relative performance remains roughly the same. Inset (f) presents a scenario that exemplifies the D-PCP's sweet spot. With Ū = 0.15 and α = 2, the number of resources is very limited. Thus, the D-PCP actually offers somewhat better schedulability than the long FMLP variant for K = 3 and K = 4. However, the D-PCP's performance deteriorates quickly, so that it is actually the worst-performing protocol for K ≥ 7.

Fig. 5 (a)-(c). The left column of graphs in Fig. 5 shows schedulability as a function of Ū. Inset (a) demonstrates, once again, the superior performance of both FMLP variants in low-sharing-degree scenarios (α = 0.5, K = 9, L = 3μs, p(Ti) ∈ [10, 100]). Inset (b) shows one of the few cases where the D-PCP outperforms the M-PCP at low sharing degrees (α = 1, K = 2, L = 3μs, p(Ti) ∈ [3, 33]). Note that the D-PCP initially performs as well as the long FMLP variant, but, starting at Ū = 0.3, fails more quickly. In the end, its performance is similar to that of the M-PCP. Finally, inset (c) presents one of the few cases where even the short FMLP variant fails to schedule all task sets. This graph represents the most taxing scenario in our study as each parameter is set to its worst-case value: α = 4 and K = 9 (which implies high contention), L = 15μs, and p(Ti) ∈ [3, 33] (which leaves little slack for blocking terms). None of the suspension-based protocols can handle this scenario.

To summarize, in the four-processor case, the short FMLP variant was always the best-performing protocol, usually by a wide margin. Among the suspension-based protocols, the long FMLP variant was preferable most of the time, while the D-PCP was sometimes preferable if there were a small number (approximately four) of resources being shared. The M-PCP was never preferable.

Fig. 4. Schedulability (the fraction of generated task systems deemed schedulable) as a function of (a)-(c) the maximum critical-section length L and (d)-(f) the per-job resource request bound K.

3.2 Scalability

We now consider how the performance of each protocol scales with the processor count. To determine this, we varied the processor count from two to 16 for all possible combinations of α, Ū, L, K, and periods (assuming the ranges for each defined earlier). This resulted in 324 graphs, three of which are shown in the right column of Fig. 5. The main difference between insets (d) and (e) of the figure is that task periods are large in (d) (p(Ti) ∈ [100, 1000]) but small in (e)



Fig. 5. Schedulability as a function of (a)-(c) the per-processor utilization cap Ū and of (d)-(f) processor count.

(p(Ti) ∈ [10, 100]). As seen, both FMLP variants scale well in inset (d), but the performance of the long variant begins to degrade quickly beyond six processors in inset (e). In both insets, the M-PCP shows a similar but worse trend than the long FMLP variant. This relationship was apparent in many (but not all) of the tested scenarios, as the performance of both protocols largely depends on the total number of tasks. In contrast, the D-PCP quite consistently does not follow the same trend as the M-PCP and FMLP. This, again, is due to the fact that the D-PCP depends heavily on the number of resources. Since, in this study, the total number of tasks increases at roughly the same rate as the number of processors, in each graph, the number of resources does not change significantly as the processor count increases (since α and K are constant in each graph). The fact that the D-PCP's performance does not remain constant indicates that its performance also depends on the total task count, but to a lesser degree.

Inset (f) depicts the most taxing scenario considered in this paper, i.e., that shown earlier in Fig. 5 (c). None of the suspension-based protocols support this scenario (on any number of processors), and the short FMLP variant does not scale beyond four to five processors.

Finally, we repeated some of the four-processor experiments discussed in Sec. 3.1 for 16 processors to explore certain scenarios in more depth. Although we are unable to present the graphs obtained for lack of space, we do note that blocking-by-suspending did not become more favorable on 16 processors, and the short FMLP variant still outperformed all other protocols in all tested scenarios. However, the relative performance of the suspension-based protocols did change, so that the D-PCP was favorable in more cases than before. This appears to be due to two reasons. First, as discussed above, among the suspension-based protocols, the D-PCP is impacted the least by an increasing processor count (given our task-set generation method). Second, the long FMLP variant appears to be somewhat less effective at supporting short periods for larger processor counts. However, schedulability was poor under all suspension-based protocols for task sets with tight timing constraints on a 16-processor system.
3.3 Impact of Overheads
In all experiments presented so far, all suspension-based protocols proved to be
inferior to the short variant of the FMLP in all cases. As seen in Table 1 in the
appendix, in the implementations of these protocols in LITMUSRT that were used
to measure overheads, the suspension-based protocols incur greater overheads
than the short FMLP variant. Thus, the question of how badly suspension-based
approaches are penalized by their overheads naturally arose. Although we believe
that we implemented these protocols efficiently in LITMUSRT, perhaps it is
possible to streamline their implementations further, reducing their overheads.
If that were possible, would they still be inferior to the short FMLP variant?
To answer this question, we reran a significant subset of the experiments
considered in Sec. 3.1, assuming zero overheads for all suspension-based protocols
while charging full overheads for the short FMLP variant. The results obtained
showed three clear trends: (i) given zero overheads, the suspension-based
protocols achieve high schedulability for higher utilization caps before eventually
degrading; (ii) when performance eventually degrades, it occurs less gradually
than before (the slope is much steeper); and (iii) while the suspension-based
protocols become more competitive (as one would expect), they were still bested
by the short FMLP variant in all cases. Additionally, given zero overheads, the
behavior of the M-PCP approached that of the suspension-based FMLP much
more closely in many cases.


B.B. Brandenburg and J.H. Anderson

4 Conclusion

From the experimental study just described, two fundamental conclusions
emerge. First, when implementing memory-resident resources (the focus of this
paper), synchronization protocols that implement blocking by suspending are
of questionable practical utility. This applies in particular to the M-PCP and
D-PCP, which have been the de-facto standard for 20 years for supporting locks
in multiprocessor real-time applications. Second, in situations in which the
performance of suspension-based locks is not totally unacceptable (e.g., the sharing
degree is low, the processor count is not too high, or few global resources exist),
the long-resource variant of the FMLP is usually a better choice than either
the M-PCP or the D-PCP (moreover, the FMLP allows resource nesting).
Although we considered a range of processor counts, overheads were measured
only on a four-processor platform. In future work, we would like to obtain
measurements on various larger platforms to get a more accurate assessment.
On such platforms, overheads would likely be higher, which would more
negatively impact suspension-based protocols, as their analysis is more pessimistic.
Such pessimism is a consequence of difficulties associated with predicting which
scheduling-related events may impact a task while it is suspended. In fact, it is
known that suspensions cause intractabilities in scheduling analysis even in the
uniprocessor case [24].

References

1. IBM and Red Hat announce new development innovations in Linux kernel. Press
release (2007), http://www-03.ibm.com/press/us/en/pressrelease/21232.wss
2. Anderson, T.: The performance of spin lock alternatives for shared-memory
multiprocessors. IEEE Transactions on Parallel and Distributed Systems 1(1), 6-16
(1990)
3. Baker, T.: Stack-based scheduling of real-time processes. Journal of Real-Time
Systems 3(1), 67-99 (1991)
4. Bisson, S.: Azul announces 192 core Java appliance (2006),
http://www.itpro.co.uk/serves/news/99765/
azul-announces-192-core-java-appliance.html
5. Block, A., Brandenburg, B., Anderson, J., Quint, S.: An adaptive framework for
multiprocessor real-time systems. In: Proceedings of the 20th Euromicro Conference
on Real-Time Systems, pp. 23-33 (2008)
6. Block, A., Leontyev, H., Brandenburg, B., Anderson, J.: A flexible real-time locking
protocol for multiprocessors. In: Proceedings of the 13th IEEE International
Conference on Embedded and Real-Time Computing Systems and Applications,
pp. 71-80 (2007)
7. Brandenburg, B., Anderson, J.: Feather-Trace: A light-weight event tracing toolkit.
In: Proceedings of the Third International Workshop on Operating Systems
Platforms for Embedded Real-Time Applications, pp. 19-28 (2007)
8. Brandenburg, B., Anderson, J.: Integrating hard/soft real-time tasks and
best-effort jobs on multiprocessors. In: Proceedings of the 19th Euromicro Conference
on Real-Time Systems, pp. 61-70 (2007)
9. Brandenburg, B., Anderson, J.: An implementation of the PCP, SRP, D-PCP,
M-PCP, and FMLP real-time synchronization protocols in LITMUSRT. In:
Proceedings of the 14th IEEE International Conference on Embedded and Real-Time
Computing Systems and Applications, pp. 185-194 (2008)
10. Brandenburg, B., Block, A., Calandrino, J., Devi, U., Leontyev, H., Anderson, J.:
LITMUSRT: A status report. In: Proceedings of the 9th Real-Time Linux
Workshop, pp. 107-123. Real-Time Linux Foundation (2007)
11. Brandenburg, B., Calandrino, J., Block, A., Leontyev, H., Anderson, J.:
Synchronization on real-time multiprocessors: To block or not to block, to suspend or
spin? In: Proceedings of the 14th IEEE Real-Time and Embedded Technology and
Applications Symposium, pp. 342-353 (2008)
12. Calandrino, J., Leontyev, H., Block, A., Devi, U., Anderson, J.: LITMUSRT: A
testbed for empirically comparing real-time multiprocessor schedulers. In:
Proceedings of the 27th IEEE Real-Time Systems Symposium, pp. 111-123 (2006)
13. Devi, U.: Soft Real-Time Scheduling on Multiprocessors. PhD thesis, University of
North Carolina, Chapel Hill, NC (2006)
14. Graunke, G., Thakkar, S.: Synchronization algorithms for shared-memory
multiprocessors. IEEE Computer 23, 60-69 (1990)
15. Lehoczky, J., Sha, L., Ding, Y.: The rate monotonic scheduling algorithm: Exact
characterization and average case behavior. In: Proceedings of the 10th IEEE
International Real-Time Systems Symposium, pp. 166-171 (1989)
16. Liu, J.: Real-Time Systems. Prentice-Hall, Upper Saddle River (2000)
17. Lortz, V., Shin, K.: Semaphore queue priority assignment for real-time
multiprocessor synchronization. IEEE Transactions on Software Engineering 21(10),
834-844 (1995)
18. Mellor-Crummey, J., Scott, M.: Algorithms for scalable synchronization on
shared-memory multiprocessors. ACM Transactions on Computer Systems 9(1),
21-65 (1991)
19. SUN Microsystems: SUN UltraSPARC T1 marketing material (2008),
http://www.sun.com/processors/UltraSPARC-T1/
20. Rajkumar, R.: Real-time synchronization protocols for shared memory
multiprocessors. In: Proceedings of the 10th International Conference on Distributed
Computing Systems, pp. 116-123 (1990)
21. Rajkumar, R.: Synchronization in Real-Time Systems: A Priority Inheritance
Approach. Kluwer Academic Publishers, Dordrecht (1991)
22. Rajkumar, R.: Private communication (April 2008)
23. Rajkumar, R., Sha, L., Lehoczky, J.P.: Real-time synchronization protocols for
multiprocessors. In: Proceedings of the 9th IEEE International Real-Time Systems
Symposium, pp. 259-269 (1988)
24. Ridouard, F., Richard, P., Cottet, F.: Negative results for scheduling independent
hard real-time tasks with self-suspensions. In: Proceedings of the 25th IEEE
International Real-Time Systems Symposium, pp. 47-56 (2004)
25. Sha, L., Rajkumar, R., Lehoczky, J.P.: Priority inheritance protocols: An approach
to real-time synchronization. IEEE Transactions on Computers 39(9), 1175-1185
(1990)
26. Shankland, S., Kanellos, M.: Intel to elaborate on new multicore processor (2003),
http://news.zdnet.co.uk/hardware/chips/0,39020354,39116043,00.htm


Appendix

To obtain the overheads required in this paper, we used the same methodology
that we used in the prior study concerning EDF scheduling [11]. For the sake of
completeness, the approach is summarized here.
In real systems, task execution times are affected by the following sources
of overhead. At the beginning of each quantum, tick scheduling overhead is
incurred, which is the time needed to service a timer interrupt. Whenever a
scheduling decision is made, a scheduling cost is incurred, which is the time taken
to select the next job to schedule. Whenever a job is preempted, context-switching
overhead and preemption overhead are incurred; the former term includes any
non-cache-related costs associated with the preemption, while the latter accounts
for any costs due to a loss of cache affinity.
When jobs access shared resources, they incur an acquisition cost. Similarly,
when leaving a critical section, they incur a release cost. Further, when a system
call is invoked, a job will incur the cost of switching from user mode to kernel
mode and back. Whenever a task should be preempted while it is executing a
non-preemptive (NP) section, it must notify the kernel when it is leaving its
NP-section, which entails some overhead. Under the D-PCP, in order to communicate
with a remote agent, a job must invoke that agent. Similarly, the agent also incurs
overhead when it receives a request and signals its completion.
Accounting for overheads. Task execution costs can be inflated using standard
techniques to account for overheads in schedulability analysis [16]. Care
must be taken to also properly inflate resource request durations. Acquire and
release costs contribute to the time that a job holds a resource and thus can cause
blocking. Similarly, suspension-based synchronization protocols must properly
account for preemption effects within critical sections. Further, care must be
taken to inflate task execution costs for preemptions and scheduling events due
to suspensions in the case of contention. Whenever it is possible for a
lower-priority job to preempt a higher-priority job and execute a critical section,⁵ the
event source (i.e., the resource request causing the preemption) must be accounted
for in the demand term of all higher-priority tasks. One way this can
be achieved is by modeling such critical sections as special tasks with priorities
higher than that of the highest-priority normal task [16].
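As a concrete illustration, the inflation of execution costs and critical-section lengths described above can be sketched as follows. This is a minimal sketch, not the exact analysis of [16]: the `Task` fields, the function name, and the particular set of overhead terms charged per job are our own simplifications.

```python
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float        # worst-case execution cost (microseconds)
    n_requests: int    # resource requests issued per job
    cs_len: float      # critical-section length per request (microseconds)

# Assumed per-event overhead terms (placeholder values; see Table 1 in the
# appendix for the actually measured numbers).
SCHED, CTX, PREEMPT = 6.39, 9.25, 42.00
ACQ, REL = 5.61, 8.27

def inflate(task: Task) -> Task:
    """Charge scheduling, context-switch, preemption, and lock overheads
    to a task's execution cost and critical-section length."""
    # Acquire/release costs extend the time a resource is held, and thus
    # also the blocking this task can impose on others.
    cs = task.cs_len + ACQ + REL
    # The job itself pays for a scheduling decision, a context switch, a
    # cache-affinity loss, and the per-request lock overheads.
    wcet = task.wcet + SCHED + CTX + PREEMPT + task.n_requests * (ACQ + REL)
    return Task(wcet=wcet, n_requests=task.n_requests, cs_len=cs)
```

A real analysis would distinguish protocols and charge suspension-related events separately; the sketch only shows why both the demand term and the blocking term must grow.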
Implementation. To obtain realistic overhead values, we implemented the
M-PCP, D-PCP, and FMLP under P-SP scheduling in LITMUSRT. A detailed
description of the LITMUSRT kernel and its architecture is beyond the scope of
this paper. Such details can be found in [10]. Additionally, a detailed account
of the implementation issues encountered, and relevant design decisions made,
when implementing the aforementioned synchronization protocols in LITMUSRT
can be found in [9]. LITMUSRT is open source software that can be downloaded
freely.⁶
⁵ This is possible under all three suspension-based protocols considered in this paper:
a blocked lower-priority job might resume due to a priority boost under the FMLP
and M-PCP and might activate an agent under the D-PCP.
⁶ http://www.cs.unc.edu/anderson/litmus-rt


Limitations of real-time Linux. There is currently much interest in using
Linux to support real-time workloads, and many real-time-related features have
recently been introduced in the mainline Linux kernel (such as high-resolution
timers, priority inheritance, and shortened non-preemptable sections). However,
to satisfy the strict definition of hard real-time, all worst-case overheads must
be known in advance and accounted for. Unfortunately, this is currently not
possible in Linux, and it is highly unlikely that it ever will be. This is due to
the many sources of unpredictability within Linux (such as interrupt handlers
and priority inversions within the kernel), as well as the lack of determinism on
the hardware platforms on which Linux typically runs. The latter is especially a
concern, regardless of the OS, on multiprocessor platforms. Indeed, research on
timing analysis has not matured to the point of being able to analyze complex
interactions between tasks due to atomic operations, bus locking, and bus and
cache contention. Without the availability of timing-analysis tools, overheads
must be estimated experimentally. Our methodology for doing this is discussed
next.
Measuring overheads. Experimentally estimating overheads is not as easy as
it may seem. In particular, in repeated measurements of some overhead, a small
number of samples may be "outliers". This may happen due to a variety of
factors, such as warm-up effects in the instrumentation code and the various
non-deterministic aspects of Linux itself noted above. In light of this, we determined
each overhead term by discarding the top 1% of measured values, and then taking
the maximum of the remaining values. Given the inherent limitations associated
with multiprocessor platforms noted above, we believe that this is a reasonable
approach. Moreover, the overhead values that we computed should be more than
sufficient to obtain a valid comparison of the D-PCP, M-PCP, and FMLP under
consideration of real-world overheads, which is the focus of this paper.
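The estimator just described (sort the samples, discard the top 1%, and take the maximum of the rest) can be sketched in a few lines; the function name and the exact rounding of the 1% cutoff are our own choices, not taken from the paper.

```python
def worst_case_estimate(samples, discard_fraction=0.01):
    """Return the maximum remaining value after discarding the largest
    `discard_fraction` of the samples, treated as measurement outliers."""
    ordered = sorted(samples)
    n_discard = int(len(ordered) * discard_fraction)  # top 1% by default
    kept = ordered[:len(ordered) - n_discard] if n_discard else ordered
    return kept[-1]  # maximum of the remaining values
```

For example, over the samples 1..100 this discards the single largest sample and reports 99 as the worst-case estimate.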
The hardware platform used in our experiments is a cache-coherent SMP
consisting of four 32-bit Intel Xeon(TM) processors running at 2.7 GHz, with 8K L1
instruction and data caches and a unified 512K L2 cache per processor, and 2 GB
of main memory. Overheads were measured and recorded using Feather-Trace,
a light-weight tracing toolkit developed at UNC [7]. We calculated overheads by
measuring the system's behavior for task sets randomly generated as described
in Sec. 3. To better approximate worst-case behavior, longer critical sections
were considered in order to increase contention levels (most of the measured
overheads increase with contention).
We generated a total of 100 task sets and executed each task set for 30
seconds under each of the suspension-based synchronization protocols⁷ while
recording system overheads. (In fact, this was done several times to ensure
that the determined overheads are stable and reproducible.) Individual
measurements were determined by using Feather-Trace to record timestamps at the
beginning and end of the overhead-generating code sections, e.g., we recorded a

⁷ Overheads for the short FMLP variant were already known from prior work [11] and
did not have to be re-determined.


Table 1. (a) Worst-case overhead values (in µs) on our four-processor test platform,
obtained in prior studies. (b) Newly measured worst-case overhead values (in µs) on
our four-processor test platform. These values are based on 86,368,984 samples recorded
over a total of 150 minutes.

(a)
Overhead                            Worst-Case
Preemption                          42.00
Context-switching                   9.25
Switching to kernel mode            0.34
Switching to user mode              0.89
Leaving NP-section                  4.12
FMLP short acquisition / release    2.00 / 0.87

(b)
Overhead                            Worst-Case
Scheduling cost                     6.39
Tick                                8.08
FMLP long acquisition / release     2.74 / 8.67
M-PCP acquisition / release         5.61 / 8.27
D-PCP acquisition / release         4.61 / 2.85
D-PCP invoke / agent                8.36 / 7.15

timestamp before acquiring a resource and after the resource was acquired (however, no blocking is included in these overhead terms). Each overhead term was
determined by plotting the measured values obtained to check for anomalies, and
then computing the maximum value (discarding outliers, as discussed above).
Measurement results. In some cases, we were able to re-use overheads
determined in prior work; these are shown in inset (a) of Table 1. In other cases,
new measurements were required; these are shown in inset (b) of Table 1.
The preemption cost in Table 1 was derived in [12]. In [12], this cost is given
as a function of working set size (WSS). These WSSs are per quantum, thus
reflecting the memory footprint of a particular task during a 1-ms quantum,
rather than over its entire lifetime. WSSs of 4K, 32K, and 64K were considered
in [12], but we only consider the 4K case here, due to space constraints. Note that
larger WSSs tend to decrease the competitiveness of methods that suspend, as
preemption costs are higher in such cases. Thus, we concentrate on the 4K case
to demonstrate that, even in cases where such methods are most competitive,
spinning is still preferable. The other costs shown in inset (a) of Table 1 were
determined in [11].

A Self-stabilizing Marching Algorithm for a Group of Oblivious Robots

Yuichi Asahiro¹, Satoshi Fujita², Ichiro Suzuki³, and Masafumi Yamashita⁴

¹ Dept. of Social Information Systems, Faculty of Information Science, Kyushu
Sangyo University, 2-3-1 Matsukadai, Higashi-ku, Fukuoka 813-8503, Japan
[email protected]
² Dept. of Electrical Engineering, Faculty of Engineering, Hiroshima University,
Kagamiyama 1 chome 4-1, Higashi-Hiroshima 739-8527, Japan
[email protected]
³ Dept. of Electrical Engineering and Computer Science, University of Wisconsin-Milwaukee,
PO Box 784, Milwaukee, WI 53201, USA
[email protected]
⁴ Dept. of Computer Science and Communication Engineering, Kyushu University,
744 Motooka, Nishi-ku, Fukuoka, Fukuoka 819-0395, Japan
[email protected]

Abstract. We propose a self-stabilizing marching algorithm for a group
of oblivious robots in an obstacle-free workplace. To this end, we develop
a distributed algorithm for a group of robots to transport a polygonal
object, where each robot holds the object at a corner, and observe that
each robot can simulate the algorithm, even after we replace the object
by an imaginary one; we thus can use the algorithm as a marching
algorithm. Each robot independently computes a velocity vector using the
algorithm, moves to a new position with the velocity for a unit of time,
and repeats this cycle until it reaches the goal position. The algorithm
is oblivious, i.e., the computation depends only on the current robot
configuration, and is constructed from a naive algorithm that generates
only a selfish move, by adding two simple ingredients. For the case of
two robots, we theoretically show that the algorithm is self-stabilizing,
and demonstrate by simulations that the algorithm produces a motion
that is fairly close to the time-optimal motion. For cases of more than
two robots, we show by simulations that a natural extension of the
algorithm for two robots also produces smooth and elegant motions.

Keywords: motion coordination, marching, flocking, self-stabilizing
algorithm, oblivious algorithm.

1 Introduction

Motion coordination among mobile robots with distributed information is a
common hot research area in automatic control, robotics and computer
science [6, 19, 22, 26]. Among many challenging problems arising in this area, we
tackle the marching problem for mobile robots, where marching means moving in
formation.
T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 125-144, 2008.
© Springer-Verlag Berlin Heidelberg 2008



Y. Asahiro et al.

Self-organized behavior of a school of fish and a flock of geese has attracted
many researchers [8, 20, 30, 41]. The snapshot of a school of fish however changes
over time, and the flocking problem in general is not interested in rigid relative
positions among mobile agents. The formation problem for mobile robots,
on the other hand, looks for local rules to form a given geometrical pattern from
any initial configuration. The formation problem was first investigated by
Sugihara and Suzuki [37, 38], and the formable patterns were characterized using the
terminology of distributed computing by Suzuki and Yamashita [39, 40]. Since
then, extensive works have been carried out, in particular in this decade [2, 9, 12,
13, 14, 17, 18, 21, 24, 31, 32, 34, 35]. They all modeled a robot as a dimensionless
point and investigated the problem as a robot motion planning problem, ignoring
mechanical constraints [27]. Cao et al. [10] and Debest [15] surveyed early
research in control and robot societies. Also, much research taking into account
mechanical constraints and obstacles then followed (e.g., [7, 23, 43]).
This paper considers the marching problem, which asks for a control algorithm
for a flock of agents to move to a goal position keeping the formation. A typical
survey paper by Cao et al. [10] could cite only a couple of papers on marching
a decade ago. Quite a few research projects have been conducted since, but many
of them are classified into either a centralized or a leader-follower approach
[1, 20, 25, 29, 36], a distributed approach using some navigation device [16, 42], or
an artificial potential approach [28, 33]. In these previous works, the quality of
the marching route has not been discussed seriously, which motivates this paper.
In this paper we focus on the quality of the route. Let us move two robots,
keeping their distance unchanged, carrying a ladder. If we ask the robots to rotate
the ladder about 180°, they are likely to rotate it with one robot being the
center, as in Fig. 4.¹ Fig. 1 (left) shows another move, which is definitely more
elegant and smoother. Smoother motions can be more efficient than those that
are not. Indeed, the motion shown in Fig. 1 (left) is time optimal.² It is worth
emphasizing that the robots' moves are by no means straightforward; in this
instance, neither robot can move straight to its final position if the time-optimal
motion is to be achieved. In general, the robots can have conflicting interests,
and they have to resolve the conflict to achieve good overall performance. Our
goal is to design a marching algorithm that realizes such an elegant, smooth and
time-efficient march, even under the presence of transient sensor and control
errors. Fig. 9 (left) shows how our algorithm rotates the ladder about 180°.
Specifically, we develop a distributed algorithm, called G+ (Greedy Plus)
below, for a group of robots to transport a polygonal object, where each robot
holds the object at a corner, and observe that each robot can simulate the
algorithm, even after we replace the object by an imaginary one; we thus can

¹ In Fig. 4, the circles and a line segment represent the robots and the ladder they
carry, respectively. Parameters LB, α, and β will be explained later.
² An analytical method for computing a time-optimal motion of this problem for two
robots, under the assumption that the robots' speed is either 0 or a given constant
at any moment, is reported in [11]. We used this method to calculate this optimal
motion.


Fig. 1. Time-optimal motions for instances I1 (left; LB = 2, α = 1°, β = 179°) and I2
(right; LB = 4, α = 30°, β = 150°), respectively

use the algorithm as a marching algorithm. Each robot independently computes
a velocity vector using the algorithm, moves to a new position with the velocity
for a unit of time, and repeats this cycle until it reaches the goal position. The
algorithm is oblivious, i.e., the computation depends only on the current robot
configuration, and is constructed from a naive algorithm, called G below, that
generates only a selfish move, by adding two simple ingredients. We first design
and evaluate the algorithm G+ for two robots and will later apply the idea to
the case of three or more robots.
To design G+, we first examine a straightforward greedy algorithm G, in
which each robot simply tries to move toward its goal location without taking
into account the goal of the other robot. Although G can generate a motion
for successfully transporting a ladder in almost all instances we consider, the
resulting motion tends to lack smoothness and efficiency (Fig. 4 (left) shows
how G rotates the ladder about 180°); the finish time can be greater by up to
163.5% over that of a time-optimal motion. G+ is based on the following simple
idea: at any moment each robot pursues its individual interest of moving toward
its goal position, while at the same time making minor adjustments to its course
based on the other robot's current and goal locations and the final orientation of
the object. G+ generates a motion that is smooth and fairly close to the
time-optimal motion (Fig. 9 (left) shows how G+ rotates the ladder about 180°);
its finish time is only up to 9.6% greater than the optimal.
In fact, the algorithm G+ is a refinement of the algorithm called ALG1 proposed
in [3]; we can show that G+ is correct, while ALG1 is not. Combining this
with the fact that G+ is oblivious, i.e., it determines the next position
independently of the motions in the past, we can show that G+ is self-stabilizing:
the system works stably even in the presence of transient sensor and control errors.
We next discuss how to apply G+ to three or more robots (up to nine robots),
and demonstrate by computer simulations that G+ seems to work fairly well,
although we do not have data on time-optimal motions against which the
simulation results should be compared.
It is worth mentioning that our approach is not a variation of the well-known
negative feedback control, in which a robot attempts to reduce the deviation
from an optimal trajectory that has been given a priori. In our algorithm, the
robots are not provided with any optimal path in advance. Their trajectories are
determined only as a result of their interaction with each other.


Fig. 2. The setup of the problem. We represent robots A and B by hollow and gray
circles, respectively.

The paper is organized as follows: Section 2 explains the robot model.
Section 3 examines algorithm G. In Sect. 4, we introduce algorithm G+,
demonstrate its performance, and show that it is correct and self-stabilizing.
Extensions of G+ for three or more robots are discussed in Sect. 5. We then
conclude the paper by giving some remarks in Sect. 6.

2 Modeling Two Robots

Consider two robots A and B carrying a ladder of length ℓ in an obstacle-free
workspace. We represent the robots and the ladder as two disks and a line
segment, respectively, as shown in Fig. 2. We assume that the robots are
identical; they have the same maximum speed and execute the same algorithm.
Let As, Bs and Ag, Bg be the start and goal positions of the robots,
respectively. We let LA = |As Ag| and LB = |Bs Bg|. α and β denote the angles that
the ladder makes with Bs Bg at the start and goal positions, respectively. We
can describe instances of the problem using LB, α, and β.
Each end of the ladder is assumed to be attached to a force sensor as in [5],
which we model as an ideal spring at the center of each robot. Since the distance
D between the centers of the robots does not always equal ℓ during motion, the
force sensor produces an offset vector o of size |(D − ℓ)/2| from the center of the
robot to the end of the ladder. The offset vectors at both ends are equal in size
and opposite in direction.³
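Under this model, each robot can compute its offset vector from the two center positions and the ladder length alone. A minimal sketch (the function name and coordinate conventions are ours, not the paper's):

```python
import math

def offset_vector(own, other, ladder_len):
    """Offset vector at robot `own`: magnitude (D - ladder_len)/2, directed
    from the robot's center toward the ladder endpoint, i.e., toward the
    other robot when the spring is stretched and away when compressed."""
    dx, dy = other[0] - own[0], other[1] - own[1]
    dist = math.hypot(dx, dy)            # D: distance between the centers
    mag = (dist - ladder_len) / 2.0      # signed version of |(D - l)/2|
    return (mag * dx / dist, mag * dy / dist)
```

By symmetry the two offset vectors come out equal in size and opposite in direction, as stated above.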
We assume that at any moment, robot A (resp. B) knows As, Ag, Bg, oA
(resp. Bs, Bg, Ag, oB) and ℓ (provided that there are no sensor errors). Since A
(resp. B) can compute Bs (resp. As) using oA = −oB, without loss of generality,
we may assume that at any moment, they know As, Ag, Bs, Bg, oA, oB, ℓ, LA,
LB, α, and β. A (distributed) algorithm for robot R with maximum speed V is
any procedure that computes a velocity vector vR with |vR| ≤ V from some of
As, Ag, Bs, Bg, oA, oB, ℓ, LA, LB, α, and β.
We assume that a robot R repeatedly computes vR and moves to a new
position with velocity vR for unit time. For simplicity we use discrete time and
³ Here we ignore the acceleration of the robots and assume that a sufficiently long
period of time for spring relaxation is given to the robots before sensing the current
size of the offset vector.


Fig. 3. The model of a robot

assume that both robots compute their respective velocity vectors and move to
their new positions at time instances 0, 1, 2, .... See Fig. 3.
The finish time is the time tf when both robots arrive at their respective goal
positions. The delay is then defined to be (tf − to)/to × 100 (%), where to is the
finish time of a time-optimal motion. We use the size of the offset vector during
a motion to evaluate the smoothness of the motion.
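The delay metric is a direct computation from the two finish times; a one-line sketch (the function name and the example numbers are ours, chosen only for illustration):

```python
def delay_percent(t_finish, t_optimal):
    """Delay of a motion relative to the time-optimal one, in percent:
    (tf - to) / to * 100."""
    return (t_finish - t_optimal) / t_optimal * 100.0
```

For example, a motion finishing at time 439 against an optimum of 400 has a delay of 9.75%.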

3 Naive Algorithm G for Two Robots

In this section we describe and observe a naive greedy algorithm G. We shall
later modify it in Sect. 4 to obtain algorithm G+. In the following description
of G, R denotes either robot A or B, V is the maximum speed of both robots,
and s is a spring constant.
[Algorithm G for robot R]
Step 1: If LR ≤ V then move to Rg and terminate. Otherwise, let tR be a
vector directed from Rs to Rg such that |tR| = V.
Step 2: Scale the offset vector by hR = s·oR.
Step 3: Set TR = tR + hR.
Step 4: Scale the size of TR to V and move at velocity TR for a unit of time.
Go to Step 1.
Note that TR consists of two components: tR (move toward the goal) and hR
(move to reduce the offset). In Step 4, TR is scaled to V, which makes the robot
always move at the maximum speed.
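Steps 1-4 translate directly into code. The following sketch is our own transcription of one cycle of G; the vector helpers and argument conventions are ours, and the simulation constants are the ones given below (V = 0.01, s = 0.25).

```python
import math

def step_G(pos, goal, offset, V=0.01, s=0.25):
    """One cycle of algorithm G for robot R: return the velocity vector for
    the next unit of time, or None when R can simply move to its goal."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    L_R = math.hypot(gx, gy)
    if L_R <= V:                               # Step 1: close enough, terminate
        return None
    tx, ty = gx / L_R * V, gy / L_R * V        # t_R: toward the goal, |t_R| = V
    hx, hy = s * offset[0], s * offset[1]      # Step 2: h_R = s * o_R
    Tx, Ty = tx + hx, ty + hy                  # Step 3: T_R = t_R + h_R
    norm = math.hypot(Tx, Ty)                  # Step 4: rescale |T_R| to V
    return (Tx / norm * V, Ty / norm * V)
```

Each robot would call `step_G` once per time unit with its own position, goal, and offset vector, then move with the returned velocity, exactly as the cycle above prescribes.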
Let us observe the performance of G by computer simulation. For the
simulation, we use ℓ = 1 and V = 0.01. The radius of a robot is 0.1.
We use spring constant s = 0.25, which was found to be large enough to keep
the endpoints of the ladder inside the circles representing the robots' bodies
during the simulation. To reduce the number of instances to examine, we consider
only the cases LB = 2 and 4, 0° ≤ α ≤ 90°, and α + β = 180° (see Fig. 2). These
cases include two representative situations:
- The distance to the goal is small, and the robots must rotate the ladder
  quickly, as in instance I1 (LB = 2, α = 1°, β = 179°), whose time-optimal
  motion is shown in Fig. 1 (left).


Fig. 4. The motions by G for instance I1 (left) and I2 (right), respectively

- The distance to the goal is large, and the robots do not need to rotate the
  ladder quickly, as in instance I2 (LB = 4, α = 30°, β = 150°), whose
  time-optimal motion is shown in Fig. 1 (right).
Note that only the relative positioning of the initial and goal positions of the
ladder is important. That is, by interchanging the initial and goal positions, and
by interchanging endpoints A and B, the above setting covers the following cases
also, which hence need not be discussed separately:
- The case α > 90° and α + β = 180° (rotating the ladder clockwise).
- The case −90° ≤ α < 0° and α + β = 180° (this is a symmetric case).
Although this configuration setting does not cover all the possibilities, we
consider that it is a reasonable subset of the infinite instances: For example, the case
α + β ≠ 180° for 0° < α < 90° is not included explicitly in the above setting.
However, the robots fall into such a configuration at some intermediate step
during the motion, because the angle of the ladder gradually changes and the robots
determine their motion based only on the current and goal positions. Hence if
the algorithm works well for the above configuration setting, we can expect that
it also works for the case α + β ≠ 180°.
The two figures of Fig. 4 show the motions by G for instances I1 (left) and I2
(right), respectively. The finish times are 340 and 439, respectively.
First, let us observe that the trajectories of the robots in these figures look
quite different from the smoother motions shown in Fig. 1. Robot A temporarily
yields to generate a smooth motion in Fig. 1 (left), while it does not in Fig. 4
(left). Both translation and rotation take place simultaneously in Fig. 1 (right),
while in Fig. 4 (right), the ladder starts to rotate only toward the end of the
motion. That is, rotation can occur only as a result of the robots' individual
moves toward their goal positions, and that explains why in Fig. 4 (right) the
ladder first translates without any rotation.
Here we would like to note the effect of the spring constant s: Intuitively
speaking, if s is smaller, the gray robot tends to move (more) straight to the
goal while narrowing the distance from the other robot, which largely breaks the
formation of the robots. (The evaluation of the motions based on this criterion is
mentioned below.)
The two figures of Fig. 5 show, for LB = 2 (left) and LB = 4 (right), respectively, the finish times of the motions generated by G and those of time-optimal

A Self-stabilizing Marching Algorithm for a Group of Oblivious Robots



Fig. 5. Finish times of G, G+ and a time-optimal motion, for LB = 2 (left) and LB = 4 (right) with 0° ≤ α ≤ 90° (excluding α = 0° for G)

Fig. 6. Offset vector size for instances I1 (left) and I2 (right), by G (Fig. 4, left), G+ (Fig. 9, left), and a time-optimal motion (Fig. 1, left)

motions, for α = 1°, 5°, 10°, …, 90°. (Ignore the plot for G+ for now.) Note
that for LB = 4, the finish time of a time-optimal motion almost always equals
LB/V (= 400), which indicates that robot B can move straight to the goal
position in an optimal motion.
For both LB = 2 and 4, G generates a motion that is almost time-optimal
if α ≥ 70° (and hence, the required amount of rotation is small). However, the
performance drops significantly as α becomes smaller (requiring more rotation).
The worst case (among those we observed) is when LB = 2 and α = 1° (Fig. 4,
left), where the finish time of 340 by G is about 163.5% of the time-optimal
motion's 208. (That is, the delay is 63.5%.)
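The delay figures quoted throughout can be recomputed directly from the finish times; a minimal sketch (the helper name is ours, the finish times are those reported above):

```python
def delay_pct(finish_time, optimal_time):
    """Delay of a motion relative to the time-optimal finish time, in percent."""
    return 100.0 * (finish_time - optimal_time) / optimal_time

# Worst observed case for G: LB = 2, alpha = 1 degree (finish times from the text).
print(round(delay_pct(340, 208), 1))  # 63.5
```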
Let us now discuss the smoothness of the motions that G generates. Figs. 6
(left) and 6 (right) show the offset vector size |oA| (= |oB|) during the motion
generated by G and in a time-optimal motion, for instances I1 and I2, respectively. (Again, ignore the plot for G+ for now.) Note that the curves in these
figures end at the finish times of the respective motions. It is worth noting that
the offset vector size remains zero in the time-optimal motion.
The curves for G in these figures suddenly jump to a high peak when the
ladder starts to rotate, and remain relatively high during rotation. This happens
at time zero in Fig. 6 (left), and at time 229 in Fig. 6 (right). In contrast, the curve


Fig. 7. Peak offset vector size by G, G+ and a time-optimal motion, for LB = 2 (left) and LB = 4 (right), respectively

in Fig. 6 (right) stays at zero before time 229, when the ladder is translated
but not rotated. Furthermore, in Fig. 6 (left), the curve stays very high for
a long period, from around time 20 to 150, indicating continuous severe stress that
might be unacceptable to physical robots. In particular, the motions by G produce
oscillating offset vectors, e.g., around time 30 for I1, that might also be unacceptable
for practical robots. The spring constant s affects this stress of motion.
The above observations were obtained with s = 0.25, so we may be able to
choose a more suitable (larger) value for s in order to keep the size of the
offset vector small. However, a larger s causes a longer finish time, because
the robots must detour compared with the motions obtained with s = 0.25. The
difficulty here is that we need to choose a value of s that achieves
speed and smoothness at the same time.
Figs. 7 (left) and 7 (right) show the peak offset vector size observed during a
motion generated by G for LB = 2 and LB = 4, respectively, for various values
of α. As observed above, the motion of G is divided into two parts, translation of
the ladder followed by rotation. Since offset vectors become large during rotation,
the distance to the goal does not have much effect here; therefore the results
for LB = 2 and LB = 4 are very similar. The maximum size of the offset vectors
gradually decreases as α increases, since a smaller rotation is then required.
To summarize, we observe that G tends to separate translation and rotation.
This results in a motion that is less smooth because of a sudden transition
between the two phases. We believe that there are two reasons for the separation.
- There is no explicit mechanism to rotate the ladder. Consequently, robot A
never moves away from its goal location in Fig. 4 (left) to assist robot B in
rotating the ladder.
- The robots do not utilize the information on the distances to their respective
goals. Consequently, both robots move at the same speed (and hence, the
ladder is not rotated at all) during translation in Fig. 4 (right), even though
robot B is farther away from its goal than robot A.
It is conceivable that a smoother, faster motion can result if translation and
rotation are merged by resolving these issues.


Algorithm G+ for Two Robots

Based on the observations in the last section, we add two features to G:
- When the distances to the goal positions differ between the robots, the robot
closer to its goal reduces its speed.
- The velocity vector that each robot computes has a third component, called the
rotation vector, whose magnitude is proportional to the amount of rotation
needed before the ladder reaches the goal location.
The resulting algorithm G+ is described below for robot R. As before, R is
either A or B, and R̄ denotes the other robot, e.g., if R = A then R̄ = B.
V is the robots' maximum speed. Note again that V is the maximum distance
that a robot can move in a unit of time. In G+, we move both robots to their
goal positions as soon as the ladder reaches a position sufficiently close to the
goal position. More specifically, if max{LR, LR̄} ≤ V, then R moves to Rg
and terminates. We assume that V has been chosen to make this move feasible.
Parameters s > 0, t ≥ 0 and u ≥ 0 will be explained shortly.
[Algorithm G+ for robot R]
Step 1: Set Lmax = max{LR, LR̄}. If Lmax ≤ V then move to Rg and
terminate. Otherwise, go to Step 2.
Step 2: Let tR be a vector directed from Rs to Rg such that |tR| = V (LR/Lmax)^t.
Step 3: Let rR be a rotation vector whose length is u/Lmax times the amount of
rotation still needed before the ladder reaches its goal orientation. The direction
of rR, which is perpendicular to the ladder, is set to (i) the ladder direction +π/2
if LR ≥ LR̄, and (ii) the ladder direction −π/2 otherwise, in an attempt to reduce
the remaining rotation (favoring a counterclockwise rotation if the required
rotation is 180°).
Step 4: Scale the offset vector: hR = s·oR.
Step 5: TR = tR + rR + hR.
Step 6: Compute TR̄, following Steps 1–5 for R̄.
Step 7: Let vR = V·TR / max{|TR|, |TR̄|}. Move at velocity vR for a unit of time.
Go to Step 1.
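The steps above can be sketched in Python. This is our reading of the algorithm, not the authors' code: the vector helpers are ours, the offset vector oR is passed in as an argument (it is defined earlier in the paper), and the "amount of rotation still needed" is taken as the signed angle between the current and goal directions of the ladder.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(a, k): return (a[0] * k, a[1] * k)
def norm(a): return math.hypot(a[0], a[1])

def gplus_target(pos, goal, other_pos, other_goal, offset, V, s, t, u):
    """Steps 1-5 of G+ for one robot (a sketch, under the assumptions above)."""
    L, L_other = norm(sub(goal, pos)), norm(sub(other_goal, other_pos))
    Lmax = max(L, L_other)
    if Lmax <= V:                      # Step 1: close enough -> move to goal
        return sub(goal, pos)
    # Step 2: target vector; the robot closer to its goal is slowed down.
    d = sub(goal, pos)
    tR = scale(d, V * (L / Lmax) ** t / L) if L > 0 else (0.0, 0.0)
    # Step 3: rotation vector, perpendicular to the ladder.
    lx, ly = sub(other_pos, pos)
    gx, gy = sub(other_goal, goal)
    cur, fin = math.atan2(ly, lx), math.atan2(gy, gx)
    rot = (fin - cur + math.pi) % (2 * math.pi) - math.pi  # remaining rotation
    side = 1.0 if L >= L_other else -1.0                   # the paper's (i)/(ii)
    ang = cur + side * math.pi / 2
    rR = scale((math.cos(ang), math.sin(ang)), u * abs(rot) / Lmax)
    # Step 4: offset correction h_R = s * o_R.
    hR = scale(offset, s)
    # Step 5: T_R = t_R + r_R + h_R.
    return add(add(tR, rR), hR)

def step7_velocity(TR, T_other, V):
    """Step 7: normalize so that the faster robot moves at full speed V."""
    return scale(TR, V / max(norm(TR), norm(T_other)))
```

For a pure translation (ladder already at its goal angle, zero offset), the remaining rotation is 0, so the target vector alone drives the robot straight toward its goal at speed V.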
G+ uses three parameters s, t and u. Parameter s, which is set to 0.25 as
in G, determines the sensitivity of the robot to the offset vector. Larger values
of t further slow down the robot that is closer to the goal, and parameter u
determines the effect of the rotation vector. Note that if t = 0 and u = 0, then
G+ reduces to G. Since the motion of the robots depends on t and u, it may
seem at first that we need to set them to reasonably good values for each given
instance to obtain a smooth motion that is close to the time-optimal motion.
As we demonstrate next, however, it turns out that a fixed pair of values for t
and u can be used to obtain such a smooth motion for a wide range of instances.
Figs. 8 (left) and 8 (right) show, for instances I1 and I2, respectively, the finish
times of the motions generated by G+ for t, u = 0, 1, …, 10. The results in these
figures exemplify what we observed through extensive simulation using a large
number of instances. That is, as shown in Fig. 8 (left), in those instances in
which the ladder must be rotated quickly, using any value of t ≥ 1 and u = 1



Fig. 8. Finish time of the motions by G+ for instance I1 (left) and I2 (right), for t, u = 0, 1, …, 10

Fig. 9. The motions by G+ with t = 1 and u = 1, for instance I1 (left) and I2 (right),
respectively

often gives a good finish time. If the ladder need not be rotated very quickly,
then as shown in Fig. 8 (right) the finish time becomes less dependent on t and
u, and we can expect good performance using any value of t ≥ 1 and u = 0. In
addition to I1 and I2, we tested more configurations, e.g., LB = 3, 4, 5, 6 with
α = 1°, and also LB = 2 with α = 5°, 10°, 15°, 20°. We omit the detailed results
here due to space limitations; however, for all the tested configurations the
obtained charts look like those in Fig. 8. Based on the above observation, in the
following we use t = 1 and u = 1 to evaluate G+.
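Selecting t and u by simulation, as done above, amounts to a grid search over finish times. A sketch, where `finish_time` is a stand-in for running the G+ simulator on one instance (not provided here):

```python
def best_tu(finish_time, ts=range(11), us=range(11)):
    """Return the (t, u) pair with the smallest finish time on one instance.

    `finish_time(t, u)` is a stand-in for simulating G+ with those
    parameters and measuring when the object reaches its goal.
    """
    return min(((finish_time(t, u), t, u) for t in ts for u in us))[1:]

# Toy stand-in whose minimum is at t = 1, u = 1 (for illustration only).
toy = lambda t, u: 200 + (t - 1) ** 2 + (u - 1) ** 2
print(best_tu(toy))  # (1, 1)
```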
Figs. 9 (left) and 9 (right) show the motions generated by G+ for instances I1
and I2, respectively. These motions resemble the time-optimal motions in Figs. 1
(left) and 1 (right) more closely than those by G shown in Figs. 4 (left) and 4
(right). Specifically, in Fig. 9 (left), robot A (the hollow circle) temporarily leaves
its position and returns to it in order to rotate the ladder quickly. In Fig. 9 (right),
translation and rotation take place simultaneously, and robot B (which has to
move more than the other robot) can move nearly straight to its destination.
The simulation results indicate that G+ generates a faster and smoother
motion than G. See Fig. 5 for a comparison of the finish times of G+, G
and a time-optimal motion. For α ≤ 80° the finish time of G+ is smaller than
that of G. (For α > 80°, both G and G+ generate a motion whose finish time
equals that of a time-optimal motion.) The delay of G+ in the motion of Fig. 9
(Footnote: Of course, t = 1, u = 1 is not optimal for all instances. For instance, setting t = 2 and u = 1 gives a slightly better finish time in Fig. 8 (right). Also, we currently have no method to obtain theoretically best values for t and u. In practice, physical robots could maintain a database of good t–u pairs for various LB, α and β.)


(left) for instance I1 is only 9.6% (this is the worst case we observed for G+),
as opposed to 63.5% for G in Fig. 4 (left). For instance I2, the delay is 3.5% in
the motion of Fig. 9 (right) by G+, as opposed to 9.8% in the motion of Fig. 4
(right) by G. Figs. 6 (left) and 6 (right), respectively, show that the offset vector
size is considerably smaller in the motions of Figs. 9 (left) and 9 (right) by G+,
compared to those of Figs. 4 (left) and 4 (right) by G. Figs. 7 show that the peak
offset vector size is smaller in G+ than in G.
We next show that G+ is correct, i.e., for arbitrary initial and goal positions
of the ladder, using Algorithm G+, robots A and B can transport the ladder to
its goal position, provided that there are no sensor or control errors.
Theorem 1. Suppose that u > 0. Then Algorithm G+ is correct.
Proof. We give an outline of the proof. If Lmax = max{LA, LB} ≤ V, then G+
terminates after the robots transport the ladder to the goal position in Step 1.
We show that eventually Lmax ≤ V holds. Suppose that the current orientation
of the ladder differs from its goal orientation. Since u > 0, the orientation gets
closer to the goal orientation in each iteration of G+. Once the two orientations
coincide, the rest of the work for the robots is just moving straight to the goal
position; the rotation vector is no longer needed and so it has size 0 in Step 3.
Thus the target vectors tA and tB of the robots can have opposite directions only
in the very first iterations, and hence in subsequent iterations, tA and tB together
have the effect of moving the center of the ladder closer to its center in the goal
position. The robots' offset correction vectors hA and hB are opposite in direction
and equal in magnitude, and hence they do not affect the movement of the center
of the ladder. Even if the center of the ladder stays exactly at the center of the
goal position of the ladder, it does not mean that the robots have reached their
goal positions. However, after that, the vectors tR and hR help to move the robots
to the goal positions without moving the center of the ladder.


Consider the instance shown on the left in Fig. 10, where G+ would drive both
robots straight to their respective goal positions if there were no control errors.
As illustrated on the right, in any physical experiment the robots will inevitably
deviate from the intended trajectories for a number of reasons, including sensor
and control errors. An advantage of G+ is that its future output depends only on
the current and goal states, and is independent of the past history. An algorithm
is said to be self-stabilizing if it tolerates any finite number of transient faults
such as sensor and control errors, i.e., the algorithm remains correct even in the
presence of any finite number of transient faults.
Corollary 1. Algorithm G+ is self-stabilizing.
To confirm the robustness of G+, we conducted a series of simulations, replacing TR in Step 5 of G+ by TR = tR + rR + hR + nR, where nR is a random noise
vector of size 0.1V. Since Corollary 1 guarantees that G+ eventually transports
the ladder to the goal position, our main concern here is the finish time.
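The noise experiment thus perturbs each computed velocity by a vector nR of fixed magnitude. A sketch of drawing nR; the uniformly random direction is our assumption, since the paper only states the noise size:

```python
import math
import random

def noise_vector(V, rng):
    """Random noise vector n_R of size 0.1 V (direction chosen uniformly)."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return (0.1 * V * math.cos(theta), 0.1 * V * math.sin(theta))

rng = random.Random(0)   # seeded for reproducibility
nx, ny = noise_vector(1.0, rng)
assert abs(math.hypot(nx, ny) - 0.1) < 1e-12  # magnitude is always 0.1 V
```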
Figs. 11 (left) and 11 (right), respectively, show the finish times of the motions
by G+ in the presence of noise (as well as the results for the noise-free case and


Fig. 10. A simple instance (left), and deviation from the intended path (right)

Fig. 11. Finish times of G+ with noise for LB = 2 (left) and LB = 4 (right), respectively

time-optimal motions), for LB = 2 and 4. For each value of α, a vertical bar
and a small box show, respectively, the range of finish times and their average
observed over 100 runs. (Actual trajectories with noise differ very little, visibly,
from those in the noise-free case, and are thus omitted.)
The delay due to noise, over the noise-free case, can be as large as 4.5% for
both LB = 2 and LB = 4 (when α = 90°). It is not clear why the delay
due to noise increases slightly as α approaches 90°. However, we think that the
delay is sufficiently small in all cases and is acceptable in practice. In particular
(as Corollary 1 guarantees), the robots never failed to reach the goal positions in
our simulation, even in the presence of noise.
Finally, we observe that each robot can simulate G+ even after we replace the
ladder by an imaginary one, and thus we can use G+ as a marching algorithm.
This follows from the fact that G+ requires as input only the current and
goal positions of the robots.

Algorithm G+ for Three and More Robots

In this section we extend Algorithm G+ to more than two robots. First, we
modify the problem formulation defined in Section 2. Consider for example the
case of three robots carrying an equilateral triangle. As in Fig. 2, let
As, Bs, Cs and Ag, Bg, Cg be the start and goal positions of the robots, respectively (see Fig. 12). Instances are then identified with three parameters L, α,
and β, where L is defined as |AsAg|, and α and β are the angles that AsBs and
AgBg make with AsAg, respectively. For simplicity, we restrict our attention to


Fig. 12. The setup of the problem with three robots

a setting of the problem in which α + β = 180° (0° ≤ α ≤ 90°), and L = 2 and
4 as before.
Algorithm G+ for more than two robots is basically the same as before, except
for Step 4. In the case of two robots, we calculated the offset vector oR considering
that the ladder is placed exactly in the middle of the two robots. When the
number of robots is more than two, we use the method given on pp. 752–753
of [27] to this end, and then calculate the offset vector: First, the center of mass
of the object is assumed to coincide with the center of mass of the robots'
positions. Then, the total external force on the object produced by the robots is
defined as the sum of the force vectors produced by the difference between each
robot's current position and the corresponding corner of the object. Finally, we
determine the position of the object such that the total moment of the external
forces becomes zero, that is, the rotational forces caused by the force vectors
cancel out. For comparison purposes, we also extend G in a straightforward
way to the case of more than two robots.
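The zero-net-force / zero-net-moment placement described above can be computed in closed form; the following is a sketch under our reading of the method (corner coordinates are assumed to be given in the object's own frame, centered at its center of mass):

```python
import math

def object_pose(robot_pos, corners):
    """Pose (center, angle) of the object so that the spring forces from the
    robots to their corresponding corners produce zero net force and zero
    net moment (a sketch of the method referenced from [27])."""
    n = len(robot_pos)
    # Zero net force: the object's center of mass coincides with the
    # center of mass of the robots' positions.
    cx = sum(p[0] for p in robot_pos) / n
    cy = sum(p[1] for p in robot_pos) / n
    # Zero net moment: the orientation theta satisfies
    #   tan(theta) = sum cross(a_i, d_i) / sum dot(a_i, d_i),
    # with a_i a body-frame corner and d_i the matching robot's offset
    # from the computed center.
    s = c = 0.0
    for (px, py), (ax, ay) in zip(robot_pos, corners):
        dx, dy = px - cx, py - cy
        s += ax * dy - ay * dx
        c += ax * dx + ay * dy
    return (cx, cy), math.atan2(s, c)
```

Robot i's offset vector is then the difference between its position and the world position of its corner, i.e., oi = (c + R(θ)ai) − pi, up to the paper's sign convention.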
In the following, let us observe the performance of G+ for more than two
robots (up to nine robots) carrying a regular polygon. First we choose s = 0.3,
0.4, 0.5, 0.6, 0.7, 0.8, and 0.9 as the spring constant for three to nine robots,
respectively; these values were found to be large enough to keep the endpoints
of the object inside the circles representing the robots' bodies. Then, again, we
can choose a reasonable pair of values for the parameters t and u, such as t = 1
and u = 1, by simulation (similar to the simulations in Section 4). Note again that
these values are not always the best for each instance, but they give reasonable
motions: Fig. 13 and Fig. 14 show example motions by G and G+ with three
robots, for instances I1 and I2. Roughly speaking, as in the case of two robots,
the motions by G are divided into two parts, translation and rotation, while
the two take place simultaneously in the motions by G+, which yields smoother
motions.
Because of space limitations, we show only a limited number of detailed
simulation results of G and G+ with three robots, for L = 2 and L = 4, in
Figs. 15 and 16. In addition, for four to nine robots, we show only the figures
of the motions by G and G+ for instances I1 and I2, in Figs. 17 through 25,
without charts showing finish times and formation errors. Since there is no previous research on time-optimal motions with more than two robots, the figures


Fig. 13. Motions with three robots by G (left) and G+ (right) for instance I1

Fig. 14. Motions with three robots by G (left) and G+ (right) for instance I2


Fig. 15. Finish times (left) and peak offset vector size (right) of G and G+ for L = 2 and 0° ≤ α ≤ 90° with three robots, respectively


Fig. 16. Finish times (left) and peak offset vector size (right) of G and G+ for L = 4 and 0° ≤ α ≤ 90° with three robots, respectively


Fig. 17. Motions for instance I1 with four robots by (i) G and (ii) G+, and with five robots by (iii) G and (iv) G+, respectively


Fig. 18. Motions for instance I1 with six robots by (i) G and (ii) G+, and with seven
robots by (iii) G and (iv) G+, respectively


Fig. 19. Motions for instance I1 with eight robots by (i) G and (ii) G+, and with nine robots by (iii) G and (iv) G+, respectively

Fig. 20. Motions with four robots by G (left) and G+ (right) for instance I2


Fig. 21. Motions with five robots by G (left) and G+ (right) for instance I2

Fig. 22. Motions with six robots by G (left) and G+ (right) for instance I2

Fig. 23. Motions with seven robots by G (left) and G+ (right) for instance I2

Fig. 24. Motions with eight robots by G (left) and G+ (right) for instance I2

do not show the results for optimal motions. The finish times of G and G+ are
quite similar, although G+ is slightly better (Fig. 15, left). We observed the
largest improvement in terms of finish time for instance I1 with nine robots:
the finish times of G and G+ are 835 and 741, respectively, i.e., the motion
by G+ is completed 12.2% faster than the motion by G. As for the size of the
peak offset vector, the motions by G+ are always smoother than those by G


Fig. 25. Motions with nine robots by G (left) and G+ (right) for instance I2

(Fig. 15, right). As an example, for instance I1 with nine robots, the size
of the peak offset vector by G+ is about 63% of that by G; G+ improves the
smoothness of the motion. In summary, the simulation results indicate that if
the number of robots is greater than two, the advantage of algorithm G+ over G
lies in smoother motions rather than in smaller finish times. This observation seems
to indicate that maintaining a given formation can be a severe constraint when
seeking time-optimal motion with many robots.
We would like to note here that, arguing as in the proof of Theorem 1, we
can prove the convergence of Algorithm G+, i.e., that the robots' positions always
converge to their goal positions, for the case of more than two robots in a simple
formation such as the regular polygons we examined.

Conclusion

In this paper, we have proposed a self-stabilizing marching algorithm in an
obstacle-free workplace, where marching means that the robots must move while
maintaining a given formation. To this end, for the case of two robots, we have
developed an algorithm G+ for a group of oblivious robots to transport a ladder,
and showed that G+ is correct and self-stabilizing. As promised, G+ is simple:
it is constructed from a naive algorithm G that generates only a selfish move,
by adding two simple ingredients. Also, the motions obtained by G+ are fairly
close to the time-optimal motions.
In G+, each robot uses the offset vector to figure out (and adjust) its position
relative to the object it carries. It is not difficult to extend G+ to handle the
case in which more than two robots are involved in transporting a given object,
since the offset vector can be easily calculated from the current positions of the
robots. Based on this idea, we have extended G+ and demonstrated by computer
simulation that G+ retains its merits for three or more robots.
We have also observed that G+ can indeed be used as a marching algorithm for
the case of two robots. The same observation obviously holds for more than two
robots, and hence G+ can be used as a marching algorithm for more than two
robots as well.
The three parameters s, t, and u used in G+ are determined experimentally
in this paper, based on a limited number of simulations. As for further


studies of the algorithm G+, developing a method to select these values for every
configuration would be very useful. The algorithms G and G+ are evaluated by
delay (finish time) and formation error, which shows that G+ is better than G;
however, it remains debatable how much delay or formation error is acceptable
for real robots, and whether G+ is a best possible algorithm. In addition, the
proof of correctness of G+ does not guarantee a maximum finish time or a
maximum formation error. The self-stabilization of G+ is demonstrated under
the assumption that a robot may move to a wrong position because of noise. An
alternative interesting situation to test is one in which each robot sometimes
misobserves the others' locations because of sensor errors.
Distributed marching algorithms (i) by robots with different maximum speeds
and capabilities, (ii) using many robots, and (iii) in an environment occupied by
obstacles, are suggested for future study. Some results on these issues are found
in [3, 4].

Acknowledgments
This work was partially supported by KAKENHI 18300004 and 18700015.

References
1. Alami, R., Fleury, S., Herrb, M., Ingrand, F., Qutub, S.: Operating a Large Fleet of Mobile Robots Using the Plan-merging Paradigm. In: IEEE Int. Conf. on Robotics and Automation, pp. 2312–2317 (1997)
2. Ando, H., Oasa, Y., Suzuki, I., Yamashita, M.: A Distributed Memoryless Point Convergence Algorithm for Mobile Robots with Limited Visibility. IEEE Trans. Robotics and Automation 15(5), 818–828 (1999)
3. Asahiro, Y., Chang, E.C., Mali, A., Nagafuji, S., Suzuki, I., Yamashita, M.: Distributed Motion Generation for Two Omni-directional Robots Carrying a Ladder. Distributed Autonomous Robotic Systems 4, 427–436 (2000)
4. Asahiro, Y., Chang, E.C., Mali, A., Suzuki, I., Yamashita, M.: A Distributed Ladder Transportation Algorithm for Two Robots in a Corridor. In: IEEE Int. Conf. on Robotics and Automation, pp. 3016–3021 (2001)
5. Asama, H., Sato, M., Bogoni, L., Kaetsu, H., Matsumoto, A., Endo, I.: Development of an Omni-directional Mobile Robot with 3 DOF Decoupling Drive Mechanism. In: IEEE Int. Conf. on Robotics and Automation, pp. 1925–1930 (1995)
6. Balch, T.: Behavior-based Formation Control for Multi-robot Teams. IEEE Trans. Robotics and Automation 14(6), 926–939 (1998)
7. Belta, C., Kumar, V.: Abstraction and Control for Groups of Robots. IEEE Trans. Robotics 20(5), 865–875 (2004)
8. Canepa, D., Gradinariu Potop-Butucaru, M.: Stabilizing Flocking via Leader Election in Robot Networks. In: Int. Symp. Stabilization, Safety, and Security, pp. 52–66 (2007)
9. Cieliebak, M., Flocchini, P., Prencipe, G., Santoro, N.: Solving the Robots Gathering Problem. In: Baeten, J.C.M., Lenstra, J.K., Parrow, J., Woeginger, G.J. (eds.) ICALP 2003. LNCS, vol. 2719, pp. 1181–1196. Springer, Heidelberg (2003)
10. Cao, Y.U., Fukunaga, A.S., Kahng, A.B.: Cooperative Mobile Robots: Antecedents and Directions. Autonomous Robots 4, 1–23 (1997)
11. Chen, A., Suzuki, I., Yamashita, M.: Time-optimal Motion of Two Omnidirectional Robots Carrying a Ladder Under a Velocity Constraint. IEEE Trans. Robotics and Automation 13(5), 721–729 (1997)
12. Czyzowicz, J., Gąsieniec, L., Pelc, A.: Gathering Few Fat Mobile Robots in the Plane. In: Shvartsman, M.M.A.A. (ed.) OPODIS 2006. LNCS, vol. 4305, pp. 350–364. Springer, Heidelberg (2006)
13. Cohen, R., Peleg, D.: Convergence Properties of the Gravitational Algorithm in Asynchronous Robot Systems. SIAM J. on Computing 34, 1516–1528 (2005)
14. Cohen, R., Peleg, D.: Convergence of Autonomous Mobile Robots with Inaccurate Sensors and Movements. SIAM J. on Computing 38, 276–302 (2008)
15. Debest, X.A.: Remark about Self-stabilizing Systems. Comm. ACM 38(2), 115–117 (1995)
16. Donald, B.R.: Information Invariants in Robotics: Part I – State, Communication, and Side-effects. In: IEEE Int. Conf. on Robotics and Automation, pp. 276–283 (1993)
17. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Hard Tasks for Weak Robots: The Role of Common Knowledge in Pattern Formation by Autonomous Mobile Robots. In: Aggarwal, A.K., Pandu Rangan, C. (eds.) ISAAC 1999. LNCS, vol. 1741, pp. 93–102. Springer, Heidelberg (1999)
18. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Arbitrary Pattern Formation by Asynchronous, Anonymous, Oblivious Robots. Theoretical Computer Science (to appear)
19. Ge, S.S., Lewis, F.L. (eds.): Autonomous Mobile Robots: Sensing, Control, Decision Making and Applications. CRC Press, Boca Raton (2006)
20. Gervasi, V., Prencipe, G.: Coordination without Communication: The Case of the Flocking Problem. Discrete Applied Mathematics 143, 203–223 (2003)
21. Izumi, T., Katayama, Y., Inuzuka, N., Wada, K.: Gathering Autonomous Mobile Robots with Dynamic Compasses: An Optimal Result. In: Int. Symp. Distributed Computing, pp. 298–312 (2007)
22. Jadbabaie, A., Lin, J., Morse, A.S.: Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules. IEEE Trans. Automatic Control 48(6), 988–1001 (2003)
23. Justh, E.W., Krishnaprasad, P.S.: Equilibria and Steering Laws for Planar Formations. Systems & Control Letters 52(1), 25–38 (2004)
24. Katayama, Y., Tomida, Y., Imazu, H., Inuzuka, N., Wada, K.: Dynamic Compass Models and Gathering Algorithms for Autonomous Mobile Robots. In: Prencipe, G., Zaks, S. (eds.) SIROCCO 2007. LNCS, vol. 4474, pp. 274–288. Springer, Heidelberg (2007)
25. Kosuge, K., Oosumi, T.: Decentralized Control of Multiple Robots Handling an Object. In: International Conference on Intelligent Robots and Systems, pp. 318–323 (1996)
26. Martínez, S., Cortés, J., Bullo, F.: Motion Coordination with Distributed Information. IEEE Control Systems Magazine, 75–88 (2007)
27. LaValle, S.M.: Planning Algorithms. Cambridge University Press, Cambridge (2006)
28. Lee, L.F., Krovi, V.: A Standardized Testing-ground for Artificial Potential-field Based Motion Planning for Robot Collectives. In: 2006 Performance Metrics for Intelligent Systems Workshop, pp. 232–239 (2006)
29. Nakamura, Y., Nagai, K., Yoshikawa, T.: Dynamics and Stability in Coordination of Multiple Robotic Mechanisms. Int. J. of Robotics Research 8(2), 44–60 (1989)
30. Olfati-Saber, R.: Flocking for Multi-agent Dynamic Systems: Algorithms and Theory. IEEE Trans. Automatic Control 51(3), 401–420 (2006)
31. Prencipe, G.: CORDA: Distributed Coordination of a Set of Autonomous Mobile Robots. In: ERSADS 2001, pp. 185–190 (2001)
32. Prencipe, G.: On the Feasibility of Gathering by Autonomous Mobile Robots. In: Pelc, A., Raynal, M. (eds.) SIROCCO 2005. LNCS, vol. 3499, pp. 246–261. Springer, Heidelberg (2005)
33. Schneider, F.E., Wildermuth, D., Wolf, H.L.: Motion Coordination in Formations of Multiple Robots Using a Potential Field Approach. Distributed Autonomous Robotic Systems 4, 305–314 (2000)
34. Souissi, S., Défago, X., Yamashita, M.: Gathering Asynchronous Mobile Robots with Inaccurate Compasses. In: Shvartsman, M.M.A.A. (ed.) OPODIS 2006. LNCS, vol. 4305, pp. 333–349. Springer, Heidelberg (2006)
35. Souissi, S., Défago, X., Yamashita, M.: Using Eventually Consistent Compasses to Gather Oblivious Mobile Robots with Limited Visibility. In: Datta, A.K., Gradinariu, M. (eds.) SSS 2006. LNCS, vol. 4280, pp. 471–487. Springer, Heidelberg (2006)
36. Stilwell, D.J., Bay, J.S.: Toward the Development of a Material Transport System Using Swarms of Ant-like Robots. In: IEEE Int. Conf. on Robotics and Automation, pp. 766–771 (1995)
37. Sugihara, K., Suzuki, I.: Distributed Motion Coordination of Multiple Mobile Robots. In: IEEE Int. Symp. on Intelligent Control, pp. 138–143 (1990)
38. Sugihara, K., Suzuki, I.: Distributed Algorithms for Formation of Geometric Patterns with Many Mobile Robots. Journal of Robotic Systems 13(3), 127–139 (1996)
39. Suzuki, I., Yamashita, M.: Formation and Agreement Problems for Anonymous Mobile Robots. In: Annual Allerton Conference on Communication, Control, and Computing, pp. 93–102 (1993)
40. Suzuki, I., Yamashita, M.: Distributed Anonymous Mobile Robots: Formation of Geometric Patterns. SIAM J. Computing 28(4), 1347–1363 (1999)
41. Tanner, H., Jadbabaie, A., Pappas, G.J.: Flocking in Fixed and Switching Networks. IEEE Trans. Automatic Control 52(5), 863–868 (2007)
42. Whitcomb, L.L., Koditschek, D.E., Cabrera, J.B.D.: Toward the Automatic Control of Robot Assembly Tasks via Potential Functions: The Case of 2-D Sphere Assemblies. In: IEEE Int. Conf. on Robotics and Automation, pp. 2186–2191 (1992)
43. Yamaguchi, H.: A Distributed Motion Coordination Strategy for Multiple Nonholonomic Mobile Robots in Cooperative Hunting Operations. Robotics and Autonomous Systems 43(4), 257–282 (2003)

Fault-Tolerant Flocking in a k-Bounded Asynchronous System
Samia Souissi¹,², Yan Yang¹, and Xavier Defago¹

¹ School of Information Science, Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan
{ssouissi,y.yang,defago}@jaist.ac.jp
² Now at Nagoya Institute of Technology, Department of Computer Science and Engineering
[email protected]

Abstract. This paper studies the flocking problem, where mobile robots group to form a desired pattern and move together while maintaining that formation. Unlike previous studies of the problem, we consider a system of mobile robots in which a number of them may possibly fail by crashing. Our algorithm ensures that the crash of faulty robots does not bring the formation to a permanent stop, and that the correct robots are thus eventually allowed to reorganize and continue moving together. Furthermore, the algorithm makes no assumption on the relative speeds at which the robots can move.
The algorithm relies on the assumption that robot activations follow a k-bounded asynchronous scheduler, in the sense that the beginning and end of activations are not synchronized across robots (asynchronous), and that while the slowest robot is activated once, the fastest robot is activated at most k times (k-bounded).
The proposed algorithm is made of three parts. First, appropriate restrictions on the movements of the robots make it possible to agree on a common ranking of the robots. Second, based on the ranking and the k-bounded scheduler, robots can eventually detect any robot that has crashed, and thus trigger a reorganization of the robots. Finally, the third part of the algorithm ensures that the robots move together while keeping an approximation of a regular polygon, while also ensuring the necessary restrictions on their movement.

1 Introduction
Be it on earth, in space, or on other planets, robots and other kinds of automatic systems provide essential support in otherwise adverse and hazardous environments. For instance, among many other applications, it is becoming increasingly attractive to consider a group of mobile robots as a way to provide support for rescue and relief during or after a natural catastrophe (e.g., earthquake, tsunami, cyclone, volcano eruption). As a result, research on mechanisms for coordination and self-organization of mobile robot systems is beginning to attract considerable attention (e.g., [17,19,20,21]). For


* Work supported by MEXT Grant-in-Aid for Young Scientists (A) (Nr. 18680007).
T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 145–163, 2008.
© Springer-Verlag Berlin Heidelberg 2008


such operations, relying on a group of simple robots for delicate operations has various advantages over considering a single complex robot. For instance, (1) it is usually more cost-effective to manufacture and deploy a number of cheap robots rather than a single expensive one, (2) a higher number yields better potential for a system resilient to individual robot failures, (3) smaller robots have obviously better mobility in tight and confined spaces, and (4) a group can survey a larger area than an individual robot, even if the latter is equipped with better sensors.
Nevertheless, merely bringing robots together is by no means sufficient, and adequate coordination mechanisms must be designed to ensure coherent group behavior.
Furthermore, since many applications of cooperative robotics consider cheap robots
dwelling in hazardous environments, fault-tolerance is of primary concern.
The problem of reaching agreement among a group of autonomous mobile robots
has attracted considerable attention over the last few years. While much formal work
focuses on the gathering problem (robots must meet at a point, e.g., [7]) as the embodiment of a static notion of agreement, this work studies the problem of flocking (robots
must move together), which embodies a dynamic notion of agreement, as well as coordination and synchronization. The flocking problem has been studied from various
perspectives. Studies can be found in different disciplines, from artificial intelligence
to engineering [1,3,5,6]. However, only a few works have considered the presence of faulty robots [2,4].
Fault-tolerant flocking. Briefly, the main problem studied in this paper, namely the flocking problem, requires that a group of robots move together, staying close to each other, and keeping some desired formation while moving. Numerous definitions of flocking can be found in the literature [3,11,12,14], but few of them define the problem precisely. The rare rigorous definitions of the problem suppose the existence of a leader robot and require that the other robots, called followers, follow the leader in a desired fashion [3,6,10], such as by maintaining an approximation of a regular polygon.
The variant of the problem that we consider in this paper requires that the robots form and move while maintaining an approximation of a regular polygon, in spite of the possible presence of faulty robots (robots may fail by crashing, and a crash is permanent). Although we do consider the presence of a leader robot to lead the group, the role of leader is assigned dynamically and any of the robots can potentially become a leader. In particular, after the crash of a leader, a new leader must eventually take over that role.
Model. The system is composed of a group of autonomous mobile robots, modelled as points evolving on the plane, all of which execute the same algorithm independently. Some of the robots may possibly fail by crashing, after which they never move again. Although the robots share no common origin, they do share one common direction (as given by a compass), a common unit distance, and the same notion of clockwise direction.
Robots repeatedly go through a succession of activation cycles during which they observe their environment, compute a destination, and move. Robots are asynchronous in that one robot may begin an activation cycle while another robot finishes one. While some robots may be activated more often than others, we assume that the scheduler is k-bounded in the sense that, in the interval it takes any correct robot to perform a single activation cycle, no other robot performs more than k activations. The robots can remember only a limited number of their past activations.
Contribution. The paper presents a fault-tolerant flocking algorithm for a k-bounded
asynchronous robot system. The algorithm is decomposed into three parts. In the first
part, the algorithm relies on the k-bounded scheduler to ensure failure detection. In the
second part, the algorithm establishes a ranking system for the robots and then ensures
that robots agree on the same ranking throughout activations. In the third and last part,
the ranking and the failure detector are combined to realize the flocking of the robots
by maintaining an approximation of a regular polygon while moving.
Related work. Gervasi and Prencipe [3] have proposed a flocking algorithm for robots based on a leader-followers model, but introduce additional assumptions on the speed of the robots. In particular, they proposed a flocking algorithm for formations that are symmetric with respect to the leader's movement, without agreement on a common coordinate system (except for the unit distance). However, their algorithm requires that the leader be distinguishable from the follower robots.
Canepa and Potop-Butucaru [6] proposed a flocking algorithm in an asynchronous system with oblivious robots. First, the robots elect a leader using a probabilistic algorithm. After that, the robots position themselves according to a specific formation. Finally, the formation moves ahead. Their algorithm only lets the formation move straight forward. Although the leader is determined dynamically, once elected it can no longer change. In the absence of faulty robots, this is a reasonable limitation in their model.
To the best of our knowledge, our work is the first to consider flocking of asynchronous (k-bounded) robots in the presence of faulty robots. Also, we want to stress that the above two algorithms do not work properly in the presence of faulty robots, and that their adaptation is not straightforward.
Structure. The remainder of this paper is organized as follows. In Section 2, we present the system model. In Section 3, we define the problem. In Section 4, we propose a failure detection algorithm based on a k-bounded scheduler. In Section 5, we give an algorithm that provides a ranking mechanism for robots. In Section 6, we propose a dynamic fault-tolerant flocking algorithm that maintains an approximation of a regular polygon. Finally, in Section 7, we conclude the paper.

2 System Model and Definitions

2.1 The CORDA Model
In this paper, we consider the CORDA model of Prencipe [8] with a k-bounded scheduler. The system consists of a set of autonomous mobile robots R = {r1, ..., rn}. A robot is modelled as a unit having computational capabilities, which can move freely in the two-dimensional plane. Robots are seen as points on the plane. In addition, they are equipped with sensors to observe the positions of the other robots, and form a local view of the world.

The local view of each robot includes a unit of length, an origin, and the directions and orientations of the two coordinate axes x and y. In particular, we assume that robots have a partial agreement on the local coordinate system: they agree on the orientation and direction of one axis, say y, and they agree on the clockwise/counterclockwise direction.
The robots are completely autonomous. Moreover, they are anonymous, in the sense that they are a priori indistinguishable by appearance. Furthermore, there is no direct means of communication among them.
In the CORDA model, robots are totally asynchronous. The cycle of a robot consists of a sequence of events: Wait-Look-Compute-Move.
– Wait. The robot is idle. A robot cannot stay permanently idle. At the beginning, all robots are in the Wait state.
– Look. The robot observes the world by activating its sensors, which return a snapshot of the positions of the robots in the system.
– Compute. The robot performs a local computation according to its deterministic algorithm. The algorithm is the same for all robots, and the result of the Compute state is a destination point.
– Move. The robot moves toward its computed destination. The distance it travels in one move is not fixed in advance: it is neither infinite nor infinitesimally small. Hence, the robot moves toward its goal, but the move can end anywhere before the destination.
In the model, there are two limiting assumptions related to the cycle of a robot.

Assumption 1. The distance travelled by a robot r in a move is not infinite. Furthermore, it is not infinitesimally small: there exists a constant δr > 0 such that, if the target point is closer than δr, r will reach it; otherwise, r will move toward it by at least δr.

Assumption 2. The amount of time required by a robot r to complete a cycle (Wait-Look-Compute-Move) is not infinite. Furthermore, it is not infinitesimally small: there exists a constant εr > 0 such that a cycle requires at least εr time.
2.2 Assumptions

k-bounded scheduler. In this paper, we assume the CORDA model with a k-bounded scheduler, in order to ensure some fairness of activations among robots. Before defining the k-bounded scheduler, we give a definition of a full activation cycle.

Definition 1 (full activation cycle). A full activation cycle of a robot ri is the interval from the event Look (included) to the next instance of the same event Look (excluded).

Definition 2 (k-bounded scheduler). With a k-bounded scheduler, between two consecutive full activation cycles of the same robot ri, another robot rj can execute at most k full activation cycles.
This allows us to establish the following lemma:
Lemma 1. If a robot ri is activated k+1 times, then all (correct) robots have completed
at least one full activation cycle during the same interval.
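To make Definition 2 and Lemma 1 concrete, the following Python sketch (our own illustration; the trace encoding is an assumption, not part of the paper) checks whether a sequence of completed full activation cycles respects a k-bounded scheduler:

```python
def is_k_bounded(trace, robots, k):
    """Check Definition 2 on a trace: each entry of `trace` is the id of the
    robot that just completed a full activation cycle.  Between any two
    consecutive cycles of one robot, no other robot may complete more than k."""
    for r in robots:
        idx = [i for i, who in enumerate(trace) if who == r]
        for a, b in zip(idx, idx[1:]):
            between = trace[a + 1:b]
            if any(between.count(s) > k for s in robots if s != r):
                return False
    return True
```

For instance, the trace A, B, B, A is 2-bounded but not 1-bounded, matching the intuition of Lemma 1: while the slowest robot completes one cycle, any other robot completes at most k.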

Faults. In this paper, we address crash failures: we consider both initial crashes of robots and crashes of robots during the execution. A robot may fail by crashing, after which it executes no actions (no movement). A crash is permanent in the sense that a faulty robot never recovers. However, it is still physically present in the system, and it is seen by the other non-crashed robots. A robot that is not faulty is called a correct robot.
Before we proceed, we give the following notations that will be used throughout this paper. We denote by R = {r1, ..., rn} the set of all the robots in the system. Given some robot ri, ri(t) is the position of ri at time t, and y(ri) denotes the y-coordinate of robot ri at some time t. Let A and B be two points; AB denotes the segment starting at A and terminating at B, and dist(A, B) is the length of this segment. Given a region X, we denote by |X| the number of robots in that region at time t. Finally, for a set of robots S, |S| indicates the number of robots in S.

3 Problem Definition

Definition 3 (Formation). A formation F = Formation(P1, P2, ..., Pn) is a configuration, with P1 the leader of the formation, and the remaining points the followers of the formation. The leader P1 is not physically distinct from the robot followers.

In this paper, we assume that the formation F is a regular polygon. We denote by d the length of the polygon edge (known to the robots), and by α = (n − 2)·180°/n the interior angle of the polygon, where n is the number of robots in F.

Definition 4 (Approximate Formation). We say that the robots form an approximation of the formation F if each robot ri is within δr of its target Pi in F.

Definition 5 (The Flocking Problem). Let r1, ..., rn be a group of robots, whose positions constitute a formation F = Formation(P1, P2, ..., Pn). The robots solve the Approximate Flocking Problem if, starting from any arbitrary configuration at time t0, there exists a time t1 ≥ t0 such that, for all t ≥ t1, all robots are at a distance of at most δr from their respective targets Pi in F, where δr is a small positive value known to all robots.
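For illustration, the target points of a regular polygon with edge length d can be computed from the leader's position P1. The placement convention below (polygon center directly below P1, vertices enumerated clockwise) is our own assumption; the paper only fixes the edge length d and the interior angle α = (n − 2)·180°/n:

```python
import math

def formation_points(p1, d, n):
    """Vertices of a regular n-gon with edge length d, with the leader at P1.
    Uses the circumradius R = d / (2 sin(pi/n))."""
    R = d / (2 * math.sin(math.pi / n))
    cx, cy = p1[0], p1[1] - R          # center placed below the leader (assumption)
    points = []
    for i in range(n):
        theta = math.pi / 2 - 2 * math.pi * i / n   # clockwise, starting at P1
        points.append((cx + R * math.cos(theta), cy + R * math.sin(theta)))
    return points
```

Any fixed, deterministic convention works here, since all robots derive the same points from P1, d, and n.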

4 Perfect Failure Detection

In this section, we give a simple perfect failure detection algorithm for robots based on a k-bounded scheduler in the asynchronous model CORDA. The concept of failure detectors was first introduced by Chandra and Toueg [16] in asynchronous systems with crash faults. A perfect failure detector has two properties: strong completeness and strong accuracy. Before we proceed to the description of the algorithm, we make the following assumption, which is necessary for the failure detection mechanism to distinguish correct robots from crashed ones.
Assumption 3. We assume that, at each activation, a correct robot ri computes as destination a position that is different from its current position. Also, a robot ri never visits the same location twice during its last k + 1 activations.¹ Finally, a robot ri never visits a location that was visited by any other robot rj during the last k + 1 activations of rj.
Recall that we only consider permanent crash failures of robots, and that crashed robots remain physically in the system. Besides, robots are anonymous. Therefore, the problem is how to distinguish faulty robots from correct ones. Algorithm 1 provides a simple perfect failure detection mechanism for the identification of correct robots. The algorithm is based on the fact that a correct robot must change its current position whenever it is activated (Assumption 3), and also relies on the definition of the k-bounded scheduler for the activations of robots. Thus, a robot ri considers that some robot rj is faulty if ri has been activated k + 1 times while robot rj is still in the same position. Algorithm 1 gives as output the set of positions of correct robots Scorrect, and uses the following variables:
– SPosPrevObser: a global variable holding the set of positions of the robots in the system at the previous activation of robot ri. These positions include those of correct and faulty robots. SPosPrevObser is initialized to the empty set at the first activation of robot ri.
– SPosCurrObser: the set of positions of the robots (including faulty ones) at the current activation of robot ri.
– cj: a global variable recording how many times robot rj did not change its position.
Algorithm 1. Perfect Failure Detection (code executed by robot ri)

Initialization: SPosPrevObser := ∅; cj := 0
procedure Failure_Detection(SPosPrevObser, SPosCurrObser)
  Scorrect := SPosCurrObser;
  for pj ∈ SPosCurrObser do
    if (pj ∈ SPosPrevObser) then    {robot rj has not moved}
      cj := cj + 1;
    else
      cj := 0;
    end if
    if (cj ≥ k) then
      Scorrect := Scorrect \ {pj};
    end if
  end for
  return (Scorrect)
end
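Under the stated model, Algorithm 1 can be sketched as runnable Python. Because correct robots change position at every activation (Assumption 3), a position that persists across observations can only belong to a crashed robot; the dictionary of counters below is our own encoding of the cj variables:

```python
def failure_detection(prev_obs, curr_obs, counters, k):
    """One invocation of Algorithm 1 by robot ri.
    prev_obs, curr_obs: sets of robot positions (tuples) from the previous
    and current observation; counters: dict mapping a position to how many
    consecutive observations it has remained unchanged."""
    correct = set(curr_obs)
    new_counters = {}
    for p in curr_obs:
        c = counters.get(p, 0) + 1 if p in prev_obs else 0  # robot at p has not moved
        new_counters[p] = c
        if c >= k:            # unchanged for k+1 observations: suspected crashed
            correct.discard(p)
    return correct, new_counters
```

Calling this once per activation of ri, a robot that stays at (0, 0) while ri is activated k + 1 times drops out of the returned set, whereas a robot that moves at each observation is never suspected.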

The proposed failure detection algorithm (Algorithm 1) satisfies the two properties of a perfect failure detector: strong completeness and strong accuracy. It also satisfies an eventual agreement property. These properties are stated respectively in Theorem 1, Theorem 2, and Theorem 3; their proofs are straightforward (details can be found in the corresponding research report [18]).

¹ That is, ri never revisits a point location that was within its line of movement for its last k + 1 activations.

Theorem 1. Strong completeness: eventually, every robot that crashes is permanently suspected by every correct robot.

Theorem 2. Strong accuracy: there is a finite time after which correct robots are not suspected by any correct robot.

Theorem 3. Eventual agreement: there is a finite time after which all correct robots agree on the same set of correct robots in the system.

5 Agreed Ranking for Robots

In this section, we provide an algorithm that gives a unique ranking (or identification) to every robot in the system, since robots are anonymous and have no identifiers that would allow them to distinguish each other. The algorithm allows correct robots to compute, and agree on, the same ranking. In particular, the ranking mechanism is needed for the election of the leader of the formation. Recall that deterministic leader election is impossible without a shared y-axis [9]. Therefore, we assume that robots agree on the y-axis.
We first assume that robots are not initially located at the same point. That is, robots are not in the gathering configuration [7], because it might then become impossible to separate them later.² The ranking assignment is given in Algorithm 2, which takes as input the set of positions of correct robots in the system, Scorrect, and returns as output an ordered set of the positions in Scorrect, called RankSequence. The ranking of the positions of the robots in Scorrect gives every robot a unique identification number. The computation of RankSequence is done as follows: RankSequence = (Scorrect, <), where the relation < is defined by comparing the y-coordinates of the points in Scorrect and breaking ties from left to right. In other words, the positions of robots in Scorrect are sorted by decreasing y-coordinate, so that the robot with the greatest y-coordinate comes first in RankSequence. When two or more robots share the same y-coordinate, the clockwise direction is used to determine the sequence: a robot ri that has a robot rj on its right-hand side precedes rj in RankSequence.
In order for robots to agree on the same RankSequence initially, some restrictions on their movement are required during their first k activations. The movement restriction is given by procedure Lateral_Move_Right(), and it is designed in such a way that all robots compute the same RankSequence during their first k activations. In particular, a robot ri that has no robots on Right(ri) can move by at most the distance δr/((k + 1)(k + 2)) along Right(ri), so as to preserve the same y-coordinate. Otherwise, ri moves by min(δr/((k + 1)(k + 2)), dist(ri, p)/((k + 1)(k + 2))) along Right(ri), where p is the position of the nearest robot to ri on Right(ri).³ From Algorithm 2, we
² Consider two robots that happen to have the same coordinate system and that are always activated together. It is impossible to separate them deterministically. In contrast, it would be trivial to scatter them at distinct positions using randomization (e.g., [15]), but this is ruled out in our model.
³ Note that the bound min(δr/((k + 1)(k + 2)), dist(ri, p)/((k + 1)(k + 2))) set on the movement of robots is conservative, and is sufficient to avoid collisions between robots and to satisfy Assumption 3.

Algorithm 2. Ranking Correct Robots (code executed by robot ri)

Input: Scorrect: set of positions of correct robots;
Output: RankSequence: ordered set of positions of correct robots Scorrect;
Initialization: counteract := a global variable recording the number of activations of ri;
procedure Ranking_Correct_Robots(Scorrect)
  When ri is activated
    counteract := counteract + 1;
    Left(ri) := the ray starting at ri and perpendicular to its y-axis, in the counterclockwise direction;
    Sort the y-coordinates of the robots in Scorrect in decreasing order;
    if (∀ rj, rk ∈ Scorrect, y(rj) ≠ y(rk)) then
      RankSequence := the set Scorrect in order of decreasing y-coordinate;
    else if y(rj) = y(rk) then
      if (rj is on Left(rk)) then
        RankSequence := rj < rk;
      else
        RankSequence := rk < rj;
      end if
    end if
    if (counteract ≤ k) then
      Lateral_Move_Right();
    end if
    return (RankSequence);
end

Algorithm 3. Procedure Lateral_Move_Right (code executed by robot ri)

procedure Lateral_Move_Right()
  Right(ri) := the ray starting at ri and perpendicular to its y-axis, in the clockwise direction;
  if (there is no other robot on Right(ri)) then
    ri moves by at most δr/((k + 1)(k + 2)) along Right(ri);
  else    {some robots are on Right(ri), including faulty robots}
    p := the position of the nearest robot to ri on Right(ri);
    ri moves by min(δr/((k + 1)(k + 2)), dist(ri, p)/((k + 1)(k + 2))) along Right(ri);
  end if
end
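The ordering computed by Algorithm 2 can be expressed compactly. Assuming, for illustration only, a single global coordinate frame in which the shared y-axis points up and Right(ri) is the +x direction (which follows from the agreed clockwise sense), the rank sequence is a lexicographic sort:

```python
def rank_sequence(positions):
    """Order robot positions as in Algorithm 2: decreasing y-coordinate,
    ties broken so that the robot on the left (smaller x) comes first."""
    return sorted(positions, key=lambda p: (-p[1], p[0]))
```

The first element of the returned list is the position of the leader.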

derive the following lemmas. In particular, the algorithm gives a unique ranking to every robot in the system, and also ensures that there are no collisions between robots.

Lemma 2. Algorithm 2 gives a unique ranking to every correct robot in the system.

Lemma 3. By Algorithm 2, there is a finite time after which all correct robots agree on the same initial ranking sequence, RankSequence.

Lemma 4. Algorithm 2 guarantees that there are no collisions between the robots in the system.

The proofs of the above lemmas are simple (details can be found in the corresponding research report [18]).

6 Dynamic Fault-Tolerant Flocking

In this section, we propose a dynamic fault-tolerant flocking algorithm, in which a group of robots can dynamically generate an approximation of a regular polygon (Definition 4) and maintain it while moving. Our flocking algorithm relies on the existence of two devices, namely a perfect failure detector and a ranking device, which were presented in Algorithm 1 and Algorithm 2, respectively.
6.1 Algorithm Description

The flocking algorithm is depicted in Algorithm 4, and takes as input the length d of the polygon edge and the history of robot ri, which includes the following variables:
– SPosPrevObser: the set of positions of the robots in the system at the previous observation of robot ri.
– HistoryMove: the set of points on the plane visited by robot ri during its last k + 1 activations.
– nbract: a counter recording the last k + 1 activations of robot ri.

The overall idea of the algorithm is as follows. When robot ri gets activated, it executes the following steps:
1. Robot ri takes a snapshot of the current positions SPosCurrObser of the robots in the system.
2. Robot ri calls the failure detection module to get the set of correct robots, Scorrect.
3. Robot ri calls the ranking module, and gets a total order on the set of correct robots Scorrect, called RankSequence.
4. Depending on its rank in RankSequence, ri executes the procedure described in Algorithm 5, Flocking_Leader(RankSequence, d, nbract, HistoryMove), if it has the first rank in RankSequence (i.e., it is the leader). Otherwise, robot ri is a follower, and it executes the procedure described in Algorithm 6, Flocking_Follower(RankSequence, d, nbract, HistoryMove).
5. Robot ri is the leader. First, ri computes the points of the formation P1, ..., Pn as in Definition 3, with its own location as the first point P1 of the formation. The targets of the followers are the other points of the formation, assigned to them according to their order in RankSequence. After that, the leader initiates the movement of the formation, while preserving the same rank sequence, keeping an approximation of the regular polygon, and avoiding collisions with the followers.
In order to prevent collisions between robots, the algorithm must guarantee that no two robots ever move to the same location. Therefore, the algorithm defines a movement zone for each robot, within which the robot must move. The zone of the leader, referred to as Zone(ri), is defined depending on the position of the next robot ri+1 in RankSequence. Let us denote by projri+1 the projection of robot ri+1 on the y-axis of ri. The movement zone of the leader is defined as follows:
– ri and ri+1 have the same y-coordinate: Zone(ri) is the half circle with radius min(dist(ri, ri+1)/((k + 1)(k + 2)), δr/((k + 1)(k + 2))), centered at ri and above ri (refer to Fig. 1(a)).
Algorithm 4. Dynamic Fault-Tolerant Flocking (code executed by robot ri)

Input: Memory(ri): SPosPrevObser; HistoryMove; nbract;
d := the desired length of the polygon edge;

When ri is activated
  ri takes a snapshot of the positions SPosCurrObser of the robots;
  Scorrect := Failure_Detection(SPosPrevObser, SPosCurrObser);
  RankSequence := Ranking_Correct_Robots(Scorrect);
  leader := first robot in RankSequence;
  if (ri = leader) then    {leader}
    Flocking_Leader(RankSequence, d, nbract, HistoryMove);
  else    {follower}
    Flocking_Follower(RankSequence, d, nbract, HistoryMove);
  end if

Algorithm 5. Flocking Leader (code executed by a robot leader ri)

procedure Flocking_Leader(RankSequence, d, nbract, HistoryMove)
  n := |RankSequence|;
  α := (n − 2)·180°/n;
  P := Formation(P1, P2, ..., Pn) as in Definition 3;
  P1 := current position of the leader ri;
  ri+1 := next robot after ri in RankSequence;
  projri+1 := the projection of ri+1 on the y-axis of ri;
  if (projri+1 = ri) then    {ri has the same y-coordinate as ri+1}
    Zone(ri) := the half circle with radius min(dist(ri, ri+1)/((k + 1)(k + 2)), δr/((k + 1)(k + 2))), centered at ri and above ri (refer to Fig. 1(a));
  else
    Zone(ri) := the circle centered at ri, with radius min(δr/((k + 1)(k + 2)), dist(ri, projri+1)/((k + 1)(k + 2))) (refer to Fig. 1(b));
  end if
  SCrashInZone := the set of positions of crashed robots in Zone(ri);
  if (SCrashInZone ≠ ∅) then
    ri moves to a desired point Target(ri) within Zone(ri), excluding the points in SCrashInZone and the points in HistoryMove;
  else
    ri moves to a desired point Target(ri) within Zone(ri), excluding the points in HistoryMove;
  end if
  CurrMove := the set of points on the segment ri Target(ri);
  if (nbract ≤ k + 1) then
    HistoryMove := HistoryMove ∪ CurrMove;
  else
    HistoryMove := CurrMove;
    nbract := 1;
  end if
end

Fault-Tolerant Flocking in a k-Bounded Asynchronous System

155

[Figure omitted.] Fig. 1. Zone of movement of the leader. (a) ri and ri+1 have the same y-coordinate, and dist(ri, ri+1) < δr: Zone(ri) is the half circle with radius dist(ri, ri+1)/((k + 1)(k + 2)). (b) ri and ri+1 do not have the same y-coordinate, and dist(ri, projri+1) ≥ δr: Zone(ri) is the circle with radius δr/((k + 1)(k + 2)).

– ri and ri+1 do not have the same y-coordinate: Zone(ri) is the circle centered at ri, with radius min(dist(ri, projri+1)/((k + 1)(k + 2)), δr/((k + 1)(k + 2))) (refer to Fig. 1(b)).
After determining its zone of movement Zone(ri), robot ri needs to determine whether there are crashed robots within Zone(ri). If no crashed robots are within its zone, then robot ri can move to any desired target within Zone(ri) satisfying Assumption 3. Otherwise, robot ri can move within Zone(ri) by excluding the positions of the crashed robots, while still satisfying Assumption 3.
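The two leader cases can be summarized by a small helper. This is our own sketch (a single global frame is assumed); it returns the zone shape and radius, where the distance to projri+1 reduces to the vertical distance |y(ri) − y(ri+1)|:

```python
import math

def leader_zone(ri, rnext, delta_r, k):
    """Movement zone of the leader per the two cases above.
    ri, rnext: (x, y) positions; delta_r: the constant of Assumption 1."""
    denom = (k + 1) * (k + 2)
    if ri[1] == rnext[1]:
        # same y-coordinate: half circle above ri
        return ('half-circle-above', min(math.dist(ri, rnext), delta_r) / denom)
    # different y-coordinate: full circle; dist(ri, proj_{ri+1}) = |y(ri) - y(ri+1)|
    return ('circle', min(abs(ri[1] - rnext[1]), delta_r) / denom)
```

Either way, the radius shrinks with k, so that the leader cannot outrun a follower even if it is activated k times as often.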
6. Robot ri is a follower. First, ri assigns the points of the formation P1, ..., Pn to the robots according to their order in RankSequence. Subsequently, robot ri determines its target Pi based on the current position of the leader (P1) and the polygon angle α = (n − 2)·180°/n, where n is the number of robots in the formation.
In order to ensure that there are no collisions between robots, the algorithm also defines a movement zone for each follower. The zone of a follower, referred to as Zone(ri), is defined depending on the positions of the previous robot ri−1 and the next robot ri+1 in RankSequence. Before we proceed, we denote by projri−1 the projection of robot ri−1 on the y-axis of robot ri, and similarly by projri+1 the projection of robot ri+1 on the y-axis of ri. The zone of movement of a follower ri is defined as follows:
– ri, ri−1 and ri+1 have the same y-coordinate: Zone(ri) is the segment ri p, with p the point at distance min(dist(ri, ri+1)/((k + 1)(k + 2)), δr/((k + 1)(k + 2))) from ri (Fig. 2(a)).
– ri, ri−1 and ri+1 do not have the same y-coordinate: Zone(ri) is the circle centered at ri, with radius min(δr/((k + 1)(k + 2)), dist(ri, projri−1)/((k + 1)(k + 2)), dist(ri, projri+1)/((k + 1)(k + 2))) (Fig. 2(b)).
– ri and ri+1 have the same y-coordinate, but ri−1 does not: Zone(ri) is the half circle above ri, centered at ri, with radius min(δr/((k + 1)(k + 2)), dist(ri, projri−1)/((k + 1)(k + 2)), dist(ri, ri+1)/((k + 1)(k + 2))).

[Figure omitted.] Fig. 2. Zone of movement of a follower. There are three cases: (a) aligned: ri−1, ri, and ri+1 have the same y-coordinate; (b) not aligned: ri−1, ri, and ri+1 do not have the same y-coordinate, and dist(ri, projri−1) ≥ δr and dist(ri, projri+1) ≥ δr; (c) partly aligned: ri−1 and ri have the same y-coordinate, but ri+1 does not; also, dist(ri, ri−1) ≥ δr and dist(ri, projri+1) < δr.

ri and ri1 have the same y coordinate, however ri+1 does not, then Zone(ri )
is the half circle below it, centered at ri , and with radius min(r /(k + 1)(k +
2), dist (ri , ri1 )/(k +1)(k +2), dist (ri , projri+1 )/(k +1)(k +2)) (Fig. 2(c)).
As we mentioned before, the bound min(r/(k+1)(k+2), dist(ri, p)/(k+1)(k+2)) set on the movement of robots is conservative, and is sufficient to avoid collisions between robots and to satisfy Assumption 3 (this will be proved later).
For the sake of clarity, we do not describe explicitly in Algorithm 6 the zone
of movement of the last robot in the rank sequence. The computation of its zone
of movement is similar to that of the other robot followers, with the only difference being that it does not have a next neighbor ri+1. So, if robot ri has the
same y-coordinate as its previous neighbor ri−1, then its zone of movement is
the half circle with radius min(r/(k+1)(k+2), dist(ri, ri−1)/(k+1)(k+2)),
centered at ri and below ri. Otherwise, it is the circle centered at ri, with radius
min(r/(k+1)(k+2), dist(ri, proj(ri−1))/(k+1)(k+2)).
After determining its zone of movement Zone(ri), robot ri needs to determine
whether it can progress toward its target Target(ri). Note that Target(ri) may not necessarily belong to Zone(ri). To do so, robot ri computes the intersection of the
segment ri Target(ri) and Zone(ri), called Intersect. If Intersect is equal to
the position of ri, then ri will move toward its right as given by the procedure
Lateral Move Right(). Otherwise, ri moves along the segment Intersect as far
as possible, while avoiding the location of any crashed robot in Intersect, and satisfying Assumption 3. In any case, if ri is not able to move to any point
in Intersect other than its current position, it moves to its right as in the procedure
Lateral Move Right().
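The progress rule just described can be sketched as follows (a minimal illustration under the simplifying assumption that Zone(ri) is a full disk of radius `rho` centered at ri; the function name is hypothetical):

```python
import math

def clipped_target(ri, target, rho):
    """Return the farthest point on the segment [ri, target] that lies
    inside the disk of radius rho centered at ri (the 'Intersect' rule)."""
    dx, dy = target[0] - ri[0], target[1] - ri[1]
    d = math.hypot(dx, dy)
    if d == 0:          # already at the target: no progress possible
        return ri
    step = min(d, rho)  # move along the segment, but at most rho
    return (ri[0] + dx / d * step, ri[1] + dy / d * step)

p = clipped_target((0.0, 0.0), (4.0, 0.0), 0.5)
print(p)  # (0.5, 0.0)
```

When the returned point equals ri itself, the algorithm falls back to the lateral move, matching the case distinction in the text.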
Note that, by the algorithm, robot followers can move in any direction by adapting
their target positions with respect to the new position of the leader. When the leader
is idle, robot followers move within the distance r/(k+1)(k+2) or smaller in order
to keep an approximation of the formation with respect to the position of the leader, and
to preserve the rank sequence.

Fault-Tolerant Flocking in a k-Bounded Asynchronous System


6.2 Correctness of the Algorithm


In this section, we prove the correctness of our flocking algorithm by first showing that
correct robots agree on the same ranking during the execution of Algorithm 4 (Theorem 4). Second, we prove that no two correct robots ever move to the same location, and
that a correct robot never moves to a location occupied by a faulty robot (Theorem 5).
Then, we show that all correct robots dynamically form an approximation of a regular
polygon in finite time, and keep this formation while moving (Theorem 6). Finally, we
prove that our algorithm tolerates permanent failures of robots (Theorem 7).
Lemma 5. Algorithm 4 satisfies Assumption 3.
Proof (Lemma 5). To prove the lemma, we first show that any robot ri in the system
is able to move to a destination that is different from its current location, and that robot
ri never revisits a point that was within its line of movement during its last k + 1
activations. Then, we show that a robot ri never visits a location that was visited by
another robot rj during the last k + 1 activations of rj.
First, assume that robot ri is the leader. By Algorithm 4, its zone of movement
Zone(ri) is either a circle or a half circle on the plane, excluding the points in its history of moves HistoryMove for the last k + 1 activations, and the positions of crashed
robots. Since Zone(ri) is composed of an infinite number of points, the positions of
crashed robots are finitely many, and HistoryMove is a strict subset of Zone(ri), robot
ri can always compute and move to a new location that is different from the locations
visited by ri during its last k + 1 activations.
Now, assume that robot ri is a follower, and let ri−1 and ri+1 be, respectively, the
previous and next robots to ri in RankSequence. Two cases follow depending on the
zone of movement of ri.
– Consider the case where Zone(ri) is the segment with length min(r/(k+1)(k+2), dist(ri, ri+1)/(k+1)(k+2)), excluding ri. This case occurs only when
ri−1, ri, and ri+1 have the same y coordinate, and robot ri is only allowed to move
to Right(ri). Then, ri can always move to a free position in Right(ri) that does
not belong to HistoryMove and that avoids the positions of crashed robots,
since these are finitely many and there exists an infinite number of points in Zone(ri).
– Consider the case where Zone(ri) is either a circle or a half circle, centered at ri
and with a radius greater than zero, excluding its history of moves HistoryMove
for the last k + 1 activations, and the positions of crashed robots. By similar arguments as above, Zone(ri) is composed of an infinite number of points,
HistoryMove is a strict subset of Zone(ri), and the positions of crashed robots
are finitely many. Thus, robot ri can always compute and move to a new location that is
different from the locations visited by ri during its last k + 1 activations.
We now show that robot ri never visits a location that was visited by another robot
rj during the last k + 1 activations of rj. Without loss of generality, we consider robot ri and its next neighbor ri+1; the same proof holds for ri and its previous
neighbor ri−1. Observe that if ri and ri+1 are moving away from each other, then neither robot moves to a location that was occupied by the other one during its last k + 1
activations.


Algorithm 6. Flocking Follower: Code executed by a robot follower ri.

1: procedure Flocking Follower(RankSequence, d, nbract, HistoryMove)
2:   n := |RankSequence|; and α := (n − 2) · 180°/n;
3:   P := Formation(P1, P2, ..., Pn) as in Definition 3;
4:   P1 := current position of the leader;
5:   ∀rj ∈ RankSequence, Target(rj) = Pj ∈ Formation(P1, P2, ..., Pn);
6:   if (∀rj ∈ RankSequence, rj is within r of Pj) then   {Formation = True}
7:     Lateral Move Right();
8:   else   {Flocking and formation generation}
9:     ri−1 := previous robot to ri in RankSequence;
10:    proj(ri−1) := the projection of ri−1 on the y-axis of ri;
11:    ri+1 := next robot to ri in RankSequence;
12:    proj(ri+1) := the projection of ri+1 on the y-axis of ri;
13:    if (proj(ri−1) = ri ∧ proj(ri+1) = ri) then   {same y coordinate as neighbors}
14:      Zone(ri) := segment with length min(r/(k+1)(k+2), dist(ri, ri+1)/(k+1)(k+2)) starting at ri toward Right(ri) (Fig. 2(a));
15:    else if (proj(ri−1) ≠ ri) ∧ (proj(ri+1) ≠ ri) then
16:      Zone(ri) := circle centered at ri, with radius min(r/(k+1)(k+2), dist(ri, proj(ri−1))/(k+1)(k+2), dist(ri, proj(ri+1))/(k+1)(k+2)) (Fig. 2(b));
17:    else if (proj(ri−1) ≠ ri ∧ proj(ri+1) = ri) then
18:      Zone(ri) := half circle centered at ri, above it, with radius min(r/(k+1)(k+2), dist(ri, proj(ri−1))/(k+1)(k+2), dist(ri, ri+1)/(k+1)(k+2));
19:    else   {ri has different y coordinate from next robot}
20:      Zone(ri) := half circle centered at ri, below it, with radius min(r/(k+1)(k+2), dist(ri, ri−1)/(k+1)(k+2), dist(ri, proj(ri+1))/(k+1)(k+2)) (Fig. 2(c));
21:    end if
22:    Intersect := the intersection of the segment ri Target(ri) with Zone(ri);
23:    if (Intersect ≠ ri) then   {ri is able to progress to its target}
24:      SCrashInLine := the set of crashed robots in the segment Intersect;
25:      if (SCrashInLine = ∅) then
26:        ri moves to the last point in Intersect, excluding the points in HistoryMove;
27:      else
28:        rc := the closest crashed robot to ri in Intersect;
29:        ri moves linearly to the last point in the segment ri rc, excluding rc and the points in HistoryMove;
30:      end if
31:    else
32:      Lateral Move Right();
33:    end if
34:  end if
35:  CurrMove := the set of points on the segment ri Target(ri);
36:  if (nbract ≤ k + 1) then
37:    HistoryMove := HistoryMove ∪ CurrMove;
38:  else
39:    HistoryMove := CurrMove; and nbract := 1;
40:  end if
41: end
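Line 3 of Algorithm 6 relies on a regular-polygon formation. As an illustration of how such target points might be derived from the leader's position, here is a sketch under an assumed layout (vertices of a regular n-gon of circumradius R with P1 placed at the leader; the paper's Definition 3 fixes the actual formation, and the interior angle (n − 2) · 180°/n of line 2 is a property of any such polygon):

```python
import math

def formation(leader, n, R):
    """Vertices P1..Pn of a regular n-gon of circumradius R, with P1
    placed at the leader (hypothetical layout for illustration)."""
    cx, cy = leader[0], leader[1] - R  # center chosen so that P1 = leader
    pts = []
    for j in range(n):
        a = math.pi / 2 + 2 * math.pi * j / n  # start at the top vertex
        pts.append((cx + R * math.cos(a), cy + R * math.sin(a)))
    return pts

P = formation((0.0, 0.0), 4, 1.0)
print(P[0])  # approximately (0.0, 0.0): the leader's own position
```

Each follower then takes the vertex matching its rank in RankSequence as its Target, which is why agreement on the ranking (Theorem 4) is essential.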


Now assume that both robots ri and ri+1 are moving in the same direction. We
show that ri never reaches the position of ri+1 after k + 1 activations of ri+1.
Assume the worst case where robot ri+1 is activated once during each k activations of
ri. Then, after k + 1 activations of ri+1, ri will move toward ri+1 by a distance of at
most dist(ri, ri+1)·(k+1)²/((k+1)(k+2)), which is strictly less than dist(ri, ri+1);
hence ri is unable to reach the position of ri+1.
Finally, we assume that both ri and ri+1 are moving toward each other. In this case,
we assume the worst case when both robots are always activated together. After k + 1
activations of either ri or ri+1, each of them will travel toward the other one by at most
the distance dist(ri, ri+1)·(k+1)/((k+1)(k+2)) = dist(ri, ri+1)/(k+2). Consequently, the total of 2·dist(ri, ri+1)/(k+2)
is always strictly less than dist(ri, ri+1) because k ≥ 1. Hence, neither ri nor ri+1
moves to a location that was occupied by the other during its last k + 1 activations, and
the lemma holds.
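The two inequalities the proof relies on can be checked numerically (a quick sanity check over a range of scheduler bounds, not part of the proof itself):

```python
# Same direction: after k+1 activations of ri+1, robot ri covers at most
# d * (k+1)^2 / ((k+1)(k+2)) = d * (k+1)/(k+2) < d.
# Toward each other: together the robots cover at most 2d/(k+2) < d for k >= 1.
for k in range(1, 50):
    d = 1.0
    same_dir = d * (k + 1) ** 2 / ((k + 1) * (k + 2))
    toward = 2 * d / (k + 2)
    assert same_dir < d and toward < d
print("bounds hold for k = 1..49")
```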


Corollary 1. By Algorithm 4, at any time t, there is no overlap between the zones of
movement of any two correct robots in the system.
Agreement on Ranking. In this section, we show that correct robots always agree on
the same sequence of ranking, even in the presence of robot failures.
Lemma 6. By Algorithm 4, correct robots always agree on the same RankSequence
when there is no crash. Moreover, if some robot rj crashes, there is a finite time after
which all correct robots exclude rj from the ordered set RankSequence, and keep the
same total order in RankSequence.
Proof (Lemma 6). By Lemma 3, all correct robots agree on the same sequence of ranking, RankSequence after the first k activations of any robot in the system. Then, in
the following, we first show that the RankSequence is preserved during the execution
of Algorithm 4 when there is no crash in the system. Second, we show that if some
robot rj has crashed, there is a finite time after which correct robots agree on the new
sequence of ranking, excluding rj .
– There is no crash in the system: we consider three consecutive robots ra, rb and
rc in RankSequence, such that ra < rb < rc. We prove that the movement of rb
does not allow it to swap ranks with ra or rc in the three different cases that follow:
1. ra, rb and rc share the same y coordinate. In this case, rb moves by min(r/(k+1)(k+2), dist(rb, rc)/(k+1)(k+2)) along the segment rb rc. Such a
move does not change the y coordinate of rb, and it does not change its
rank with respect to ra and rc because it always stays between ra and rc, and
it never reaches either ra or rc, by the restrictions of the algorithm.
2. ra, rb and rc do not share the same y coordinate. In this case, the movement of
rb is restricted within a circle C, centered at rb, and having a radius that does
not allow rb to reach the same y coordinate as either ra or rc. In particular, the
radius of C is equal to min(r/(k+1)(k+2), dist(rb, proj(ra))/(k+1)(k+2), dist(rb, proj(rc))/(k+1)(k+2)), which is less than dist(rb, proj(ra))/k
and dist(rb, proj(rc))/k, where proj(ra) and proj(rc) are, respectively, the projections of robots ra and rc on the y-axis of rb. Hence, such a restriction on the
movement of rb does not allow it to swap its rank with either ra or rc.


3. Two consecutive robots have the same y coordinate (say ra and rb), but
rc does not. This case is similar to the previous one. The movement
of rb is restricted within a half circle, centered at rb and below it, with
a radius that does not allow rb to reach the y coordinate of rc.
In particular, that radius is equal to min(r/(k+1)(k+2), dist(ra, rb)/(k+1)(k+2), dist(rb, proj(rc))/(k+1)(k+2)), which is less than dist(ra, rb)/k,
and also less than dist(rb, proj(rc))/k, where proj(rc) is the projection of robot
rc on the y-axis of rb. Hence, the restriction on the movement of rb does not
allow it to swap ranks with either ra or rc.
Since all robots execute the same algorithm, the proof holds for any two consecutive robots in RankSequence. Note that the same proof applies to both the algorithms executed by the leader and by the followers, because the restrictions made on
their movements are the same.
– Some robot rj crashes: from what we proved above, we deduce that all robots
agree on and preserve the same sequence of ranking, RankSequence, in the case of
no crash. Assume now that a robot rj crashes. By Theorem 3, we know that there
is a finite time after which all correct robots detect the crash of rj. Hence, there
is a finite time after which correct robots exclude robot rj from the ordered set
RankSequence.
In conclusion, the total order in RankSequence is preserved for correct robots during
the entire execution of Algorithm 4. This terminates the proof.


The following theorem is a direct consequence of Lemma 6.
Theorem 4. By Algorithm 4, all robots agree on the total order of their ranking during
the entire execution of the algorithm.
Collision-Freedom
Lemma 7. Under Algorithm 4, at any time t, no two correct robots ever move to the
same location. Also, no correct robot ever moves to a position occupied by a faulty
robot.
Proof (Lemma 7). To prove that no two correct robots ever move to the same location,
we show that any robot ri always moves to a location within its own zone Zone(ri);
the rest follows from the fact that the zones of two robots do not intersect (Corollary 1).
By restriction of the algorithm, ri must move to a location Target(ri) which is within
Zone(ri). Since ri belongs to Zone(ri), Zone(ri) is a convex region or a line segment,
and the movement of ri is linear, all points between ri and Target(ri) must be in
Zone(ri).
Now we prove that, no correct robot ever moves to a position occupied by a crashed
robot. By Theorem 1, robot ri can compute the positions of crashed robots in finite time.
Moreover, by Lemma 5, robot ri always has free destinations within its zone Zone(ri ),
which excludes crashed robots. Finally, Algorithm 4 restricts robots from moving to the
locations that are occupied by crashed robots. Thus, robot ri never moves to a location
that is occupied by a crashed robot.




The following theorem is a direct consequence of Lemma 7.


Theorem 5. Algorithm 4 is collision free.
Fault-tolerant Flocking. Before we proceed, we state the following lemma, which sets
a bound on the number of faulty robots under which a polygon can be formed.
Lemma 8. A polygon is generated if and only if the number of faulty robots f is
bounded by f ≤ n − 3, where n is the number of robots in the system and n ≥ 3.
Proof (Lemma 8). The proof is trivial. A polygon requires three or more robots to be
formed, so the number of robots n in the system must be greater than or equal to three.
Also, the number of faulty robots f at any time t in the system must be less than or
equal to n − 3 for the polygon to be formed. This proves the lemma.


Lemma 9. Algorithm 4 allows correct robots to form an approximation of a regular
polygon in finite time, and to maintain it in movement.
Proof (Lemma 9). We first show that each robot can be within r of its target in the
formation F(P1, P2, ..., Pn) in a finite number of steps. Second, we show that correct
robots maintain an approximation of the formation while moving.
Assume that ri is a correct robot in the system. If ri is a leader, then by Algorithm 4,
the target of ri is a point within a circle or half circle, centered at ri, with radius
less than or equal to r, satisfying Assumption 3 and excluding the positions of crashed
robots. Since there exists an infinite number of points within Zone(ri), and since by Assumption 2 the cycle of a robot is finite, ri can reach its target within Zone(ri) in
a finite number of steps.
Now, consider that ri is a robot follower. We show that ri can also reach within r of
its target Pi in a finite number of steps. We consider two cases:
– Robot ri can move freely toward its target Pi: every time ri is activated, it can
progress by at most r/(k+1)(k+2). Since the distance dist(ri, Pi) is finite,
the bound k of the scheduler is finite, and the cycle of a robot is finite by
Assumption 2, ri can be within r of Pi in a finite number of steps.
– Robot ri cannot move freely toward its target Pi: first, assume that ri cannot
progress toward its target because of the restriction on RankSequence. There
exists at least one robot in RankSequence that can move freely toward its
target, and this can be done in finite time. In addition, the number of robots in
RankSequence is finite, and by Lemma 5, a robot can always move to a new location satisfying Assumption 3; thus, eventually each robot ri in RankSequence can
progress toward its target Pi, and arrive within r of it in a finite number of steps.
Now, assume that ri cannot progress toward its target Pi because it is blocked by
some crashed robots. By Lemma 5, a robot can always move to a new location
satisfying Assumption 3. Also, the number of crashed robots is finite, so eventually
robot ri can make progress, and be within r of its target in a finite number of steps,
by similar arguments.
We now show that correct robots maintain an approximation of the formation
while moving. Since all robots are restricted to move within one cycle by at most
r/(k+1)(k+2), in every new k activations in the system, each correct robot ri
cannot move farther than r away from its position. Consequently, ri
can always be within r of its target Pi as in Definition 4, and the lemma follows.

Theorem 6. Algorithm 4 allows correct robots to dynamically form an approximation
of a regular polygon, while avoiding collisions.
Proof (Theorem 6). First, by Theorem 3, there is a finite time after which all correct
robots agree on the same set of correct robots. Second, by Theorem 4, all correct robots
agree on the total order of their ranking RankSequence. Third, by Theorem 5, there is
no collision between any two robots in the system, including crashed ones. Finally, by
Lemma 9, all correct robots form an approximation of a regular polygon in finite time,
and the theorem holds.


Lemma 10. Algorithm 4 tolerates permanent crash failures of robots.
Proof (Lemma 10). By Theorem 1, a crash of a robot is detected in finite time, and by
Algorithm 4, a crashed robot is removed from the list of correct robots, although it remains physically present in the system. Finally, by Theorem 5, correct robots avoid collisions
with crashed robots. Thus, Algorithm 4 tolerates permanent crash failures of robots.

From Theorem 6 and Lemma 10, we infer the following theorem:
Theorem 7. Algorithm 4 is a fault-tolerant dynamic flocking algorithm that tolerates
permanent crash failures of robots.

7 Conclusion
In this paper, we have proposed a fault-tolerant flocking algorithm that allows a group
of asynchronous robots to self-organize dynamically and form an approximation of a
regular polygon, while maintaining this formation in movement. The algorithm relies
on the assumption that robot activations follow a k-bounded asynchronous scheduler,
and that robots have a limited memory of the past.
Our flocking algorithm allows correct robots to move in any direction, while keeping
an approximation of the polygon. Unlike previous works (e.g., [3,6]), our algorithm is
fault-tolerant, and tolerates permanent crash failures of robots. The only drawback of
our algorithm is that it does not permit the rotation of the polygon by the robots;
this is due to the restrictions made on the algorithm in order to ensure agreement
on the ranking by robots. The existence of such an algorithm is left as an open question
that we will investigate in our future work.
Finally, our work opens new interesting questions; for instance, it would be interesting
to investigate how to support flocking in a model in which robots may crash and recover.

Acknowledgments
This work is supported by the JSPS (Japan Society for the Promotion of Science) postdoctoral fellowship for foreign researchers (ID No. P08046).


References
1. Daigle, M.J., Koutsoukos, X.D., Biswas, G.: Distributed diagnosis in formations of mobile robots. IEEE Transactions on Robotics 23(2), 353–369 (2007)
2. Coble, J., Cook, D.: Fault tolerant coordination of robot teams, citeseer.ist.psu.edu/coble98fault.html
3. Gervasi, V., Prencipe, G.: Coordination without communication: the Case of the Flocking Problem. Discrete Applied Mathematics 143(1-3), 203–223 (2004)
4. Hayes, A.T., Dormiani-Tabatabaei, P.: Self-organized flocking with agent failure: Off-line optimization and demonstration with real robots. In: Proc. IEEE Intl. Conference on Robotics and Automation, vol. 4, pp. 3900–3905 (2002)
5. Saber, R.O., Murray, R.M.: Flocking with Obstacle Avoidance: Cooperation with Limited Communication in Mobile Networks. In: Proc. 42nd IEEE Conference on Decision and Control, pp. 2022–2028 (2003)
6. Canepa, D., Potop-Butucaru, M.G.: Stabilizing flocking via leader election in robot networks. In: Masuzawa, T., Tixeuil, S. (eds.) SSS 2007. LNCS, vol. 4838, pp. 52–66. Springer, Heidelberg (2007)
7. Defago, X., Gradinariu, M., Messika, S., Raipin-Parvedy, P.: Fault-tolerant and self-stabilizing mobile robots gathering. In: Dolev, S. (ed.) DISC 2006. LNCS, vol. 4167, pp. 46–60. Springer, Heidelberg (2006)
8. Prencipe, G.: CORDA: Distributed Coordination of a Set of Autonomous Mobile Robots. In: Proc. European Research Seminar on Advances in Distributed Systems, pp. 185–190 (2001)
9. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Pattern Formation by Autonomous Robots Without Chirality. In: Proc. 8th Intl. Colloquium on Structural Information and Communication Complexity (SIROCCO 2001), pp. 147–162 (2001)
10. Gervasi, V., Prencipe, G.: Flocking by A Set of Autonomous Mobile Robots. Technical Report, Dipartimento di Informatica, Università di Pisa, Italy, TR-01-24 (2001)
11. Reynolds, C.W.: Flocks, Herds, and Schools: A Distributed Behavioral Model. Journal of Computer Graphics 21(1), 79–98 (1987)
12. Brogan, D.C., Hodgins, J.K.: Group Behaviors for Systems with Significant Dynamics. Autonomous Robots Journal 4, 137–153 (1997)
13. John, T., Yuhai, T.: Flocks, Herds, and Schools: A Quantitative Theory of Flocking. Physical Review Journal 58(4), 4828–4858 (1998)
14. Yamaguchi, H., Beni, G.: Distributed Autonomous Formation Control of Mobile Robot Groups by Swarm-based Pattern Generation. In: Proc. 2nd Int. Symp. on Distributed Autonomous Robotic Systems (DARS 1996), pp. 141–155 (1996)
15. Dieudonne, Y., Petit, F.: A Scatter of Weak Robots. Technical Report, LARIA, CNRS, France, RR07-10 (2007)
16. Chandra, T.D., Toueg, S.: Unreliable Failure Detectors for Reliable Distributed Systems. Journal of the ACM 43(2), 225–267 (1996)
17. Schreiner, K.: NASA's JPL Nanorover Outposts Project Develops Colony of Solar-powered Nanorovers. IEEE DS Online 3(2) (2001)
18. Souissi, S., Yang, Y., Defago, X.: Fault-tolerant Flocking in a k-bounded Asynchronous System. Technical Report, JAIST, Japan, IS-RR-2008-004 (2008)
19. Konolige, K., Ortiz, C., Vincent, R., Agno, A., Eriksen, M., Limketkai, B., Lewis, M., Briesemeister, L., Ruspini, E., Fox, O., Stewart, J., Ko, B., Guibas, L.: CENTIBOTS: Large-Scale Robot Teams. Journal of Multi-Robot Systems: From Swarms to Intelligent Autonoma (2003)
20. Bellur, B.R., Lewis, M.G., Templin, F.L.: An Ad-hoc Network for Teams of Autonomous Vehicles. In: Proc. 1st IEEE Symp. on Autonomous Intelligent Networks and Systems (2002)
21. Jennings, J.S., Whelan, G., Evans, W.F.: Cooperative Search and Rescue with a Team of Mobile Robots. In: Proc. 8th Intl. Conference on Advanced Robotics, pp. 193–200 (1997)

Bounds for Deterministic Reliable Geocast in Mobile Ad-Hoc Networks
Antonio Fernandez Anta and Alessia Milani
LADyR, GSyC, Universidad Rey Juan Carlos, Spain

Abstract. In this paper we study the impact of the speed of movement of nodes
on the solvability of deterministic reliable geocast in mobile ad-hoc networks,
where nodes move in a continuous manner with bounded maximum speed. Nodes
do not know their position, nor the speed or direction of their movements. Nodes
communicate over a radio network, so links may appear and disappear as nodes
move in and out of the transmission range of each other. We assume that it takes a
given time T for a single-hop communication to reliably complete. The mobility
of nodes may be an obstacle for deterministic reliable communication, because
the speed of movement may affect how quickly the communication topology
changes.
Assuming the two-dimensional mobility model, the paper presents two tight
bounds for the solvability of deterministic geocast. First, we prove that a maximum speed vmax < δ/T is a necessary and sufficient condition to solve the geocast, where δ is a parameter that together with the maximum speed captures the
local stability in the communication topology. We also prove that Ω(nT) is a time
complexity lower bound for a geocast algorithm to ensure deterministic reliable
delivery, and we provide a distributed solution which is asymptotically optimal
in time.
Finally, assuming the one-dimensional mobility model, i.e. nodes moving on
a line, we provide a lower bound on the speed of movement necessary to solve
the geocast problem, and we give a distributed solution. The algorithm proposed
is more efficient in terms of time and message complexity than the algorithm for
two dimensions.
Keywords: Mobile ad-hoc network, geocast, speed of movement towards solvability, distributed algorithms.

1 Introduction
A mobile ad-hoc network (MANET) is a set of mobile wireless nodes which dynamically build a network, without relying on a stable infrastructure. Direct communication
links are created between pairs of nodes as they come into the transmission range of
each other. If two nodes are too far apart to establish a direct wireless link, other nodes
act as relays to route messages between them. This self-organizing nature of mobile


This research was supported in part by Comunidad de Madrid grant S-0505/TIC/0285; Spanish MEC grants TIN2005-09198-C02-01 and TIN2008-06735-C02-01. The work of Alessia Milani is funded by a Juan de la Cierva contract.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 164–183, 2008.
© Springer-Verlag Berlin Heidelberg 2008



ad-hoc networks makes them specially interesting for scenarios where networks have
to be built on the fly, e.g., under emergency situations, in military operations, or in environmental data collection and dissemination.
A fundamental communication primitive in certain mobile ad-hoc networks is geocasting [14]. This is an operation initiated by a node in the system, called the source,
that disseminates some information to all the nodes in a given geographical area, named
the geocast region. In this sense, the geocast primitive is a variant of multicasting, where
nodes are eligible to deliver the information according to their geographical location.
While geocasting in two dimensions is clearly useful, geocasting in one dimension is
also a natural operation in some real situations, like announcing an accident to the
nearby vehicles on a highway. In mobile ad-hoc environments, geocasting is also a basic building block to provide more complex services. As an example, Dolev et al. [5] use
a deterministic reliable geocast service to implement atomic memory in mobile ad-hoc
networks. A geocast service is deterministic if it ensures deterministic reliable delivery,
i.e. all the nodes eligible to deliver the information will surely deliver it.
Designing a geocast primitive in a mobile ad-hoc network forces one to deal with the
uncertainty due to the dynamics of the network. Since communication links appear and
disappear as nodes move in and out of the transmission range of other nodes, there is
a (potential) continuous change of the communication topology. In other words, the
movement of nodes and their speed of movement usually impact the lifetime of
radio links. Then, since it takes at least Ω(log n) time to ensure a one-hop successful
transmission in a network with n nodes [6], mobility may be an obstacle for deterministic reliable communication.
Our contributions. In this paper we study the impact of the maximum speed of movement of nodes on the solvability of deterministic geocast in mobile ad-hoc networks.
In our model we assume that a node does not know its position (nodes have no GPS or
similar device), and that it knows neither the speed nor the direction of its movement.
Additionally, we assume that it takes a given time T for a single-hop communication to
succeed. As far as we know, [1] is the only theoretical work that deals with the geocast
problem in such a model.
Our results improve and generalize the bounds presented in [1] and, to the best of our
knowledge, present the first deterministic reliable geocast solution for two dimensions,
i.e. where nodes move in a continuous manner in the plane. In particular, we give bounds
on the maximum speed of nodes in order to be able to solve the deterministic reliable
geocast problem in one and two dimensions. While the bounds provided in [1] are for a
special class of algorithms, our bounds apply to all geocasting algorithms and are tighter.
Then, we present a distributed solution for the two-dimensional mobility model, which
is proved to be asymptotically optimal in terms of time complexity. Let n be the number
of nodes in the system; it takes 3nT time for our solution to ensure that the geocast information is reliably delivered by all the eligible nodes. Additionally, we prove that Ω(nT)
is a time complexity lower bound for a geocast algorithm to ensure deterministic reliable
delivery. Finally, we provide a distributed geocast algorithm for the one-dimensional case
(i.e. nodes move on a line) and upper bound its message and time complexity. This algorithm is more efficient in terms of message complexity than the algorithms proposed
in [1], and (not surprisingly) than the algorithm for two dimensions.


Related work. Initially introduced by Imielinski and Navas [14] for the Internet, the
geocast problem was then proposed for mobile ad-hoc networks by Ko and Vaidya
[7]. The majority of geocast algorithms presented in the literature for mobile ad-hoc
networks provide probabilistic guarantees, e.g. [8, 12, 9]. See the review of Jiang and
Camp [4] for an overview of the main existing geocast algorithms. As mentioned above,
Baldoni et al. [1] provide a deterministic solution for the case where nodes move on a
line.
Other deterministic solutions for multicast and broadcast in MANETs have been proposed, but their correctness relies on strong synchronization or stability assumptions. In
particular, Mohsin et al. [13] present a deterministic solution to solve broadcast in one-dimensional mobile ad-hoc networks. They assume that nodes move on a linear grid,
that nodes know their current position on the grid, and that communication happens in
synchronous rounds. Gupta and Srimani [10], and Pagani and Rossi [16] provide two
deterministic multicast solutions for MANET, but they require the network topology to
globally stabilize for long enough periods to ensure delivery. Moreover, they assume a
fixed and finite number of nodes arranged in some logical or physical structure.
Few bounds on deterministic communication in MANETs have been provided [15,
2]. We prove that the time complexity lower bound to complete a geocast in the plane
is Ω(nT). Interestingly, Prakash et al. [15] provide a lower bound of Ω(n) rounds for
the completion time of broadcast in mobile ad hoc networks, where n is the number of
nodes in the network. As the authors point out, they consider grid-based networks, but
a lower bound proved for this restricted grid mobility model automatically applies to
more general mobility models. This latter result improves the Ω(D log n) bound provided by Bruschi and Pinto [2], where D is the diameter of the network. These results
unveil the fact that, when nodes may move, the dominating factor in the complexity of
an algorithm is the number of nodes in the network and not its diameter.
Road map. In Section 2 we present the model of mobile ad-hoc network we consider,
and in Section 3 we formalize the problem. Then, in Section 4 we present the results for
two dimensions, and in Section 5 the results for one dimension. Finally, our conclusions
are presented in Section 6.

2 A Mobile Ad-Hoc Network Model


We consider a finite set of n (mobile) nodes which move in a continuous manner
on a plane (2-dimensional Euclidean space) with bounded maximum speed vmax. The
nodes do not have access to a global clock, but their local clocks run at the same
rate. Additionally, no node fails.
Nodes communicate by exchanging messages over a wireless radio network. All the
nodes have the same transmission radius r. At any time, each node is a neighbor of,
and can communicate with, all the nodes that are within its transmission range at that
time, i.e., the nodes that are completely inside a disk, centered at the node's position, of
radius r [3]. Formally, let distance(p, q, t) denote the distance between p and q at time
t; we say that p and q are neighbors at time t if distance(p, q, t) < r. Nodes do not
know their position, speed, nor direction of movement.
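The neighborhood predicate above can be stated operationally. The following Python fragment is a minimal illustrative sketch; the coordinate representation and the radius value are example assumptions, not part of the model.

```python
import math

def distance(p, q):
    """Euclidean distance between two positions given as (x, y) pairs."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def are_neighbors(pos_p, pos_q, r):
    """p and q are neighbors at a given time iff their distance at that
    time is strictly below the transmission radius r."""
    return distance(pos_p, pos_q) < r

# Illustrative check with r = 1.0: nodes 0.8 apart are neighbors,
# while nodes exactly at distance r are not (strict inequality).
assert are_neighbors((0.0, 0.0), (0.8, 0.0), 1.0)
assert not are_neighbors((0.0, 0.0), (1.0, 0.0), 1.0)
```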

Bounds for Deterministic Reliable Geocast in Mobile Ad-Hoc Networks

167

Local broadcast primitive. To directly communicate with their neighbors, nodes are
provided with a local reliable broadcast primitive. Communication is not instantaneous;
it takes some time for a broadcast message to be received. To simplify the presentation
we consider as time unit the time slot, i.e., the time the communication of a message
takes when accessing the underlying wireless network communication channel without
collision. Additionally, we assume that local computation at the nodes takes negligible
time (zero time for the purpose of the analyses). Since collisions are an intrinsic characteristic of MANETs, they have to be considered. We assume that the potential collisions
due to concurrent broadcasts by neighbors are dealt with by a lower-level communication
layer, and that this layer takes T units of time to (reliably and deterministically) deliver
a message to its destination. The value of T could be related to the size of the system
and depends on the complexity of the lower-level communication protocol. As already
stated, [6] shows that it takes at least Ω(log n) time to ensure a one-hop successful
transmission in a network with n nodes.
If a node p invokes broadcast(m) at time t, then all nodes that remain neighbors of p throughout [t, t + T] receive m by time t + T, for some fixed known integer
T > 0. A node that receives a message m generates a receive(m) event. It is possible
that some node that has been a neighbor of p at some time in [t, t + T] (but not during
the whole period) also receives m, but there is no such guarantee. However, no node
receives m after time t + T. A node issues a new invocation of the broadcast primitive
only after it has completed the previous one (T time later). Hence, in each time interval
of length T a node broadcasts at most one message.
Connectivity. Baldoni et al. [1] have proved that traditional connectivity is too weak
to implement a deterministic geocasting primitive in the model described. To overcome
this impossibility result they have introduced the notion of strong connectivity, and assumed it in their model. Like them, we also assume strong connectivity in our model.¹
Let us remark that strong connectivity is only one possible way to overcome the above
impossibility. A different approach could be to constrain the mobility pattern of nodes
(e.g. [13]) or to assume the global communication topology to be stable long enough
to ensure reliable delivery (e.g. [10]). On the other hand, strong connectivity is a local
property which helps to formalize the local stability in the communication topology
necessary to solve the problem. In the following, we recall the notions of strong neighborhood and strong connectivity.

Definition 1 (Strong neighborhood). Let δ2 = r and δ1 be fixed positive real numbers
such that δ1 < δ2. Two nodes p and p′ are strong neighbors at some time t if there is
a time t′ ≤ t such that distance(p, p′, t′) ≤ δ1, and distance(p, p′, t″) < δ2 for all
t″ ∈ [t′, t].

Assumption 1 (Strong Connectivity). For every pair of nodes p and p′, and every
time t, there is at least one path of strong neighbors connecting p and p′ at t.

When convenient, we may say that a pair of (strong) neighbors have a (strong) connection, or are (strongly) connected. Observe that once two nodes p and p′ become strong
neighbors (i.e., they are at distance δ1 from each other), to get disconnected they must
move away from each other so that their distance is at least δ2. This means that the total
distance to be covered in order for p and p′ to disconnect is δ2 − δ1. We use the notation
δ = (δ2 − δ1)/2, where δ denotes the minimum distance that any two nodes that just became
strong neighbors have to travel to stop being neighbors when moving in opposite directions. Thus, for a clearer presentation of our results, we express the maximum speed of
nodes, denoted vmax, as the ratio between δ and the time necessary to travel this distance,
denoted T′. Formally,

Assumption 2 (Movement Speed). It takes at least T′ > 0 time for a node to travel
distance δ = (δ2 − δ1)/2, i.e. vmax = (δ2 − δ1)/(2T′).

¹ In this paper we do not consider the Delay/Disruption Tolerant Networks model.
Since nodes move, the topology of the network may continuously change. In this sense,
assuming both strong connectivity and an upper bound on the maximum speed of nodes
provides some topological stability in the network. In particular, it ensures that there are
periods in which the neighborhood of a node remains stable. Formally,

Lemma 1. If two nodes become strong neighbors at time t, then they are neighbors
throughout the interval (t − T′, t + T′) and remain strong neighbors throughout the
interval [t, t + T′).

Proof. If p and p′ become strong neighbors at time t, then distance(p, p′, t) = δ1. To
be disconnected, they must move away from each other a distance of at least 2δ, so
that their distance is at least δ2. From Assumption 2, this takes at least T′ time. Hence,
for τ ∈ (t − T′, t + T′), distance(p, p′, τ) < δ1 + 2δ = δ2, which combined with
Definition 1 proves the claims.
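The arithmetic behind Assumption 2 and Lemma 1 can be checked numerically. The following Python sketch uses illustrative parameter values (δ2 = 10, δ1 = 6, T′ = 2 are arbitrary assumptions, not values from the paper):

```python
# Illustrative parameters (example values only): delta2 = r, delta1 < delta2.
delta2, delta1, T_prime = 10.0, 6.0, 2.0
delta = (delta2 - delta1) / 2                # distance to break a fresh strong link
v_max = (delta2 - delta1) / (2 * T_prime)    # = delta / T_prime (Assumption 2)

# Two nodes that just became strong neighbors (distance delta1) and move
# apart at full speed: their distance after elapsed time e is delta1 + 2*v_max*e.
def dist_after(e):
    return delta1 + 2 * v_max * e

# Strictly before elapsed time T' they are still neighbors (distance < delta2) ...
assert dist_after(0.99 * T_prime) < delta2
# ... and only after elapsed time exactly T' can the distance reach delta2.
assert abs(dist_after(T_prime) - delta2) < 1e-9
```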

3 The Geocast Problem


The geocast is a variant of the conventional multicast problem, where nodes are eligible to deliver the information if they are located within a specified geographic region.
The geocast region we consider is the circular area centered at the location where the
source starts the geocasting and whose radius is some given value d. We assume d to be
provided as input by the user of the geocast primitive.
The geocast problem is solved by a geocast algorithm run on the mobile nodes, which
implements the following geocast primitives: Geocast(I, d) to geocast information I at
distance d, and Deliver(I) to deliver information I (previously geocast). The geocast algorithm uses the broadcast(m) and receive(m) primitives to achieve communication among neighbors. The geocast information I is initially known by exactly one
node, the source. If the source invokes Geocast(I, d) at time t, being at location l, then
there are three properties that must be satisfied by the geocast service.

Property 1. [Reliable Delivery] There is a positive integer C such that, by time t + C,
information I is delivered (with Deliver(I)) at all nodes that are located at most at
distance d away from l throughout [t, t + C].

The following properties rule out solutions which waste resources causing continuous
communication or distribution of the information I to the whole Euclidean space.

Property 2. [Termination] If no other node issues a call to the geocast service, then
there is a positive integer C′ such that after time t + C′, no node performs any communication (i.e. a local broadcast) triggered by a geocast.

Property 3. [Integrity] There is a d′ ≥ d such that, if a node has never been within
distance d′ from l, it never delivers I.

Observe that these properties are deterministic. This justifies the use of a deterministic
reliable local broadcast primitive and the fact that we require nodes to be within range less
than δ2 during T steps to complete a successful communication.

4 Solving the Geocast Problem in Two Dimensions


In this section we first show that in two dimensions T′ must be larger than T for the
geocast problem to be solvable. Then, we present an algorithm that solves the problem if this condition is met, whose time complexity is O(nT). Finally we prove that
this complexity is optimal, since any algorithm has executions in which Ω(nT) time is
required to complete the geocast.

Theorem 3. No algorithm can solve the geocast problem in two dimensions if vmax ≥
(δ2 − δ1)/(2T), i.e. if T′ ≤ T.

Proof. Consider a Geocast(I, d) primitive invoked at some time t by a source s, with
d ≥ 2δ2. To prove the claim we construct a scenario with 6 nodes, and an execution in
it, such that the geocast region contains several nodes permanently, but only s delivers
I. Since the reliable delivery property is not satisfied, this proves the claim.
In our scenario, there are 6 nodes: the source s, and nodes p, q, x1, x2, and x3. At the
time t of the geocast, we assume that the system is in the state shown in Figure 1.(a).
This state can be reached from an initial situation in which the nodes q, x1, x2, x3, p,
and s are placed (in this order) on a line, at distance δ1 each one from the next, and
move without breaking the strong connectivity, to the state of Figure 1.(a). Observe that
all nodes are strongly connected along the path q, x1, x2, x3, p, s, but that the source is
not a neighbor of x1, x2, or x3. Additionally, both p and q are in the geocast
region (since d ≥ 2δ2). They will be in the region during the whole execution and hence
to satisfy reliable delivery they should deliver I.

Fig. 1. Scenario for the proof of Theorem 3: (a) state at time t and t0; (b) state at time t′0 = t0 + T′.

We consider several possible behaviors of the geocast algorithm. Let us first assume
that, although it invoked Geocast(I, d), s never makes a call to broadcast(I).
Then, p and q will never receive nor deliver I, and reliable delivery is violated. Otherwise, assume that as a consequence of the Geocast(I, d) invocation, s invokes
broadcast(I) at times t0, t1, ..., with ti+1 ≥ ti + T. Let us define first the behavior
of the nodes in interval [t0, t1]. At time t0, the source s and node q start moving towards
each other at the maximum speed vmax, while nodes p, x1, x2, and x3 start moving in
the same direction as q. At time t′0 = t0 + T′ ≤ t1 all nodes have travelled a distance
of δ (by definition of T′) and the system is in the state depicted in Figure 1.(b). In the
interval [t′0, t1] no node moves.
Observe that strong connectivity has been preserved during the whole period [t0, t′0],
since the distances along the path q, x1, x2, x3, p did not change, and the source is a
strong neighbor of p for all the period [t0, t′0) and at time t′0 it becomes a strong neighbor
of q. Note also that neither p nor q have been neighbors of s during the whole period
[t0, t0 + T], because q is not a neighbor at time t0 and p is not a neighbor at time
t′0 ≤ t0 + T. Hence, in our execution no node delivers I in [t0, t1].
The behavior in interval [t1, t2] is the same as described for [t0, t1], but swapping the
directions of movement and the roles of p and q. The initial state at time t1 is the one
shown in Figure 1.(b), and the final state reached at time t′1 = t1 + T′ is the one shown in
Figure 1.(a). Again, I is not delivered at p nor q because they have not been neighbors
of s in the whole period [t1, t1 + T]. For any interval [ti, ti+1] the behavior is the same
as in interval [t0, t1] if i is even, and the same as in interval [t1, t2] if i is odd. Then, in
this scenario only s delivers I and the reliable delivery property is not satisfied.
The above theorem shows that T′ > T is necessary to be able to solve the geocast
problem. We show now that this bound is tight by presenting an algorithm that always
solves the problem if T′ > T. The algorithm belongs to the class of algorithms presented in Figure 2, which has a configuration parameter M, the bound on time that
the algorithm uses to stop the geocast. The algorithm M-Geocast(I, d) works as follows. When the source node invokes a call Geocast(I, d), it immediately delivers the
information I (Line 8). Then, it broadcasts I and stores in a local variable TLB_I the time
this first transmission happened (Lines 10-11), in order to retransmit every T units of
time (Lines 13-14). When a node p receives for the first time a message with information I, it immediately delivers it and starts rebroadcasting the information periodically
(Lines 2-6). With the information I the algorithm broadcasts a value count_I, which
contains an estimate of the time that has passed since the geocast started. This value
combined with the parameter M is used to terminate the algorithm.
We show now that the algorithm M-Geocast(I, d) solves the geocast problem in two
dimensions for an appropriate value of M. Let us denote by S the set of nodes that have
already delivered the information I, and S(t) the set S at time t. Let us denote by ti the
time at which the set S increases from size i to i + 1. Note that t0 is the time the geocast
starts.

Init
(1)  TLB_I ← ⊥

upon event receive(I, c)
(2)  if (TLB_I = ⊥) then
(3)    trigger Deliver(I);
(4)    count_I ← c + 1;
(5)    trigger broadcast(I, count_I)
(6)    TLB_I ← clock
(7)  end if

Procedure M-Geocast(I, d)
(8)  trigger Deliver(I);
(9)  count_I ← 0;
(10) trigger broadcast(I, count_I);
(11) TLB_I ← clock

when (clock = TLB_I + T) and (count_I < M)
(12) count_I ← count_I + T;
(13) trigger broadcast(I, count_I)
(14) TLB_I ← TLB_I + T

Fig. 2. The code of the M-Geocast(I, d) algorithm class
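The pseudocode of Fig. 2 can be transcribed into an executable sketch. The following Python rendering is only illustrative: the `MGeocastNode` class, the injected `send` callback, and the explicit `clock` arguments are assumptions of this sketch, standing in for the event-driven model and the reliable local broadcast layer.

```python
class MGeocastNode:
    """Illustrative sketch of one node running the M-Geocast(I, d) class of
    Fig. 2. `send` stands in for the reliable local broadcast layer."""

    def __init__(self, T, M, send):
        self.T, self.M, self.send = T, M, send
        self.tlb = None          # TLB_I; None plays the role of "bottom"
        self.count = 0           # count_I: estimate of elapsed time
        self.info = None
        self.delivered = []

    def geocast(self, I, clock):         # Procedure M-Geocast, lines 8-11
        self.info = I
        self.delivered.append(I)
        self.count = 0
        self.send((I, self.count))
        self.tlb = clock

    def on_receive(self, I, c, clock):   # upon event receive(I, c), lines 2-7
        if self.tlb is None:             # first reception of I only
            self.info = I
            self.delivered.append(I)
            self.count = c + 1
            self.send((I, self.count))
            self.tlb = clock

    def on_tick(self, clock):            # when clock = TLB_I + T and count_I < M
        if self.tlb is not None and clock == self.tlb + self.T and self.count < self.M:
            self.count += self.T                  # line 12
            self.send((self.info, self.count))    # line 13
            self.tlb += self.T                    # line 14

# A source with T = 3 and M = 9 broadcasts at times 0, 3, 6, 9, incrementing
# its counter by T each time, and stops once count_I reaches M.
sent = []
src = MGeocastNode(T=3, M=9, send=sent.append)
src.geocast("I", clock=0)
for t in range(1, 13):
    src.on_tick(clock=t)
assert sent == [("I", 0), ("I", 3), ("I", 6), ("I", 9)]
```

Note how termination is purely local: a node keeps rebroadcasting only while its counter estimate stays below the configuration parameter M.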

Lemma 2. If T′ > T and count_I < M at all nodes during the time interval [t0, tn−1],
then ti+1 − ti ≤ 3T for every i ∈ {0, . . . , n − 2}.

Proof. Since strong connectivity holds, at any time there must be chains of strong
neighbors connecting any two nodes in the system. In particular, at every time t0 <
t < tn−1 (i.e., such that some node is not in S(t)) there must exist at least a pair of
neighbors q and p such that q ∈ S(t) and p ∉ S(t). Let C(t) denote the set of all such pairs.
Let us fix an i ∈ {0, . . . , n − 2}, and assume, for contradiction, that ti+1 − ti > 3T.
Consider the case when there is some pair (q, p) ∈ C(ti) that belongs to C(t′) for
all t′ ∈ [ti, ti + 2T]. In other words, this pair is formed by a node q that has I at ti,
and a node p that does not, neighbors for at least 2T time. By the M-Geocast(I, d)
algorithm and the fact that count_I < M during the time interval [t0, tn−1], a node
having the information I will rebroadcast it once every T time. Hence q will rebroadcast
the information I at some time t ∈ [ti, ti + T], and thus p will receive and deliver it by
time t + T ≤ ti + 2T.
Otherwise, all the connections in C(ti), i ∈ {0, . . . , n − 2}, have been broken by
some time t′ ∈ (ti, ti + 2T]. But, for strong connectivity to hold, a strong connection
has to exist between some node q ∈ S(t′) and a node p ∉ S(t′), since otherwise these
subsets are disconnected at time t′. Let t″, ti < t″ ≤ t′, be the time at which q and
p become strong neighbors, i.e. they are within distance δ1 from each other. The claim
follows if q ∉ S(ti), since q ∈ S(t′) and ti < t′ ≤ ti + 2T. Otherwise, note that
by Lemma 1 and the fact that T′ > T, q and p are neighbors throughout all the period
[t″ − T, t″ + T]. Then, since q ∈ S(ti) and ti < t″, q will broadcast I once in the period
[t″ − T, t″], and p will deliver I by time t″ + T. Given that t″ ≤ ti + 2T, p
will deliver the information I by time ti + 3T and the claim holds.
Let us now relate the value of count_I at each node with the time that
has passed since M-Geocast(I, d) was invoked. Let count_I(q, t) be the value of the
variable count_I of node q at time t. Let us define a propagation sequence as the sequence of nodes s = p0, p1, p2, ..., pk = q such that the first message received by pi
with information I was sent by pi−1. Node s = p0 is the source of the geocast.

Lemma 3. Let t0 be the time at which M-Geocast(I, d) is invoked at source s. Given
a node q with propagation sequence s = p0, p1, p2, ..., pk = q and a time t ≥ t0 at
which q has delivered I, with count_I(q, t) ≤ M, it is satisfied that ((t − t0) −
count_I(q, t)) ∈ [0, k(T − 1) + T].
Proof. We prove by induction on k that at time t ≥ t0 it is satisfied that ((t − t0) −
count_I(pk, t)) ∈ [0, k(T − 1) + T], and that if a message is sent at time t it carries a
counter c(pk, t) such that ((t − t0) − c(pk, t)) ∈ [0, k(T − 1)]. The base case is the
source node s = p0. At time t0 the source sets count_I(s, t0) = 0 (Line 9), and then,
as long as count_I < M, it increments count_I by T every T time (Line 12). Hence, at
time t = t0 + τ we have ((t − t0) − count_I(s, t)) = 0 if τ is a multiple of T, and
((t − t0) − count_I(s, t)) > 0 otherwise. Furthermore, the difference (t − t0) − count_I
is always smaller than T. Since messages are broadcast at times t = t0 + τ with τ
a multiple of T, the values c(s, t) carried by the messages sent by the source satisfy
((t − t0) − c(s, t)) = 0.
Let us assume now by induction that, if pi−1 broadcasts a message at time t ≥ t0,
this carries a value c(pi−1, t) such that ((t − t0) − c(pi−1, t)) ∈ [0, (i − 1)(T − 1)]. If pi
receives I for the first time at t′ and the corresponding message was sent by pi−1 at time
t, pi sets count_I(pi, t′) = c(pi−1, t) + 1 (Line 4). This message took between 1 and T
time units to be received at time t′ = t + τ. Hence, τ ∈ [1, T]. Considering one extreme
case, if ((t − t0) − c(pi−1, t)) = 0 and τ = 1, then ((t′ − t0) − count_I(pi, t′)) = 0. In
the other extreme, if ((t − t0) − c(pi−1, t)) = (i − 1)(T − 1) and τ = T, then ((t′ −
t0) − count_I(pi, t′)) = i(T − 1). Therefore, ((t′ − t0) − count_I(pi, t′)) ∈ [0, i(T − 1)].
Like the source, pi increments count_I by T every T time as long as count_I < M (Line
12). Hence, at any time t″ = t′ + τ′ we have ((t″ − t0) − count_I(pi, t″)) ∈ [0, i(T − 1)]
if τ′ is a multiple of T. Otherwise, this difference increases by up to T, and hence
((t″ − t0) − count_I(pi, t″)) ∈ [0, i(T − 1) + T]. Since messages are broadcast by pi at
times t″ = t′ + τ′ with τ′ a multiple of T, the values c(pi, t″) carried by the messages
sent by pi satisfy ((t″ − t0) − c(pi, t″)) ∈ [0, i(T − 1)].
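The counter bookkeeping of this induction can be checked mechanically. The following Python simulation is a sketch under simplifying assumptions: each node immediately rebroadcasts upon first reception (Line 5), so along a propagation sequence the counter carried by each first-reception message satisfies the send-time invariant 0 ≤ (t − t0) − c ≤ i(T − 1); the delay values are arbitrary choices in [1, T].

```python
# Simulate count_I along a propagation sequence p0, ..., pk and check the
# send-time invariant of Lemma 3 at every hop.
T = 4
t0 = 0

def propagate(delays):
    """delays[i-1] in [1, T] is the delay of the first message p_i receives.
    Returns (reception time, count_I) at the last node of the sequence."""
    t, count = t0, 0                 # source sets count_I = 0 at t0 (Line 9)
    for i, d in enumerate(delays, start=1):
        t = t + d                    # message sent at t arrives d time later
        count = count + 1            # receiver sets count_I = c + 1 (Line 4)
        drift = (t - t0) - count
        assert 0 <= drift <= i * (T - 1)
    return t, count

propagate([1, 1, 1])                 # fastest messages: drift stays 0
propagate([T, T, T])                 # slowest messages: drift reaches k*(T - 1)
```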
This lemma can be used to prove the following theorem, which shows that the geocast
problem can be solved in two dimensions as long as T′ > T.

Theorem 4. If T′ > T, the M-Geocast(I, d) algorithm with M = 3T(n − 1) ensures
(1) the Reliable Delivery Property 1 for C = 3T(n − 1),
(2) the Termination Property 2 for C′ = 3T(n − 1) + (n − 1)(T − 1) + T, and
(3) the Integrity Property 3 for d′ = max(d, 3T(n − 1)vmax + (n − 1)δ2).

Proof. The first part of the claim is a direct consequence of Lemma 2, which proves
that at most 3T(n − 1) ≥ tn−1 − t0 time after Geocast(I, d) is invoked, all nodes have
delivered the information I. The second part of the claim follows from Lemma 3, using
the fact that no propagation sequence has more than n nodes (hence taking k = n − 1),
combined with the first part of the claim. The third claim is also a direct consequence of
Lemma 2, since the information can be carried by nodes at most distance 3T(n − 1)vmax
in time 3T(n − 1) from the initial location of the source, and travels less than (n − 1)δ2
in the n − 1 broadcasts that inform new nodes.
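For concreteness, the constants of Theorem 4 can be instantiated. The following quick Python computation uses illustrative values of n, T, vmax, δ2 and d (example assumptions, not values from the paper):

```python
# Illustrative instantiation of the constants of Theorem 4.
n, T, v_max, delta2, d = 10, 4, 0.5, 10.0, 50.0

M = 3 * T * (n - 1)                                  # stopping bound
C = 3 * T * (n - 1)                                  # reliable-delivery bound
C_prime = 3 * T * (n - 1) + (n - 1) * (T - 1) + T    # termination bound
d_prime = max(d, 3 * T * (n - 1) * v_max + (n - 1) * delta2)  # integrity radius

assert (M, C) == (108, 108)
assert C_prime == 139
assert d_prime == 144.0      # max(50, 54 + 90): the integrity radius exceeds d
```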
Finally we show that the time bound found for the M-Geocast(I, d) algorithm with
M = 3T(n − 1) is in fact asymptotically optimal, since there are cases in which any
geocast algorithm requires Ω(nT) time to complete.

Fig. 3. Scenario for the proof of Theorem 5: (a) status at time t0 = ti; (b) status at time t1 = ti+1.

Theorem 5. Any deterministic Geocast(I, d) algorithm that ensures the Reliable Delivery Property in two dimensions requires time Ω(nT) to complete, for each d ≥ 3√2 δ2.

Proof. We present a scenario (shown in Figure 3) in which, for any Geocast(I, d) call
with d ≥ 3√2 δ2, all nodes that are in the geocast region require time (n − 1)T to deliver I.
Consider the scenario depicted in Figure 3.(a) where Geocast(I, d) is invoked at time
t0 at the source node s while in location l = (xl, yl) and with only one neighbor at
distance δ2 − ε. The rest of the nodes except p form a chain in which each node is a neighbor
of its predecessor and successor in the chain. The chain forms a snake shape. Node p
is a node that is within distance d of l, at the same level (coordinate y) as the last node
in the chain (r in Figure 3.(b)), and connected with some node in the chain as shown.
Specifically, p is located at some position (xp, yp) where xp is equal to xl + 2√2 δ2 − ε
with ε ≪ δ2, and yp is the same as the coordinate y of node r. Assume that nodes reach
this configuration having previously been at distance δ1 from each other. Thus at time t0
strong connectivity holds. We usually refer to the location of a node considering only
the x coordinate, because it is the one of interest.
The number of nodes between any pair of nodes qi and qi+1 as depicted in Figure 3 is
fixed to k = ⌈δ2/(T vmax)⌉. In this execution, at time t0 all the nodes, except node p, start to
move towards the left at the same speed v = δ2/(T(k + 1)) ≤ vmax. These values are chosen
so that, in the execution we construct, qi is at position xl (see Figure 3.(b)) at time
t0 + iT(k + 1). When the last node of the chain is at a distance δ2 − ε to the left of p, the
latter also starts moving to the left at the same speed. In our execution, we assume that
a node that receives the information I immediately rebroadcasts it. In any other case the
execution can be easily adapted by stopping the movement while I is not rebroadcast.
Then, in the execution, s broadcasts I at time t0. We assume that each node in the chain
receives I from its predecessor in T units of time. Then, qi first receives I at time
t0 + iT(k + 1). Since at that time qi is at xl, no progress has been made to the right.
Then, the geocast problem will be solved when all nodes in the chain have received I,
and p received I from the last node in the chain. Since this implies n − 1 transmissions
and each takes T time, the total time to provide reliable delivery is (n − 1)T. Finally,
for all nodes except p strong connectivity holds throughout all the execution because
they do not change their neighbourhood. At time t0 strong connectivity holds for node
p because it is at distance δ2 − ε from node a0 on its right. It is easy to see that strong
connectivity holds throughout all the execution, since due to the movement pattern and
speed, p will stop being a strong neighbour of node ai only after it has already become a strong
neighbour of node ai+1, for i ≥ 0 (see Figure 3.(a)). The claim holds because p is at all
times at distance less than 3√2 δ2 from (xl, yl).
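The choice of k and v in this construction can be sanity-checked numerically. The following Python sketch uses illustrative parameter values (δ2, T, vmax are example assumptions):

```python
import math

# Illustrative parameters for the Theorem 5 construction (example values only).
delta2, T, v_max = 10.0, 4.0, 0.6

k = math.ceil(delta2 / (T * v_max))    # nodes between q_i and q_{i+1}
v = delta2 / (T * (k + 1))             # common speed of the moving chain

# The construction needs v <= v_max, and q_i must cover distance i*delta2
# in time i*T*(k+1), so that it sits exactly at x_l when it first receives I.
assert v <= v_max
assert abs(v * T * (k + 1) - delta2) < 1e-9
```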
The bound of the above theorem depends on n. If n is finite this bound is finite. However, in a system with potentially infinitely many nodes, the geocast problem may never be
solved.

Corollary 1. No deterministic Geocast(I, d) algorithm can ensure the reliable delivery property in a system with infinitely many nodes, for d ≥ 3√2 δ2.

5 Solving the Geocast Problem in One Dimension


In this section we explore the geocast problem when all nodes move along the same line.
We show first that, in order for the problem to be solvable, it is necessary that T′ > T/2.
Then, we present an algorithm that solves the problem efficiently if T′ > T. These
two results leave a gap (T/2, T] of values of T′, and hence an interval of maximum
movement speeds vmax, in which it is not known whether the problem can be solved.

Theorem 6. No algorithm can solve the geocast problem in one dimension if vmax ≥
(δ2 − δ1)/T, i.e. if T′ ≤ T/2.
Proof. Consider a Geocast(I, d) primitive invoked at some time t by a source s, with
d ≥ 2δ2. We prove the claim by presenting a scenario in which, independently of the
algorithm used, no node except the source delivers information I, while there are other
nodes in the geocast region permanently. This violates the reliable delivery property
and hence the geocast problem is not solved.
In our scenario there are three nodes, the source s and nodes p and q, that are permanently in the geocast region. Initially, node s is at a position l, from which it will never
move. Node p is at position l1 = l + δ1 (at distance δ1 from s), and q is at distance
δ1 from p and at distance 2δ1 from s. Then, q moves to reach the state spq depicted
in Figure 4, which has the following properties: all nodes are located on a single line;
the leftmost node is the source s located at position l; node p is located at position
l1 at distance δ1 from l; and node q is located at position l2 at distance δ2 from l1.
Observe that from the initial configuration up to state spq strong connectivity holds, and
that nodes p and q are always within distance δ1 + δ2 ≤ d of l.
If s never broadcasts I then neither p nor q delivers it, and reliable delivery is violated.
Otherwise, assume that as a consequence of the Geocast(I, d) invocation, s invokes
broadcast(I) at times t0, t1, ..., with ti+1 ≥ ti + T. Let us define first the behavior of
the nodes in interval [t0, t1]. At time t0 nodes p and q start moving at their maximum
speed vmax to exchange their positions. Then, at time t′0 = t0 + 2T′, p is located at
l2 and q is located at l1, reaching state sqp. They do not move from that state until t1.

Fig. 4. Scenario for the proof of Theorem 6: state spq at time ti; state at some time in (ti, ti+1); state sqp at time ti+1.

Observe that strong connectivity has been preserved during the whole period [t0, t′0]: p
and q never stop being strong neighbors, and the source is a strong neighbor of p for all
the period [t0, t′0) and at time t′0 it becomes a strong neighbor of q. Note that neither p
nor q have been neighbors of s during the whole period [t0, t0 + T], because q is not a
neighbor at time t0 and p is not a neighbor at time t′0 ≤ t0 + T. Hence, in our execution
no node delivers I in [t0, t1].
The behavior in interval [t1, t2] is the same as described but exchanging the roles of
p and q: the initial state is sqp, at time t1 they start moving to exchange positions, and
at time t′1 they end up at state spq. Again, I is not delivered at p nor q because they have
not been neighbors of s in the whole period [t1, t1 + T]. For any interval [ti, ti+1] the
behavior is the same as in interval [t0, t1] if i is even, and the same as in interval [t1, t2]
if i is odd. Then, in this execution only s delivers I and the reliable delivery
property is not fulfilled.
This result shows that in order to solve the geocast problem it must hold that T′ > T/2.
In the following we prove that, when all nodes move along the same line, the algorithm
presented in Figure 2, for an appropriate value of M, efficiently solves the problem as
long as T′ > T and δ1 > δ. Let us first introduce some preliminary observations and
lemmata which are instrumental for the proof of the main Theorem 13.
Assume that the source s = q0 initiates a call of M-Geocast(I, d) at time t = t0
from location l = l0. Next, we prove that I propagates from l0 towards the right of l0.
(For the left of l0, the proof is symmetrical.) This happens in steps, so that within a small
period of time I moves from a node qj to another node qj+1 at some large distance
away.
Observation 7. Let p be a node that receives information I at time t. Then either p
immediately rebroadcasts I, or there exists a time τ ∈ [t, t + T] such that p broadcasts I
both at time τ − T and at time τ.

Observation 8. If T′ > T, then δT/T′ < δ is the maximum distance that a node can cover in
time T.

Observation 9. Let p be a node that receives a message with information I at time t. Then p
has delivered information I by time t.

Hereafter, we denote ∆ = δ1 + δ.
Lemma 4. Let qj be a node located at location lj at time tj. If T′ > T, then every node
that at time tj + T is within distance ∆ = δ1 + δ from lj has been a neighbour of qj
throughout all the period [tj, tj + T].

Proof. At time tj, qj is located at lj and it is a neighbor of all nodes at distance smaller
than δ2 from lj. Let p be a node that at time tj + T is located within distance ∆ from
lj. Let v be the maximum speed of nodes. Since T′ > T, in T time a node can travel
at most a distance vT = δT/T′ < δ. Thus at time tj, p was located at lp, within distance
∆ + vT < δ2 from lj.
To break the connection with p after tj, qj has to travel in the opposite direction
of p during [tj, tj + T]. Without loss of generality, assume lj ≤ lp and consider qj
moving towards the left and p to the right at full speed. After any elapsed time t ∈ [0, T]
from tj, qj will be located at lj − vt and p will be at lp + vt. Let lpT denote the location of
p at time tj + T, lpT = lp + vT. Then, lp = lpT − vT, and after elapsed time t ∈ [0, T]
the location of p is lp + vt = lpT − vT + vt = lpT + v(t − T). So the distance between qj and p is
(lpT + v(t − T)) − (lj − vt) = lpT − lj + 2vt − vT ≤ lpT − lj + vt.
Since lpT ≤ lj + δ1 + δ, this distance is at most δ1 + δ + vt < δ2, because vt < δ for all
t ∈ [0, T].
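The distance bookkeeping in this proof can be verified numerically. The following Python sketch uses illustrative parameter values (δ2, δ1, T, T′ are example assumptions satisfying T′ > T):

```python
# Numeric check of the two inequalities used in Lemma 4 (example values).
delta2, delta1, T, T_prime = 10.0, 6.0, 2.0, 3.0   # requires T' > T
delta = (delta2 - delta1) / 2
v = delta / T_prime            # maximum speed v_max (Assumption 2)
Delta = delta1 + delta         # the radius used in Lemmas 4-6

# A node within distance Delta of l_j at time t_j + T was, at time t_j,
# within Delta + v*T < delta2 of l_j, hence a neighbor of q_j then ...
assert Delta + v * T < delta2
# ... and the worst-case separation delta1 + delta + v*t stays below delta2
# for every elapsed t in [0, T], since v*t <= v*T < delta.
assert v * T < delta
```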
Lemma 5. Assume that a node q broadcasts information I at time t, being at location
l. If T′ > T, then every node that at time t + T is within distance ∆ from l will deliver
the information by time t + T.

Proof. By Lemma 4, every node p that at time t + T is within distance ∆ from l is a
neighbor of q throughout all the period [t, t + T]. Thus, if q broadcasts the information
I at time t, p will deliver I by time t + T.

The following Lemma 6 states that if there exists a node that broadcasts information I at
some time t, then by time t + 3T there is another node, far away from location l, which
broadcasts information I. Thus, these two nodes define a non-zero spatial interval and
a temporal interval between two successive broadcast events.

Lemma 6. Assume that a node qj broadcasts the information I at time tj, located at
point lj. Let Lr denote the set of nodes that at time tj are located on the right of lj + δ1. If
δ < δ1, T′ > T and Lr ≠ ∅ then, assuming that count_I < M for all nodes throughout
[tj, tj+1], either all the nodes in Lr deliver information I by time tj + T, or there is a
node qj+1 which broadcasts I at time tj+1 at location lj+1 such that:
1. tj+1 − tj ≤ 3T,
2. lj+1 − lj ≥ δ1 − δT/T′ > 0,
3. letting t = min(tj, tj+1 − T), throughout all the interval [t, tj+1] node qj+1 is a
neighbor of another node q located on the left of lj + ∆ which invoked
broadcast(I) at some time in [tj+1 − T, tj+1].
Bounds for Deterministic Reliable Geocast in Mobile Ad-Hoc Networks

177

Proof. Assume that at time tj , a node qj broadcasts the information I being located at
lj . One of the following two cases holds:
– At time tj + T, there is at least one node p located in the interval [lj + δ1, lj + δ]. Then, by Lemma 5, p will deliver the information I by time tj + T. By Observation 7, p will broadcast I at some time tj+1 ∈ [tj, tj + T]. By Observation 8 and its position at time tj + T, p will rebroadcast I at time tj+1 ≤ tj + T, being at location lj+1 ≥ lj + δ1 − δT/T′. The claim holds being qj+1 = p and q = qj.
– At time tj + T, no node is located in the interval [lj + δ1, lj + δ]. Let L and L′ respectively denote the set of nodes that at time tj + T are located on the left of lj + δ1 and the ones that at time tj + T are located on the right of lj + δ.
If L′ = ∅, then all the nodes that at time tj were on the right of lj + δ1 are within distance δ from lj at time tj + T. By Lemma 5, these nodes deliver information I by time tj + T.
Otherwise, there must exist paths of strong neighbors from nodes in L′ to nodes on the left of lj + δ1. In particular, nodes in L′ can be connected with nodes in L at most within distance δ on the left of lj. These latter have delivered the information I by time tj + T. One of the following cases has to hold:
1. There exists at least a connection between a node p in L′ and a node q in L which lasts throughout [tj, tj + 2T]. Then p will deliver the information I at some time t ∈ [tj, tj + 2T]. Note that at time tj + T, p is on the right of lj + δ − δT/T′ and, since T′ > T, it is on the right of lj + δ1 − δT/T′ throughout all the period [tj + T, tj + 2T]. Then by Observation 7, p will broadcast information I at some time tj+1 ∈ [tj + T, tj + 2T], being located at some position lj+1 > lj + δ1 − δT/T′. The claim holds being qj+1 = p and by the fact that p and q are neighbors throughout [τ, tj+1] ⊆ [tj, tj + 2T], where τ = min{tj+1 − T, tj}.
2. Each connection between nodes in L′ and nodes in L breaks at some time in [tj, tj + 2T]. Then, a new strong connection has to be created at some time t ∈ [tj, tj + 2T] before all such connections break; otherwise strong connectivity is violated.
Let p and q be respectively the node in L′ and the node in L that create the new strong connection at time t, i.e. distance(p, q, t) ≤ δ1. By Lemma 4, p and q have been neighbors throughout [t − T, t + T]. If t ∈ [tj, tj + T], then [tj, tj + T] ⊆ [t − T, t + T] and all such connections have to break at some point in [tj + T, tj + 2T], since otherwise there would exist at least a connection between a node in L′ and a node in L that lasts throughout all the period [tj, tj + 2T], and we reach a contradiction.
Then, a new connection between a node p in L′ and a node q in L has to be created at some time t ∈ [tj + T, tj + 2T]. At time t, distance(p, q, t) ≤ δ1, and since in 2T time a node can travel at most a distance 2δT/T′, at time tj q was on the right of lj − 2δ. Thus q delivers information I by time tj + T, and q broadcasts I both at time τ and τ′ = τ + T with τ ∈ [tj, tj + T]. By Lemma 1, p and q are neighbors throughout all the period [t − T, t + T] with t ∈ [tj + T, tj + 2T]. Either τ or τ′ is in the interval [t − T, t], then p delivers information I by time t + T ≤ tj + 3T. Then, either p immediately broadcasts I or it broadcasted

178

A.F. Anta and A. Milani

I at some time in [t, t + T]. Either way, p broadcasts information I at time tj+1 ≤ tj + 3T, being at some location lj+1 > lj + δ1 − δT/T′. The claim holds being qj+1 = p and by the fact that throughout [tj+1 − T, tj+1] ⊆ [t − T, t + T] p and q are neighbors.
Observation 10. Let q, q′, and p be three nodes that at time t are respectively located at lq, lq′, and lp, such that lp < lq and lp < lq′. Assume that q delivers information I by time t + T because p invoked a call of broadcast(I) at time t. If q′ is between q and p throughout [t, t + T], q′ delivers information I by time t + T.
Definition 2. tj is a time at which a node qj invokes broadcast(I) being located at location lj, such that tj+1 − tj ≤ 3T and lj+1 − lj ≥ δ1 − δT/T′, for j ∈ {0, 1, . . . , i}.
The following Lemmas 7, 8 and 9 are instrumental to prove Lemma 10. The latter states that any node that traverses any of the spatial intervals defined by two consecutive broadcast events (the ones defined in Definition 2) during the corresponding broadcast period delivers the information by a given time.
Lemma 7. Let t, t′ ∈ [tj, tj+1] with t′ > t. Let p be a node that at time t is on the left of lj and at time t′ is located inside the interval [lj, lj+1]. If δ1 > ε, p receives a message with information I by time tj + T.

Proof. By Definition 2, tj+1 − tj ≤ 3T. Let t, t′ ∈ [tj, tj+1] with t′ > t. Let p be a node that at time t is on the left of lj and at time t′ is located inside the interval [lj, lj+1]. If δ1 > ε, at time tj p is at most within distance 3ε < 2δ on the left of lj. Then, at time tj, qj and p are neighbors, and they will remain neighbors at least up to tj + T. This is because in the worst case p reaches position lj immediately after tj, but then at time tj + T the distance between p and qj is at most 2δ. Otherwise they move towards each other, getting closer. So p will receive a message with information I by time tj + T.
Lemma 8. Let t, t′ ∈ [tj+1 − T, tj+1] with t′ > t. Let p be a node that at time t′ is on the right of location lj+1. If δ1 > ε, p receives a message with information I by time tj+1 + T.

Proof. Let t, t′ ∈ [tj+1 − T, tj+1] with t′ > t. Let p be a node that at time t′ is on the right of location lj+1. If at some time t ∈ [tj+1 − T, tj+1] p is on the left of lj+1, p is a neighbor of qj+1 throughout all the period [tj+1, tj+1 + T]. This is because δ1 > ε and at time tj+1, p is on the right of lj+1, and in T time the distance between p and qj+1 increases less than 2δ. Then p will receive a message with information I by time tj+1 + T.
Lemma 9. Let p be a node that at time t ∈ [tj, tj+1] is on the right of lj+1. If at some time t′ ∈ [tj, tj+1] with t < t′, p is located at lp ∈ [lj, lj+1], and there does not exist a time t′′ ∈ [tj, tj+1] with t′′ > t′ such that p is on the right of lj+1, then p delivers the information I by time tj+1 + 2T.

Proof. Consider a node p that at time t ∈ [tj, tj+1] is located at the right of location lj+1. Assume that at time t′ ∈ [tj, tj+1], with t′ > t, p is located inside the interval


[lj, lj+1] and there does not exist t′′ ∈ [tj, tj+1] with t′′ > t′ such that p is on the right of lj+1 at t′′.
If t′ ∈ [tj+1 − T, tj+1], the claim follows by Lemma 8 and Observation 9. Then, consider t′ ∈ [tj, tj+1 − T). Since [tj, tj+1 − T) ⊆ [tj, tj + 2T), by Observation 8, at time tj+1 − T, p is on the right of lj+1 − 2δ. At the same time tj+1 − T, qj+1 is at most within distance δT/T′ from lj+1, since it has to be located at lj+1 at time tj+1. Then, since 3ε < 2δ, at time tj+1 − T p and qj+1 are neighbors. p and qj+1 remain neighbors throughout all the period [tj+1 − T, tj+1], because at time tj+1, p is at most within distance 3ε from lj+1, due to Lemma 6.(1) and Observation 8.
By the third bullet of Lemma 6, either qj+1 receives I at time tj+1 because a node q that received the information directly from qj invoked broadcast(I) at some time τ ∈ [tj+1 − T, tj+1], or qj+1 invoked broadcast(I) also at time tj+1 − T. In this last case, the claim holds because p and qj+1 are neighbors throughout all the period [tj+1 − T, tj+1], and because of Observation 9. Then, consider the case where qj+1 receives I at time tj+1 because a node q invoked broadcast(I) at some time τ ∈ [tj+1 − T, tj+1].
If at time tj+1 + T node p is within distance δ from lj+1, then by Lemma 5 p delivers the message by time tj+1 + T. Otherwise, the location of p at time tj+1 + T is on the left of lj+1. This implies that the location of p at time tj+1 is smaller than or equal to lj+1 + δT/T′. Then, at time tj+1, p and q are neighbors, because distance(q, qj+1, tj+1) < 2δ and distance(p, qj+1, tj+1) ≤ δT/T′.
Note that at time tj, p is on the right of lj. Then, by Observation 10, either p delivers the information by time tj + T, or at some point t′′ ∈ [tj, tj + T] p is located on the right of q. Note that q will broadcast the information once in each time interval [tj + kT, tj + (k + 1)T] with k ∈ {0, . . . , 3}. So either there is a time in [tj, tj+1] where p and q are strong neighbors, and then p delivers the information by time tj+1 + T, or at time tj+1 q is on the left of p and the latter is on the left of qj+1. Then, p will deliver information I by time tj+1 + 2T because of a call of broadcast either at qj+1 or at q. This is because either p remains a neighbor of q or of qj+1 throughout all the interval [tj+1 − T, tj+1 + T], or at time tj+1 p and q are within distance greater than δ1 from each other and they move towards or in the same direction as q. So they do not disconnect for at least another 2T.
Lemma 10. Let p be a node that at some time t ∈ [tj, tj+1] is in some location lp ∈ [lj, lj+1]. If there does not exist a time t′ ∈ [tj, tj+1] with t′ > t such that p is on the right of lj+1, p delivers the information I by time tj+1 + 2T.

Proof. Let p be a node that at some time t ∈ [tj, tj+1] is located at lp ∈ [lj, lj+1]. Assume that there does not exist a time t′ ∈ [tj, tj+1] with t′ > t such that p is on the right of lj+1. If at time tj p is either on the left of lj or on the right of lj+1, then the claim follows respectively by Lemma 7 and Observation 9, or by Lemma 9. Finally, consider the case where node p is inside the interval [lj, lj+1] throughout all the interval [tj, tj+1]. We prove that p delivers information I by time tj+1 + T. If at time tj + T p is within distance δ from lj, p delivers the information I by time tj + T, because of Lemma 11. Then assume that p is located in the interval [lj + δ, lj+1] at time tj + T. At that time q is located on the left of location lj + δ. At time τ ∈ [tj+1 − T, tj+1], q and qj+1 are neighbors because of the third bullet of Lemma 6. At time tj+1 − T one of


the following cases will happen: (1) p is in between q and qj+1, (2) p is on the right of both these nodes but on the left of lj+1, or (3) p is on the left of both q and qj+1. But this means that p is a neighbor of q throughout [tj+1, tj+1 + 2T], or is a neighbor of qj+1 throughout [tj+1 − T, tj+1]. Since q broadcasts I once in each time interval [tj + kT, tj + (k + 1)T] with k ∈ {0, . . . , 3}, and qj+1 broadcasts at time tj+1, p delivers I by time tj+1 + 2T, and the claim holds.
Now we prove that if a node stays within distance d from the location where the geocast has been invoked throughout all the geocast period, then it is eventually inside one of the intervals between two consecutive broadcasts, at the right time and for long enough to deliver the information I.

Lemma 11. If a node q stays within distance d from l throughout [t0, ti+1], for i such that l + d ∈ [li, li+1], then q delivers the information I by time ti+1 + 2T.

Proof. Let t0 be the time when the source node s performs the first broadcast(m) because of a call of M-Geocast(I, d). If q is located at l0 (= l) at time t0, then the lemma holds. Otherwise, without loss of generality, let q be located on the right of s at time t0. For every time in [t0, ti+1], q is located either on or on the left of li+1, because l + d ≤ li+1.
By induction on j, it is easy to see that there exists a j ≤ i such that at time t ∈ [tj, tj+1] q is in the interval [lj, lj+1] and there does not exist a time t′ ∈ [tj, tj+1] with t′ > t such that q is on the right of lj+1. Otherwise, at time tj+1 q is on the right of lj+1, and for j = i we have that at time ti+1 q is on the right of li+1. This means that at time ti+1 q is at distance greater than d from l. By Lemma 10, q will deliver the information I by time ti+1 + 2T.
Observation 11. Let countI be the counter associated to the communication generated by a call of M-Geocast(I, d). countI is set to zero once, when the source invokes the first broadcast(I) at time t0, and it is never reset.
Observation 12. Let p be a node different from the source node. p invokes
broadcast(I) at some time t only if it has generated a receive(I) event at some time
before t.
Lemma 12. Let t be the time when a call of M-Geocast(I, d) is invoked. Every message broadcast or received at some time in [t, t + k] has counter at most equal to k.

Proof. The proof is by induction on k. For k = 0, we have to consider the time t. At that time only the source node invokes a broadcast(I), and the counter of the broadcast message has value 0 (Line 9 of Figure 2). Then the claim holds. By inductive hypothesis, assume that every message broadcast or received at some time in [t, t + k] has counter at most equal to k. Then, we prove that every message broadcast or received at some time in [t, t + k + 1] has counter at most equal to k + 1. We know that this cannot happen by time t + k because of the inductive hypothesis. Then, by contradiction, assume that there exists a message that is received at time t + k + 1 and whose counter has value greater than k + 1. But since it takes at least 1 time unit to receive a message, this means that


the message received at time t + k + 1 was broadcast at the latest at time t + k. But then, if the message has counter greater than k + 1, we contradict the inductive hypothesis.
Finally, consider the case where at time t + k + 1 a message m is broadcast by a node p. p increments its counter possibly each time it receives a message or when it broadcasts a message. But by time t + k all the messages received by p have counter smaller than or equal to k, and p may have broadcast at most k messages. So at time t + k the counter of p is at most k. Then, when at time t + k + 1 it broadcasts a message, this message has a counter at most k + 1. Then the claim follows.
Finally, we define the bound for the time to ensure the reliable delivery property and the termination property. From the latter, we obtain the bound for the integrity property.

Theorem 13. If T′ > T, the M-Geocast(I, d) algorithm with M = 3T(i + 1) + 2T and i = ⌈d/(δ1 − δT/T′)⌉ ensures
(1) the Reliable Delivery Property 1 for C = 3T(i + 1) + 2T,
(2) the Termination Property 2 for C′ = (3T(i + 1) + 2T + 1)T, and
(3) the Integrity Property 3 for d′ = (C′ + T)(2δ + εT).
Proof. Let us first prove (1). From Lemma 6, we know that in any 3T rounds starting from t0 = t the information reaches some distance δ1 − δT/T′ farther from l. Formally, li − l ≥ i(δ1 − δT/T′). Since we want all the nodes that during the geocast interval remain within distance d from l to deliver the information I, we need to compute the maximum value that i could take in any execution such that (l + d) ∈ [li, li+1). Then i ≤ ⌈(li − l)/(δ1 − δT/T′)⌉ and, because li − l ≤ d, i ≤ ⌈d/(δ1 − δT/T′)⌉.
From Lemma 11, all the nodes that remain within distance d from l (= l0) throughout [t0, ti+1] deliver I by time ti+1 + 2T = t + C. By Lemma 6, ti+1 − t ≤ 3T(i + 1), and C = ti+1 − t + 2T ≤ 3T(i + 1) + 2T. Then C ≤ 3T(⌈d/(δ1 − δT/T′)⌉ + 1) + 2T.
We finally have to prove that, during [t, t + C], for any node, countI < M, where M = C. This follows from Lemma 12.
We prove now (2). Every message received causes rebroadcasting of I in a message with counter at least incremented by one. This will happen at least once every T times. Termination happens after any message received has counter larger than 3T(i + 1) + 2T, where i = ⌈d/(δ1 − δT/T′)⌉. This happens within (3T(i + 1) + 2T + 1)T + T time, because all messages broadcast after time (3T(i + 1) + 2T + 1)T have counters at least equal to 3T(i + 1) + 2T + 1, and all such messages are received within at most another T times. Note that, in the worst case, each broadcast message is received exactly after T times and then the counter is incremented by one unit, while in reality T steps have passed. Therefore, C′ = (3T(⌈d/(δ1 − δT/T′)⌉ + 1) + 2T + 1)T.

Finally, we prove (3). A broadcast message will be received at least after one time unit, during which any node can traverse distance at most εT. Therefore, if a node broadcasts a message from location l′ at time t′, then its neighbors receive it the earliest at time t′ + 1, when at distance less than 2δ + εT away from l′. Then, if the source starts M-Geocast(I, d) at time t from location l, at time t + m the furthest node that delivers I is at distance less than m(2δ + εT) away from l. By (2), after time t + C′, no node broadcasts messages with information I. Therefore, no node delivers I after


time t + C′ + T. But at time t + C′ + T, all nodes that have delivered I are within distance less than (C′ + T)(2δ + εT) from l. Therefore, if a node remains further than d′ = (C′ + T)(2δ + εT) from l, it will never deliver I.
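The quantities of Theorem 13 can be tabulated with a small helper. This is an illustrative sketch under the notation reconstructed here: it assumes the per-phase spatial progress is δ1 − δT/T′, and the function name and parameter names are mine, not the paper's.

```python
import math

def geocast_bounds(d, T, T_prime, delta, delta1):
    """Illustrative computation of the quantities in Theorem 13, assuming
    that in each phase of 3T rounds the information advances by at least
    delta1 - delta*T/T_prime (reconstructed notation)."""
    assert T_prime > T > 0
    step = delta1 - delta * T / T_prime    # spatial progress per phase
    assert step > 0                        # solvability requires positive progress
    i = math.ceil(d / step)                # phases needed to cover distance d
    M = 3 * T * (i + 1) + 2 * T            # counter threshold; also C = M
    C_prime = (M + 1) * T                  # termination horizon of property (2)
    return {"i": i, "M": M, "C": M, "C_prime": C_prime}

print(geocast_bounds(d=10, T=1, T_prime=2, delta=1, delta1=0.8))
```

For example, with d = 10, T = 1, T′ = 2, δ = 1, δ1 = 0.8, the progress per phase is 0.3, so i = 34 phases are needed.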

6 Conclusion
We have studied the geocast problem in mobile ad-hoc networks. We have considered a set of n mobile nodes which move in a continuous manner with bounded maximum speed. We have addressed the question of how the speed of movement impacts the feasibility of a deterministic reliable geocast solution, assuming that it takes some time T to ensure a successful one-hop radio communication.
Our results improve and generalize the bounds presented in [1]. For the two-dimensional mobility model, we have presented a tight bound on the maximum speed of movement that preserves the solvability of geocast. We have also proved that Ω(nT) is a time complexity lower bound for a geocast algorithm to ensure deterministic reliable delivery, and we have provided a distributed solution which is proved to be asymptotically optimal in time. This latter bound confirms the intuition, presented in [15] for the broadcast problem by Prakash et al., that when nodes may move, the number of nodes in the system is the impact factor on the reliable communication completion time. In fact, our solution and bounds are also applicable to 3 dimensions, a case that is rarely studied but may be of growing interest.
Finally, assuming the one-dimensional mobility model, i.e. nodes moving on a line, we have proved that vmax < 2δ/T is a necessary condition to solve the geocast, where δ is a system parameter, and presented an efficient algorithm when vmax < δ/T′. This still leaves a gap on the maximum speed to solve the geocast problem in one dimension.

References
1. Baldoni, R., Ioannidou, K., Milani, A.: Mobility versus the cost of geocasting in mobile ad-hoc networks. In: Pelc, A. (ed.) DISC 2007. LNCS, vol. 4731, pp. 48–62. Springer, Heidelberg (2007)
2. Bruschi, D., Del Pinto, M.: Lower bounds for the broadcast problem in mobile radio networks. Distributed Computing 10(3), 129–135 (1997)
3. Clark, B.N., Colbourn, C.J., Johnson, D.S.: Unit disk graphs. Discrete Mathematics 86(1-3), 165–177 (1990)
4. Jiang, X., Camp, T.: A review of geocasting protocols for a mobile ad hoc network. In: Proceedings of the Grace Hopper Celebration (2002)
5. Dolev, S., Gilbert, S., Lynch, N.A., Shvartsman, A.A., Welch, J.: GeoQuorums: Implementing atomic memory in mobile ad hoc networks. Distributed Computing 18(2), 125–155 (2005)
6. Chlebus, B.S., Gasieniec, L., Gibbons, A., Pelc, A., Rytter, W.: Deterministic broadcasting in ad hoc radio networks. Distributed Computing 15(1), 27–38 (2002)
7. Ko, Y.-B., Vaidya, N.H.: Geocasting in mobile ad-hoc networks: Location-based multicast algorithms. In: Proceedings of IEEE WMCSA, New Orleans, LA (1999)
8. Ko, Y., Vaidya, N.H.: GeoTORA: A protocol for geocasting in mobile ad hoc networks. In: Proceedings of the 8th International Conference on Network Protocols (ICNP), p. 240. IEEE Computer Society, Los Alamitos (2000)
9. Ko, Y., Vaidya, N.H.: Flooding-based geocasting protocols for mobile ad hoc networks. Mobile Networks and Applications 7(6), 471–480 (2002)
10. Gupta, S.K.S., Srimani, P.K.: An adaptive protocol for reliable multicast in mobile multi-hop radio networks. In: Proceedings of the 2nd Workshop on Mobile Computing Systems and Applications (WMCSA), p. 111. IEEE Computer Society, Los Alamitos (1999)
11. Imielinski, T., Navas, J.C.: GPS-based geographic addressing, routing, and resource discovery. Communications of the ACM 42(4), 86–92 (1999)
12. Liao, W., Tseng, Y., Lo, K., Sheu, J.: GeoGRID: A geocasting protocol for mobile ad hoc networks based on grid. Journal of Internet Technology 1(2), 23–32 (2001)
13. Mohsin, M., Cavin, D., Sasson, Y., Prakash, R., Schiper, A.: Reliable broadcast in wireless mobile ad hoc networks. In: Proceedings of the 39th Hawaii International Conference on System Sciences (HICSS), p. 233.1. IEEE Computer Society, Los Alamitos (2006)
14. Navas, J.C., Imielinski, T.: GeoCast: Geographic addressing and routing. In: Proceedings of the 3rd Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), pp. 66–76. ACM Press, New York (1997)
15. Prakash, R., Schiper, A., Mohsin, M., Cavin, D., Sasson, Y.: A lower bound for broadcasting in mobile ad hoc networks. École Polytechnique Fédérale de Lausanne, Tech. Rep. IC/2004/37 (2004)
16. Pagani, E., Rossi, G.P.: Reliable broadcast in mobile multihop packet networks. In: Proceedings of the 3rd Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom), pp. 34–42. ACM Press, New York (1997)

Degree 3 Suffices: A Large-Scale Overlay for P2P Networks

Marcin Bienkowski¹, André Brinkmann², and Miroslaw Korzeniowski³

¹ University of Wroclaw, Poland
² University of Paderborn, Germany
³ Wroclaw University of Technology, Poland

Abstract. Most peer-to-peer (P2P) networks proposed until now have either logarithmic degree and logarithmic dilation or constant degree and logarithmic dilation. In the latter case (which is optimal up to constant factors), the constant degree is achieved either in expectation or with high probability. We propose the first overlay network, called SkewCCC, with a maximum degree of 3 (the minimum possible) and logarithmic dilation. Our approach can be viewed as a decentralized and distorted version of a Cube Connected Cycles network. Additionally, basic network operations such as join and leave take logarithmic time and are very simple to implement, which makes our construction viable in fields other than P2P networks. A very good example is scatternet construction for Bluetooth devices, in which case it is crucial to keep the degree at most 7.

Introduction

Peer-to-peer networks have become an established paradigm of distributed computing and data storage. One of the main issues tackled in this research area is
building an overlay network that provides a sparse set of connections for communication between all node pairs. The aim is to build the network in a way
that an underlying routing scheme is able to quickly reach any node from any
other, without maintaining a complete graph of connections. In this paper, we
investigate such networks suitable not only for peer-to-peer networks but also
for Bluetooth scatternet formation and one-hop radio networks.
An important property of the investigated networks is their scalability. We
introduce a scalable and dynamic network structure which we call SkewCCC.
The maximum in- and out-degree of a node inside the SkewCCC network is 3
and routing and lookup times are logarithmic in the current number of nodes.
Naturally, it is impossible to decrease the degree to 2 while preserving any network topology besides a ring. Our routing scheme is name-driven, i.e. packet





* This work has been partially supported by the EU Commission COST 295 Action DYNAMO "Foundations and Algorithms for Dynamic Networks".
** Supported by MNiSW grant number N206 001 31/0436, 2006-2008.
*** Partially supported by the EU within the 6th Framework Programme under contract IST-2005-034891 "Hydra".
**** Supported by MNiSW grant number PBZ/MNiSW/07/2006/46.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 184–196, 2008.
© Springer-Verlag Berlin Heidelberg 2008



routes can be calculated based on their destination address, without requiring extensive routing tables [8].
The construction of the SkewCCC network is fully distributed and remains very simple from the computational and communication perspective. This is crucial when designing algorithms for weak devices like small embedded systems, which are often found in sensor networks. To minimize production costs, such devices have very limited computational power and have to reduce their energy usage [6].
It is widely known in the area of P2P networks that if we use a ring-based network such as Chord [12] as a basis for a distributed hash table (DHT), then the load is not evenly balanced, unless special algorithms are employed in order to smoothen it. In particular, if nodes choose random places for themselves, the expected ratio of the highest loaded to the lowest loaded node is Θ(n log n). Our design applied as a topology for a DHT is perfectly balanced, i.e. without applying any additional load balancing schemes each node has an in- and out-degree independent of the network size.
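The Θ(n log n) imbalance is easy to observe empirically. The sketch below (illustrative Python, not from the paper) places n random points on the unit ring and reports the ratio of the largest to the smallest arc, i.e. the load ratio of a Chord-style DHT without extra balancing.

```python
import random

def gap_ratio(n, seed=0):
    """Ratio of the largest to the smallest arc when n nodes pick
    uniformly random positions on the unit ring (Chord-style DHT)."""
    random.seed(seed)
    pts = sorted(random.random() for _ in range(n))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1 - pts[-1] + pts[0])   # wrap-around arc closes the ring
    return max(gaps) / min(gaps)

# the imbalance grows quickly with n, in line with the Theta(n log n) claim
print(gap_ratio(100), gap_ratio(10000))
```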
Besides being used as an overlay sensor network or as a peer-to-peer network, the proposed SkewCCC architecture can also be applied for underlay networks. In particular, for the construction of Bluetooth scatternets and one-hop radio networks, it is not sufficient to ensure that the (expected) network degree is bounded by some constant; it is absolutely necessary to keep the degree smaller than 7.

Related Work

The current research in the area of P2P networks concentrates on providing dynamic overlay networks with good properties (small diameter, small in- and out-degree). The basic task of peer-to-peer applications, i.e. efficiently locating the peer that stores a sought data item, is performed by means of consistent hashing [7]. The main requirement of this technique is the ability to perform name-driven routing in the network; the parameters of this routing, like dilation (max. path length), directly influence the performance of the whole system. In this section, we compare our solution with existing dynamic networks. By n we denote the number of nodes currently in the system.
The first proposed distributed hash table (DHT) solutions were ring topologies: Chord [12] and Chord-like structures: Tapestry [13] and Pastry [11]. By introducing shortcuts inside the ring, these DHTs achieve good properties: their dilation is O(log n), and the runtime of joining and leaving the network is also logarithmic. On the other hand, each node has to keep Θ(log n) pointers to other nodes, and thus the out-degree is large.
SkipNets [5] and Skip-Graphs [1], as well as their deterministic versions such as Hyperrings [2], are based on a hierarchical structure. In this approach the network is organized into levels, where the first level is just a ring of all nodes, and higher levels are built of independent rings which are refinements of rings from previous levels. The refinement continues until each node itself is a ring. The approach

yields good properties concerning storage of sorted or unevenly distributed data, but does not improve the degree/distance factor of Chord. All the mentioned designs have logarithmic degree at each node (as there is a logarithmic number of levels and each node has degree two in each level) and logarithmic dilation (one step has to be done in each ring).
The first overlay network with constant out-degree and logarithmic diameter was Viceroy [9]. The network is based on a randomized approximation of a butterfly network [8] that adds a constant number of outgoing links for long-range communications. Furthermore, join and leave operations inside Viceroy require only a constant number of local updates in expectation, and a logarithmic number with high probability. Nevertheless, the in-degree of at least one node inside the network becomes Θ(log n/log log n) with high probability if each node has only a single choice for its address during the join process. The in-degree can also be bounded with high probability if each node has Θ(log n) random choices for its address during the join operation. However, this leads to Θ(log² n) updates for each join operation.
The approach of using multiple choices is also used inside the Distance Halving DHT network [10]. The structure is based on a dynamic decomposition of a continuous space into cells, which are assigned to nodes. The underlying de Bruijn structure ensures a degree of d for each node and leads to a path length of O(log_d n) for key lookups.
Besides providing solutions for general networks, there has been some research on overlay networks which consider the special properties of Bluetooth networks. The first scalable overlay for Bluetooth (a network of constant degree and poly-logarithmic diameter) has been presented in [3]. The network is based on a backbone that enables routing based on a virtual labeling of nodes, without large routing tables or complicated path-discovery methods. The scheme is fully distributed, but still poses high demands on the computational abilities of the underlying devices.
In [4], we have presented an overlay topology with special support for Bluetooth networks which is based on Cube Connected Cycles networks [8]. The resulting network has constant in- and out-degree as well as a dilation of O(log n). The main drawback of the approach is that the scheme is centralized (in particular, each node has to know the current number of nodes in the system) and hence not scalable. Although the scheme we present in this paper is also based on the CCC network, it is a big step forward, as it is completely distributed and self-balancing.
In Fig. 1, we summarize the parameters of different distributed solutions, most of them holding with high probability.

SkewCCC

As we base our approach on the hypercube and the CCC (Cube Connected Cycles) networks, we briefly review their construction. These networks have been extensively studied and have good properties concerning maintenance, diameter,


Network                    out-degree  in-degree           dilation      join runtime
Chord-like networks        Θ(log n)    Θ(log n)            Θ(log n)      Θ(log n)
Viceroy (single choice)    O(1)        Θ(log n/log log n)  Θ(log n)      O(log n)
Viceroy (multiple choice)  O(1)        O(1)                Θ(log n)      Θ(log² n)
Distance Halving (m.c.)    O(d)        O(d)                Θ(log_d n)    Θ(log n · log_d n)
SkewCCC                    3           3                   Θ(log n)      Θ(log n)

Fig. 1. Summary of old and new results

degree of nodes and routing speed. On the other hand, they are meant exclusively for static networks. For a thorough introduction to these kinds of networks, we refer the reader to [8].
In this paper, we use the following notation. By a string we always mean a binary string, whose bits are numbered from 0. For any two strings a and b, we write a ⊑ b to denote that a is a (not necessarily proper) prefix of b. We denote the empty string by ε and use ∘ to denote the concatenation of two strings; ⊕ denotes the bitwise xor operation. We also identify strings of fixed length with the binary numbers they represent.

Definition 1. The d-dimensional hypercube network has n = 2^d nodes. Each node is represented by a number 0 ≤ i < n. Two nodes i and j are connected if and only if i ⊕ 2^k = j for an integer 0 ≤ k < d.

A d-dimensional Cube Connected Cycles (CCC) network is essentially a d-dimensional hypercube in which each node is replaced with a ring of length d, and each of its d connections is assigned to one of the ring nodes. This way the degree of the network is reduced from d to 3, whereas almost all of the network properties (e.g. diameter) change only slightly.

Definition 2. The d-dimensional CCC network has d · 2^d nodes. Each node is represented by a pair (i, j), where 0 ≤ i < d and 0 ≤ j < 2^d. Each such node is connected to three neighbors: two cycle ones, with indices ((i ± 1) mod d, j), and a hypercubic one, (i, j ⊕ 2^i).
Examples of a 3-dimensional hypercube and CCC network are given in Fig. 2.

Fig. 2. a) 3-dimensional hypercube, b) 3-dimensional CCC
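Definitions 1 and 2 translate directly into code. The sketch below (illustrative Python, not from the paper) builds both adjacency rules and checks that every CCC node has exactly three neighbors and that the neighbor relation is symmetric.

```python
def hypercube_edges(d):
    """Edges of the d-dimensional hypercube (Definition 1): nodes i and j
    are connected iff i XOR 2^k == j for some integer 0 <= k < d."""
    return {(i, i ^ (1 << k)) for i in range(2 ** d)
            for k in range(d) if i < i ^ (1 << k)}

def ccc_neighbors(i, j, d):
    """The three neighbors of node (i, j) of the d-dimensional CCC
    (Definition 2): two cycle neighbors and one hypercubic neighbor."""
    return [((i - 1) % d, j), ((i + 1) % d, j), (i, j ^ (1 << i))]

d = 3
assert len(hypercube_edges(d)) == d * 2 ** (d - 1)   # 12 edges for d = 3
nodes = [(i, j) for i in range(d) for j in range(2 ** d)]
# degree exactly 3 (for d >= 3), and the neighbor relation is symmetric
assert all(len(set(ccc_neighbors(*v, d))) == 3 for v in nodes)
assert all(v in ccc_neighbors(*w, d) for v in nodes for w in ccc_neighbors(*v, d))
```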


We present our network in two stages. First, we describe the network and its
properties. Second, we describe algorithms, which assure proper structure when
nodes join and leave the network.
3.1

Network Structure

When we compare the CCC network with the hypercube, we may think that the
d-dimensional hypercube is a skeleton for the d-dimensional CCC. In other words,
if we replace each node of the hypercube (to which we refer as a corner) by a cycle
of nodes, then the resulting network is a CCC. In the following, we first describe
such a skeleton for our network, called the SkewHypercube; we then show how to
replace each of its corners by a ring of real network nodes, finally creating
a structure which we call a SkewCCC network.
SkewHypercube. In the following, we describe a skeleton network called the
SkewHypercube. Each node of this network will correspond to a group of real nodes.
To avoid ambiguity, we refer to a skeleton node as a corner.
First, we define the set of corners. Each corner i has an identifier, which is
a string si of length di. The number di is called the dimension of the corner.
We require that the set of corner identifiers C = {si} is prefix-free and complete.
Prefix-freeness means that for any two distinct strings si, sj, neither si ⊑ sj nor sj ⊑ si.
Completeness means that for any infinite string s, there exists an identifier si ∈ C,
s.t. si ⊑ s. The description above implies that (i) a single corner with empty
name s = ε constitutes a correct set C and (ii) any correct set of corners C can be
obtained by multiple use of the following operation (starting from the set {ε}):
take a corner i and replace it with two corners j and k with identifiers sj = si ∘ 0
and sk = si ∘ 1.
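Both conditions, and the splitting operation that generates all correct corner sets, can be checked mechanically. A small sketch (helper names are ours; for prefix-free sets, completeness is equivalent to the Kraft sum over identifier lengths being exactly 1):

```python
def is_prefix_free(corners):
    """No identifier is a prefix of another, distinct one."""
    return not any(a != b and b.startswith(a) for a in corners for b in corners)

def is_complete(corners):
    """For a prefix-free set: every infinite string extends some corner
    identifier iff the Kraft sum 2^-len over identifiers equals 1."""
    return sum(2 ** -len(s) for s in corners) == 1

def split_corner(corners, s):
    """The operation from the text: replace corner s by corners s0 and s1."""
    return (corners - {s}) | {s + "0", s + "1"}
```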
Second, we define the set of edges in a SkewHypercube.
Definition 3. Two SkewHypercube corners i and j are connected iff
(i) there exists 0 ≤ ki < di, s.t. si ⊕ 2^{ki} ⊑ sj, or
(ii) there exists 0 ≤ kj < dj, s.t. sj ⊕ 2^{kj} ⊑ si.
We note that if all identifiers of corners have the same length d, then our
SkewHypercube is just a regular d-dimensional hypercube. On the other hand, the
definition above allows the following situation to occur. It may happen that a
single corner s has identifier 0 (dimension 1) and there are 2^k corners of
dimension k + 1 with identifiers starting with 1. This results in corner s having
degree 2^k. We will explicitly forbid such situations in the construction
of our network and require that the dimensions of neighboring corners can differ
by at most 1. This ensures that each corner of dimension d has at most 2d
neighbors. An example of a SkewHypercube is presented in Fig. 3.
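Definition 3 can be tested directly on identifier strings. In the sketch below (our notation), we take bit k of a string to be its k-th character, so s ⊕ 2^k flips that character:

```python
def flip(s, k):
    """s XOR 2^k: flip bit k of identifier s (bits numbered from 0)."""
    return s[:k] + ("1" if s[k] == "0" else "0") + s[k + 1:]

def corners_connected(si, sj):
    """Definition 3: corners are connected iff flipping some bit of one
    identifier yields a prefix of the other."""
    return (any(sj.startswith(flip(si, k)) for k in range(len(si))) or
            any(si.startswith(flip(sj, k)) for k in range(len(sj))))
```

On equal-length identifiers this reduces to ordinary hypercube adjacency (Hamming distance 1); the high-degree situation described above shows up as corner "0" being connected to every corner whose identifier starts with 1.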
Identifiers. To specify which nodes are stored in particular corners of the
SkewHypercube, we have to give nodes unique identifiers. These identifiers are
infinite strings, chosen randomly upon joining the network, where each bit is
Degree 3 Suffices: A Large-Scale Overlay for P2P Networks


Fig. 3. a) SkewHypercube skeleton, b) SkewCCC, only core nodes are depicted

equiprobably 0 or 1. To avoid the burden of handling infinite strings, one may
follow the approach of SkipGraphs [1], i.e. a node chooses for an identifier a string
of a fixed length, and when two nodes with the same identifier meet, they choose
additional random bits until their identifiers differ on at least one bit.
Moreover, it will follow from our construction that such an identifier conflict will be
detected right after a node joins the network. In practical applications, it is
sufficient to choose identifiers of length 160 bits, as in this case the probability of
a conflict is overwhelmingly low.
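The SkipGraphs-style tie breaking can be sketched in a few lines (the helper and its rng argument are our own illustration, not part of the paper's protocol):

```python
import random

def extend_until_distinct(a, b, rng=None):
    """When two nodes pick identical fixed-length identifiers, both append
    random bits until the identifiers differ on at least one bit."""
    rng = rng or random.Random()
    while a == b:
        a += rng.choice("01")
        b += rng.choice("01")
    return a, b
```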
The identifier of a node decides where the node should be placed in the network
and constitutes its address. Namely, a node with identifier x is stored in a corner
s, s.t. s ⊑ x. It remains to show how nodes are connected within a corner and how
neighbor relations between corners are represented by real edges between nodes.
From Skeleton to SkewCCC. As mentioned previously, a corner identified
by a string si contains a group of nodes whose identifiers have prefix si. Nodes
within a corner are managed in a centralized fashion.
As each skeleton node of degree d can have up to 2d neighbors, we require that
each corner has at least 2d nodes, called core ones. These nodes are connected in
a ring and each of them is responsible for a (potential) connection to a different
corner. For efficiency of routing, we demand that core nodes are sorted in the ring
in the same way as in the original CCC network. It means that a node in corner
s responsible for a connection to s ⊕ 2^{k+1} follows on the ring a node responsible
for a connection to s ⊕ 2^k, whereas a node responsible for a connection to s ⊕ 2^0
follows the one responsible for a connection to s ⊕ 2^{d−1}.
It might happen that a corner contains not only its core nodes, but also has to
manage additional ones, which are called spare. When they join the corner, we
put them on the ring between core nodes and we balance the path lengths between
two consecutive core nodes. It means that if there are m spare nodes in a
corner of dimension d, then there are between ⌊m/2d⌋ and ⌈m/2d⌉ spare nodes
between any two consecutive core nodes. Due to this construction, each node in
the network has degree at most 3. An example of such a network is given in Fig. 3.
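The even placement of spare nodes between the 2d core nodes is a simple floor/ceil balancing, sketched here (function name is ours):

```python
def spare_gaps(m, d):
    """Distribute m spare nodes over the 2d gaps between consecutive core
    nodes of a dimension-d corner: each gap gets floor(m/2d) or ceil(m/2d)."""
    gaps = 2 * d
    base, extra = divmod(m, gaps)
    return [base + 1] * extra + [base] * (gaps - extra)
```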
3.2 Network Maintenance

In this section, we show how to maintain the shape of the network in a distributed
way in a dynamic setting, where nodes may join or leave the system. We start


with showing a name-driven routing scheme, in which it is sufficient to know
the destination identifier to reach the corresponding node in a greedy manner in
a logarithmic number of steps (in the total number of nodes in the network).
We introduce a constant γ ≥ 2, which is a parameter of our algorithms. The
runtime of our operations increases linearly with γ, and the probability that the
network is not balanced (i.e., the dimensions of corners vary) decreases exponentially
with γ.
Routing and Searching. When we search for a node with name s, we want
to find a corner with a name being a prefix of s. Routing is performed using the
bit-fixing algorithm, in which we fix bits of s one by one, starting from the lowest
bit. When we want to fix the k-th bit of s, and s differs from the current corner
name in this bit, we go to a node in the current corner which has a connection
to a corner differing exactly on this bit and traverse this connection. When we
reach a destination corner (whose name is a prefix of s), we forward the message
through all nodes in the corner. The message either reaches its destination or
reaches some node for the second time. If the latter happens, this node answers
the request with a negative (not found) answer.
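On the equal-length special case (a regular hypercube, as noted above), the bit-fixing walk amounts to correcting differing name bits from lowest to highest. A corner-level sketch (our own toy code):

```python
def bit_fix_path(start, target):
    """Corner-level bit fixing on a regular hypercube: starting from corner
    `start`, flip each bit that disagrees with `target`, lowest bit first,
    and return the sequence of corners visited."""
    path, cur = [start], list(start)
    for k in range(len(start)):
        if cur[k] != target[k]:
            cur[k] = target[k]
            path.append("".join(cur))
    return path
```

In the SkewCCC, each such corner hop costs only a constant number of real edges, since only a constant number of spare nodes sit between consecutive core nodes of a balanced network.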
Joining. We assume that when a node joins a network, it knows (or can discover
and contact) another node which is already a member of the network. After
choosing an identifier, the joining node asks its contact to find this identifier
in the network. As the identifiers are supposed to be different, the result of
the search will be negative, but a node with the longest prefix matching the
identifier will be returned as a new contact. The new node joins (as a spare
node) the corner to which its new contact belongs. At this point, it is checked
whether a split of the corner is necessary.
Splitting. A corner s of dimension d could be allowed to split when there are
enough resources to form two corners of dimension d + 1, i.e. if there are at
least 2(d + 1) nodes with prefix s ∘ 0 and with prefix s ∘ 1. However, for efficiency
in a dynamic system, we split a corner when the number of nodes with both
prefixes exceeds 12γ(d + 1), where γ ≥ 2 is the constant parameter described
above. Additionally, such a corner is allowed to split only if it has no neighbors
of dimension d − 1.
After splitting, we create two corners, s ∘ 0 and s ∘ 1, and assign nodes to
them according to their names. A connection is established between the two
new corners: a core node is responsible for this connection in each of them. Each
connection from s is assigned to a proper one of the two new corners. If the
connection has been to a corner of dimension d, both corners connect to the
neighboring corner and the latter has to assign an additional core node to serve
this connection. If the connection has been to two corners of dimension d + 1,
then s ∘ 0 connects to the corner with a 0 on position d and s ∘ 1 to the corner
with a 1 on position d.
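The split trigger can be stated as a predicate. A sketch (names are ours; we write GAMMA for the constant parameter γ and assume the corner knows its neighbors' dimensions):

```python
GAMMA = 2  # the constant parameter of the algorithms (>= 2)

def may_split(n_prefix0, n_prefix1, d, neighbor_dims):
    """A corner of dimension d is split once the number of nodes with each of
    the prefixes s0 and s1 exceeds 12*GAMMA*(d+1) and no neighbor has
    dimension d-1."""
    threshold = 12 * GAMMA * (d + 1)
    return (n_prefix0 > threshold and n_prefix1 > threshold
            and all(nd >= d for nd in neighbor_dims))
```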
Leaving. When a node i wants to leave the network, it searches for a special
spare node j in its corner, tells j to take its place, and leaves. The special spare


node j is chosen so that after removing i and migrating j into its place, spare
nodes are distributed evenly in the corner. After i leaves and j migrates, a check
is performed whether the number of nodes in the corner has decreased sufficiently
to call a merging operation.
Merging. When the number of nodes in a corner s of dimension d drops below
2d, we have to merge s with its neighbor s′ = s ⊕ 2^{d−1}, i.e. with the one differing
from s on the last bit. Actually, we do it already when the number of nodes in
such a corner drops to (6γ + 7) · d. First, we send a message to all neighboring
corners of dimension d + 1 (possibly including the neighbors across bit d), telling
them to merge first. They merge recursively, and after all neighbors of s are of
dimension d or d − 1, we merge the two corners. Naturally, whenever a corner
receives a message from one of its neighbors (of lower dimension) telling it to
merge, it starts the merge procedure too.
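Symmetrically, the merge trigger and the sibling corner can be sketched as follows (names are ours; GAMMA denotes the constant parameter γ):

```python
GAMMA = 2  # the constant parameter of the algorithms (>= 2)

def sibling(s):
    """The corner differing from s on its last bit (s XOR 2^(d-1))."""
    return s[:-1] + ("1" if s[-1] == "0" else "0")

def must_merge(corner_size, d):
    """A dimension-d corner initiates a merge with its sibling once its size
    drops to (6*GAMMA + 7) * d."""
    return corner_size <= (6 * GAMMA + 7) * d
```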
3.3 Analysis

Before we bound the runtime of all operations on the system, we prove that with
high probability the system is balanced, i.e. the dimensions of all corners are
roughly the same.
To formally define this notion, we introduce (just for the analysis) a parameter
du, which would be the current dimension of the network if it were a CCC.
This means that if n is the current number of nodes in the network, then du · 2^{du}
≤ n < (du + 1) · 2^{du+1}. For du > 2, it holds that du/2 ≤ ln n ≤ 2du. Additionally,
we introduce a parameter dl differing from du by a constant: dl := du − log γ − 5.
For simplicity of notation, we assume that all node identifiers are infinite.
Definition 4. A SkewCCC network is balanced if all corner dimensions are
between dl and du.
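For concreteness, du and dl can be computed from n as follows (a sketch; GAMMA denotes the constant parameter γ, and the function name is ours):

```python
import math

GAMMA = 2  # the constant parameter of the algorithms (>= 2)

def balance_bounds(n):
    """du is the largest d with d * 2**d <= n (the dimension of a plain CCC
    on n nodes); dl = du - log2(GAMMA) - 5."""
    du = 1
    while (du + 1) * 2 ** (du + 1) <= n:
        du += 1
    return du, du - int(math.log2(GAMMA)) - 5
```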
Now we show that the network is balanced with high probability. We note that
even in case of bad luck (happening with polynomially small probability), the
system still works; it might just work slower.
Lemma 1. If the network is stable, i.e. no nodes are currently joining or leaving
and no split or merge operations are currently being executed or pending, then
with probability 1 − 2n^{−γ} the network is balanced.
Proof. We prove two claims:
(i) the probability that there exists a corner with dimension du or greater is
at most n^{−γ};
(ii) the probability that there exists a corner with dimension dl or smaller is
at most n^{−γ}.
For proving (i), we take a closer look at the set S of all node identifiers. For any
string s, let Ss = {si ∈ S : s ⊑ si}, i.e. Ss consists of all identifiers starting with
s. We say that S is well separated if for each du-bit string s, |Ss| ≤ (6γ + 7) · du.


First, we observe that if S is well separated, then there is no corner of dimension
du or higher. Assume the contrary and choose the corner si with the highest
dimension (at least du). Then there are at most (6γ + 7) · du identifiers starting
with si, i.e. remaining in this corner. As all the neighbors of corner si have
smaller or equal dimension, this corner should be merged, which contradicts the
assumption that the network is stable.
Second, we show that the probability that S is not well separated is at
most n^{−γ}. Fix any string s of length du. For each node i with identifier si, let an
indicator random variable Xi^s be equal to 1 iff s ⊑ si. Since s is a du-bit string,
we have E[Xi^s] = 2^{−du}. Let X^s = Σ_{i=1}^{n} Xi^s; by the linearity of expectation,
E[X^s] = Σ_{i=1}^{n} E[Xi^s] = n · 2^{−du} ≥ du. Using the Chernoff bound, we obtain that

Pr[X^s ≥ (6γ + 7) · du] ≤ Pr[X^s − E[X^s] ≥ (6γ + 6) · E[X^s]]
  ≤ exp(−(6γ + 6) · E[X^s] / 3) ≤ e^{−(2γ+2)·du} ≤ n^{−(γ+1)}.

There are 2^{du} ≤ n possible du-bit strings, and thus (by the sum argument) the
probability that S is not well separated is at most n · n^{−(γ+1)} = n^{−γ}.
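The concentration this argument relies on is easy to check empirically. A toy simulation (parameters are ours): with n = 2048 we get du = 8, the expected prefix load n/2^{du} equals 8, and the well-separatedness threshold (6γ + 7) · du = 152 for γ = 2 is never approached:

```python
import random
from collections import Counter

GAMMA, N, DU = 2, 2048, 8          # 8 * 2**8 = 2048 <= N < 9 * 2**9
rng = random.Random(42)

# Draw the first DU bits of N random identifiers and count each prefix.
loads = Counter(rng.getrandbits(DU) for _ in range(N))
threshold = (6 * GAMMA + 7) * DU   # = 152, far above typical maximum loads
assert max(loads.values()) <= threshold
```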
Proving (ii) is analogous. We say that S is well glued if for each dl-bit string s,
|Ss| ≥ 12γ · dl.
Again, we prove that if S is well glued, then there is no corner of dimension dl
or lower. Assume the contrary and let si be the corner with the lowest dimension.
There are at least 12γ · dl nodes in corner si. As all the neighbors of corner si have
greater or equal dimension, this corner should be split, which contradicts the
assumption that the network is stable.
Again, we show that the probability that S is not well glued is at most n^{−γ}.
Let t be any dl-bit string, let indicator random variables Xi^t denote whether t ⊑ si,
and let X^t = Σ_{i=1}^{n} Xi^t. Then E[X^t] = n · 2^{−(du − 5 − log γ)} ≥ 32γ · du. Using
the Chernoff bound, we get that

Pr[X^t ≤ 12γ · dl] ≤ Pr[X^t ≤ (1 − 1/2) · E[X^t]] ≤ e^{−E[X^t]/8} ≤ e^{−4γ·du} ≤ n^{−2γ}.

There are 2^{dl} ≤ n possible dl-bit strings, and thus the probability that S is not
well glued is at most n · n^{−2γ} ≤ n^{−γ}.


According to Lemma 1, the system is balanced with high probability. Now, we
will show that all basic operations are performed in time logarithmic in the
current number of nodes. In the following, we assume that the high-probability
event of the network being balanced actually happens.
Lemma 2. If the network is balanced, then each search operation is performed
in logarithmic time.


Proof. Since each corner is of dimension Θ(log n), we have to fix Θ(log n) bits
in order to reach the destination corner. As the number of nodes in each corner
is within a constant factor of its dimension, there are Θ(log n) nodes in each
corner. In particular, there is a constant number of spare nodes between any two
consecutive core nodes. Thus, in order to fix the i-th bit after fixing the (i − 1)-st
bit, we have to traverse only a constant number of edges. In order to fix the first
bit and to reach the destination node after reaching the destination corner, we
have to travel at most through all the nodes of two corners. Hence, the total
number of traversed edges is O(log n).


Before we prove upper bounds for join and leave operations we bound the time
of split and merge.
Lemma 3. If the network is balanced, then each split operation is performed in
logarithmic time.
Proof. When we split a corner s of dimension d into corners s ∘ 0 and s ∘ 1 of
dimension d + 1, then there are more than sufficient nodes for each corner and
we know that each neighboring corner is of dimension d or d + 1.
Since the corner s currently has to split and γ ≥ 1, there are at least 12(d + 1)
nodes of each type in s. Starting in any node, we traverse the ring a constant
number of times and do the following. In the first pass, we make sure that
there are two connections to neighbor corners across every bit. If there are two
connections already, there is nothing to do, and if there is only one, we take any
spare node in the corner we are currently splitting and make a connection to
a spare node in the neighboring corner. From now on, these two spare nodes
(one in each corner) are core nodes. Finally, we add two core nodes without an
outside connection to s; they will be responsible for connecting the two corners
into which we split s.
In the second pass, we use all spare nodes to create two additional rings: one
built of nodes with the d-th bit equal to 0 and the other built of nodes with the
d-th bit equal to 1. Since each ring has at most 2(d + 1) core nodes and at least
12(d + 1) nodes in total, each of the newly created rings has at least 10(d + 1)
nodes. In the next pass, we can go along each of the three rings in parallel and
pass the responsibility for a connection to another corner from the old ring to
one of the new ones. This means that the ring with nodes with the d-th bit equal
to 0 takes responsibility for the connections to corners (s ⊕ 2^k) ∘ 0 (analogously
if the last bit is equal to 1).
In the last traversal of all rings, we delete nodes from the old ring and make
them join one of the new rings as spare nodes. Again, we move nodes with the
d-th bit equal to 0 to the newly created corner s ∘ 0 and nodes with the d-th bit
equal to 1 to the newly created corner s ∘ 1.
As we have used only a constant number of traversals of rings of length
O(log n), the whole split operation needs time O(log n).


We note that in the case of search and split operations, the system can be balanced
in a weaker sense than we defined, i.e. we just need that the corner dimension


is O(log n). The merge operation is the only operation which depends on the
property that the difference between the dimensions of the corners is constant. It is
also the most time- and work-consuming function, as a single merge operation
might need to execute other merge operations before it can start.
Lemma 4. If the network is balanced, then a merge operation is performed in
time O(polylog(n)), including the recursive execution of other needed merge
operations, whereas the amortized cost per merge operation is O(log n).
Proof. Since the total number of different dimensions of all corners in the network
is bounded by a constant (log γ + 5), the recursive execution of other possibly
necessary merge operations for neighboring corners of higher dimensions
has only constant depth. As on each level a corner can have d + O(1) = O(log n)
neighbors, the total number of involved corners is log^{O(1)} n. Below we prove that
a single merge operation of corners s ∘ 0 and s ∘ 1 of dimension d + 1 into a corner s
of dimension d has cost O(d) = O(log n), if all neighbors have been reduced to an
equal or lower dimension.
Similarly to the split operation, we can traverse in parallel both rings s ∘ 0 and
s ∘ 1 which we want to merge. We denote their dimension by d. As no neighbor
of s ∘ 0 and s ∘ 1 is of dimension d + 1, only d connections of the 2d core nodes are
actually used in each of them. We first build a ring composed of these used core
nodes of both old rings, whereby we zip them into one ring, interleaving the core
nodes of s ∘ 0 and the core nodes of s ∘ 1. When we remove core nodes from the old
rings, we glue the holes so that we get two rings composed of spare nodes. Next,
we remove the two core nodes which have been responsible for the connection across
bit d − 1 (they connected s ∘ 0 to s ∘ 1) and move them to one of the old rings as
spare nodes. In the next traversal of the old rings, we calculate how many spare
nodes they contain and then, in the last traversal, we evenly distribute the spare
nodes in the newly created corner s.
Notice that there is no need to add any connections to neighboring corners:
all necessary connections already exist. On the other hand, if our new (d − 1)-dimensional
corner is a neighbor of another (d − 1)-dimensional one, we have
a double connection with this corner. We should remove one of the connections,
namely the one which originates from s ∘ 1.
Since the cost of merging s ∘ 0 and s ∘ 1 into s (not including the recursive merging of
neighbors) is Θ(d), and the cost of splitting s into s ∘ 0 and s ∘ 1 has also been Θ(d),
we can amortize the cost of merging into s against the cost of splitting s. This
shows that the amortized cost of a merge operation, together with its symmetric
split operation of a corner of dimension d, is Θ(d) = Θ(log nm) = Θ(log ns),
where nm is the number of nodes in the system at the moment when we perform
the merge operation and ns is the number of nodes at the moment when we have
performed the split operation.


Lemma 5. The join operation can be performed in logarithmic time.
Proof. Each time a node joins the network, it has to search its position inside the
network and to take its position inside the ring. Based on the previous lemmas,


the search operation can be performed in logarithmic time, and the update of
the ring structure involves the creation of two new edges and the removal of
one existing edge. Furthermore, it might happen that the corner has to split,
resulting in additional O(log n) operations.


Finally, in the following lemma, we analyze the cost of a leave operation.
Lemma 6. The leave operation can be performed in polylogarithmic, and amortized
logarithmic, time.
Proof. A typical leave operation only triggers a few connection updates. If the
node has been a spare node in its corner, the leave operation involves two
connection updates; if the node has been a core node, one additional update has to
be performed to re-connect to the neighboring corner.
Besides the connection updates, a node leaving the network might also trigger
a merge operation on the corner. Based on the previous lemmas, each merge
operation costs at most polylogarithmic time (and logarithmic amortized time)
and majorizes the cost of a leave operation.



4 Conclusion and Outlook

We have shown a fully distributed but simple scheme which joins a potentially
very large set of computationally weak nodes into an organized network with the
minimal possible degree of 3, logarithmic dilation and name-driven routing.
Based on the properties and the structure of the SkewCCC network, it is
possible to further investigate aspects of heterogeneity and locality. The former
means allowing the existence of network nodes which can have a higher degree
and potentially also greater computational power. The latter aspect would
incorporate distances of the underlying network.

References
1. Aspnes, J., Shah, G.: Skip graphs. ACM Transactions on Algorithms 3(4) (2007);
also appeared in: Proc. of the 14th SODA, pp. 384–393 (2003)
2. Awerbuch, B., Scheideler, C.: The hyperring: a low-congestion deterministic data
structure for distributed environments. In: Proc. of the 15th ACM-SIAM Symp.
on Discrete Algorithms (SODA), pp. 318–327 (2004)
3. Barrière, L., Fraigniaud, P., Narayanan, L., Opatrny, J.: Dynamic construction
of bluetooth scatternets of fixed degree and low diameter. In: Proc. of the 14th
ACM-SIAM Symp. on Discrete Algorithms (SODA), pp. 781–790 (2003)
4. Bienkowski, M., Brinkmann, A., Korzeniowski, M., Orhan, O.: Cube connected
cycles based bluetooth scatternet formation. In: Proc. of the 4th International
Conference on Networking, pp. 413–420 (2005)
5. Harvey, N.J.A., Jones, M.B., Saroiu, S., Theimer, M., Wolman, A.: SkipNet: a
scalable overlay network with practical locality properties. In: Proc. of the 4th
USENIX Symposium on Internet Technologies and Systems (2003)
6. Jiang, X., Polastre, J., Culler, D.: Perpetual environmentally powered sensor
networks. In: Proc. of the 4th Int. Symp. on Information Processing in Sensor
Networks (IPSN), pp. 463–468 (2005)
7. Karger, D., Lehman, E., Leighton, T., Levine, M., Lewin, D., Panigrahy, R.:
Consistent hashing and random trees: distributed caching protocols for relieving hot
spots on the world wide web. In: Proc. of the 29th ACM Symp. on Theory of
Computing (STOC), pp. 654–663 (1997)
8. Leighton, F.T.: Introduction to Parallel Algorithms and Architectures: Arrays, Trees,
Hypercubes. Morgan Kaufmann Publishers, San Francisco (1992)
9. Malkhi, D., Naor, M., Ratajczak, D.: Viceroy: a scalable and dynamic emulation
of the butterfly. In: Proc. of the 21st ACM Symp. on Principles of Distributed
Computing (PODC), pp. 183–192 (2002)
10. Naor, M., Wieder, U.: Novel architectures for P2P applications: the continuous-discrete
approach. ACM Transactions on Algorithms 3(3) (2007); also appeared
in: Proc. of the 15th SPAA, pp. 50–59 (2003)
11. Rowstron, A.I.T., Druschel, P.: Pastry: scalable, decentralized object location, and
routing for large-scale peer-to-peer systems. In: Guerraoui, R. (ed.) Middleware
2001. LNCS, vol. 2218, pp. 329–350. Springer, Heidelberg (2001)
12. Stoica, I., Morris, R., Liben-Nowell, D., Karger, D.R., Kaashoek, M.F., Dabek, F.,
Balakrishnan, H.: Chord: a scalable peer-to-peer lookup protocol for internet
applications. IEEE/ACM Transactions on Networking 11(1), 17–32 (2003); also in:
Proc. of the ACM SIGCOMM, pp. 149–160 (2001)
13. Zhao, B.Y., Huang, L., Stribling, J., Rhea, S.C., Joseph, A.D., Kubiatowicz, J.:
Tapestry: a resilient global-scale overlay for service deployment. IEEE Journal on
Selected Areas in Communications 22(1), 41–53 (2004)

On the Time-Complexity of
Robust and Amnesic Storage
Dan Dobre, Matthias Majuntke, and Neeraj Suri
TU Darmstadt, Hochschulstr. 10, 64289 Darmstadt, Germany
{dan,majuntke,suri}@cs.tu-darmstadt.de

Abstract. We consider wait-free implementations of a regular read/write
register for unauthenticated data using a collection of 3t + k base
objects, t of which can be subject to Byzantine failures. We focus on
amnesic algorithms that store only a limited number of values in the base
objects. In contrast, non-amnesic algorithms store an unbounded number of
values, which can eventually lead to problems of space exhaustion.
Lower bounds on the time-complexity of read and write operations are
currently met only by non-amnesic algorithms. In this paper, we show for
the first time that amnesic algorithms can also meet these lower bounds.
We do this by giving two amnesic constructions: for k = 1, we show that
the lower bound of two communication rounds is also sufficient for every
read operation to complete, and for k = t + 1 we show that the lower
bound of one round is also sufficient for every operation to complete.
Keywords: distributed storage, Byzantine failures, wait-free algorithms.

1 Introduction

Motivated by recent advances in Storage-Area Network (SAN) technology,
and also by the availability of cheap commodity disks, distributed storage has
become a popular method to provide increased storage space, high availability
and disaster tolerance. We address the problem of implementing a reliable
read/write distributed storage service from unreliable storage units (e.g. disks),
a threshold of which might fail in a malicious manner. Fault-tolerant access to
replicated remote data can easily become a performance bottleneck, especially
for data-centric applications usually requiring frequent data access. Therefore,
minimizing the time-complexity of read and write operations is essential. In this
paper, we show how optimal time-complexity can be achieved using algorithms
that are also space-efficient.
An essential building block of a distributed storage system is the abstraction
of a read/write register, which provides two primitives: a write operation, which
writes a value into the register, and a read operation which returns a value previously written [1]. Much recent work, and this paper as well, focuses on regular


Research funded in part by DFG GRK 1362 (TUD GKmM), EC NoE ReSIST and
Microsoft Research via the European PhD Fellowship.

T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 197–216, 2008.
© Springer-Verlag Berlin Heidelberg 2008



D. Dobre, M. Majuntke, and N. Suri

registers, where read operations never return outdated values. A regular register
is deemed to return the last value written before the read was invoked, or one
written concurrently with the read (see [1] for a formal definition). Regular
registers are attractive because even under concurrency, they never return spurious
values, as is sometimes done by the weaker class of safe registers [1]. Furthermore,
they can be used, for instance, together with a failure detector to implement
consensus [2].
The abstraction of a reliable storage is typically built by replicating the data
over multiple unreliable distributed storage units called base objects. These can
range from simple (low-level) read/write registers to more powerful base objects
like active disks [3] that can perform some more sophisticated operations (e.g. an
atomic read-modify-write). Taken to the extreme, base objects can also be
implemented by full-fledged servers that execute more complex protocols and actively
push data [4]. We consider Byzantine-fault-tolerant register constructions where
a threshold t < n/3 of the base objects can fail by being non-responsive or by
returning arbitrary values, a failure model called NR-arbitrary [5]. Furthermore, we
consider wait-free implementations where concurrent access to the base objects
and client failures must not hamper the liveness of the algorithm. Wait-freedom
is the strongest possible liveness property, stating that each client completes its
operations independent of the progress and activity of other clients [6]. Algorithms
that wait-free implement a regular register from Byzantine components
are called robust [7]. An implementation of a reliable register requires the (client)
processes accessing the register via a high-level operation to invoke multiple
low-level operations on the base objects. In a distributed setting, each invocation of a
low-level operation results in one round of communication from the client to the
base object and back. The number of rounds needed to complete the high-level
operation is used as a measure for the time-complexity of the algorithm.
Robust algorithms are particularly difficult to design when the base objects
store only a limited number of written values. Algorithms that satisfy this
property are called amnesic. With amnesic algorithms, values previously stored are
not permanently kept in storage but are eventually erased by a sequence of values
written after them. Amnesic algorithms eliminate the problem of space
exhaustion raised by (existing) non-amnesic algorithms, which take the approach of
storing the entire version history. Therefore, the amnesic property captures an
important aspect of the space requirements of a distributed storage implementation.
The notion of amnesic storage was introduced in [7] and defined in terms of
write-reachable configurations. A configuration captures the state of the correct
base objects. Starting from an initial configuration, any low-level read/write
operation (i.e., one changing the state of a base object) leads the system to a new
configuration. A configuration C′ is write-reachable from a configuration C when
there is a sequence consisting only of (high-level) write operations that, starting
from C, leads the system to C′. Intuitively, a storage algorithm is amnesic if,
except for a finite number of configurations, all configurations reached by the
algorithm are eventually erased by a sufficient number of values written after them.
Erasing a configuration C′, which itself was obtained from a configuration C,
means to reach a configuration C″ that could have been obtained directly from
C without going through C′. This means that once in C″, the system cannot
tell whether it has ever been in configuration C′. For instance, an algorithm that
stores the entire history of written values in the base objects is not amnesic.
In contrast, an algorithm that stores in the base objects only the last l written
values is amnesic, because after writing the l + 1st value, the algorithm cannot
recall the first written value anymore.
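The last-l-values example can be modeled in a few lines (a toy model of the amnesic property, not one of the algorithms discussed in this paper):

```python
from collections import deque

class AmnesicBaseObject:
    """A base object that keeps only the last l written values: any earlier
    configuration is erased by l subsequent writes, so the object cannot
    recall values written before them."""
    def __init__(self, l):
        self._values = deque(maxlen=l)

    def write(self, v):
        self._values.append(v)

    def stored(self):
        return list(self._values)
```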
1.1 Previous and Related Work

Despite the importance of amnesic and robust distributed storage, most
implementations to date are either not robust or not amnesic. While some relax
wait-freedom and provide weaker termination guarantees instead [2, 8], others
relax consistency and implement only the weaker safe semantics [5, 9, 2, 10].
Generally, when it comes to robustly accessing (unauthenticated) data, most
algorithms store an unlimited number of values in the base objects [11, 10, 12]. Also
in systems where base objects push messages to subscribed clients [4, 13, 14], the
servers store every update until the corresponding message has been received
by every non-faulty subscriber. Therefore, when the system is asynchronous, the
servers might store an unbounded number of updates. A different approach is to
assume a stronger model where data is self-verifying [9, 15, 16], typically based
on digital signatures. For unauthenticated data, the only existing robust and
amnesic storage algorithms [17, 18] do not achieve the same time-complexity as
non-amnesic ones. Time-complexity lower bounds have shown that protocols using
the optimal number of 3t + 1 base objects [4] require at least two rounds to
implement both read/write operations [10, 2]. So far these bounds are met only
by non-amnesic algorithms [12]. In fact, the only robust and amnesic algorithm
with optimal resilience [17] requires an unbounded number of read rounds in the
worst case. For the 4t + 1 case, the trivial lower bound of one round for both
operations is not reached by the only other existing amnesic implementation [18]
that, albeit elegant, requires at least three rounds for reading and two for writing.
1.2 Paper Contributions

Current state-of-the-art protocols leave the following question open: Do amnesic algorithms inherently have a non-optimal time complexity? This paper addresses this question and shows, for the first time, that amnesic algorithms can achieve optimal time complexity in both the 3t + 1 and 4t + 1 cases. Justified by the impossibility of amnesic and robust register constructions when readers do not write [7], one of the key principles shared by our algorithms is having the readers change the base objects' state. The developed algorithms are based on a novel concurrency detection mechanism and a helping procedure, by which a writer detects overlapping reads and helps them to complete. Specifically, the paper makes the following two main contributions:
- A first algorithm, termed DMS, which uses 4t + 1 base objects, described in Section 3. With DMS, every (high-level) read and write operation is fast, i.e., it completes after only one round of communication with the base objects. This is the first robust and amnesic register construction (for unauthenticated data) with optimal time-complexity.

D. Dobre, M. Majuntke, and N. Suri
- A second algorithm, termed DMS3, which uses the optimal number of 3t + 1 base objects, presented in Section 4. With DMS3, every (high-level) read operation completes after only two rounds, while write operations complete after three rounds. This is the first amnesic and robust register construction (for unauthenticated data) with optimal read complexity. Note also that, compared to the optimal write complexity, it needs only one additional communication round.

Table 1 summarizes our contributions and compares DMS and DMS3 with recent distributed storage solutions for unauthenticated data.
Table 1. Distributed storage for unauthenticated data

Protocol                     Resilience   Read (worst case)   Write (worst case)
Abraham et al. [18]          4t + 1       3                   2
DMS                          4t + 1       1                   1
Guerraoui and Vukolic [10]   3t + 1       2                   2
Byzantine Disk Paxos [2]     3t + 1       t + 1               2
Guerraoui et al. [17]        3t + 1       unbounded           3
DMS3                         3t + 1       2                   3

2 System Model and Preliminaries

2.1 System Model

We consider an asynchronous shared memory system consisting of a collection of processes interacting with a finite collection of n base objects. Up to t out of n base objects can suffer NR-arbitrary failures [5] and any number of processes may fail by crashing. Each object implements one or more registers. A register is an object type with value domain Vals, an initial value v0 and two invocations: read, whose response is v ∈ Vals, and write(v), v ∈ Vals, whose response is ack. A read/write register is single-reader single-writer (SRSW) if only one process can read it and only one can write to it; a register is multi-reader single-writer (MRSW) if multiple processes can read it. Sometimes processes need to perform two operations on the same base object, a write (of a register) followed by a read (of a different register). To reduce the number of rounds, we collapse consecutive write/read operations accessing the same base object into a single low-level operation called write&read. The write&read operation can be implemented in a single round, for instance using active disks [3] as base objects.¹
¹ Note that since write&read is not an atomic operation, it can be implemented from simple read/write registers and thus the model is not strengthened.
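The collapsed operation can be sketched as follows (a Python illustration with hypothetical names; a real base object would be a remote disk or server, and the point is only that the write and the read share one round trip):

```python
class BaseObject:
    """Hypothetical base object exposing two registers, X and Y, plus a
    combined write&read costing a single round trip."""
    def __init__(self, x0=None, y0=0):
        self.X = x0
        self.Y = y0

    def write_and_read(self, y_value):
        # one invocation: write register Y, then read register X, and
        # piggyback X's value on the same response
        self.Y = y_value
        return self.X

obj = BaseObject(x0="current-value")
x = obj.write_and_read(7)   # writes Y := 7 and returns X's value
```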

2.2 Preliminaries

In order to distinguish between the target register's interface and that of the base registers, throughout the paper we denote the high-level read (resp. write) operation as read (resp. write). Each of the developed protocols uses an underlying layer that invokes operations on different base objects in separate threads in parallel. We use the notation from [2] and write invoke write(Xi, v) (resp. invoke x[i] ← read(Xi)) to denote that a write(v) operation on register Xi (resp. a read of register Xi whose response will be stored in a local variable x[i]) is invoked in a separate thread by the underlying layer. The notation invoke x[i] ← write&read(Yi, v, Xi) denotes the invocation of an operation write&read on base object i, consisting of a write(v) on register Yi followed by a read of register Xi (whose response will be stored in x[i]).

As base objects may be non-responsive, high-level operations can return while there are still pending invocations to the base objects. The underlying layer keeps track of which invocations are pending to ensure well-formedness, i.e., that a process does not invoke an operation on a base object while invocations of the same process and on the same base object are pending. Instead, the operation is denoted enabled. If an operation is enabled when a pending one responds, the response is discarded and the enabled operation is invoked. See e.g. [2] for a detailed implementation of such layers.
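The pending/enabled bookkeeping can be sketched as follows (a Python illustration with hypothetical names; the actual layer of [2] is more involved):

```python
class InvocationLayer:
    """Per-(process, base-object) dispatch sketch: at most one invocation
    is in flight per object; a newer one is kept 'enabled' and is issued
    when the pending one responds, whose response is then discarded."""
    def __init__(self):
        self.pending = {}   # object id -> operation currently in flight
        self.enabled = {}   # object id -> operation waiting to be issued

    def invoke(self, obj_id, op):
        if obj_id in self.pending:
            self.enabled[obj_id] = op   # defer; keep only the newest
        else:
            self.pending[obj_id] = op   # send immediately (simulated)

    def on_response(self, obj_id, response):
        del self.pending[obj_id]
        if obj_id in self.enabled:
            # discard the stale response; issue the enabled operation
            self.pending[obj_id] = self.enabled.pop(obj_id)
            return None
        return response

layer = InvocationLayer()
layer.invoke(1, "read-A")
layer.invoke(1, "read-B")          # read-A still pending: read-B enabled
r = layer.on_response(1, "stale")  # stale response dropped, read-B issued
```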
We say that an operation op is complete in a run if the run contains a response step for op. For any two operations op1 and op2, when the response step of op1 precedes the invocation step of op2, we say op1 precedes op2. If neither op1 nor op2 precedes the other, then the two operations are said to be concurrent.

In order to better convey the insight behind the protocols, we simplify the presentation in two ways. First, we introduce a shared object termed safe counter and describe both algorithms in terms of this abstraction. Although easy to follow, the resulting implementations require more rounds than the optimal number. Thus, for each of the protocols we explain how, with small changes, these rather didactic versions can be condensed to achieve the announced time-complexity. The full details of the optimizations can be found in our publicly available technical report [19]. Secondly, for presentation simplicity we implement a SRSW register. Conceptually, a MRSW register for m readers can be constructed using m copies of this register, one for each reader. In a distributed storage setting, the writer accesses all m copies in parallel, whereas the reader accesses a single copy. It is worth noting that this approach is heavy and that in practice, cheaper solutions are needed to reduce the communication complexity and the amount of memory needed in the base objects.
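The m-copy construction can be sketched as follows (a hypothetical local stand-in; in the distributed setting each copy is itself replicated over the base objects, and the writer's m updates proceed in parallel):

```python
class MRSWRegister:
    """MRSW register from m SRSW copies, one per reader: the writer
    updates all m copies, while each reader reads only its own copy."""
    def __init__(self, m, v0=None):
        self.copies = [v0] * m          # copy i is reader i's SRSW register

    def write(self, v):
        for i in range(len(self.copies)):
            self.copies[i] = v          # writer accesses all m copies

    def read(self, reader_id):
        return self.copies[reader_id]   # reader accesses a single copy

reg = MRSWRegister(m=3, v0=0)
reg.write(42)
```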
We now introduce the safe counter abstraction used in our algorithms. A safe counter has two wait-free operations, inc and get. inc modifies the counter by incrementing its value (initially 0) and returns the new value. Specifically, the k-th inc operation, denoted inc^k, returns k. get returns the current value of the counter without modifying it. The counter provides the following guarantees:

Validity: If get returns k then get does not precede inc^k.

Safety: If inc^k precedes get and for all l > k get precedes inc^l, then get returns k.

Note that under concurrency, a safe counter might return an outdated value, but never a forged value. In the absence of concurrency, the newest value is returned.
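As a trivial sequential stand-in (hypothetical, and no substitute for the fault-tolerant constructions below), the interface and the two guarantees can be pinned down as:

```python
class LocalSafeCounter:
    """Sequential stand-in for the shared safe counter: with no
    concurrency, get must return the value of the last preceding inc."""
    def __init__(self):
        self.k = 0

    def inc(self):
        self.k += 1
        return self.k       # the k-th inc returns k

    def get(self):
        return self.k       # newest value in the absence of concurrency

c = LocalSafeCounter()
c.inc()
c.inc()
v = c.inc()
```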
We now explain the intuition behind our algorithms. Both algorithms use the safe counter introduced above to arbitrate between writer and reader. During each read (resp. write) operation, the reader (resp. writer) executes inc to advance the counter (resp. get to read the counter). The values returned by the counter's operations are termed views. By incrementing its current view, a read announces its intent to read from the base objects. A subsequent invocation of get by the writer returns the updated view. When the writer detects a concurrent read, indicated by a view change, it freezes the most recent value previously written. Freezing a value v means that v may be overwritten only if the read operation that attempts to read v has completed. We note that the read operation that caused a value v to be frozen does not violate regularity by returning v, because all newer values were written concurrently with the read. However, reads must not return old values previously frozen. This is necessary to ensure regularity, and it is done by freezing a value v together with the view of the read due to which v is frozen. A read whose view is higher than the one associated with v knows that it must pick a newer value. A read operation completes when it finds a value v to return such that (a) v is reported by a correct base object and (b) v is not older than the latest value written before the read is invoked.

3 A Fast Robust and Amnesic Algorithm

We start by describing an initial version of protocol DMS that uses the safe
counter abstraction. It is worth noting that the algorithm requires more rounds
than the optimum, but it conveys the main idea. Next, we explain the changes
applied to DMS to obtain an algorithm with optimal time-complexity.
3.1 Protocol Description

We present a robust and amnesic SRSW register construction using a safe counter and 4t + 1 regular base registers, out of which t can incur NR-arbitrary failures. Figure 1 illustrates a simple construction of the safe counter used. The description of the counter is omitted for the sake of brevity. The shared objects used by DMS are detailed in Figure 2 and the algorithm appears in Figure 3.

The write performs in two phases: (1) a write phase, where it first writes a timestamp-value pair to n − t registers, and (2) a subsequent read phase, where it executes get to read the current view. In case a view change occurs between two successive writes, the value of the first write is frozen. Recall that once frozen, a value is not erased before the next view change. Similarly, the read

Local variables:
  y[1 . . . n] ∈ Integers ∪ {⊥}
  k ∈ Integers, initially 0

Predicates:
  safe(c) ≜ |{i : y[i] ≠ ⊥ ∧ y[i] ≥ c}| ≥ t + 1

get()
  for 1 ≤ i ≤ n do y[i] ← ⊥
  for 1 ≤ i ≤ n do invoke y[i] ← read(Yi)
  wait for n − t responses
  return max{c ∈ Integers : safe(c)}

inc()
  k ← k + 1
  for 1 ≤ i ≤ n do invoke write(Yi, k)
  wait for n − t responses
  return k

Fig. 1. Safe counter from 4t + 1 safe registers Yi ∈ Integers
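The value returned by get in Figure 1, max{c : safe(c)}, is simply the (t+1)-st highest collected response, since at most t faulty registers can inflate the count. A sketch with a hypothetical helper name:

```python
def max_safe(y, t):
    """Return max{c : at least t+1 responses are >= c}, i.e. the
    (t+1)-st highest of the collected register values: at most t of
    the higher ones can come from faulty registers."""
    assert len(y) >= t + 1
    return sorted(y, reverse=True)[t]

# n = 4t+1 = 9 with t = 2: n - t = 7 responses collected; the two
# largest come from faulty registers reporting forged counter values
y = [100, 99, 5, 5, 5, 4, 4]
```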

consists of (1) a write phase, where it first executes inc to increment the current view, and (2) a subsequent read phase, where it reads at least n − t registers. To ensure that read never returns a corrupted value, the returned value must be read from t + 1 registers, a condition captured by the predicate safe. Moreover, to ensure regularity, read must not return old values written before the last write preceding the read. This condition is captured by the predicate highestCand.
We now give a more detailed description of the algorithm. As depicted in Figure 2, each base register consists of three value fields curr, prev and frozen holding timestamp-value pairs, and an integer field view. The writer holds a variable x of the same type and uses x to overwrite the base registers. Each write operation saves the timestamp-value pair previously written in x.prev. Then, it chooses an increasing timestamp, stores the value together with the timestamp in x.curr and overwrites n − t registers with x. Subsequently, the writer executes get. If the view returned by get is higher than the current view (indicating a concurrent read), then x.view is updated and the most recent value previously written is frozen, i.e., the content of x.prev is stored in x.frozen (line 14, Figure 3). Finally, write returns ack and completes. It is important to note that the algorithm is amnesic because each correct base object stores at most three values (curr, prev and frozen).

The read first executes inc to increment the current view, and then it reads at least n − t registers into the array x[1 . . . n], where element i stores the content of register Xi. If necessary, it waits for additional responses until there is a candidate for returning, i.e., a read timestamp-value pair that satisfies both predicates safe
Types:
  TSVals ≜ Integers × Vals, with selectors ts and val

Shared objects:
  – regular registers Xi ∈ TSVals³ × Integers with selectors curr, prev, frozen and view, initially ⟨⟨0, v0⟩, ⟨0, v0⟩, ⟨0, v0⟩, 0⟩
  – safe counter object Y ∈ Integers, initially Y = 0

Fig. 2. Shared objects used by DMS

Predicates (reader):
  readFrom(c, i) ≜ (c = x[i].curr ∧ x[i].view < view) ∨ (c = x[i].frozen ∧ x[i].view = view)
  safe(c) ≜ |{i : c ∈ {x[i].curr, x[i].prev, x[i].frozen}}| ≥ t + 1
  highestCand(c) ≜ |{i : ∀c′ : readFrom(c′, i) ⇒ c′.ts ≤ c.ts}| ≥ 2t + 1

Local variables (reader):
  view ∈ Integers, initially 0
  x[1 . . . n] ∈ TSVals³ × Integers ∪ {⊥}

read()
1   for 1 ≤ i ≤ n do x[i] ← ⊥
2   view ← inc(Y)
3   for 1 ≤ i ≤ n do invoke x[i] ← read(Xi)
4   wait until n − t responded ∧ ∃c ∈ TSVals : safe(c) ∧ highestCand(c)
5   return c.val

Local variables (writer):
  newView, ts ∈ Integers, initially 0
  x ∈ TSVals³ × Integers, initially ⟨⟨0, v0⟩, ⟨0, v0⟩, ⟨0, v0⟩, 0⟩

write(v)
6   ts ← ts + 1
7   x.prev ← x.curr
8   x.curr ← ⟨ts, v⟩
9   for 1 ≤ i ≤ n do invoke write(Xi, x)
10  wait for n − t responses
11  newView ← get(Y)
12  if newView > x.view then
13    x.view ← newView
14    x.frozen ← x.prev
15  return ack

Fig. 3. Robust and amnesic storage algorithm DMS (4t + 1)

and highestCand. A timestamp-value pair c is safe when it appears in some field curr, prev or frozen of t + 1 elements of x, ensuring that c was reported by at least one correct register. Enforcing regularity is more subtle. Simply waiting until the highest timestamped value read becomes safe might violate liveness, because it may be reported by a faulty register. To solve this problem, we introduce the predicate highestCand. A value c is highestCand when 2t + 1 base registers report values that were written not after c, which implies that newer values are missing from t + 1 correct registers. As any complete write skips at most t correct registers, all values newer than c were written not before read is invoked and consequently, they can be discarded from the set of possible return candidates.
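The reader's two predicates can be sketched over the response array x as follows (a Python illustration with a hypothetical dict encoding of register contents; timestamp-value pairs are (ts, val) tuples):

```python
def is_safe(c, x, t):
    """c appears in curr, prev or frozen of at least t+1 responses."""
    hits = sum(1 for r in x if r is not None
               and c in (r["curr"], r["prev"], r["frozen"]))
    return hits >= t + 1

def read_from(r, view):
    """Values response r may contribute under the readFrom predicate."""
    if r["view"] < view:
        return [r["curr"]]
    if r["view"] == view:
        return [r["frozen"]]
    return []   # higher view: contributes nothing in this sketch

def is_highest_cand(c, x, view, t):
    """At least 2t+1 responses report no readFrom-value newer than c."""
    count = sum(1 for r in x if r is not None
                and all(d[0] <= c[0] for d in read_from(r, view)))
    return count >= 2 * t + 1

# n = 5, t = 1, reader's view = 7; the fourth register is faulty and
# reports a forged pair (9, "z"); the fifth has not responded yet
ok = {"curr": (3, "c"), "prev": (2, "b"), "frozen": (1, "a"), "view": 5}
bad = {"curr": (9, "z"), "prev": (9, "z"), "frozen": (9, "z"), "view": 5}
x = [dict(ok), dict(ok), dict(ok), bad, None]
```

Here (3, "c") is both safe and highestCand, while the forged (9, "z") is reported by only one register and so never becomes safe.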
We now explain with the help of Figure 4 why reads are wait-free. We consider the critical situation when multiple writes are concurrent with a read. Specifically, we consider the k-th read (henceforth read^k), whose inc results in k

[Figure 4 shows a timeline: read^k performs inc^k followed by its read phase rd; concurrently, a sequence of writes each perform a write phase wr followed by get. The last write whose get returns a view < k writes c; the subsequent write, whose get returns k, freezes c, which migrates through the fields curr, prev and frozen of the base registers. Legend: rd: read at least n − t registers Xi; wr: write(x) to n − t registers Xi.]

Fig. 4. Correctness argument of the read operation in DMS

(henceforth inc^k), and the last write that still reads a view lower than k, i.e., the corresponding get returns a view lower than k. Note that by the safety property of the counter, inc^k does not precede get and thus c is stored in 2t + 1 correct registers before any of them is read. A key aspect of the algorithm is to ensure that no matter how many writes are subsequently invoked, c never disappears from all fields of those 2t + 1 correct registers, as long as read^k is still in progress. Essentially this holds because the subsequent write re-writes c to all registers and it also freezes c to ensure that future writes do the same. In this process, c migrates from curr to prev and from prev to frozen, where it stays until the next view change. Therefore, c eventually becomes safe. But what if c is not highestCand? In this situation, at least t + 1 correct registers report timestamp-value pairs higher than c. We note that if any of them had stored c in its frozen field, then it would report c. This implies that none of these registers has stored c in its frozen field and thus, also none of these registers has stored a timestamp-value pair higher than c_h, the value written immediately after c, in its curr field. Therefore, c_h is reported by t + 1 correct registers, and hence it is safe. Note that c_h is also highestCand because only faulty registers report values with higher timestamps.
We now explain how the fast algorithm is derived from DMS. The principle underlying the optimization is to condense one round of writes to the base objects and a subsequent round of reads of the base objects into a single round of write&read. For this purpose we disregard the safe counter abstraction and directly weave inc and get (Fig. 1) into read and write (Fig. 3), respectively. As a result, the reader advances the view and reads the base registers in one round. Likewise, the writer stores a value in the base registers and reads the view in a single round. The reader code (Fig. 3) is modified as follows: variable view is incremented locally, and line 3 is replaced with the statement for 1 ≤ i ≤ n do invoke x[i] ← write&read(Yi, view, Xi). Similarly, in the writer code (Fig. 3), line 9 is replaced with the statement for 1 ≤ i ≤ n do invoke y[i] ← write&read(Xi, x, Yi). Additionally, in line 11, instead of executing get, the writer picks the (t + 1)-st highest element of y.
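The collapsed writer round can be sketched as follows (a sequential Python illustration with hypothetical names; the real protocol issues the write&read invocations in parallel and responses may arrive in any order):

```python
from types import SimpleNamespace

def fast_write_round(objs, x, t):
    """One collapsed writer round: write x into each base object's X
    register and read back its view register Y in the same invocation;
    return the (t+1)-st highest of the first n-t collected views."""
    n = len(objs)
    views = []
    for obj in objs:
        obj.X = x                # write(Xi, x) ...
        views.append(obj.Y)      # ... and read Yi in the same round
        if len(views) == n - t:  # n - t responses suffice
            break
    # at most t of the collected views come from faulty objects, so the
    # (t+1)-st highest was reported by some correct object
    return sorted(views, reverse=True)[t]

# n = 9, t = 2: the first object is faulty and reports an inflated view
objs = [SimpleNamespace(X=None, Y=v) for v in [9, 3, 3, 2, 2, 1, 1, 1, 0]]
new_view = fast_write_round(objs, "payload", 2)
```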


We now informally argue that the optimization is correctness-preserving. As in the above example, we consider read^k and the last write that reads a view lower than k. Recall that the write operation stores c in 2t + 1 correct base objects and each of them responds with the current view it has stored. The writer then picks the (t + 1)-st highest view reported. We argue that t + 1 correct base objects have stored c before any of them respond to read^k. This would imply that c is safe. As the write operation reads a view lower than k, out of the 2t + 1 correct base objects accessed by it, at most t report k. Thus, the remaining t + 1 objects are accessed by read^k only after c was written to them. Applying the above arguments, it is not difficult to see that c is never erased from t + 1 correct registers before read^k completes, and thus it eventually becomes safe. Regarding regularity, again, arguments similar to the above can be used. A formal proof of the optimized algorithm can be found in the full paper [19]. The remainder of this section is concerned with the correctness of DMS.
3.2 Protocol Correctness

Lemma 1 (Regularity). Algorithm DMS in Figure 3 implements a regular register.

Proof. We show that the read operation always returns the value of the latest write preceding the read, or a newer written value. Suppose that c.val is the value returned by read^k. We assume by contradiction that there exists a value c_h.val such that c_h.ts > c.ts and write(c_h.val) precedes read^k. As write(c_h.val) is complete, n − 2t correct registers have stored c_h or a higher timestamp-value pair before any of them is read. The fact that c.val is returned implies that c is highestCand. Thus, there are at least 2t + 1 registers Xi and values c′ with timestamp c′.ts ≤ c.ts such that readFrom(c′, i) is true. Note that one of them is a correct register Xi updated with c_h. As values are written with monotonically increasing timestamps, by definition of readFrom, necessarily c′ is read from x[i].frozen and x[i].view = k. However, because the counter is valid, the first time a write operation reads view k is only after the write of c_h.val. Thus, in view k only timestamp-value pairs c_h or higher are frozen, a contradiction.

Lemma 2 (Wait-freedom). Algorithm DMS in Figure 3 implements wait-free read and write operations.

Proof. The write operation is nonblocking because it never waits for more than n − t responses. Showing that reads are also live is more subtle. To derive a contradiction, we assume that read^k blocks at line 4 and show that there exists a candidate for returning. We consider the time after which all correct base objects (at least 3t + 1) have responded. We choose c as the (2t + 1)-st lowest timestamp-value pair readFrom a correct register. Note that c is highestCand by construction because values with timestamps ≤ c.ts are readFrom 2t + 1 correct registers (set L). Also, we note that values with timestamps ≥ c.ts are readFrom t + 1 correct registers (set R). In the following, we distinguish the cases where the write of c.val reads a view equal to k (case 1), or lower than k (case 2). Note that by the validity of the counter, only views ≤ k are returned. Case 1 implies that (a) only timestamp-value pairs lower than c are frozen, and (b) c is the highest timestamp-value pair readFrom the curr field of a correct register. Together, (a) and (b) imply that c is the highest timestamp-value pair readFrom a correct register. Thus, for all registers Xi ∈ R (≥ t + 1), readFrom(c′, i) implies that c′ = c and hence, c is safe. We now consider case 2, where write(c.val) reads a view lower than k. This implies that c or a higher timestamp-value pair is frozen in view k. If t + 1 registers in L were updated with c before they are read, then they would report c either from their curr or their frozen field, and clearly c would be safe. Therefore, c is missing from t + 1 correct registers. Thus, write(c.val)'s write phase (lines 9–10) does not precede read^k's read phase (lines 3–4). By the transitivity of the precedence relation, inc^k (line 2) precedes get (line 11). By the safety of the counter, write(c.val) reads view k, a contradiction.

Theorem 1 (Robustness). The algorithm in Figure 3 wait-free implements a regular register.

Proof. Follows immediately from Lemmas 1 and 2.

4 A Robust and Amnesic Algorithm with Optimal Read-Complexity and Resilience

Similar to the previous section, we describe an initial version of DMS3 that uses a safe counter. The algorithm requires more rounds than the optimum, but it is easier to understand because most of its complexity is hidden in the counter implementation. Then, we overview the changes necessary to obtain the optimal algorithm. The full details of the optimized DMS3, such as the pseudocode and proofs, can be found in our technical report [19]. We proceed in a bottom-up fashion and describe the counter implementation first.
4.1 A Safe Counter with Optimal Resilience

We present a safe counter with operations inc and get using 3t + 1 base objects i ∈ {1 . . . n}, where t base objects can be subject to NR-arbitrary failures. The types and shared objects used by the counter are depicted in Figure 5 and the algorithm appears in Figure 6. Each base object i implements two regular registers: a register Ti holding a timestamp written by get and read by inc, and a second register Yi consisting of two fields pw and w, modified by inc and read by get. While the pw field stores only the counter value, the w field stores the counter value together with a high-resolution timestamp [20]. A high-resolution timestamp is a timestamp array with n entries, one for each base object.

The get operation performs in two phases. The first phase reads from the base objects until n − t registers Yi have responded and all responses are non-conflicting. This condition is captured by the predicate conflict. When two base

Additional Types:
  TSs ≜ array of n integers, Integers[n]
  TSsInt ≜ TSs × Integers with selectors hrts (high-resolution timestamp) and cnt

Shared objects:
  – regular registers Yi ∈ Integers × TSsInt with selectors pw and w, initially Yi = ⟨0, ⟨[0, . . . , 0], 0⟩⟩
  – regular registers Ti ∈ Integers, initially 0

Fig. 5. Shared objects used by the safe counter (3t + 1)

objects i and j are in conflict, then at least one of them is malicious. In this situation, the get operation can wait for more than n − t responses without blocking, effectively filtering out responses from malicious base objects. Next, the get operation uses the responses to build a candidate set from values appearing in the w field of Yi. In the second phase, the get operation chooses an increasing timestamp ts and overwrites n − t registers Ti with ts; at the same time it re-reads the registers Yi until n − t of them have responded and there exists a candidate to return. This condition is captured by the predicates safe and highCand. If no candidate can be returned (because of overlapping inc operations), get returns the initial counter value 0.
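The first-phase conflict filtering can be sketched as follows (a Python illustration with a hypothetical encoding; the real get simply keeps waiting for further responses rather than scanning subsets exhaustively):

```python
from itertools import combinations

def conflict(responses, i, j, ts):
    """Object i's response claims object j already saw get's fresh
    timestamp ts; before get writes ts, two correct objects can never
    be in conflict, so a conflict exposes a faulty object."""
    return responses[i]["hrts"][j] >= ts

def conflict_free_subset(responses, ts, size):
    """Return some conflict-free set of `size` responder ids, or None.
    (Exhaustive scan -- fine for a didactic sketch only.)"""
    ids = [i for i, r in enumerate(responses) if r is not None]
    for subset in combinations(ids, size):
        if all(not conflict(responses, i, j, ts)
               for i in subset for j in subset):
            return set(subset)
    return None

# n = 4, t = 1, get's fresh timestamp ts = 5: object 0 is faulty and
# pretends object 2 already stored high-resolution timestamp entry 5
responses = [
    {"hrts": [0, 0, 5, 0]},
    {"hrts": [0, 0, 0, 0]},
    {"hrts": [0, 0, 0, 0]},
    {"hrts": [0, 0, 0, 0]},
]
chosen = conflict_free_subset(responses, ts=5, size=3)
```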
Similarly, the inc operation performs in two phases, a pre-write and a write phase. The pre-write phase accesses n − t base objects i, overwriting the pw field of Yi with an increasing counter value and reading the individual timestamps stored in Ti into a single high-resolution timestamp. Subsequently, in the write phase, inc stores the counter value together with the high-resolution timestamp in the w field of n − t registers Yi and returns.

We now show that the algorithm in Figure 6 wait-free implements a safe counter. We do this by showing that the following two properties are satisfied:

Validity: If get returns k then get does not precede inc^k.

Safety: If inc^k precedes get and for all l > k get precedes inc^l, then get returns k.
Lemma 3 (Validity). The counter object implemented in Figure 6 is valid.

Proof. If the initial value is returned then we are done. Else, only a value c.cnt = k is returned such that c is safe. This implies that t + 1 base objects report values k or higher, either from their pw or w fields. As not all of them are faulty, there exists a correct object Yi and a value l ≥ k such that l was indeed written to Yi. As inc^k precedes inc^l (or it is the same operation) and get does not precede inc^l, it follows that get does not precede inc^k.

Lemma 4 (Safety). The counter object implemented in Figure 6 is safe.

Proof. Let inc^k be the last operation preceding the invocation of get. Furthermore, for all l > k, get precedes inc^l. By assumption, c.cnt = k was written to

Local variables (inc):
  y ∈ Integers × TSsInt, initially ⟨0, ⟨[0, . . . , 0], 0⟩⟩
  cnt ∈ Integers, initially 0 // counter value
  hrts[1 . . . n] ∈ Integers, initially [0, . . . , 0] // high-resolution timestamp

inc()
1   cnt ← cnt + 1
2   y.pw ← cnt
3   for 1 ≤ i ≤ n do invoke hrts[i] ← write&read(Yi, y, Ti)
4   wait for n − t responses
5   y.w.hrts ← hrts
6   y.w.cnt ← cnt
7   for 1 ≤ i ≤ n do invoke write(Yi, y)
8   wait for n − t responses
9   return cnt

Predicates (get):
  conflict(i, j) ≜ y[i].w.hrts[j] ≥ ts
  safe(c) ≜ |{i : max{PW[i]} ≥ c.cnt ∨ (∃c′ ∈ W[i] : c′.cnt ≥ c.cnt)}| > t
  highCand(c) ≜ c ∈ C ∧ (c.cnt = max{c′.cnt : c′ ∈ C})

Local variables (get):
  PW[1 . . . n] ∈ 2^Integers, W[1 . . . n] ∈ 2^TSsInt, C ∈ 2^TSsInt
  y[1 . . . n] ∈ Integers × TSsInt ∪ {⊥}
  ts ∈ Integers, initially 0

get()
10  for 1 ≤ i ≤ n do y[i] ← ⊥; PW[i] ← W[i] ← ∅
11  C ← ∅
12  ts ← ts + 1
13  for 1 ≤ i ≤ n do invoke y[i] ← read(Yi)
14  repeat check
15  until a set S of n − t objects responded ∧ ∀i, j ∈ S : ¬conflict(i, j)
16  C ← {y[i].w : |{j : y[j].w ≠ y[i].w}| ≤ 2t}
17  for 1 ≤ i ≤ n do invoke y[i] ← write&read(Ti, ts, Yi)
18  repeat
19    check
20    C ← C \ {c ∈ C : |{i : ∀c′ ∈ W[i] : c′ ≠ c}| ≥ 2t + 1}
21  until n − t responded ∧ ((∃c ∈ C : safe(c) ∧ highCand(c)) ∨ C = ∅)
22  if C ≠ ∅ then return c.cnt else return 0

check  // upon a response from some Yi
  if Yi responded then
    PW[i] ← PW[i] ∪ {y[i].pw}
    W[i] ← W[i] ∪ {y[i].w}

Fig. 6. Safe counter algorithm (3t + 1)

the w field of t + 1 correct objects before get is invoked. Therefore, c is added to the candidate set C (line 16) and, because at most 2t objects respond without c, it is never removed. Furthermore, t + 1 correct objects eventually report c in the second get round and c becomes safe. As there are no concurrent inc operations, eventually 2t + 1 correct objects report values k or lower from their w field and hence all c_h where c_h.cnt > k are removed from C. Thus, c eventually becomes both safe and highCand and c.cnt = k is returned.

Lemma 5 (Wait-freedom). The counter object implemented in Figure 6 is wait-free.

Proof. As the inc operation never waits for more than n − t responses, clearly it never blocks. In the following we prove that the get operation does not block (1) at line 15 and (2) at line 21. We assume by contradiction that the get operation blocks. Case (1): as the get operation never updates a correct base object with ts before the second round, correct base objects are never in conflict with each other and thus the get operation does not block at line 15. Case (2): the get operation blocks at line 21. Therefore, there exists c ∈ C and c is not safe. Let c.cnt = k. If some correct base object has reported c in its w field in the first round of get, then t + 1 correct base objects report k or higher in their pw field in the second round and thus c is safe. Therefore, we assume that no correct base object reports c in w in the first round. If no correct object reports c in w in the second round, then 2t + 1 correct base objects respond with c′ ≠ c in their w field and c is removed from C. In the following we assume that some correct object reports c in w in the second round. Let F (|F| > 0) denote the set of faulty objects that report c in their w field in the first round. Let X (|X| ≥ 0) be the set of correct base objects i such that Yi reports to the second get round a value lower than k in both fields pw and w. This implies that the pre-write phase of inc at Yi does not precede the second get round reading Yi (see Fig. 7(a)). By the semantics of write&read, the second get round has updated Ti with ts before reading Yi (line 17). Similarly, the first round of inc has pre-written k to Yi before reading Ti (line 3). By transitivity, the second get round has completed the update of Ti before the first inc round has read Ti, and thus Ti reports ts (Fig. 7(a)). Let X′ = {j ∈ X : c.hrts[j] = ts}, that is, the objects in X that have actually responded to the first inc round. Note that for all i ∈ F and for all j ∈ X′, conflict(i, j) is true. Hence, the 2t + 1 − |F| objects that have responded without c in their w field in the first round of get do not include any object in X′. Overall, after the second get round, 2t + 1 − |F| + |X′| base objects have responded without c in their w field. If |F| ≤ |X′| then c is removed from the set of candidates C (line 20), a contradiction. Therefore, we consider the case |F| > |X′|. Out of the t + 1 correct base objects updated by the pre-write phase of inc, t + 1 − |X′| respond with a timestamp lower than ts. Consequently, for every such base object i, get has completed updating Ti with ts not before inc reads Ti (see Figure 7(b)). By the semantics of write&read and by the transitivity of the precedence relation, register Yi has stored k in its pw field before the second get round reads Yi. Hence, at least t + 1 − |X′| + |F|

[Figure 7 shows two precedence diagrams. (a) inc's first round pre-writes k to Yi and then reads Ti, while get's second round writes ts to Ti and then reads Yi; if the pre-write of k to Yi does not precede get's read of Yi, then get's write of ts to Ti precedes inc's read of Ti, so Ti reports ts. (b) If inc's read of Ti precedes the completion of get's write of ts to Ti, then the pre-write of k to Yi precedes get's second-round read of Yi.]

Fig. 7. Safe counter correctness argument

base objects report values k or higher. As |F| > |X′|, t + 1 base objects report k or a higher value, and thus c is safe, a contradiction.

Theorem 2. The algorithm in Figure 6 wait-free implements a safe counter.

Proof. Follows directly from Lemmas 3, 4 and 5.
4.2 The DMS3 Protocol

Protocol Description
In this section we present a robust and amnesic SRSW register construction from a safe counter and 3t + 1 regular base registers, out of which t can be subject to NR-arbitrary failures. We now describe the write and read operations of the DMS3 algorithm illustrated in Figure 8.
The write operation performs in three phases: (1) a pre-write phase (lines 7–9), where it stores a timestamp-value pair c in the pw field of n − t registers; (2) a read phase (line 10), where it calls get to read the current view; and (3) a write phase (lines 14–16), where it overwrites the w field of n − t registers with c. If the read phase results in a view change, the most recent value previously written is frozen together with the new view. This is done by updating the view field and copying the value stored in w to the frozen field (lines 11–13). The reader performs exactly the same steps as in DMS (see Section 3).
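The three write phases described above can be sketched schematically in Python; `quorum_write` and `get_view` are illustrative stand-ins for the base-register quorum machinery and the safe counter's get, not part of the paper:

```python
def write(state, v, quorum_write, get_view):
    # state: writer-local record holding the timestamp ts and the register
    # image x with fields pw, w, frozen and view (cf. Figure 8).
    state["ts"] += 1
    c = (state["ts"], v)
    state["x"]["pw"] = c          # (1) pre-write phase: publish c in pw,
    quorum_write(state["x"])      #     pushed to n - t base registers
    new_view = get_view()         # (2) read phase: read the current view
    if new_view > state["x"]["view"]:
        # View change: freeze the most recently written value.
        state["x"]["view"] = new_view
        state["x"]["frozen"] = state["x"]["w"]
    state["x"]["w"] = c           # (3) write phase: overwrite w with c
    quorum_write(state["x"])
    return "ack"
```

The sketch makes the freezing rule explicit: a value moves from w to frozen only when the view read in phase (2) is newer than the one the writer last saw.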
We now explain with the help of Figure 9 why reads are wait-free. Similar to the description of DMS in Section 3, we consider read_k and the last write that reads a view lower than k. Note that inc_k does not precede get and thus, c is stored in the pw field of t + 1 correct registers before they are read. Also, the w field of t + 1 correct registers is updated with c. As the subsequent write encounters a view change, c is written to the frozen field of t + 1 correct registers, where it stays until read_k completes. Hence, c is sampled from the pw, w or frozen field of t + 1 correct registers and thus it is safe. Note that c is also highestCand because only faulty registers report newer values.


D. Dobre, M. Majuntke, and N. Suri


Shared objects:
    regular registers Xi ∈ TSVals^3 × Integers, with selectors pw, w, frozen and view,
    initially Xi = ⟨⟨0, v0⟩, ⟨0, v0⟩, ⟨0, v0⟩, 0⟩

Predicates (reader):
    readFrom(c, i) ≜ (c = x[i].w ∧ x[i].view < view) ∨ (c = x[i].frozen ∧ x[i].view = view)
    safe(c) ≜ |{i : c ∈ {x[i].pw, x[i].w, x[i].frozen}}| ≥ t + 1
    highestCand(c) ≜ |{i : ∀c′, readFrom(c′, i) ⇒ c′.ts ≤ c.ts}| ≥ 2t + 1

Local variables (reader):
    view ∈ Integers, initially 0
    x[1 . . . n] ∈ TSVals^3 × Integers ∪ {⊥}

read()
 1: for 1 ≤ i ≤ n do x[i] ← ⊥
 2: view ← inc(Y)
 3: for 1 ≤ i ≤ n do invoke x[i] ← read(Xi)
 4: wait until n − t responded ∧ ∃c ∈ TSVals: safe(c) ∧ highestCand(c)
 5: return c.val

Local variables (writer):
    ts, newView ∈ Integers, initially 0
    x ∈ TSVals^3 × Integers, initially ⟨⟨0, v0⟩, ⟨0, v0⟩, ⟨0, v0⟩, 0⟩

write(v)
 6: ts ← ts + 1
 7: x.pw ← ⟨ts, v⟩
 8: for 1 ≤ i ≤ n do invoke write(Xi, x)
 9: wait for n − t responses
10: newView ← get(Y)
11: if newView > x.view then
12:     x.view ← newView
13:     x.frozen ← x.w
14: x.w ← ⟨ts, v⟩
15: for 1 ≤ i ≤ n do invoke write(Xi, x)
16: wait for n − t responses
17: return ack

Fig. 8. Robust and amnesic storage algorithm DMS3 (3t + 1)
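The reader's predicates from Figure 8 can be rendered as a small Python sketch; the response layout (dicts with pw, w, frozen and view fields holding (ts, val) pairs) is an illustrative assumption, not the paper's:

```python
def read_from(c, resp, view):
    # c was "read from" register i if it is i's current w value (with an
    # older view) or i's frozen value (with the current view).
    return ((c == resp["w"] and resp["view"] < view) or
            (c == resp["frozen"] and resp["view"] == view))

def safe(c, responses, t):
    # c appears in some field of at least t + 1 responding registers.
    hits = sum(1 for r in responses.values()
               if c in (r["pw"], r["w"], r["frozen"]))
    return hits >= t + 1

def highest_cand(c, responses, view, t):
    # At least 2t + 1 registers report no readFrom value newer than c.
    ok = 0
    for r in responses.values():
        cands = [v for v in (r["w"], r["frozen"]) if read_from(v, r, view)]
        if all(v[0] <= c[0] for v in cands):
            ok += 1
    return ok >= 2 * t + 1
```

With t = 1, a value present at t + 1 = 2 correct registers is safe, while highestCand additionally requires 2t + 1 registers to report nothing newer, so a single faulty register advertising a huge timestamp cannot block the reader.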

With DMS3, the high-level operations have a non-optimal time-complexity. We now explain how the optimized version is obtained by collapsing individual low-level operations. More precisely, a write operation and a consecutive read operation are merged into a write&read operation. The safe counter abstraction is disregarded and the counter operations inc and get are woven into read and write respectively. Recall that the counter operations consist of two rounds each. In the write implementation, the pre-write phase and the first round of get are collapsed. Note that the three-phase structure of the write is preserved in that the writer reads the current view before it moves to the
write phase. Similarly, in the read implementation, the second inc round and the read phase are merged together. Overall, this results in a time-complexity of three rounds for the write and two rounds for the read.

Fig. 9. Correctness argument of the read operation in DMS3 (rd: read at least n − t registers Xi; wr: write(x) to n − t registers Xi)
We now informally argue that the optimization is correctness preserving. As above, we consider read_k and the last write that reads a view lower than k. We argue that t + 1 correct base registers have stored c in their pw field before any of them is read. This would imply that c is safe. The fact that the write of c.val reads a view lower than k implies that k is missing from at least 2t + 1 base objects. We know from the safe counter algorithm in the previous section that if only 2t base objects respond without k, then k is never removed from the set of candidates. As the safe counter implementation is wait-free, k is eventually read, contradicting the initial assumption. Therefore, 2t + 1 base objects respond without k, and thus there are t + 1 correct base objects among them that are accessed by (the read phase of) read_k only after c was pre-written to them. By applying similar arguments as above, it is not difficult to see that c does not disappear from any of the t + 1 correct base objects before read_k completes. This would imply that c eventually becomes safe. For a formal treatment we refer the interested reader to our full paper [19]. The remainder of this section is concerned with the correctness of DMS3.
Protocol Correctness
Lemma 6 (Regularity). Algorithm DMS3 in Figure 8 implements a regular
register.
Proof. Identical to the proof of Lemma 1.

Lemma 7 (Wait-freedom). Algorithm DMS3 in Figure 8 implements wait-free read and write operations.
Proof. The write operation is nonblocking because it never waits for more than n − t responses. To derive a contradiction we assume that read_k blocks at line 4


and show that there exists a candidate for returning. We consider the time after which all correct base objects (at least 2t + 1) have responded. We choose c as the highest timestamp-value pair readFrom a correct register. Note that c is highestCand by construction because values with timestamps at most c.ts are readFrom at least 2t + 1 correct registers. In the following, we distinguish the cases where the view read by the write of c.val is equal to k (case 1) or is lower than k (case 2). Note that by the validity of the counter, only views at most k are returned. Case 1: Let Xi be a correct register such that readFrom(c, i). Since by assumption x[i].view = k, c is readFrom the frozen field of Xi. However, in view k only timestamp-value pairs lower than c are frozen, a contradiction. Now we consider case 2, where the write(c.val) reads a view lower than k. This implies that inc_k does not precede get. As the pre-write phase (lines 8–9) precedes get (line 10), and inc_k (line 2) precedes the read phase (lines 3–4), by transitivity, the pre-write phase also precedes the read phase (see Figure 9). Thus, t + 1 correct registers have stored c in their pw field before they are read. What is left to show is that no subsequent write erases c from all fields of those t + 1 correct registers. Note that in view k, only timestamp-value pairs c or higher are frozen. Thus, if c was stored in the w field of t + 1 correct registers before they are read, then c would be safe. Hence, c is missing from the w field of t + 1 correct registers. Consequently, the write phase (lines 15–16) of write(c.val) does not precede the read phase (lines 3–4) of read_k. By transitivity, the subsequent write reads view k and freezes c. Note that c is erased from pw only after c was previously stored in w (line 14). Furthermore, c is erased from w only after it was stored in frozen (line 13). As k is the last view, by the validity of the safe counter, c is never erased from frozen.

Theorem 3 (Robustness). Algorithm DMS3 in Figure 8 implements a robust register.
Proof. Immediately follows from Lemmas 6 and 7.

Concluding Remarks

We have presented amnesic algorithms that robustly implement a shared register from a collection of n base objects, of which up to t < n/3 can be subject to NR-arbitrary failures. For n ≥ 3t + 1 we have shown that two rounds of communication with the base objects are sufficient for every read operation to complete. This is the first robust and amnesic register construction that matches the two-round lower bound proved in [10]. For the n ≥ 4t + 1 case, we have presented the first robust and amnesic register construction that matches the (trivial) one-round lower bound for every operation. Note that our construction is tight because with less than 4t + 1 base objects, both the read and the write operations require at least two communication rounds [2,10].
The main result of this paper, that robust access to amnesic storage is possible in optimal time, is somewhat surprising given the large body of literature on non-amnesic [11,10,12,4,14,13] and non-robust [8,9,18,5] algorithms. Moreover, our


result is counter-intuitive because so far, only non-amnesic algorithms match the time-complexity lower bounds. As a corollary, our result suggests that the intuition of amnesic algorithms being inherently less efficient than non-amnesic ones is largely unjustified.
Some of the prior amnesic (but not robust) register implementations assume that the readers cannot modify the base objects (see e.g. [2]). This assumption in fact results in implementations that possess several properties that could be valuable in practice, for instance the ability to tolerate any number of malicious readers while using only O(1) memory at the base objects. We are not aware of any robust implementation supporting that as well, and in fact, our algorithms are not an exception. We leave as an open problem the question whether robust and amnesic register implementations exist that support any number of readers while using only O(1) memory at the base objects.

Acknowledgments
We thank Gregory Chockler, Felix Freiling, Marco Serafini and Jay Wylie for many useful comments on an earlier version of this paper.

References
1. Lamport, L.: On interprocess communication. Part II: Algorithms. Distributed Computing 1(2), 86–101 (1986)
2. Abraham, I., Chockler, G., Keidar, I., Malkhi, D.: Byzantine disk paxos: optimal resilience with byzantine shared memory. Distributed Computing 18(5), 387–408 (2006)
3. Chockler, G., Malkhi, D.: Active disk paxos with infinitely many processes. Distributed Computing 18(1), 73–84 (2005)
4. Martin, J.P., Alvisi, L., Dahlin, M.: Minimal Byzantine Storage. In: Malkhi, D. (ed.) DISC 2002. LNCS, vol. 2508, pp. 311–325. Springer, Heidelberg (2002)
5. Jayanti, P., Chandra, T.D., Toueg, S.: Fault-tolerant wait-free shared objects. J. ACM 45(3), 451–500 (1998)
6. Herlihy, M.: Wait-free synchronization. ACM Trans. Program. Lang. Syst. 13(1), 124–149 (1991)
7. Chockler, G., Guerraoui, R., Keidar, I.: Amnesic Distributed Storage. In: Pelc, A. (ed.) DISC 2007. LNCS, vol. 4731, pp. 139–151. Springer, Heidelberg (2007)
8. Hendricks, J., Ganger, G.R., Reiter, M.K.: Low-overhead byzantine fault-tolerant storage. In: SOSP 2007: Proceedings of the twenty-first ACM SIGOPS Symposium on Operating Systems Principles, pp. 73–86. ACM, New York (2007)
9. Malkhi, D., Reiter, M.: Byzantine quorum systems. Distrib. Comput. 11(4), 203–213 (1998)
10. Guerraoui, R., Vukolic, M.: How fast can a very robust read be? In: PODC 2006: Proceedings of the twenty-fifth annual ACM Symposium on Principles of Distributed Computing, pp. 248–257. ACM, New York (2006)
11. Goodson, G.R., Wylie, J.J., Ganger, G.R., Reiter, M.K.: Efficient byzantine-tolerant erasure-coded storage. In: DSN 2004: Proceedings of the 2004 International Conference on Dependable Systems and Networks (DSN 2004), Washington, DC, USA, pp. 135–144. IEEE Computer Society, Los Alamitos (2004)
12. Guerraoui, R., Vukolic, M.: Refined quorum systems. In: PODC 2007: Proceedings of the twenty-sixth annual ACM Symposium on Principles of Distributed Computing, pp. 119–128 (2007)
13. Bazzi, R.A., Ding, Y.: Non-skipping timestamps for byzantine data storage systems. In: Guerraoui, R. (ed.) DISC 2004. LNCS, vol. 3274, pp. 405–419. Springer, Heidelberg (2004)
14. Aiyer, A., Alvisi, L., Bazzi, R.A.: Bounded wait-free implementation of optimally resilient byzantine storage without (unproven) cryptographic assumptions. In: Pelc, A. (ed.) DISC 2007. LNCS, vol. 4731, pp. 7–19. Springer, Heidelberg (2007)
15. Cachin, C., Tessaro, S.: Optimal resilience for erasure-coded byzantine distributed storage. In: DSN 2006: Proceedings of the International Conference on Dependable Systems and Networks (DSN 2006), Washington, DC, USA, pp. 115–124. IEEE Computer Society, Los Alamitos (2006)
16. Liskov, B., Rodrigues, R.: Tolerating byzantine faulty clients in a quorum system. In: ICDCS 2006: Proceedings of the 26th IEEE International Conference on Distributed Computing Systems, Washington, DC, USA, pp. 34–43. IEEE Computer Society, Los Alamitos (2006)
17. Guerraoui, R., Levy, R.R., Vukolic, M.: Lucky read/write access to robust atomic storage. In: DSN 2006: Proceedings of the International Conference on Dependable Systems and Networks (DSN 2006), pp. 125–136 (2006)
18. Abraham, I., Chockler, G., Keidar, I., Malkhi, D.: Wait-free regular storage from byzantine components. Inf. Process. Lett. 101(2) (2007)
19. Dobre, D., Majuntke, M., Suri, N.: On the time-complexity of robust and amnesic storage. Technical Report TR-TUD-DEEDS-04-01-2008, Technische Universität Darmstadt (2008), http://www.deeds.informatik.tu-darmstadt.de/dan/amnesicTR.pdf
20. Chockler, G., Guerraoui, R., Keidar, I., Vukolic, M.: Reliable distributed storage. IEEE Computer (to appear, 2008)

Graph Augmentation via Metric Embedding

Emmanuelle Lebhar¹ and Nicolas Schabanel²

¹ CNRS and CMM-Universidad de Chile, Chile
[email protected]
² CNRS and Université Paris Diderot, France
[email protected]

Abstract. Kleinberg [17] proposed in 2000 the first random graph model reproducing small world navigability, i.e. the ability to greedily discover polylogarithmic routes between any pair of nodes in a graph, with only a partial knowledge of distances. Following this seminal work, a major challenge was to extend this model to larger classes of graphs than regular meshes, introducing the concept of augmented graph navigability. In this paper, we propose an original method of augmentation, based on metric embeddings. Precisely, we prove that, for any ε > 0, any graph G such that its shortest paths metric admits an embedding of distortion γ into R^d can be augmented by one link per node such that greedy routing computes paths of expected length O((1/ε)(5γ)^d d log^{2+ε} n) between any pair of nodes with the only knowledge of G. Our method isolates all the structural constraints in the existence of a good quality embedding and therefore makes it possible to enlarge the characterization of augmentable graphs.
Keywords: Small world, metric embedding, greedy routing.

Introduction

The small world effect, or "six degrees of separation", is the well known property observed in social networks [9,21] that any pair of nodes in these networks is connected by a very short chain of acquaintances (typically polylogarithmic in the size of the network) that, moreover, can be discovered locally. In the literature, a small world graph can either refer to this property or to a graph with polylogarithmic diameter and high clustering (see e.g. [23]). In this paper, a small world graph refers to a graph of polylogarithmic diameter whose short paths can be discovered locally, i.e. which is navigable. This surprising property has gained a lot of interest recently since Kleinberg [17] introduced the first analytical graph model for navigability, and because of its potential in the design of large decentralized networks with efficient routing schemes. The model proposed by Kleinberg in 2000 consists of a d-dimensional mesh augmented by one extra random link in each node, distributed according to the d-harmonic distribution. The local search is then modeled by greedy routing, which is the simple algorithm that, at each node, forwards the message to the neighbor that is the
T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 217–225, 2008.
© Springer-Verlag Berlin Heidelberg 2008



E. Lebhar and N. Schabanel

closest to the destination in the mesh. Kleinberg demonstrates that greedy routing computes paths of expected length Θ(log^2 n) between any pair of nodes in his model, with the only knowledge of the distances in the mesh: the augmented mesh is navigable.
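As a toy illustration of this model (ours, not the paper's; all names are invented), the 1-dimensional case — a line augmented with one shortcut per node drawn from the 1-harmonic distribution — and greedy routing over it can be simulated as follows:

```python
import random

def harmonic_link(u, n):
    # Draw a shortcut target for node u on the line {0, ..., n-1} with
    # Pr[v] proportional to 1/|u - v| (the 1-harmonic distribution).
    others = [v for v in range(n) if v != u]
    weights = [1.0 / abs(u - v) for v in others]
    return random.choices(others, weights=weights)[0]

def greedy_route(s, t, links, n):
    # Forward to whichever neighbour (left, right, or shortcut) is
    # closest to the target t in line distance; return the hop count.
    hops, u = 0, s
    while u != t:
        u = min({max(u - 1, 0), min(u + 1, n - 1), links[u]},
                key=lambda v: abs(v - t))
        hops += 1
    return hops
```

Since one of u − 1 or u + 1 is always strictly closer to t, the line distance decreases at every step and routing terminates; the shortcuts are what bring the expected hop count down to polylogarithmic.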
Following this seminal work, a major challenge was to extend this model to larger classes of graphs than regular meshes, i.e. to determine which n-node graphs G admit an augmentation with one link in each node such that greedy routing with the only knowledge of G computes polylog(n)-length paths between any pair in the augmented graph. Kleinberg [18] and Duchon et al. [7] showed that this is possible for all graphs of bounded growth, i.e. where, for any node u and radius r ≥ 1, the 2r-neighborhood of u is of size at most a constant times its r-neighborhood. Fraigniaud [10] demonstrates that any bounded treewidth graph can also be augmented by one link per node to become navigable, and Abraham and Gavoille [4] showed that, more generally, this is possible for all graphs excluding a fixed minor. The definition of the problem can directly be extended to metric spaces by asking which n-point metric spaces¹ M = (V, δ) can be augmented by O(log n) links such that, in the resulting graph, greedy routing computes polylog(n) routes between any pair with the only knowledge of M. In this framework, Slivkins [22] showed that any doubling metric can be augmented to become navigable. A doubling metric is a metric where, for all r ≥ 1, any ball of radius 2r can be covered by at most C balls of radius r, for some constant C.
However, it was recently proven by Fraigniaud et al. [13] that such an augmentation is not possible for all graphs: there exists an infinite family of n-node graphs on which any distribution of augmented links will leave greedy paths of expected length Ω(n^{1/√log n}) for some pairs. The best upper bound valid for arbitrary graphs up to our knowledge is an expected length Õ(n^{1/3}) between any pair, due to Fraigniaud et al. [12], with some specific link augmentation. The remaining gap between these two bounds is still open today and leaves a question mark on the limiting characteristics of a metric for the navigability augmentation.
Orthogonally to the navigability question, studies on embeddings of metric spaces have seen huge developments this last decade (cf. Chapter 15 of [20] for a review), due in particular to their applications in approximation algorithms [15] and, more recently, in handling large decentralized networks efficiently [6]. An embedding φ of a metric M = (V, δ) into a metric M′ = (V′, δ′) is an injective function on V into V′. Its quality is characterized by the distortion it induces on the distances. For the sake of simplicity, we consider only non-contracting embeddings; we then say that φ has distortion γ if and only if, for any u, v ∈ V, δ(u, v) ≤ δ′(φ(u), φ(v)) ≤ γ δ(u, v). Crucial networking problems like routing, resource location or nearest neighbor search are easy to handle on a low dimensional Euclidean space. However, large real networks like the Internet do not present

¹ A metric space M = (V, δ) is a set of points V associated with a distance function δ. Therefore, any weighted graph naturally defines a metric M on its set of nodes V with the distance function being the length of a shortest path between two nodes.
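The footnote's observation — that any weighted graph induces a metric through shortest-path lengths — can be made concrete with a small Floyd–Warshall sketch (illustrative code, not from the paper):

```python
def shortest_path_metric(nodes, edges):
    # edges: {(u, v): weight}, treated as undirected; returns the
    # all-pairs shortest-path distances d[(u, v)] via Floyd-Warshall.
    INF = float("inf")
    d = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), w in edges.items():
        d[u, v] = d[v, u] = min(d[u, v], w)
    for k in nodes:                      # relax through intermediate k
        for u in nodes:
            for v in nodes:
                if d[u, k] + d[k, v] < d[u, v]:
                    d[u, v] = d[u, k] + d[k, v]
    return d
```

The resulting d satisfies symmetry and the triangle inequality by construction, which is exactly what makes (V, δ) a metric space.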


such a simple structure. The increasing interest for metric embeddings comes therefore partially from the fact that, if the embedding is of good quality, it can provide a way to develop efficient algorithms on complex, or even arbitrary, metric spaces, by solving them on a simple metric space that approximates them well (cf. e.g. [14,15]). In addition, many good quality embeddings are computed with randomized local algorithms that only require a distance oracle, making them particularly appropriate to the large decentralized networks setting (cf. e.g. [5] for a seminal example).
In this paper, we propose a new way to tackle the augmented graph navigability problem through the metric embedding setting.
1.1 Our Contribution

We introduce a generalized augmentation process. The main feature of our augmentation process is to use an embedding of the input graph shortest paths metric into a metric that is easy to augment into a navigable graph. This distinction between the augmentation process in itself (handled on the easy metric) and the structural characteristics of the input (captured by the embedding quality) provides a new way to characterize the classes of navigable graphs. We consider embeddings into (R^d, ℓ_p), the d-dimensional space R^d associated with the ℓ_p norm, for d, p ≥ 1: for any u = (u_1, . . . , u_d) and v = (v_1, . . . , v_d), we have ||u − v||_p = (Σ_{i=1}^{d} |u_i − v_i|^p)^{1/p}. We prove the following theorem:

Theorem 1. Let p, n, γ, d ≥ 1. For any ε > 0, any n-node graph G whose shortest path metric M = (V, δ) admits an embedding of distortion γ into (R^d, ℓ_p) can be augmented with one link per node such that greedy routing in the resulting graph computes paths of expected length O((1/ε)(5γ)^d d log^{2+ε} n) between any pair with the only knowledge of M.
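For concreteness, the ℓ_p distance defined above can be computed directly (a throwaway helper of ours, not part of the paper):

```python
def lp_dist(u, v, p):
    # ||u - v||_p = (sum_i |u_i - v_i|^p)^(1/p)
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)
```

For p = 2 this is the usual Euclidean distance; p = 1 gives the Manhattan distance, and larger p approaches the max-coordinate (ℓ_∞) distance.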
For instance, using the recent embedding result of Abraham et al. [3], we get as an immediate corollary that, for any 0 < ε ≤ 1 and any n ≥ 1, any n-node graph G of doubling dimension D (cf. [14]) can be augmented so that the expected length of all greedy paths is O((log^{1+ε} n)^{O(D/ε)} log^2 n) = O((log n)^{O(D)}) with the only knowledge of G. This provides a more direct proof of the fact that bounded doubling dimension graphs are navigable (proved in [22]).
Intuitively, if the metric considered is not too far from a metric M′ which can be easily augmented, we use a low distortion embedding of the metric into M′, draw the random links in M′, and then map these links back appropriately to the original metric so that they will still be useful shortcuts for greedy routing. Moreover, the design of the augmented links in our process can be done in a fully decentralized way and only requires knowing the embedding. In the case where the chosen embedding is itself local (like e.g. the seminal Bourgain embedding [5], if a distance oracle is available), we thus provide an algorithm which locally adds one address to each routing table in a network and guarantees decentralized routing with a small number of hops between any pair for a large class of input graphs.


A Universal Augmentation Process via Metric Embedding

In this section, we present our augmentation process that adds one directed link per node. This process is universal in the sense that it only requires as input the base graph (arbitrary) and an embedding function of this graph into R^d_p (that is, R^d equipped with the ℓ_p norm), for some p, d ≥ 1. Such a function exists for any graph and therefore the algorithm is not restricted to a specific graph class. However, as we will see in the next section, the analysis of greedy routing might give a poor routing time result if the embedding is not of good quality. There exist lower bound results on arbitrary metric embedding quality. A typical example is that embedding some n-node constant degree expander graph into R^d_p requires distortion Ω(log n) [20] and dimension d = Ω(log n) [2]. Nevertheless, expander graphs are always navigable without any augmentation given their polylogarithmic diameter.
The augmentation algorithm is based on the well known augmentation of d-dimensional meshes of the Kleinberg model, where the shortcuts are distributed according to the d-harmonic distribution. The idea is to map these links back to the original set of nodes. Given that not all the extremities of the shortcuts added in ℓ^d_p are images of the original nodes, this requires some careful rewiring.

Augmentation Process AP
Input: An n-node graph G = (V, E), an embedding φ of its shortest path metric M = (V, δ) into ℓ^d_p, and a constant ε > 0;
Output: G augmented with one directed link in each node.
Begin
    For each u ∈ V do
        Pick a point ψ_u ∈ R^d_p with probability density
            (1/Z) · 1 / (||φ(u) − ψ||_p^d · ln^{1+ε}(||φ(u) − ψ||_p + e))
        over all ψ ∈ R^d_p.
        Add a directed link from u to v ∈ V, where v is the node such that φ(v) is the closest point to ψ_u in φ(V).
End.

Note: e stands here for exp(1) and is only used to allow the distance to be zero in the formula. Z is the normalizing factor of the probability density described: Z = ∫_{t>0} S(t) / (t^d ln^{1+ε}(t + e)) dt, where S(t) is the surface of a hypersphere of radius t in R^d. Figure 1 illustrates the process AP.
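A runnable approximation of AP for p = 2 can be sketched as follows. Since the radial density ∝ 1/(r ln^{1+ε}(r + e)) is awkward to invert exactly, this sketch samples w = ln r from a Pareto(ε) tail, which matches that density up to constants at large radii; the clamp and all names are our illustrative choices, not the paper's:

```python
import math
import random

def sample_contact(phi_u, eps):
    # AP's density, integrated over spheres of radius r, is proportional
    # to 1/(r * ln^(1+eps)(r + e)); in w = ln(r) coordinates this behaves
    # like w^(-(1+eps)), so we draw w from a Pareto(eps) tail instead.
    u = 1.0 - random.random()                   # uniform in (0, 1]
    w = u ** (-1.0 / eps)                       # Pareto(eps) on [1, inf)
    r = math.exp(min(w, 50.0))                  # clamp to avoid overflow
    g = [random.gauss(0.0, 1.0) for _ in phi_u]
    norm = math.sqrt(sum(x * x for x in g)) or 1.0
    return [x + r * gi / norm for x, gi in zip(phi_u, g)]  # uniform direction

def augment(phi, eps=0.5):
    # For each node u, draw a contact point and link u to the node whose
    # embedded image phi(v) is closest to it, as in AP.
    links = {}
    for u, pu in phi.items():
        psi = sample_contact(pu, eps)
        links[u] = min(phi, key=lambda v: math.dist(phi[v], psi))
    return links
```

The snap-to-nearest-image step in `augment` is the "careful rewiring" discussed above: the sampled point ψ_u is generally not the image of any node, so the link is redirected to the node whose embedding is closest.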

Navigability of Graphs Augmented with AP

In this section, we demonstrate our main result. The intrinsic dimension [3], or doubling dimension [1], of a graph G characterizes its geometric properties: it is the minimum constant α such that any ball in G can be covered by at most 2^α balls of half the radius. We show that, if a graph has low intrinsic dimension, the AP process provides augmented shortcuts that enable navigability. We have the following theorem:

Fig. 1. Illustration of the augmented link from vertex 1 to 3 with process AP
Theorem 2. Let p, n, γ, d ≥ 1, ε > 0, G an n-node graph and φ an embedding of distortion γ of the shortest path metric M of G into (R^d, ℓ_p). Then, greedy routing in AP(G, φ, ε) computes paths of expected length at most O((1/ε)(5γ)^d d log^{2+ε} n) between any pair, with the only information of the distances in M.
Proof. In order to analyze greedy routing performances in AP(G, φ, ε), we begin by analyzing some technical properties of the probability distribution of the chosen points in (R^d, ℓ_p). For any u ∈ G, we say that ψ_u, as defined in algorithm AP, is the contact point of u.
Let Z be the normalizing factor of the contact points distribution. We have:

    Z = ∫_{t>0} S(t) / (t^d ln^{1+ε}(t + e)) dt,

where S(t) stands for the surface of a sphere of radius t in R^d_p. This surface is at most c_p (2^d/(d − 1)!) t^{d−1}, where c_p > 1 is a constant depending on p. It follows:

    Z ≤ c_p (2^d/(d − 1)!) ∫_{t>1} dt / (t ln^{1+ε}(t + e)) ≤ c_p (2^d/(d − 1)!) · (1 + e)/ε.

Let s and t ∈ G be the source and the target of greedy routing in AP(G, φ, ε). Let M = (V, δ) be the shortest paths metric of G. Let v be the current node of greedy routing, and let 1 ≤ i ≤ ⌈log δ(s, t)⌉ be such that δ(v, t) ∈ [2^{i−1}, 2^i).


Since φ has distortion γ, we have:

    δ(v, t) ≤ ||φ(v) − φ(t)||_p ≤ γ δ(v, t).

Let X = ||φ(v) − φ(t)||_p, and let E be the event ||ψ_v − φ(t)||_p ≤ X/(4γ). Let L(v) be the contact of v (i.e., the node whose image φ(L(v)) is the closest point to ψ_v in φ(V)).

Claim. If E occurs, then δ(L(v), t) ≤ δ(v, t)/2.

Indeed, assume that E occurs. From the triangle inequality, we have:

    ||φ(L(v)) − φ(t)||_p ≤ ||φ(L(v)) − ψ_v||_p + ||ψ_v − φ(t)||_p.

And since φ(L(v)) is closer to ψ_v than φ(t) by definition of AP, we get:

    ||φ(L(v)) − φ(t)||_p ≤ 2 ||ψ_v − φ(t)||_p ≤ X/(2γ).

Finally:

    δ(L(v), t) ≤ ||φ(L(v)) − φ(t)||_p ≤ X/(2γ) ≤ δ(v, t)/2.

Claim. The probability that E occurs is greater than

    Cε / (d (5γ)^d ln^{1+ε}(2γ δ(v, t) + e)),

for some constant C > 0.


Proof of the claim. Let P be the probability that E occurs. P is the probability
that v belongs to the ball of radius X/(4) centered at (t) in Rdp . Let B be
this ball. We have, by denition of AP:

1
1
P =
Z B (||(v) ||p )d ln1+ (||(v) ||p + e)

1
1

,


Z B (1 + 1/(4))X d ln1+ ((1 + 1/(4))X + e)
since (1 + 1/(4))X is the largest distance from (v) to any point in B.
d
On the other hand, the volume of B is at least cp 2d! (X/(4))d , for some
constant cp > 0. We get:
P

1 cp 2d (X/4)d
1

1 d d
1+
1
Z d!(1 + 4
) X
ln ((1 + 4
)X + e)
cp

1
1

1
cp (1 + e) d5d d ln1+ ((1 + 4
)X + e)

.
1+
d
d
d5 ln (2(v, t) + e)

Claim. If the current node v of greedy routing satises (v, t) [2i1 , 2i ) for
some 1 i log (s, t) , then after O( 1 d (i 1)1+ ) steps on expectation,
greedy routing is at distance less than 2i1 from t.


Proof of the claim. Combining the claims, we get that, with probability Ω([d (5γ)^d ln^{1+ε}(γ δ(v, t))]^{−1}) (where the Ω notation hides a linear factor in ε), the contact L(v) of v is at distance at most 2^{i−1} to t. If this does not occur, greedy routing moves to a neighbor v′ at distance strictly less than δ(v, t) to t and strictly greater than 2^{i−1}, and we can repeat the same argument. Therefore, after O((1/ε)(5γ)^d d ln^{1+ε}(γ δ(v, t))) = O((1/ε)(5γ)^d d (i + ln γ)^{1+ε}) steps, greedy routing is at distance less than 2^{i−1} to t with constant probability.
Finally, from this last claim, the expected number of steps of greedy routing from s to t is at most:

    Σ_{i=1}^{⌈log δ(s,t)⌉} O((1/ε)(5γ)^d d (i + ln γ)^{1+ε}) = O((1/ε)(5γ)^d d log^{2+ε} n).

From this theorem, results giving new insights on the navigability problem can be derived from the very recent advances in metric embedding theory. In particular, graphs of bounded doubling dimension, which subsume graphs of bounded growth, have received increasing interest recently. They are of particular interest for scalable and distributed network applications since it is possible to decompose them greedily into clusters of exponentially decreasing diameter.
Corollary 1. For any ε > 0, any n-node graph G of doubling dimension D can be augmented with one link per node so that greedy routing computes paths of expected length O((1/ε) log^{(2+ε)+O(D)} n) between any pair of vertices with the only knowledge of G.

Indeed, from Theorem 1.1 of [3], it is known that, for every n-point metric space M of doubling dimension D and every ε′ ∈ (0, 1], there exists an embedding of M into R^d_p with distortion O(log^{1+ε′} n) and dimension O(D/ε′). Taking ε′ = 1 gives the corollary. This result was previously proved in [22] by another method of augmentation, using rings of neighbors. The originality of our method is that it is not specific to a given graph or metric class, this dependency lying only in the embedding function. Therefore, it enables more direct proofs that a graph is augmentable into a navigable small world than previous ones.
This new kind of augmentation process via embedding is also promising for deriving lower bounds on metric embedding quality. Indeed, since not all graphs can be augmented to become navigable, necessarily, if there exists a positive result on small world augmentation via some embedding, then this embedding cannot keep the same quality for all graphs. For the particular case of Theorem 2, we derive that any injective function that embeds any arbitrary metric into R^d_p with distortion γ has to satisfy γ^d = Ω(n^{1/√log n}). This lower bound is however subsumed by the bound provided by the Johnson–Lindenstrauss flattening lemma [16]: γ^d = O((1 + ε)^{log n/ε²}) = O(n^{(1+ε)/ε²}) for any 0 < ε < 1, which is essentially tight (cf. e.g. [20]).
It is worth noting that Fraigniaud and Gavoille [11] recently tackled the question of navigating in a graph that has been augmented using the distances in a spanner² of this graph. They remarked that greedy routing usually requires knowing the spanner map of distances in order to achieve efficient routing. On the contrary, our augmentation process does not require greedy routing to be aware of distances in R^d. This is due to the geography of the spaces considered: an embedding of a graph in R^d preserves geographical neighboring regions.

Discussion

The result presented in this paper gives new perspectives in the understanding of small world augmentations of networks. Indeed, the augmentation process AP isolates all the dependencies on the graph structure in the embedding function. On the other hand, such an augmentation process focuses on the geography of the graph and cannot capture the augmentation processes that are based on graph separator decomposition. Two main kinds of augmentation processes can be distinguished in the navigable networks literature. One kind relies on the graph density and its similarity with a mesh (like the augmentations in [7,17,18,22]), while the other kind relies on the existence of good separators in the graph (like the augmentations in [4,10]). Augmentation via embedding cannot be directly extended to augmentations using separators because of the difficulty of handling the distortion in the analysis of greedy routing. Finally, the extension of AP to graphs that are close to a tree metric (using embeddings into tree metrics) could open the path to the exhaustive characterization of graph classes that can be augmented to become navigable, as well as provide new lower and upper bounds on embeddings as side results. More generally, the exhaustive characterization of the graphs that can be augmented to become navigable is still an important open problem, as well as the design of good quality embeddings into low dimensional spaces.

References
1. Assouad, P.: Plongements lipschitziens dans Rn. Bull. Soc. Math. France 111(4), 429–448 (1983)
2. Abraham, I., Bartal, Y., Neiman, O.: Advances in metric embedding theory. In: Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC), pp. 271–286 (2006)
3. Abraham, I., Bartal, Y., Neiman, O.: Embedding Metric Spaces in their Intrinsic Dimension. In: Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms (SODA), pp. 363–372 (2008)
4. Abraham, I., Gavoille, C.: Object location using path separators. In: Proceedings of the Twenty-Fifth Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 188–197 (2006)
5. Bourgain, J.: On Lipschitz embedding of finite metric spaces in Hilbert space. Israel Journal of Mathematics 52, 46–52 (1985)
² A k-spanner of a graph G is a subgraph G′ such that, for any u, v ∈ G, distG′ (u, v) ≤ k · distG (u, v).

Graph Augmentation via Metric Embedding 225

6. Dabek, F., Cox, R., Kaashoek, F., Morris, R.: Vivaldi: A decentralized network coordinate system. In: ACM SIGCOMM (2004)
7. Duchon, P., Hanusse, N., Lebhar, E., Schabanel, N.: Could any graph be turned into a small-world? Theoretical Computer Science 355(1), 96–103 (2006)
8. Duchon, P., Hanusse, N., Lebhar, E., Schabanel, N.: Towards small world emergence. In: 18th Annual ACM Symp. on Parallel Algorithms and Architectures (SPAA), pp. 225–232 (2006)
9. Dodds, P.S., Muhamad, R., Watts, D.J.: An experimental study of search in global social networks. Science 301, 827–829 (2003)
10. Fraigniaud, P.: Greedy routing in tree-decomposed graphs: a new perspective on the small-world phenomenon. In: Brodal, G.S., Leonardi, S. (eds.) ESA 2005. LNCS, vol. 3669, pp. 791–802. Springer, Heidelberg (2005)
11. Fraigniaud, P., Gavoille, C.: Polylogarithmic network navigability using compact metrics with small stretch. In: 20th Annual ACM Symp. on Parallel Algorithms and Architectures (SPAA), pp. 62–69 (2008)
12. Fraigniaud, P., Gavoille, C., Kosowski, A., Lebhar, E., Lotker, Z.: Universal Augmentation Schemes for Network Navigability: Overcoming the √n-Barrier. In: Proceedings of the 19th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 1–7 (2007)
13. Fraigniaud, P., Lebhar, E., Lotker, Z.: A Doubling Dimension Threshold Θ(log log n) for Augmented Graph Navigability. In: Azar, Y., Erlebach, T. (eds.) ESA 2006. LNCS, vol. 4168, pp. 376–386. Springer, Heidelberg (2006)
14. Gupta, A., Krauthgamer, R., Lee, J.R.: Bounded geometries, fractals, and low-distortion embeddings. In: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 534–543 (2003)
15. Indyk, P.: Algorithmic aspects of geometric embeddings. In: Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science, FOCS (2001)
16. Johnson, W.B., Lindenstrauss, J.: Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics 26, 189–206 (1984)
17. Kleinberg, J.: The Small-World Phenomenon: An Algorithmic Perspective. In: 32nd ACM Symp. on Theory of Computing (STOC), pp. 163–170 (2000)
18. Kleinberg, J.: Small-World Phenomena and the Dynamics of Information. Advances in Neural Information Processing Systems (NIPS) 14 (2001)
19. Kleinberg, J.: Complex networks and decentralized search algorithms. In: Intl. Congress of Mathematicians, ICM (2006)
20. Matousek, J.: Lectures on Discrete Geometry. Graduate Texts in Mathematics, vol. 212. Springer, Heidelberg (2002)
21. Milgram, S.: The Small-World Problem. Psychology Today, 60–67 (1967)
22. Slivkins, A.: Distance estimation and object location via rings of neighbors. In: 24th Annual ACM Symp. on Princ. of Distr. Comp. (PODC), pp. 41–50 (2005)
23. Watts, D.J., Strogatz, S.H.: Collective dynamics of 'small-world' networks. Nature 393, 440–442 (1998)

A Lock-Based STM Protocol That Satisfies Opacity and Progressiveness

Damien Imbs and Michel Raynal

IRISA, Université de Rennes 1, 35042 Rennes, France
{damien.imbs,raynal}@irisa.fr

Abstract. The aim of a software transactional memory (STM) system is to facilitate the delicate problem of low-level concurrency management, i.e. the design of
programs made up of processes/threads that concurrently access shared objects.
To that end, a STM system allows a programmer to write transactions accessing shared objects, without having to take care of the fact that these objects are
concurrently accessed: the programmer is discharged from the delicate problem
of concurrency management. Given a transaction, the STM system commits or
aborts it. Ideally, it has to be efficient (this is measured by the number of transactions committed per time unit), while ensuring that as few transactions as possible
are aborted. From a safety point of view (the one addressed in this paper), a STM
system has to ensure that, whatever its fate (commit or abort), each transaction
always operates on a consistent state.
STM systems have recently received a lot of attention. Among the proposed
solutions, lock-based systems and clock-based systems have been particularly investigated. Their design is mainly efficiency-oriented, the properties they satisfy
are not always clearly stated, and few of them are formally proved. This paper
presents a lock-based STM system designed from simple basic principles. Its
main features are the following: it (1) uses visible reads, (2) does not require the
shared memory to manage several versions of each object, (3) uses neither timestamps, nor version numbers, (4) satisfies the opacity safety property, (5) aborts
a transaction only when it conflicts with some other live transaction (progressiveness property), (6) never aborts a write-only transaction, (7) employs only
bounded control variables, (8) has no centralized contention point, and (9) is formally proved correct.
Keywords: Atomic operation, Commit/abort, Concurrency control, Consistent
global state, Lock, Opacity, Progressiveness, Shared object, Software transactional memory, Transaction.

1 Introduction
Software transactional memory. Recent advances in technology, and more particularly
in multicore processors, have given new momentum to practical and theoretical research in concurrency and synchronization. Software transactional memory (STM)
constitutes one of the most visible domains impacted by these advances. Given that concurrent processes (or threads) that share data structures (base objects) have to synchronize, the transactional memory concept originates from the observation that traditional
T.P. Baker, A. Bui, and S. Tixeuil (Eds.): OPODIS 2008, LNCS 5401, pp. 226–245, 2008.
© Springer-Verlag Berlin Heidelberg 2008



lock-based solutions have inherent drawbacks. On one side, if the set of data whose accesses are controlled by a single lock is too large (large grain), the parallelism can be
drastically reduced, while, on another side, the solutions where a lock is associated with
each datum (fine grain), are difficult to master, error-prone, and difficult to prove correct.
The software transactional memory (STM) approach has been proposed in [23].
Considering a set of sequential processes that access shared objects, it consists in
decomposing each process into (a sequence of) transactions (plus possibly some parts
of code not embedded in transactions). This is the job of the programmer. The job of
the STM system is then to ensure that the transactions are executed as if each was
an atomic operation (it would make little sense to move the complexity of concurrent programming from the fine management of locks to intricate decompositions into
transactions). So, basically, the STM approach is a structuring approach. (STM borrows
ideas from database transactions; there are nevertheless fundamental differences with
database transactions that are examined below [10].)
Of course, as in database transactions, the fate of a transaction is to abort or commit.
(According to its aim, it is then up to the issuing process to restart -or not- an aborted
transaction.) The great challenge any STM system has to take up is consequently to be
efficient (the more transactions are executed per time unit, the better), while ensuring
that few transactions are aborted. This is a fundamental issue each STM system has to
address. Moreover, in the case where a transaction is executed alone (no concurrency)
or in the absence of conflicting transactions, it should not be aborted. Two transactions
conflict if they access the same object and one of them modifies that object.
Consistency of a STM. In the past recent years, several STM concepts have been proposed, and numerous STM systems have been designed and analyzed. On the correctness side (safety), an important notion that has been introduced very recently is the
concept of opacity. That concept, introduced and formalized by Guerraoui and Kapałka
[12], is a consistency criterion suited to STM executions. Its aim is to render aborted
transactions harmless.
The classical consistency criterion for database transactions is serializability [19]
(sometimes strengthened in strict serializability, as implemented when using the 2-phase locking mechanism). The serializability consistency criterion involves only the
transactions that are committed. Said differently, a transaction that aborts is not prevented from accessing an inconsistent state before aborting. In a STM system, the code
encapsulated in a transaction can be any piece of code and consequently a transaction
has to always operate on a consistent state. To be more explicit, let us consider the following example where a transaction contains the statement x ← a/(b − c) (where a, b
and c are integer data), and let us assume that b − c is different from 0 in all the consistent states. If the values of b and c read by a transaction come from different states, it is
possible that the transaction obtains values such that b = c (and b = c defines an inconsistent state). If this occurs, the transaction raises an exception that has to be handled
by the process that invoked the corresponding transaction¹. Such bad behaviors have to
be prevented in STM systems: whatever its fate (commit or abort) a transaction has to

¹ Even worse undesirable behaviors can be obtained when reading values from inconsistent
states. This occurs for example when an inconsistent state provides a transaction with values
that generate infinite loops.


always see a consistent state of the data it accesses. The important point is here that a
transaction can (a priori) be any piece of code (involving shared data), it is not restricted
to predefined patterns. This also motivates the design of STM protocols that reduce the
number of aborts (even if this entails a slightly lower throughput for short transactions).
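The division example above can be replayed in a few lines (a toy illustration of ours; the two states and the concrete values are arbitrary): each state taken alone satisfies b − c ≠ 0, but mixing a read of b from one state with a read of c from the other yields b = c.

```python
# Two consistent states of the shared data: b != c holds in each.
state1 = {"b": 5, "c": 3}
state2 = {"b": 7, "c": 5}

a = 42
b = state1["b"]        # read performed before a concurrent committed update
c = state2["c"]        # read performed after that update
try:
    x = a / (b - c)    # b - c == 0 only in this mixed, inconsistent view
    outcome = "ok"
except ZeroDivisionError:
    outcome = "exception"
```

This is exactly the behavior opacity rules out: the exception is raised by a transaction that, in any serial execution, could never have observed b = c.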
Roughly speaking, opacity extends serializability to all the transactions (regardless of
whether they are committed or aborted). Of course, a committed transaction is considered entirely. Differently, only an appropriately defined subset of an aborted transaction
has to be considered.
Opacity (like serializability) states only what is a correct execution, it is a safety
property. It does not state when a transaction has to commit, i.e., it is not a liveness
property. Several types of liveness properties are investigated in [22].
Context of the work. Among the numerous STM systems that have been designed in
the past years, only four of them are considered here, namely, JVSTM [8], TL2 [9],
LSA-RT [21], and RSTM [16]. This choice is motivated by (1) the fact that (differently
from a lot of other STM systems) they all satisfy the opacity property, and (2) additional
properties that can be associated with STM systems.
Before introducing these properties, we first consider underlying mechanisms on
which the design of STM systems is based.
From an operational point of view, locks and (physical or logical) clocks constitute base synchronization mechanisms used in a lot of STM systems. Locks allow
mutex-based solutions. Clocks make it possible to benefit from the progress of the (physical
or logical) time in order to facilitate the validation test when the system has to decide the fate (commit or abort) of a transaction. As a clock can always increase,
clock-based systems require appropriate management of the clock values.
An important design principle that differentiates STM systems is the way they implement base objects. More specifically we have the following.
Two types of implementation of base objects can be distinguished, namely, the
single version implementations, and the multi-version implementations. The aim
of the latter is to allow the commit of more (mainly read-only) transactions, but
requires paying a higher price from the shared memory occupation point of view.
An STM implementation can also be characterized by whether or not it satisfies
important additional properties. We consider here the progressiveness property.
The progressiveness notion, introduced in [12], is a safety property from the commit/abort termination point of view: it defines an execution pattern that forces a
transaction not to abort another one.
As already indicated, two transactions conflict if they access the same base
object and one of them updates it. The STM system satisfies the progressiveness
property, if it forcefully aborts T1 only when there is a time t at which T1 conflicts
with another concurrent transaction (say T2 ) that is not committed or aborted by
time t [12]. This means that, in all the other cases, T1 cannot be aborted due to T2 .
As an example, let us consider Figure 1 where two patterns are depicted. Both
involve the same conflicting concurrent transactions T1 that reads X, and T2 that
writes X (each transaction execution is encapsulated in a rectangle). On the left

Fig. 1. The progressiveness property (two patterns with T1 reading X and T2 writing X; left: T2 is still live at the time t when T1 reads X; right: T2 has terminated by the time t at which T1 reads X)

side, T2 has not yet terminated when T1 reads X. In that case, an STM system
that aborts T1, due to its conflict with T2, does not violate the progressiveness
property. Differently, when we consider the right side, T2 has terminated when T1
reads X. In that case, an STM system that guarantees the progressiveness property,
cannot abort T1 due to T2 .
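The two patterns of Figure 1 can be captured by a small predicate; the encoding below (access tuples plus a lifetime interval per transaction) is our own illustrative simplification of the definition in [12], not the paper's formalism:

```python
def abort_permitted(t1_access, t2):
    # t1_access: (object, kind, time) of T1's conflicting access.
    # t2: {"ops": [(object, kind), ...], "interval": (begin, end)}.
    # Aborting T1 because of T2 is permitted only if the two accesses
    # conflict (same object, at least one write) and T2 is still live
    # (not yet committed or aborted) at the time of T1's access.
    obj, kind, t = t1_access
    conflicting = any(o == obj and "write" in (kind, k) for o, k in t2["ops"])
    begin, end = t2["interval"]
    return conflicting and begin <= t < end
```

With T1 reading X at time 5, a writer T2 live over (2, 8) makes the abort permitted (left pattern of Figure 1), while a writer that terminated at time 3 does not (right pattern).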
Finally, a last criterion to compare STM systems lies in the way they cope with lower
bound results related to the cost of read and write operations.
Let k be the number of objects shared by a set of transactions. A theorem proved in
[12] states the following. For any STM protocol that (1) ensures the opacity consistency criterion, (2) is based on single version objects, (3) implements invisible read
operations, and (4) ensures the progressiveness property, each read/write operation
issued by a transaction requires Ω(k) computation steps in the worst case. This
theorem shows an inescapable cost associated with the implementation of invisible
read operations as soon as we want single version objects and abort only due to
conflict with a live transaction.
Considering the previous list of items (base mechanisms, number of versions, additional properties, lower bound), Table 1 indicates how each of the TL2, LSA-RT,
JVSTM, and RSTM behaves. While traditional comparisons of STM systems are based
on efficiency measurements (usually from benchmark-based simulations), this table
provides a different view to compare STM systems. A read operation issued by a transaction is invisible if its implementation does not entail updates of the underlying control
variables (kept in shared memory). Otherwise, the read is visible.
Content of the paper. The Ω(k) lower bound states an inherent cost for the STM systems that want to ensure invisible read operations and progressiveness while using a
single version per object. When looking at Table 1, we see that, while both TL2 and
JVSTM implement invisible read operations, each circumvents the Ω(k) lower bound
in its own way. JVSTM uses several copies of each object and does not ensure the progressiveness property. TL2 does not ensure the progressiveness property either (it has
even scenarios in which a transaction is directed to abort despite the fact that it has read
consistent values.)
Progressiveness is a noteworthy safety property. As already indicated, it states circumstances where transactions must commit². Considering consequently progressiveness as a first-class property, this paper presents a new STM system that circumvents
the Ω(k) lower bound and satisfies the progressiveness property. To that end it employs

² This can be particularly attractive when there are long-lived read-only transactions.



Table 1. Properties ensured by protocols (that satisfy the opacity property)

System                           TL2 [9]  LSA-RT [20]  JVSTM [8]  RSTM [16]  This paper
Clock-free                       no       no           no         yes        yes
Lock-based                       yes      no           yes        no         yes
Single version                   yes      no           no         yes        yes
Invisible read operations        yes      yes          yes        no         no
Progressiveness                  no       yes          no         yes        yes
Circumvent the Ω(k) lower bound  yes      no           yes        no         yes

a single version per object and implements visible read operations. Moreover, differently from nearly all the STM systems proposed so far, whose designs have been driven
mainly by implementation concerns and efficiency, the paper strives for a protocol with
powerful properties that can be formally proved. Its formal proof gives us a deeper
understanding of how the protocol works and why it works. Combined with existing
protocols, it consequently enriches our understanding of STM systems.
Finally, let us notice that the proposed protocol exhibits an interesting property related to contention management. The shared control variables associated with each object X (it is their very existence that makes the read operations visible) can be used
by an underlying contention manager [11,24]. If the contention manager is called when
a transaction is about to commit, it can benefit from the content of these variables to
decide whether to accept the commit or to abort the transaction in case this abort would
allow more transactions to commit.
Roadmap. The paper is made up of 6 sections. Section 2 describes the computation
model, and the safety property we are interested in (opacity, [12]). The proposed protocol is presented incrementally. A base protocol is first presented in Section 3. This STM
protocol (also called STM system in the following) associates a lock and two atomic
control variables (sets) with each object X. It also uses a global control variable (a set
denoted OW ) that is accessed by all the update transactions (when they try to commit).
Section 4 presents a formal proof of the protocol. Then, Section 5 presents the final version of the protocol. The resulting STM system has the following noteworthy features.
It (1) does not require the shared memory to manage several versions of each object,
(2) does not use timestamps, (3) satisfies the opacity and progressiveness properties,
(4) never aborts a write only transaction, (5) employs only bounded control variables,
and (6) has no centralized contention point. The design of provable STM protocols is
an important issue for researchers interested in the foundations of STM systems [3].
Finally, Section 6 concludes the paper.

2 Computation Model and Problem Specification


2.1 Computation Model
Base computation model: processes, base objects, locks and atomic registers. The base
system (on top of which one has to build a STM system) is made up of n asynchronous


sequential processes (also called threads) denoted p1 , . . . , pn (a process is also sometimes denoted p) that cooperate through base read/write atomic registers and locks. The
shared objects are denoted with upper case letters (e.g., the base object X). A lock, with
its classical mutex semantics, is associated with each base object X.
Each process p has a local memory (a memory that can be accessed only by p).
Variables in local memory are denoted with lower case letters indexed by the process id
(e.g., lrsi is a local variable of pi ).
High (user) abstraction level: transactions. From a structural point of view, at the user
abstraction level, each process is made up of a sequence of transactions (plus some code
managing these transactions). A transaction is a sequential piece of code (computation
unit) that reads/writes base objects and does local computations. At the abstraction level
at which the transactions are defined, a transaction sees only base objects, it sees neither
the atomic registers nor the locks. (Atomic registers and locks are used by the STM
system to correctly implement transactions on top of the base model).
A transaction can be a read-only transaction (it then only reads base objects), or an
update transaction (it then modifies at least one base object). A write-only transaction
is an update transaction that does not read base objects. A transaction is assumed to be
executed entirely (commit) or not at all (abort). If a transaction is aborted, it is up to
the invoking process to re-issue it (as a new transaction) or not. Each transaction has its
own identifier, and the set of transactions can be infinite.
2.2 Problem Specification
Intuitively, the STM problem consists in designing (on top of the base computation
model) protocols that ensure that, whatever the base objects they access, the transactions
are correctly executed. The following property formulates precisely what "correctly
executed" means in this paper.
Safety property. Given a run of a STM system, let C be the set of transactions that
commit, and A the set of transactions that abort. Let us assume that any transaction T
starts with an invocation event (BT ) and terminates with an end event (ET ).
Given T ∈ A, let T′ = ρ(T) be the transaction built from T as follows (ρ stands
for reduced). As T has been aborted, there is a read or a write on a base object that
entailed that abortion. Let prefix(T) be the prefix of T that includes all the read and
write operations on the base objects accessed by T until (but excluding) the read or
write that provoked the abort of T. T′ = ρ(T) is obtained from prefix(T) by replacing
its write operations on base objects and all the subsequent read operations on these
objects, by corresponding write and read operations on a copy in local memory. The
idea here is that only an appropriate prefix of an aborted transaction is considered: its
write operations on base objects (and the subsequent read operations) are made fictitious
in T′ = ρ(T). Finally, let A′ = {T′ | T′ = ρ(T) ∧ T ∈ A}.
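Operationally, this reduction can be sketched as a transformation on an operation trace (the trace encoding is ours; by convention the last operation of the trace is the one that provoked the abort and is dropped):

```python
def reduce_aborted(ops):
    # ops: list of (kind, obj) with kind in {"read", "write"}.
    # Keep the prefix up to (but excluding) the aborting operation, and
    # turn writes -- and subsequent reads of the written objects -- into
    # fictitious operations on local copies.
    prefix = ops[:-1]
    written = set()
    reduced = []
    for kind, obj in prefix:
        if kind == "write":
            written.add(obj)
            reduced.append(("local-write", obj))
        elif obj in written:
            reduced.append(("local-read", obj))
        else:
            reduced.append((kind, obj))
    return reduced
```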
As announced in the Introduction, the safety property considered in this paper is
opacity (introduced in [12] with a different formalism). It expresses the fact that a transaction never sees an inconsistent state of the base objects. With the previous notation, it
can be stated as follows:

232

D. Imbs and M. Raynal

Opacity. The transactions in C ∪ A′ are linearizable (i.e., can be totally ordered
according to their real-time order [13]).
This means that the transactions in C ∪ A′ appear as if they have been executed one
after the other, each one being executed at a single point of the time line between its
invocation event and its end event.

3 A Lock-Based STM System: Base Version


This section presents a base protocol that builds a STM system on top of the base system
described in Section 2.1. Without ambiguity, the same identifier T is used to denote both
a transaction itself and its unique name.
3.1 The STM System Interface
The STM system provides the transactions with three operations denoted X.readT (),
X.writeT (), and try to commitT (), where T is a transaction, and X a base object.
X.readT () is invoked by the transaction T to read the base object X. That operation
returns a value of X or the control value abort. If abort is returned, the invoking
transaction is aborted.
X.writeT (v) is invoked by the transaction T to update X to the new value v. That
operation never forces a transaction to immediately abort.
If a transaction attains its last statement (as defined by the user) it executes the operation try to commitT (). That operation decides the fate of T by returning commit
or abort. (Let us notice that a transaction T that invokes try to commitT () has not
been aborted during an invocation of X.readT ().)
3.2 The STM System Variables
To implement the previous STM operations, the STM system uses a lock per base object
X, and the following atomic control variables that are sets (all initialized to ∅).
A read set RSX is associated with each object X. This set contains the id of the
transactions that have read X since its last update. A transaction adds its id to RSX
to indicate a possibility of conflict.
A set OW, whose meaning is the following: T ∈ OW means that the transaction
T has read an object Y and, since this reading, Y has been updated (so, there is a
conflict).
A set FBDX per base object X (FBDX stands for ForBiDden). T ∈ FBDX means
that the transaction T has read an object Y that since then has been overwritten
(hence T ∈ OW), and the overwriting of Y is such that any future read of X by
T will be invalid (i.e., the value obtained by T from Y and any value it will obtain
from X in the future cannot be mutually consistent): reading X from the shared
memory is forbidden to the transactions in FBDX.

Fig. 2. Meaning of the set FBDX (left: T1 reads X and then Y, overlapping T2, a simple write of Y, followed by T3, a simple write of X; right: T2 and T3 are combined into a single transaction T4)

An example explaining the meaning of FBDX is described in Figure 2. On the left side,
the execution of three transactions is depicted (as before, each rectangle encapsulates
a transaction execution). T1 starts by reading X, executes local computation, and then
reads Y . The execution of T1 overlaps with two transactions, T2 that is a simple write
of Y , followed by T3 that is a simple write of X. It is easy to see that the execution
of these three transactions can be linearized: first T2 , then T1 and finally T3 . In this
execution, FBDX does not include T1 .
In the execution on the right side, T2 and T3 are combined to form a single transaction
T4 . It is easy to see that this concurrent execution of T1 and T4 cannot be linearized.
Due to its access to X, the STM system (as we will see) will force T4 to add T1 to
FBDY , entailing the abort of T1 when T1 accesses Y (if T1 did not access Y , it
would not be aborted). Let us observe that the same thing occurs if, instead of T4 , we
have (with the same timing) a transaction made up of X.write() followed by another
transaction including Y.write().
The STM system also uses the following local variables (kept in the local memory
of the process that invoked the corresponding transaction). lrsT is a local set where T
keeps the ids of all the objects it reads. Similarly, lwsT is a local set where T keeps the
ids of all the objects it writes. Finally, read onlyT is a boolean variable initialized to
true.
The previous shared sets can be efficiently implemented using Bloom filters (e.g.,
[2,7,17]). In a very interesting way, the small probability of false positives on membership queries does not make the protocol incorrect (it can only affect its efficiency by
entailing unnecessary aborts).
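A minimal sketch of such a filter (our own toy version, not one of the implementations of [2,7,17]) makes the asymmetry explicit: membership queries may return false positives, which here only cause unnecessary aborts, but never false negatives, so no conflict is ever missed.

```python
import hashlib

class BloomFilter:
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = 0                      # bit array packed into an int

    def _positions(self, item):
        # Derive nhashes bit positions from salted SHA-256 digests.
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # True for every added item (no false negatives); may also be
        # True, with small probability, for an item never added.
        return all((self.bits >> p) & 1 for p in self._positions(item))
```

A set such as RSX could then be kept as one filter per object, with the union at line 17 of Figure 3 realized as a bitwise OR of the packed bits.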
Let us recall that a process is sequential and consequently executes transactions one
after the other. As local control variables are associated with a transaction, the corresponding process has to reset them to their initial values between two transactions. Similarly, if a transaction creates a local copy of an object, that copy is destroyed when the
transaction terminates (a given copy of an object is meaningful for one transaction only).
3.3 The Algorithms of the STM System
The three operations that constitute the STM system X.readT (), X.writeT (v), and
try to commitT (), are described in Figure 3.
The operation X.readT (). The algorithm implementing this operation is pretty simple.
If there is a local copy of X, its value is returned (lines 01 and 07). Otherwise, space
for X is allocated in the local memory (line 02), X is added to the set of objects read
by T (line 03), T is added to the read set RSX of X, and the current value of X is read
from the shared memory and saved in the local memory (line 04).


operation X.readT ():
(01) if (there is no local copy of X) then
(02)    allocate local space for a copy;
(03)    lrsT ← lrsT ∪ {X};
(04)    lock X; local copy of X ← X; RSX ← RSX ∪ {T}; unlock X;
(05)    if (T ∈ FBDX) then return(abort) end if
(06) end if;
(07) return(value of the local copy of X)
=======================================================
operation X.writeT (v):
(08) read onlyT ← false;
(09) if (there is no local copy of X) then allocate local space for a copy end if;
(10) local copy of X ← v;
(11) lwsT ← lwsT ∪ {X}
=======================================================
operation try to commitT ():
(12) if (read onlyT )
(13)    then return(commit)
(14)    else lock all the objects in lrsT ∪ lwsT ;
(15)         if (T ∈ OW ) then release all the locks; return(abort) end if;
(16)         for each X ∈ lwsT do X ← local copy of X end for;
(17)         OW ← OW ∪ (⋃X∈lwsT RSX );
(18)         for each X ∈ lwsT do FBDX ← OW ; RSX ← ∅ end for;
(19)         release all the locks;
(20)         return(commit)
(21) end if

Fig. 3. A lock-based STM system
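For concreteness, the three operations of Figure 3 can be transliterated into executable form. The sketch below is our own simplification (Python, one threading.Lock per object, the atomic set OW emulated with a dedicated lock, and a sorted acquisition order standing in for the predefined total order on objects); it mirrors lines 01-21 but is not the paper's formal algorithm:

```python
import threading

class TXObject:
    # A base object X together with its lock and control sets RS_X, FBD_X.
    def __init__(self, value):
        self.value = value
        self.lock = threading.Lock()
        self.RS = set()    # ids of transactions that read X since its last update
        self.FBD = set()   # ids of transactions forbidden to read X

class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.local = {}         # local copies of objects
        self.lrs = set()        # objects read (lrs_T)
        self.lws = set()        # objects written (lws_T)
        self.read_only = True

class STM:
    def __init__(self):
        self.OW = set()
        self.OW_lock = threading.Lock()   # stands in for the atomic set OW

    def read(self, T, X):                 # X.read_T(), lines 01-07
        if X not in T.local:
            T.lrs.add(X)
            with X.lock:                  # line 04
                T.local[X] = X.value
                X.RS.add(T.tid)
            if T.tid in X.FBD:            # line 05
                return "abort"
        return T.local[X]                 # line 07

    def write(self, T, X, v):             # X.write_T(v), lines 08-11
        T.read_only = False
        T.local[X] = v
        T.lws.add(X)

    def try_to_commit(self, T):           # lines 12-21
        if T.read_only:                   # lines 12-13
            return "commit"
        objs = sorted(T.lrs | T.lws, key=id)   # total order prevents deadlock
        for X in objs:                    # line 14
            X.lock.acquire()
        try:
            with self.OW_lock:
                if T.tid in self.OW:      # line 15
                    return "abort"
                for X in T.lws:           # line 16
                    X.value = T.local[X]
                self.OW |= set().union(*(X.RS for X in T.lws))   # line 17
                for X in T.lws:           # line 18
                    X.FBD = set(self.OW)
                    X.RS = set()
            return "commit"
        finally:
            for X in objs:                # line 19
                X.lock.release()
```

Replaying the right-hand scenario of Figure 2 with this sketch: after a transaction that wrote X and Y commits, a transaction that had read X beforehand lands in the FBD set of Y, so its subsequent read of Y returns abort, while a fresh read-only transaction sees the new consistent state and commits.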

Due to asynchrony, it is possible that the value read by T is overwritten before T
uses it. The predicate T ∈ FBDX is used to capture this type of read/write conflict. If
this predicate is true, T is aborted (line 05). Otherwise, the value obtained from X is
returned (line 07). It is easy to see that any object X is read from the shared memory at
most once by a transaction.
The operation X.writeT (). The text of the algorithm implementing X.writeT () is even
simpler than the text of X.readT (). The transaction first sets a flag to record that it is not
a read-only transaction (line 08). If there is no local copy of X, corresponding space is
allocated in the local memory (line 09); let us remark that this does not entail a reading
of X from the shared memory. Finally, T updates the local copy of X (line 10), and
records that it has locally written the copy of X (line 11).
It is important to notice that an invocation of X.writeT () is purely local: it involves
no access to the shared memory, and cannot entail an immediate abort of the corresponding transaction.
The operation try to commitT (). This operation works as follows. If the invoking transaction is a read-only transaction, it is committed (lines 12-13). So, a read-only transaction
can abort only during the invocation of a X.readT () operation (line 05 of that operation).


If the transaction T is an update transaction, try to commitT () first locks all the objects accessed by T (line 14). (In order to prevent deadlocks, it is assumed that these
objects are locked according to a predefined total order, e.g., their identity order.) Then,
T checks if it belongs to the set OW . If this is the case, there is a read-write conflict:
T has read an object that since then has been overwritten. T consequently aborts (after
having released all the locks, line 15). If the predicate T ∈ OW is false, T will necessarily commit. But, before committing (at line 20), T has to update the control variables
to indicate possible conflicts due to the objects it has written, the ids of which have been
kept by T in the local set lwsT during its execution.
So, after it has updated the shared memory with the new value of each object X ∈
lwsT (line 16), T computes the union of their read sets; this union contains all the
transactions that will have a write/read conflict with T when they will read an object
X ∈ lwsT. This union set is consequently added to OW (line 17), and the set FBDX of
each object X ∈ lwsT is updated to OW (line 18). (It is important to notice that each set
FBDX is updated to OW in order not to miss the transitive conflict dependencies that
have been possibly created by other transactions). Moreover, as now the past read/write
conflicts are memorized in FBDX (line 18), the transaction T resets RSX to ∅ just after
it has set FBDX to OW. Finally, before committing, T releases all its locks (line 19).
On locking. As in TL2 [9], it is possible to adopt the following systematic abort strategy: when a transaction T tries to lock an object that is currently locked, it immediately aborts (after releasing the locks it holds, if any).
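The commit logic described above can be sketched as follows. This is a simplified illustration, not the paper's pseudo-code: the class name, method signatures, and the encoding of RSX, FBDX and OW as Python sets are all invented for the sketch.

```python
import threading

class STM:
    """Sketch of the try_to_commit() logic described above (invented names):
    locks are acquired in a fixed total order (object identity), the test
    T in OW decides abort, and the conflict sets are updated before the
    locks are released."""

    def __init__(self, objects):
        self.lock = {x: threading.Lock() for x in objects}
        self.value = {x: 0 for x in objects}
        self.RS = {x: set() for x in objects}    # ids of transactions that read X
        self.FBD = {x: set() for x in objects}   # "forbidden" set of X
        self.OW = set()                          # transactions with an overwritten read

    def try_to_commit(self, t, read_set, write_set):
        if not write_set:                        # read-only: always commits here
            return "commit"
        # lines 14-15: lock all accessed objects in identity order (no deadlock)
        accessed = sorted(read_set | set(write_set))
        for x in accessed:
            self.lock[x].acquire()
        try:
            if t in self.OW:                     # read/write conflict: abort
                return "abort"
            for x, v in write_set.items():       # line 16: apply the writes
                self.value[x] = v
            conflicting = set().union(*(self.RS[x] for x in write_set))
            self.OW |= conflicting               # line 17
            for x in write_set:
                self.FBD[x] = set(self.OW)       # line 18: FBD_X <- OW
                self.RS[x] = set()               # reset RS_X to the empty set
            return "commit"
        finally:
            for x in accessed:                   # line 19: release all the locks
                self.lock[x].release()
```

A write by T2 on an object previously read by T1 thus adds T1 to OW, so a later update commit attempt by T1 aborts.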
3.4 On the Management of the Sets RSX, FBDX and OW
Let us recall that these sets are kept in atomic variables.
Management of RSX and FBDX. The set RSX is written only at line 04 (readT() operation) and reset to ∅ at line 18 (try_to_commitT() operation), and (due to the lock associated with X) no two updates of RSX can be concurrent; so, no update of RSX is missed. Its only read (line 16) is protected by the same lock. So, there is no concurrency problem for RSX.
The set FBDX is read at line 06 (readT() operation), and its only write (line 18, try_to_commitT() operation) is protected by a lock. As it is an atomic variable, there is no concurrency problem for FBDX.
Management of the set OW. This set is read and written only by try_to_commitT(), which reads it at lines 15 and 17, and writes it at line 17 (its read at line 18 can benefit from a local copy saved at line 17).
Concurrent invocations of try_to_commitT() can come from transactions accessing distinct sets of objects. When this occurs, the set OW is not protected by the locks associated with the objects and can consequently be concurrently accessed. As OW is kept in an atomic variable, there is no concurrency problem for the reads. Differently, writes of OW (line 17) can be missed. Actually, when we look at the update of the atomic set variable OW, namely OW ← OW ∪ (∪_{X ∈ lwsT} RSX) (line 17), we can observe that this update is nothing else than a Fetch&Add() statement that has to atomically add ∪_{X ∈ lwsT} RSX to OW. If such an operation on a set variable is not provided by

D. Imbs and M. Raynal
the hardware, there are several ways to implement it. One consists in using a lock to execute this operation in mutual exclusion. Another consists in using specialized hardware operations such as Compare&Swap() (manipulating a pointer on the set OW) or LL/SC (load-linked/store-conditional) [15,18]. Yet another possible implementation consists in considering the set OW as a shared array with one entry per process, pi being the only process that can write OW[i]. Moreover, for efficiency, the current value of OW[i] can be saved in a local variable owi. A write by pi in OW[i] then becomes owi ← owi ∪ (∪_{X ∈ lwsT} RSX) followed by OW[i] ← owi; while the atomic read of the set OW is implemented by a snapshot operation on the array OW[1..n] [1] (there are efficient implementations of the snapshot operation, e.g., [4,5]).
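The Compare&Swap-based alternative can be sketched as follows. The set OW is held in a hypothetical CAS cell (the class below simulates the hardware primitive with a lock, since Python has no native CAS), and the Fetch&Add-style update is the classic CAS retry loop:

```python
import threading

class AtomicSetRef:
    """Hypothetical CAS cell holding an immutable frozenset; the lock only
    stands in for the atomicity of a hardware Compare&Swap."""
    def __init__(self):
        self._val = frozenset()
        self._guard = threading.Lock()

    def get(self):
        return self._val

    def compare_and_swap(self, expected, new):
        # atomically: install `new` iff the cell still holds `expected`
        with self._guard:
            if self._val is expected:
                self._val = new
                return True
            return False

def fetch_and_add(ref, to_add):
    """Fetch&Add on a set variable: retry until no concurrent write
    intervenes between the read and the swap; returns the old value."""
    while True:
        old = ref.get()
        if ref.compare_and_swap(old, old | frozenset(to_add)):
            return old
```

With this, the line-17 update becomes fetch_and_add(OW, union of the RSX sets), and no concurrent addition to OW can be lost.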
Differently from the pair of sets RSX and FBDX , associated with each object X, the
set OW constitutes a global contention point. This contention point can be suppressed
by replacing OW by independent boolean variables (see Section 5). We have adopted
here an incremental presentation, to make the final protocol easier to understand.
3.5 Early Abort and Contention Manager
When the predicate T ∈ OW is satisfied, the transaction T has read an object that has since been overwritten. This fact is not sufficient to abort T if it is a read-only transaction. Differently, if T is an update transaction, it cannot be linearized; consequently, it will be aborted when executing line 15 of try_to_commitT(). It is possible to abort such an update transaction T earlier than during the execution of try_to_commitT(). This can simply be done by adding the statement if T ∈ OW then return(abort) end if just before the first line of the operation writeT(). Similarly, the statement if T ∈ FBDX then return(abort) end if can be inserted between the first and the second line of the operation readT().
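The two early-abort statements can be sketched as stand-alone guards; the helper names below are ours, not the paper's:

```python
def write_guard(t, OW):
    # statement added just before the first line of write_T():
    # if T in OW then return(abort) end if
    return "abort" if t in OW else None

def read_guard(t, FBD_X):
    # statement inserted between the first and second line of read_T():
    # if T in FBD_X then return(abort) end if
    return "abort" if t in FBD_X else None
```

A None result means the operation proceeds as before; the guards only shorten the life of transactions that are already doomed.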
Interestingly, the sets RSX, FBDX, and OW can also be used by an underlying contention manager [11,24] to abort transactions according to predefined rules (namely, there are configurations where aborting a single transaction can prevent the abort of other transactions).

4 Proof of the Base Protocol


4.1 Base Formalism and Definitions
Events and history at the shared memory level. An event is associated with the execution of each operation on the shared memory (base object, lock, set variable). We use the following notation.
- Let BT denote the event associated with the beginning of the transaction T, and ET the event associated with its termination. ET can be of two types, namely AT and CT, where AT is the abort event of T (line 05 or 15), and CT is the commit event of T (line 20).
- Let rT(X)v denote the event associated with the read of X from the shared memory issued by the transaction T; v denotes the value returned by the read. Given an object X, there is at most one event rT(X)v per transaction T. If any, this read occurs at line 04 (operation X.readT()).

- Let wT(X)v denote the event associated with the write of the value v in X. Given an object X, there is at most one event wT(X)v per transaction T. If any, it corresponds to a write issued at line 16 in the try_to_commitT() operation. If the value v is irrelevant, wT(X)v is abbreviated wT(X). Without loss of generality, we assume that no two writes on the same object X write the same value. We also assume that all the objects are initially written by a fictitious transaction.
- Let ALT(X, op) denote the event associated with the acquisition of the lock on the object X issued by the transaction T during an invocation of op, where op is X.readT() or try_to_commitT().
- Similarly, let RLT(X, op) denote the event associated with the release of the lock on the object X issued by the transaction T during an invocation of op.
Given an execution, let H be the set of all the events generated by the shared memory accesses issued by the STM system described in Figure 3. As these shared memory accesses are atomic, the previous events are totally ordered. Consequently, at the shared memory level, an execution can be represented by the pair Ĥ = (H, <H), where <H denotes the total order on its events. Ĥ is called a shared memory history.
As <H is a total order, it is possible to consider each event in H as a date of the time line. This date view of a sequential history on events will be used in the proof.
History at the transaction level. Given an execution, let TR be the set of transactions issued during that execution. Let →TR be the order relation defined on the transactions of TR as follows: T1 →TR T2 if ET1 <H BT2 (T1 has terminated before T2 starts). If ¬(T1 →TR T2) ∧ ¬(T2 →TR T1), we say that T1 and T2 are concurrent (their executions overlap in time). At the transaction level, that execution is defined by the partial order T̂R = (TR, →TR), which is called a transaction level history.
The read-from relation between transactions, denoted →rf, is defined as follows: T1 →rf^X T2 if T2 reads from the object X the value that T1 wrote into it.
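Under the assumption stated above that no two writes on the same object write the same value, the read-from relation can be computed mechanically from a history. A sketch, with an event encoding of our own (tuples (kind, transaction, object, value)):

```python
def read_from(history):
    """Compute the read-from relation ->_rf: (T1, T2, X) is in the result
    iff T2 reads from X the value that T1 wrote into it. Relies on the
    assumption that values identify writes uniquely per object."""
    writer_of = {}                     # (object, value) -> writing transaction
    rf = set()
    for kind, t, x, v in history:
        if kind == 'w':
            writer_of[(x, v)] = t
        elif kind == 'r' and (x, v) in writer_of:
            rf.add((writer_of[(x, v)], t, x))
    return rf
```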


A transaction history ŜT = (ST, →ST) is sequential if no two of its transactions are concurrent. Hence, in a sequential history, T1 →ST T2 ∨ T2 →ST T1; thus →ST is a total order. A sequential transaction history is legal if each of its read operations returns the value of the last write on the same object (because the history is sequential and transactions are executed sequentially, no two operations can overlap).
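The legality condition above can be checked mechanically on a sequential history. A small sketch, using an event encoding of our own (tuples (kind, transaction, object, value)):

```python
def is_legal(sequential_history):
    """A sequential history is legal iff every read returns the value of
    the last preceding write on the same object."""
    last_write = {}                    # object -> last value written
    for kind, _t, x, v in sequential_history:
        if kind == 'w':
            last_write[x] = v
        elif kind == 'r' and last_write.get(x) != v:
            return False               # read of a stale or unwritten value
    return True
```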
A sequential transaction history ŜT is equivalent to a transaction history T̂R if (1) ST = TR (i.e., they are made of the same transactions, with the same invocations and the same replies, in ŜT and in T̂R), and (2) the total order →ST respects the partial order →TR (i.e., →TR ⊆ →ST).
A transaction history ÂA is linearizable if there exists a history ŜA that is sequential, legal and equivalent to ÂA.
Let (TR) denote the transaction history obtained from the history T̂R as described in Section 2.2. This means that (TR) includes all the transactions of T̂R that commit, and contains (T) for each transaction T ∈ TR that aborts. As defined in Section 2.2, a transaction history T̂R is opaque if there exists a transaction history ŜT that is sequential, legal and equivalent to (TR).

4.2 Principle of the Proof of the Opacity Property
According to the algorithms implementing the operations X.readT() and X.writeT(v) described in Figure 3, we ignore all the read operations on an object that follow another operation on the same object within the same transaction, and all the write operations that follow another write operation on the same object within the same transaction (these operations are local to the memory of the process that executes them). Building (TR) from T̂R is then a straightforward process.
To prove that the protocol described in Figure 3 satisfies the opacity consistency criterion, we need to prove that, for any transaction history T̂R produced by this protocol, there is a sequential legal history ŜT equivalent to (TR). This amounts to proving the following properties (where Ĥ is the shared memory level history generated by the transaction history T̂R):
1. →ST is a total order,
2. ∀ T ∈ TR: (T commits ⇒ T ∈ ST) ∧ (T aborts ⇒ (T) ∈ ST),
3. →(TR) ⊆ →ST,
4. T1 →rf^X T2 ⇒ (¬∃ T3 such that T1 →ST T3 →ST T2 ∧ wT3(X) ∈ H),
5. T1 →rf^X T2 ⇒ T1 →ST T2.
4.3 Definition of the Linearization Points
ŜT is produced by ordering the transactions according to their linearization points. The linearization point of the transaction T is denoted ℓT. The linearization points of the transactions are defined as follows:
- If a transaction T aborts, ℓT is the time just before T is added to the set OW (line 17 of the try_to_commitT() operation that entails its abort).
- If a read-only transaction T commits, ℓT is placed at the earliest of (1) the occurrence time of the test during its last read operation (line 05 of the X.readT() operation) and (2) the time just before it is added to OW (if it ever is). (An example is depicted in Figure 4.)
- If an update transaction T commits, ℓT is placed just after the execution of line 17 by T (update of OW).
The total order <H (defined on the events generated by T̂R) can be extended with these linearization points. Transactions whose linearization points happen at the same time (for example, in multi-core systems) are ordered arbitrarily. An example is given in Figure 4.
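The construction of ŜT from the linearization points amounts to a sort; a minimal sketch (the function name and the identity tie-break for equal points are ours):

```python
def build_ST(lin_points):
    """Order transactions by their linearization points; transactions with
    the same point are ordered arbitrarily (here: by transaction id)."""
    return sorted(lin_points, key=lambda t: (lin_points[t], t))
```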
4.4 Safety: Proof of the Opacity Property
Let T̂R = (TR, →TR) be a transaction history. Let ŜT = ((TR), →ST) be a history whose transactions are the transactions in (TR), and such that →ST is defined according to the linearization points of the transactions in (TR). If two transactions in (TR) have the same linearization point, they are ordered arbitrarily. Finally, let us observe that the linearization points can be trivially added to the sequential history Ĥ = (H, <H) defined on the events generated by the transaction history T̂R. So, we consider in the following that the set H includes the linearization points of the transactions.

[Figure 4 depicts an event/time line with three transactions: T1 (read-only) issues rT1(X) and rT1(Y) and commits; T2 writes X and commits; T3 writes Y and commits. The updates RSX ← RSX ∪ {T1}, RSY ← RSY ∪ {T1} and OW ← OW ∪ {T1} appear on the time line, together with the linearization points ℓT1, ℓT2 and ℓT3.]
Fig. 4. An example of linearization points
Lemma 1. →ST is a total order.
Proof. Trivial from the ordering of the linearization points. □
Lemma 2. →(TR) ⊆ →ST.
Proof. This lemma follows from the fact that, given any transaction T, its linearization point is placed within its lifetime. Therefore, if T1 →(TR) T2 (T1 ends before T2 begins), then T1 →ST T2. □
Let ow(T, t) be the predicate "at time t, T belongs to OW".
Lemma 3. ow(T, t) ⇒ ℓT <H t.
Proof. We show that the linearization point of a transaction T cannot be after the time at which the transaction's id is added to OW. There are three cases.
- By construction, if T aborts, its linearization point ℓT is the time just before its id is added to OW, which proves the lemma.
- If T is read-only and commits, again by construction, its linearization point ℓT is placed at the latest just before the time at which its id is added to OW (if it ever is), which again proves the lemma.
- If T writes and commits, its linearization point ℓT is placed during try_to_commitT(), while T holds the locks of every object that it has read. If T was in OW before it acquired all the locks, it would not commit (due to line 15). Let us notice that T can be added to OW only by an update transaction holding a lock on a base object previously read by T. As T releases the locks just before committing (line 19), it follows that ℓT occurs before the time at which its id is added to OW (if it ever is), which proves the last case of the lemma. □
Let rsX(T, t) be the predicate "at time t, T belongs to RSX or OW".




Lemma 4. TW →rf^X TR ⇒ (¬∃ TW′ such that TW →ST TW′ →ST TR ∧ wTW′(X) ∈ H).

Proof. By contradiction, let us assume that there are transactions TW, TW′ and TR and an object X such that:
- TW →rf^X TR,
- wTW′(X)v ∈ H,
- TW →ST TW′ →ST TR.
As both TW and TW′ write X in shared memory, they have necessarily committed (a write in shared memory occurs only at line 16 during the execution of try_to_commitT(), abbreviated ttc in the following). Moreover, their linearization points ℓTW and ℓTW′ occur while they hold the lock on X (before committing), from which we have the following implications:
- TW →ST TW′ ⇒ ℓTW <H ℓTW′,
- ℓTW <H ℓTW′ ⇒ RLTW(X, ttc) <