Parallel Programming
Edited by:
Paul E. McKenney
Linux Technology Center
IBM Beaverton
[email protected]
January 2, 2017
Legal Statement
This work represents the views of the editor and the authors and does not necessarily
represent the view of their respective employers.
Trademarks:
• IBM, zSeries, and PowerPC are trademarks or registered trademarks of Interna-
tional Business Machines Corporation in the United States, other countries, or
both.
• Linux is a registered trademark of Linus Torvalds.
The non-source-code text and images in this document are provided under the terms
of the Creative Commons Attribution-Share Alike 3.0 United States license.1 In brief,
you may use the contents of this document for any purpose, personal, commercial, or
otherwise, so long as attribution to the authors is maintained. Likewise, the document
may be modified, and derivative works and translations made available, so long as
such modifications and derivations are offered to the public on equal terms as the
non-source-code text and images in the original document.
Source code is covered by various versions of the GPL.2 Some of this code is
GPLv2-only, as it derives from the Linux kernel, while other code is GPLv2-or-later.
See the comment headers of the individual source files within the CodeSamples directory
in the git archive3 for the exact licenses. If you are unsure of the license for a given
code fragment, you should assume GPLv2-only.
Combined work © 2005-2016 by Paul E. McKenney.
1 http://creativecommons.org/licenses/by-sa/3.0/us/
2 http://www.gnu.org/licenses/gpl-2.0.html
3 git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git
Contents

2 Introduction
  2.1 Historic Parallel Programming Difficulties
  2.2 Parallel Programming Goals
    2.2.1 Performance
    2.2.2 Productivity
    2.2.3 Generality
  2.3 Alternatives to Parallel Programming
    2.3.1 Multiple Instances of a Sequential Application
    2.3.2 Use Existing Parallel Software
    2.3.3 Performance Optimization
  2.4 What Makes Parallel Programming Hard?
    2.4.1 Work Partitioning
    2.4.2 Parallel Access Control
    2.4.3 Resource Partitioning and Replication
    2.4.4 Interacting With Hardware
    2.4.5 Composite Capabilities
    2.4.6 How Do Languages and Environments Assist With These Tasks?
  2.5 Discussion

    3.3.1 3D Integration
    3.3.2 Novel Materials and Processes
    3.3.3 Light, Not Electrons
    3.3.4 Special-Purpose Accelerators
    3.3.5 Existing Parallel Software
  3.4 Software Design Implications

5 Counting
  5.1 Why Isn’t Concurrent Counting Trivial?
  5.2 Statistical Counters
    5.2.1 Design
    5.2.2 Array-Based Implementation
    5.2.3 Eventually Consistent Implementation
    5.2.4 Per-Thread-Variable-Based Implementation
    5.2.5 Discussion
  5.3 Approximate Limit Counters
    5.3.1 Design
    5.3.2 Simple Limit Counter Implementation
    5.3.3 Simple Limit Counter Discussion
    5.3.4 Approximate Limit Counter Implementation
    5.3.5 Approximate Limit Counter Discussion
  5.4 Exact Limit Counters
    5.4.1 Atomic Limit Counter Implementation
    5.4.2 Atomic Limit Counter Discussion
    5.4.3 Signal-Theft Limit Counter Design
    5.4.4 Signal-Theft Limit Counter Implementation
    5.4.5 Signal-Theft Limit Counter Discussion
  5.5 Applying Specialized Parallel Counters
  5.6 Parallel Counting Discussion
    5.6.1 Parallel Counting Performance
    5.6.2 Parallel Counting Specializations

7 Locking
  7.1 Staying Alive
    7.1.1 Deadlock
    7.1.2 Livelock and Starvation
    7.1.3 Unfairness
    7.1.4 Inefficiency
  7.2 Types of Locks
    7.2.1 Exclusive Locks
    7.2.2 Reader-Writer Locks
    7.2.3 Beyond Reader-Writer Locks
    7.2.4 Scoped Locking
  7.3 Locking Implementation Issues
    7.3.1 Sample Exclusive-Locking Implementation Based on Atomic Exchange
    7.3.2 Other Exclusive-Locking Implementations
  7.4 Lock-Based Existence Guarantees
  7.5 Locking: Hero or Villain?
    7.5.1 Locking For Applications: Hero!
    7.5.2 Locking For Parallel Libraries: Just Another Tool
    7.5.3 Locking For Parallelizing Sequential Libraries: Villain!
  7.6 Summary

11 Validation
  11.1 Introduction
    11.1.1 Where Do Bugs Come From?
    11.1.2 Required Mindset
    11.1.3 When Should Validation Start?

E Credits
  E.1 Authors
  E.2 Reviewers
  E.3 Machine Owners
  E.4 Original Publications
  E.5 Figure Credits
  E.6 Other Support
Chapter 1
How to Use This Book
The purpose of this book is to help you program shared-memory parallel machines
without risking your sanity.1 We hope that this book’s design principles will help
you avoid at least some parallel-programming pitfalls. That said, you should think
of this book as a foundation on which to build, rather than as a completed cathedral.
Your mission, if you choose to accept it, is to help make further progress in the exciting
field of parallel programming—progress that will in time render this book obsolete.
Parallel programming is not as hard as some say, and we hope that this book makes your
parallel-programming projects easier and more fun.
In short, where parallel programming once focused on science, research, and grand-
challenge projects, it is quickly becoming an engineering discipline. We therefore
examine specific parallel-programming tasks and describe how to approach them. In
some surprisingly common cases, they can even be automated.
This book is written in the hope that presenting the engineering discipline underlying
successful parallel-programming projects will free a new generation of parallel hackers
from the need to slowly and painstakingly reinvent old wheels, enabling them to instead
focus their energy and creativity on new frontiers. We sincerely hope that parallel
programming brings you at least as much fun, excitement, and challenge as it has
brought to us!
1.1 Roadmap
This book is a handbook of widely applicable and heavily used design techniques, rather
than a collection of optimal algorithms with tiny areas of applicability. You are currently
reading Chapter 1, but you knew that already. Chapter 2 gives a high-level overview of
parallel programming.
Chapter 3 introduces shared-memory parallel hardware. After all, it is difficult
to write good parallel code unless you understand the underlying hardware. Because
hardware constantly evolves, this chapter will always be out of date. We will nevertheless
do our best to keep up. Chapter 4 then provides a very brief overview of common shared-
memory parallel-programming primitives.
Chapter 5 takes an in-depth look at parallelizing one of the simplest problems
imaginable, namely counting. Because almost everyone has an excellent grasp of
1 Or, perhaps more accurately, without much greater risk to your sanity than that incurred by non-parallel
programming. Which, come to think of it, might not be saying all that much.
counting, this chapter is able to delve into many important parallel-programming issues
without the distractions of more-typical computer-science problems. My impression is
that this chapter has seen the greatest use in parallel-programming coursework.
Chapter 6 introduces a number of design-level methods of addressing the issues
identified in Chapter 5. It turns out that it is important to address parallelism at the
design level when feasible: To paraphrase Dijkstra [Dij68], “retrofitted parallelism
considered grossly suboptimal” [McK12b].
The next three chapters examine three important approaches to synchronization.
Chapter 7 covers locking, which in 2014 is not only the workhorse of production-quality
parallel programming, but is also widely considered to be parallel programming’s worst
villain. Chapter 8 gives a brief overview of data ownership, an often overlooked but
remarkably pervasive and powerful approach. Finally, Chapter 9 introduces a number
of deferred-processing mechanisms, including reference counting, hazard pointers,
sequence locking, and RCU.
Chapter 10 applies the lessons of previous chapters to hash tables, which are heavily
used due to their excellent partitionability, which (usually) leads to excellent perfor-
mance and scalability.
As many have learned to their sorrow, parallel programming without validation is a
sure path to abject failure. Chapter 11 covers various forms of testing. It is of course
impossible to test reliability into your program after the fact, so Chapter 12 follows up
with a brief overview of a couple of practical approaches to formal verification.
Chapter 13 contains a series of moderate-sized parallel programming problems.
The difficulty of these problems varies, but they should be appropriate for someone who
has mastered the material in the previous chapters.
Chapter 14 looks at advanced synchronization methods, including memory barriers
and non-blocking synchronization, while Chapter 15 looks at the nascent field of
parallel real-time computing. Chapter 16 follows up with some ease-of-use advice.
Finally, Chapter 17 looks at a few possible future directions, including shared-memory
parallel system design, software and hardware transactional memory, and functional
programming for parallelism.
This chapter is followed by a number of appendices. The most popular of these
appears to be Appendix B, which covers memory barriers. Appendix C contains the
answers to the infamous Quick Quizzes, which are discussed in the next section.
In short, if you need a deep understanding of the material, then you should invest
some time into answering the Quick Quizzes. Don’t get me wrong, passively reading
the material can be quite valuable, but gaining full problem-solving capability really
does require that you practice solving problems.
I learned this the hard way during coursework for my late-in-life Ph.D. I was
studying a familiar topic, and was surprised at how few of the chapter’s exercises I
could answer off the top of my head.2 Forcing myself to answer the questions greatly
increased my retention of the material. So with these Quick Quizzes I am not asking
you to do anything that I have not been doing myself!
Finally, the most common learning disability is thinking that you already know. The
Quick Quizzes can be an extremely effective cure.
2 So I suppose that it was just as well that my professors refused to let me waive that class!
5. If your primary focus is scientific and technical computing, and you prefer a
patternist approach, you might try Mattson et al.’s textbook [MSM05]. It covers
Java, C/C++, OpenMP, and MPI. Its patterns are admirably focused first on design,
then on implementation.
6. If your primary focus is scientific and technical computing, and you are interested
in GPUs, CUDA, and MPI, you might check out Norm Matloff’s “Programming
on Parallel Machines” [Mat13].
7. If you are interested in POSIX Threads, you might take a look at David R. Buten-
hof’s book [But97]. In addition, W. Richard Stevens’s book [Ste92] covers UNIX
and POSIX, and Stewart Weiss’s lecture notes [Wei13] provide a thorough and
accessible introduction with a good set of examples.
8. If you are interested in C++11, you might like Anthony Williams’s “C++ Concur-
rency in Action: Practical Multithreading” [Wil12].
9. If you are interested in C++, but in a Windows environment, you might try Herb
Sutter’s “Effective Concurrency” series in Dr. Dobb’s Journal [Sut08]. This series
does a reasonable job of presenting a commonsense approach to parallelism.
10. If you want to try out Intel Threading Building Blocks, then perhaps James
Reinders’s book [Rei07] is what you are looking for.
11. Those interested in learning how various types of multi-processor hardware cache
organizations affect the implementation of kernel internals should take a look at
Curt Schimmel’s classic treatment of this subject [Sch94].
12. Finally, those using Java might be well-served by Doug Lea’s textbooks [Lea97,
GPB+07].
However, if you are interested in principles of parallel design for low-level software,
especially software written in C, read on!
This command will locate the file rcu_rcpls.c, which is called out in Sec-
tion 9.5.5. Other types of systems have well-known ways of locating files by filename.
To create patches or git pull requests, you will need the LaTeX source to the
book, which is at git://git.kernel.org/pub/scm/linux/kernel/git/
paulmck/perfbook.git. You will of course also need git and LaTeX, which
are available as part of most mainstream Linux distributions. Other packages may be
required, depending on the distribution you use. The required list of packages for a few
popular distributions is listed in the file FAQ-BUILD.txt in the LaTeX source to the
book.
To create and display a current LaTeX source tree of this book, use the list of Linux
commands shown in Figure 1.1. In some environments, the evince command that
displays perfbook.pdf may need to be replaced, for example, with acroread. The
git clone command need only be used the first time you create a PDF; subsequently,
you can run the commands shown in Figure 1.2 to pull in any updates and generate an
updated PDF. The commands in Figure 1.2 must be run within the perfbook directory
created by the commands shown in Figure 1.1.
PDFs of this book are sporadically posted at http://kernel.org/pub/linux/
kernel/people/paulmck/perfbook/perfbook.html and at http://www.
rdrop.com/users/paulmck/perfbook/.
The actual process of contributing patches and sending git pull requests is
similar to that of the Linux kernel, which is documented in the Documentation/
SubmittingPatches file in the Linux source tree. One important requirement is
that each patch (or commit, in the case of a git pull request) must contain a valid
Signed-off-by: line, which has the following format:
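Signed-off-by: My Name <[email protected]>

This is the standard sign-off line used by the Linux kernel’s Developer’s Certificate of
Origin process (the name and address above are, of course, placeholders for your own).
By adding this line, you certify the following: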
1. The contribution was created in whole or in part by me and I have the right to
submit it under the open source license indicated in the file; or
2. The contribution is based upon previous work that, to the best of my knowledge,
is covered under an appropriate open source License and I have the right under
that license to submit that work with modifications, whether created in whole
or in part by me, under the same open source license (unless I am permitted to
submit under a different license), as indicated in the file; or
3. The contribution was provided directly to me by some other person who certified
(a), (b) or (c) and I have not modified it.
4. I understand and agree that this project and the contribution are public and that
a record of the contribution (including all personal information I submit with
it, including my sign-off) is maintained indefinitely and may be redistributed
consistent with this project or the open source license(s) involved.
This is similar to the Developer’s Certificate of Origin (DCO) 1.1 used by the
Linux kernel. The only addition is item #4. This added item says that you wrote the
contribution yourself, as opposed to having (say) copied it from somewhere. If multiple
people authored a contribution, each should have a Signed-off-by: line.
You must use your real name: I unfortunately cannot accept pseudonymous or
anonymous contributions.
The language of this book is American English; however, the open-source nature
of this book permits translations, and I personally encourage them. The open-source
licenses covering this book additionally allow you to sell your translation, if you wish. I
do request that you send me a copy of the translation (hardcopy if available), but this
is a request made as a professional courtesy, and is not in any way a prerequisite to
the permission that you already have under the Creative Commons and GPL licenses.
Please see the FAQ.txt file in the source tree for a list of translations currently in
progress. I consider a translation effort to be “in progress” once at least one chapter has
been fully translated.
As noted at the beginning of this section, I am this book’s editor. However, if you
choose to contribute, it will be your book as well. With that, I offer you Chapter 2, our
introduction.
If parallel programming is so hard, why are there any
parallel programs?
Unknown
Chapter 2
Introduction
Parallel programming has earned a reputation as one of the most difficult areas a hacker
can tackle. Papers and textbooks warn of the perils of deadlock, livelock, race conditions,
non-determinism, Amdahl’s-Law limits to scaling, and excessive realtime latencies. And
these perils are quite real; we authors have accumulated uncounted years of experience
dealing with them, and all of the emotional scars, grey hairs, and hair loss that go with
such experiences.
However, new technologies that are difficult to use at introduction invariably become
easier over time. For example, the once-rare ability to drive a car is now commonplace
in many countries. This dramatic change came about for two basic reasons: (1) cars
became cheaper and more readily available, so that more people had the opportunity
to learn to drive, and (2) cars became easier to operate due to automatic transmissions,
automatic chokes, automatic starters, greatly improved reliability, and a host of other
technological improvements.
The same is true of many other technologies, including computers. It is no
longer necessary to operate a keypunch in order to program. Spreadsheets allow
most non-programmers to get results from their computers that would have required
a team of specialists a few decades ago. Perhaps the most compelling example is
web-surfing and content creation, which since the early 2000s has been easily done
by untrained, uneducated people using various now-commonplace social-networking
tools. As recently as 1968, such content creation was a far-out research project [Eng68],
described at the time as “like a UFO landing on the White House lawn” [Gri00].
Therefore, if you wish to argue that parallel programming will remain as difficult as
it is currently perceived by many to be, it is you who bears the burden of proof, keeping
in mind the many centuries of counter-examples in a variety of fields of endeavor.
2. The typical researcher’s and practitioner’s lack of experience with parallel sys-
tems.
Many of these historic difficulties are well on the way to being overcome. First, over
the past few decades, the cost of parallel systems has decreased from many multiples of
that of a house to a fraction of that of a bicycle, courtesy of Moore’s Law. Papers calling
out the advantages of multicore CPUs were published as early as 1996 [ONH+96]. IBM
introduced simultaneous multi-threading into its high-end POWER family in 2000, and
multicore in 2001. Intel introduced hyperthreading into its commodity Pentium line
in November 2000, and both AMD and Intel introduced dual-core CPUs in 2005. Sun
followed with the multicore/multi-threaded Niagara in late 2005. In fact, by 2008, it
was becoming difficult to find a single-CPU desktop system, with single-core CPUs
being relegated to netbooks and embedded devices. By 2012, even smartphones were
starting to sport multiple CPUs.
Second, the advent of low-cost and readily available multicore systems means
that the once-rare experience of parallel programming is now available to almost all
researchers and practitioners. In fact, parallel systems are now well within the budget of
students and hobbyists. We can therefore expect greatly increased levels of invention
and innovation surrounding parallel systems, and that increased familiarity will over
time make the once prohibitively expensive field of parallel programming much more
friendly and commonplace.
Third, in the 20th century, large systems of highly parallel software were almost
always closely guarded proprietary secrets. In happy contrast, the 21st century has
seen numerous open-source (and thus publicly available) parallel software projects,
including the Linux kernel [Tor03], database systems [Pos08, MS08], and message-
passing systems [The08, UoC08]. This book will draw primarily from the Linux kernel,
but will provide much material suitable for user-level applications.
Fourth, even though the large-scale parallel-programming projects of the 1980s and
1990s were almost all proprietary projects, these projects have seeded other communities
with a cadre of developers who understand the engineering discipline required to develop
production-quality parallel code. A major purpose of this book is to present this
engineering discipline.
Unfortunately, the fifth difficulty, the high cost of communication relative to that
of processing, remains largely in force. Although this difficulty has been receiving
increasing attention during the new millennium, according to Stephen Hawking, the
finite speed of light and the atomic nature of matter are likely to limit progress in this
area [Gar07, Moo03]. Fortunately, this difficulty has been in force since the late 1980s,
so that the aforementioned engineering discipline has evolved practical and effective
strategies for handling it. In addition, hardware designers are increasingly aware of
these issues, so perhaps future hardware will be more friendly to parallel software as
discussed in Section 3.3.
Quick Quiz 2.1: Come on now!!! Parallel programming has been known to be
exceedingly hard for many decades. You seem to be hinting that it is not so hard. What
sort of game are you playing?
1. Performance.
2. Productivity.
3. Generality.
Unfortunately, given the current state of the art, it is possible to achieve at best two
of these three goals for any given parallel program. These three goals therefore form the
iron triangle of parallel programming, a triangle upon which overly optimistic hopes all
too often come to grief.1
Quick Quiz 2.3: Oh, really??? What about correctness, maintainability, robustness,
and so on?
Quick Quiz 2.4: And if correctness, maintainability, and robustness don’t make the
list, why do productivity and generality?
Quick Quiz 2.5: Given that parallel programs are much harder to prove correct than
are sequential programs, again, shouldn’t correctness really be on the list?
Quick Quiz 2.6: What about just having fun?
Each of these goals is elaborated upon in the following sections.
2.2.1 Performance
Performance is the primary goal behind most parallel-programming effort. After all, if
performance is not a concern, why not do yourself a favor: Just write sequential code,
and be happy? It will very likely be easier and you will probably get done much more
quickly.
Quick Quiz 2.7: Are there no cases where parallel programming is about something
other than performance?
Note that “performance” is interpreted quite broadly here, including scalability
(performance per CPU) and efficiency (for example, performance per watt).
That said, the focus of performance has shifted from hardware to parallel software.
This change in focus is due to the fact that, although Moore’s Law continues to deliver
increases in transistor density, it has ceased to provide the traditional single-threaded
[Figure 2.1: CPU clock frequency / MIPS plotted by year, 1975 to 2015, on a logarithmic scale.]
performance increases. This can be seen in Figure 2.1,2 which shows that writing
single-threaded code and simply waiting a year or two for the CPUs to catch up may
no longer be an option. Given the recent trends on the part of all major manufacturers
towards multicore/multithreaded systems, parallelism is the way to go for those wanting
to avail themselves of the full performance of their systems.
Even so, the first goal is performance rather than scalability, especially given that the
easiest way to attain linear scalability is to reduce the performance of each CPU [Tor01].
Given a four-CPU system, which would you prefer? A program that provides 100
transactions per second on a single CPU, but does not scale at all? Or a program that
provides 10 transactions per second on a single CPU, but scales perfectly? The first
program seems like a better bet: even with perfect scaling, the second program would
deliver only 40 transactions per second on the four-CPU system. The answer might well
change, however, if you happened to have a 32-CPU system, on which perfect scaling
would yield 320 transactions per second.
That said, just because you have multiple CPUs is not necessarily in and of itself
a reason to use them all, especially given the recent decreases in price of multi-CPU
systems. The key point to understand is that parallel programming is primarily a
performance optimization, and, as such, it is one potential optimization of many. If your
program is fast enough as currently written, there is no reason to optimize, either by
parallelizing it or by applying any of a number of potential sequential optimizations.3
By the same token, if you are looking to apply parallelism as an optimization to a
sequential program, then you will need to compare parallel algorithms to the best
sequential algorithms. This may require some care, as far too many publications ignore
2 This plot shows clock frequencies for newer CPUs theoretically capable of retiring one or more
instructions per clock, and MIPS (millions of instructions per second, usually from the old Dhrystone
benchmark) for older CPUs requiring multiple clocks to execute even the simplest instruction. The reason for
shifting between these two measures is that the newer CPUs’ ability to retire multiple instructions per clock is
typically limited by memory-system performance. Furthermore, the benchmarks commonly used on the older
CPUs are obsolete, and it is difficult to run the newer benchmarks on systems containing the old CPUs, in part
because it is hard to find working instances of the old CPUs.
3 Of course, if you are a hobbyist whose primary interest is writing parallel software, that is more than
enough reason to parallelize whatever software you are interested in.
[Figure 2.2: MIPS per die plotted by year, 1975 to 2015, on a logarithmic scale.]
2.2.2 Productivity
Quick Quiz 2.8: Why all this prattling on about non-technical issues??? And not just
any non-technical issue, but productivity of all things? Who cares?
Productivity has been becoming increasingly important in recent decades. To see
this, consider that the price of early computers was tens of millions of dollars at a time
when engineering salaries were but a few thousand dollars a year. If dedicating a team
of ten engineers to such a machine would improve its performance, even by only 10%,
then their salaries would be repaid many times over.
One such machine was the CSIRAC, the oldest still-intact stored-program computer,
which was put into operation in 1949 [Mus04, Dep06]. Because this machine was built
before the transistor era, it was constructed of 2,000 vacuum tubes, ran with a clock
frequency of 1kHz, consumed 30kW of power, and weighed more than three metric tons.
Given that this machine had but 768 words of RAM, it is safe to say that it did not suffer
from the productivity issues that often plague today’s large-scale software projects.
Today, it would be quite difficult to purchase a machine with so little computing
power. Perhaps the closest equivalents are 8-bit embedded microprocessors exemplified
by the venerable Z80 [Wik08], but even the old Z80 had a CPU clock frequency more
than 1,000 times faster than the CSIRAC. The Z80 CPU had 8,500 transistors, and could
be purchased in 2008 for less than $2 US per unit in 1,000-unit quantities. In stark
contrast to the CSIRAC, software-development costs are anything but insignificant for
the Z80.
The CSIRAC and the Z80 are two points in a long-term trend, as can be seen in
Figure 2.2. This figure plots an approximation to computational power per die over the
past three decades, showing a consistent four-order-of-magnitude increase. Note that
the advent of multicore CPUs has permitted this increase to continue unabated despite
the clock-frequency wall encountered in 2003.
One of the inescapable consequences of the rapid decrease in the cost of hardware
2.2.3 Generality
One way to justify the high cost of developing parallel software is to strive for maximal
generality. All else being equal, the cost of a more-general software artifact can be
spread over more users than that of a less-general one. In fact, this economic force
explains much of the maniacal focus on portability, which can be seen as an important
special case of generality.4
Unfortunately, generality often comes at the cost of performance, productivity, or
both. For example, portability is often achieved via adaptation layers, which inevitably
exact a performance penalty. To see this more generally, consider the following popular
parallel programming environments:
C/C++ “Locking Plus Threads” : This category, which includes POSIX Threads
(pthreads) [Ope97], Windows Threads, and numerous operating-system kernel
environments, offers excellent performance (at least within the confines of a
single SMP system) and also offers good generality. Pity about the relatively low
productivity.
MPI : This Message Passing Interface [MPI08] powers the largest scientific and
technical computing clusters in the world and offers unparalleled performance
and scalability. In theory, it is general purpose, but it is mainly used for scientific
and technical computing. Its productivity is believed by many to be even lower
than that of C/C++ “locking plus threads” environments.
OpenMP : This set of compiler directives can be used to parallelize loops. It is thus
quite specific to this task, and this specificity often limits its performance. It is,
however, much easier to use than MPI or C/C++ “locking plus threads.”
[Figure 2.3: Software layers (application, middleware such as a DBMS, system libraries, operating-system kernel, firmware, and hardware) set against the goals of productivity, performance, and generality.]
[Figure 2.4: Tradeoff between productivity and generality: each user’s special-purpose environment is the most productive for that user, while general-purpose environments targeting the hardware or an abstraction sit in the middle.]
It is important to note that a tradeoff between productivity and generality has existed
for centuries in many fields. For but one example, a nailgun is more productive than
a hammer for driving nails, but in contrast to the nailgun, a hammer can be used for
many things besides driving nails. It should therefore be no surprise to see similar
tradeoffs appear in the field of parallel computing. This tradeoff is shown schematically
in Figure 2.4. Here, users 1, 2, 3, and 4 have specific jobs that they need the computer to
help them with. The most productive possible language or environment for a given user is
one that simply does that user’s job, without requiring any programming, configuration,
or other setup.
Quick Quiz 2.10: This is a ridiculously unachievable ideal! Why not focus on
something that is achievable in practice?
Unfortunately, a system that does the job required by user 1 is unlikely to do
user 2’s job. In other words, the most productive languages and environments are
domain-specific, and thus by definition lacking generality.
Another option is to tailor a given programming language or environment to the
hardware system (for example, low-level languages such as assembly, C, C++, or Java)
or to some abstraction (for example, Haskell, Prolog, or Snobol), as is shown by the
circular region near the center of Figure 2.4. These languages can be considered to
be general in the sense that they are equally ill-suited to the jobs required by users 1,
2, 3, and 4. In other words, their generality is purchased at the expense of decreased
productivity when compared to domain-specific languages and environments. Worse yet,
a language that is tailored to a given abstraction is also likely to suffer from performance
and scalability problems unless and until someone figures out how to efficiently map
that abstraction to real hardware.
Is there no escape from the iron triangle’s three conflicting goals of performance,
productivity, and generality?
It turns out that there often is an escape, for example, using the alternatives to
parallel programming discussed in the next section. After all, parallel programming can
be a great deal of fun, but it is not always the best tool for the job.
from parallelism is limited to roughly the number of CPUs (but see Section 6.5 for an
interesting exception). In contrast, the speedup available from traditional single-threaded
software optimizations can be much larger. For example, replacing a long linked list with
a hash table or a search tree can improve performance by many orders of magnitude. This
highly optimized single-threaded program might run much faster than its unoptimized
parallel counterpart, making parallelization unnecessary. Of course, a highly optimized
parallel program would be even better, aside from the added development effort required.
Furthermore, different programs might have different performance bottlenecks. For
example, if your program spends most of its time waiting on data from your disk drive,
using multiple CPUs will probably just increase the time wasted waiting for the disks.
In fact, if the program was reading from a single large file laid out sequentially on a
rotating disk, parallelizing your program might well make it a lot slower due to the
added seek overhead. You should instead optimize the data layout so that the file can be
smaller (thus faster to read), split the file into chunks which can be accessed in parallel
from different drives, cache frequently accessed data in main memory, or, if possible,
reduce the amount of data that must be read.
Quick Quiz 2.12: What other bottlenecks might prevent additional CPUs from
providing additional performance?
Parallelism can be a powerful optimization technique, but it is not the only such
technique, nor is it appropriate for all situations. Of course, the easier it is to parallelize
your program, the more attractive parallelization becomes as an optimization. Paral-
lelization has a reputation of being quite difficult, which leads to the question “exactly
what makes parallel programming so difficult?”
[Figure 2.5: The categories of tasks required of parallel programmers (work partitioning, parallel access control, resource partitioning and replication, and interacting with hardware), set against the goals of performance, productivity, and generality.]
tasks. These tasks fall into the four categories shown in Figure 2.5, each of which is
covered in the following sections.
2.5 Discussion
This section has given an overview of the difficulties with, goals of, and alternatives
to parallel programming. This overview was followed by a discussion of what can
make parallel programming hard, along with a high-level approach for dealing with
parallel programming’s difficulties. Those who still insist that parallel programming
is impossibly difficult should review some of the older guides to parallel programming
[Seq88, Dig89, BK85, Inm85]. The following quote from Andrew Birrell’s
monograph [Dig89] is especially telling:
Writing concurrent programs has a reputation for being exotic and difficult.
I believe it is neither. You need a system that provides you with good
primitives and suitable libraries, you need a basic caution and carefulness,
you need an armory of useful techniques, and you need to know of the
common pitfalls. I hope that this paper has helped you towards sharing my
belief.
The authors of these older guides were well up to the parallel programming challenge
back in the 1980s. As such, there are simply no excuses for refusing to step up to the
parallel-programming challenge here in the 21st century!
We are now ready to proceed to the next chapter, which dives into the relevant
properties of the parallel hardware underlying our parallel software.
Premature abstraction is the root of all evil.
A cast of thousands
Chapter 3
Hardware and its Habits
Most people have an intuitive understanding that passing messages between systems is
considerably more expensive than performing simple calculations within the confines of
a single system. However, it is not always so clear that communicating among threads
within the confines of a single shared-memory system can also be quite expensive. This
chapter therefore looks at the cost of synchronization and communication within a
shared-memory system. These few pages can do no more than scratch the surface of
shared-memory parallel hardware design; readers desiring more detail would do well to
start with a recent edition of Hennessy and Patterson’s classic text [HP11, HP95].
Quick Quiz 3.1: Why should parallel programmers bother learning low-level prop-
erties of the hardware? Wouldn’t it be easier, better, and more general to remain at a
higher level of abstraction?
3.1 Overview
Careless reading of computer-system specification sheets might lead one to believe that
CPU performance is a footrace on a clear track, as illustrated in Figure 3.1, where the
race always goes to the swiftest.
Although there are a few CPU-bound benchmarks that approach the ideal shown
in Figure 3.1, the typical program more closely resembles an obstacle course than a
race track. This is because the internal architecture of CPUs has changed dramatically
over the past few decades, courtesy of Moore’s Law. These changes are described in the
following sections.
matrices or vectors. The CPU can then correctly predict that the branch at the end of the
loop will be taken in almost all cases, allowing the pipeline to be kept full and the CPU
to execute at full speed.
However, branch prediction is not always so easy. For example, consider a program
with many loops, each of which iterates a small but random number of times. For
another example, consider an object-oriented program with many virtual objects that can
reference many different real objects, all with different implementations for frequently
invoked member functions. In these cases, it is difficult or even impossible for the
CPU to predict where the next branch might lead. Then either the CPU must stall
waiting for execution to proceed far enough to be certain where that branch leads, or
it must guess. Although guessing works extremely well for programs with predictable
control flow, for unpredictable branches (such as those in binary search) the guesses will
frequently be wrong. A wrong guess can be expensive because the CPU must discard
any speculatively executed instructions following the corresponding branch, resulting in
a pipeline flush. If pipeline flushes appear too frequently, they drastically reduce overall
performance, as fancifully depicted in Figure 3.3.
[Figure 3.3: Cartoon of a CPU suffering a “pipeline error” due to a branch misprediction.]
Unfortunately, pipeline flushes are not the only hazards in the obstacle course that
modern CPUs must run. The next section covers the hazards of referencing memory.
1 It is only fair to add that each of these single cycles lasted no less than 1.6 microseconds.
One such obstacle is atomic operations. The problem here is that the whole idea of an
atomic operation conflicts with the piece-at-a-time assembly-line operation of a CPU
pipeline. To hardware designers’ credit, modern CPUs use a number of extremely clever
tricks to make such operations look atomic even though they are in fact being executed
piece-at-a-time, with one common trick being to identify all the cachelines containing
the data to be atomically operated on, ensure that these cachelines are owned by the
CPU executing the atomic operation, and only then proceed with the atomic operation
while ensuring that these cachelines remain owned by this CPU. Because all the data
is private to this CPU, other CPUs are unable to interfere with the atomic operation
despite the piece-at-a-time nature of the CPU’s pipeline. Needless to say, this sort of
trick can require that the pipeline must be delayed or even flushed in order to perform
the setup operations that permit a given atomic operation to complete correctly.
In contrast, when executing a non-atomic operation, the CPU can load values from
cachelines as they appear and place the results in the store buffer, without the need
to wait for cacheline ownership. Fortunately, CPU designers have focused heavily on
atomic operations, so that as of early 2014 they have greatly reduced their overhead.
Even so, the resulting effect on performance is all too often as depicted in Figure 3.5.
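For concreteness, here is a minimal sketch of the kind of single-variable atomic operation
under discussion: a compare-and-swap loop that atomically increments a shared counter.
It uses the C11 <stdatomic.h> interface, and the names are illustrative rather than taken
from the book’s CodeSamples.

#include <stdatomic.h>

static atomic_long counter;

/* Atomically add one to counter using a compare-and-swap loop. */
static void inc_counter(void)
{
        long old = atomic_load(&counter);

        /* If counter still holds old, replace it with old + 1; on failure,
         * old is refreshed with the current value and the loop retries. */
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
                continue;
}

A failed compare-and-swap typically means that some other CPU modified the counter in
the meantime, requiring another round of the cacheline shuffling described above.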
Unfortunately, atomic operations usually apply only to single elements of data. Be-
cause many parallel algorithms require that ordering constraints be maintained between
updates of multiple data elements, most CPUs provide memory barriers. These memory
barriers also serve as performance-sapping obstacles, as described in the next section.
Quick Quiz 3.2: What types of machines would allow atomic operations on multiple
data elements?
[Figure 3.6: CPU meets a memory barrier.]
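For concreteness, consider a lock-protected increment along the following lines (a minimal
sketch, assuming a Linux-kernel-style spinlock API):

spin_lock(&mylock);   /* acquisition implies a memory barrier */
a = a + 1;            /* the increment must stay within the critical section */
spin_unlock(&mylock); /* release implies a memory barrier */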
If the CPU were not constrained to execute these statements in the order shown, the
effect would be that the variable “a” would be incremented without the protection of
“mylock”, which would certainly defeat the purpose of acquiring it. To prevent such
destructive reordering, locking primitives contain either explicit or implicit memory
barriers. Because the whole purpose of these memory barriers is to prevent reorderings
that the CPU would otherwise undertake in order to increase performance, memory
barriers almost always reduce performance, as depicted in Figure 3.6.
As with atomic operations, CPU designers have been working hard to reduce
memory-barrier overhead, and have made substantial progress.
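In portable C, such ordering constraints can be expressed with release and acquire
operations, which the compiler implements using whatever memory-barrier instructions
(if any) the CPU requires. The following is a minimal sketch, assuming C11
<stdatomic.h> and illustrative names:

#include <stdatomic.h>
#include <stdio.h>

static int data;          /* payload published by the producer */
static atomic_int ready;

static void producer(void)
{
        data = 42;                                    /* ordinary store */
        atomic_store_explicit(&ready, 1,
                              memory_order_release);  /* barrier: publish */
}

static void consumer(void)
{
        if (atomic_load_explicit(&ready, memory_order_acquire)) /* barrier */
                printf("data = %d\n", data);          /* sees data == 42 */
}

If the consumer observes ready == 1, the acquire/release pairing guarantees that it also
observes data == 42, with neither the compiler nor the CPU reordering the accesses
across the barrier.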
work being performed is a key design parameter. A major goal of parallel hardware de-
sign is to reduce this ratio as needed to achieve the relevant performance and scalability
goals. In turn, as will be seen in Chapter 6, a major goal of parallel software design is to
reduce the frequency of expensive operations like communications cache misses.
Of course, it is one thing to say that a given operation is an obstacle, and quite
another to show that the operation is a significant obstacle. This distinction is discussed
in the following sections.
3.2 Overheads
This section presents actual overheads of the obstacles to performance listed out in the
previous section. However, it is first necessary to get a rough view of hardware system
architecture, which is the subject of the next section.
[Figure: System hardware architecture: eight CPUs (0-7) in pairs, each pair sharing a cache and an interconnect on its die, with the dies joined by a system interconnect.]
1. CPU 0 checks its local cache, and does not find the cacheline.
2. The request is forwarded to CPU 0’s and 1’s interconnect, which checks CPU 1’s
local cache, and does not find the cacheline.
3. The request is forwarded to the system interconnect, which checks with the other
three dies, learning that the cacheline is held by the die containing CPU 6 and 7.
4. The request is forwarded to CPU 6’s and 7’s interconnect, which checks both
CPUs’ caches, finding the value in CPU 7’s cache.
5. CPU 7 forwards the cacheline to its interconnect, and also flushes the cacheline
from its cache.
6. CPU 6’s and 7’s interconnect forwards the cacheline to the system interconnect.
7. The system interconnect forwards the cacheline to CPU 0’s and 1’s interconnect.
8. CPU 0’s and 1’s interconnect forwards the cacheline to CPU 0’s cache.
9. CPU 0 can now perform the CAS operation on the value in its cache.
Quick Quiz 3.4: This is a simplified sequence of events? How could it possibly be
any more complex?
Quick Quiz 3.5: Why is it necessary to flush the cacheline from CPU 7’s cache?
This simplified sequence is just the beginning of a discipline called cache-coherency
protocols [HP95, CSG99, MHS12, SHW11].
Operation              Cost (ns)    Ratio (cost/clock)
Clock period                 0.6                   1.0
Best-case CAS               37.9                  63.2
Best-case lock              65.6                 109.3
Single cache miss          139.5                 232.5
CAS cache miss             306.0                 510.0
Comms Fabric             5,000.0               8,330.0
Global Comms       195,000,000.0         325,000,000.0
the operations’ costs are nevertheless normalized to a clock period in the third column,
labeled “Ratio”. The first thing to note about this table is the large values of many of the
ratios.
The best-case compare-and-swap (CAS) operation consumes almost forty nanosec-
onds, a duration more than sixty times that of the clock period. Here, “best case” means
that the same CPU now performing the CAS operation on a given variable was the
last CPU to operate on this variable, so that the corresponding cache line is already
held in that CPU’s cache. Similarly, the best-case lock operation (a “round trip” pair
consisting of a lock acquisition followed by a lock release) consumes more than sixty
nanoseconds, or more than one hundred clock cycles. Again, “best case” means that
the data structure representing the lock is already in the cache belonging to the CPU
acquiring and releasing the lock. The lock operation is more expensive than CAS
because it requires two atomic operations on the lock data structure.
An operation that misses the cache consumes almost one hundred and forty nanosec-
onds, or more than two hundred clock cycles. The code used for this cache-miss
measurement passes the cache line back and forth between a pair of CPUs, so this cache
miss is satisfied not from memory, but rather from the other CPU’s cache. A CAS
operation, which must look at the old value of the variable as well as store a new value,
consumes over three hundred nanoseconds, or more than five hundred clock cycles.
Think about this a bit. In the time required to do one CAS operation, the CPU could
have executed more than five hundred normal instructions. This should demonstrate the
limitations not only of fine-grained locking, but of any other synchronization mechanism
relying on fine-grained global agreement.
Quick Quiz 3.6: Surely the hardware designers could be persuaded to improve
this situation! Why have they been content with such abysmal performance for these
single-instruction operations?
I/O operations are even more expensive. As shown in the “Comms Fabric” row,
high performance (and expensive!) communications fabric, such as InfiniBand or any
number of proprietary interconnects, has a latency of roughly five microseconds for an
end-to-end round trip, during which time more than eight thousand instructions might
have been executed. Standards-based communications networks often require some
sort of protocol processing, which further increases the latency. Of course, geographic
distance also increases latency, with the speed-of-light through optical fiber latency
around the world coming to roughly 195 milliseconds, or more than 300 million clock
[Figure 3.11: Latency benefit of 3D integration: a 3 cm silicon die versus a stack of four 1.5 cm dies with roughly 70 um layer thickness.]
1. 3D integration,
2. Novel materials and processes,
3. Substituting light for electricity,
4. Special-purpose accelerators, and
5. Existing parallel software.
3.3.1 3D Integration
3-dimensional integration (3DI) is the practice of bonding very thin silicon dies to
each other in a vertical stack. This practice provides potential benefits, but also poses
significant fabrication challenges [Kni08].
Perhaps the most important benefit of 3DI is decreased path length through the
system, as shown in Figure 3.11. A 3-centimeter silicon die is replaced with a stack of
four 1.5-centimeter dies, in theory decreasing the maximum path through the system by
a factor of two, keeping in mind that each layer is quite thin. In addition, given proper
attention to design and placement, long horizontal electrical connections (which are
both slow and power hungry) can be replaced by short vertical electrical connections,
which are both faster and more power efficient.
However, delays due to levels of clocked logic will not be decreased by 3D in-
tegration, and significant manufacturing, testing, power-supply, and heat-dissipation
problems must be solved for 3D integration to reach production while still delivering on
its promise. The heat-dissipation problems might be solved using semiconductors based
on diamond, which is a good conductor for heat, but an electrical insulator. That said, it
remains difficult to grow large single diamond crystals, to say nothing of slicing them
into wafers. In addition, it seems unlikely that any of these technologies will be able to
deliver the exponential increases to which some people have become accustomed. That
said, they may be necessary steps on the path to the late Jim Gray’s “smoking hairy golf
balls” [Gra02].
limits, but there are nevertheless a few avenues of research and development focused on
working around these fundamental limits.
One workaround for the atomic nature of matter is so-called “high-K dielectric”
materials, which allow larger devices to mimic the electrical properties of infeasibly
small devices. These materials pose some severe fabrication challenges, but nevertheless
may help push the frontiers out a bit farther. Another more-exotic workaround stores
multiple bits in a single electron, relying on the fact that a given electron can exist at a
number of energy levels. It remains to be seen if this particular approach can be made
to work reliably in production semiconductor devices.
Another proposed workaround is the “quantum dot” approach that allows much
smaller device sizes, but which is still in the research stage.
The lesson should be quite clear: parallel algorithms must be explicitly designed with
these hardware properties firmly in mind. One approach is to run nearly independent
threads. The less frequently the threads communicate, whether by atomic operations,
locks, or explicit messages, the better the application’s performance and scalability will
be. This approach will be touched on in Chapter 5, explored in Chapter 6, and taken to
its logical extreme in Chapter 8.
Another approach is to make sure that any sharing be read-mostly, which allows the
CPUs’ caches to replicate the read-mostly data, in turn allowing all CPUs fast access.
This approach is touched on in Section 5.2.3, and explored more deeply in Chapter 9.
In short, achieving excellent parallel performance and scalability means striving for
embarrassingly parallel algorithms and implementations, whether by careful choice of
data structures and algorithms, use of existing parallel applications and environments, or
transforming the problem into one for which an embarrassingly parallel solution exists.
Quick Quiz 3.10: OK, if we are going to have to apply distributed-programming
techniques to shared-memory parallel programs, why not just always use these dis-
tributed techniques and dispense with shared memory?
So, to sum up:
1. The good news is that multicore systems are inexpensive and readily available.
2. More good news: The overhead of many synchronization operations is much
lower than it was on parallel systems from the early 2000s.
3. The bad news is that the overhead of cache misses is still high, especially on large
systems.
The remainder of this book describes ways of handling this bad news.
In particular, Chapter 4 will cover some of the low-level tools used for parallel
programming, Chapter 5 will investigate problems and solutions to parallel counting,
and Chapter 6 will discuss design disciplines that promote performance and scalability.
You are only as good as your tools, and your tools
are only as good as you are.
Unknown
Chapter 4
Tools of the Trade
This chapter provides a brief introduction to some basic tools of the parallel-programming
trade, focusing mainly on those available to user applications running on operating
systems similar to Linux. Section 4.1 begins with scripting languages, Section 4.2
describes the multi-process parallelism supported by the POSIX API and touches on
POSIX threads, Section 4.3 presents analogous operations in other environments, and
finally, Section 4.4 helps to choose the tool that will get the job done.
Quick Quiz 4.1: You call these tools??? They look more like low-level synchro-
nization primitives to me!
Please note that this chapter provides but a brief introduction. More detail is available
from the references cited (and especially from the Internet), and more information on how
best to use these tools will be provided in later chapters.
Lines 1 and 2 launch two instances of this program, redirecting their output to two
separate files, with the & character directing the shell to run the two instances of the
program in the background. Line 3 waits for both instances to complete, and lines 4
and 5 display their output. The resulting execution is as shown in Figure 4.1: the two
instances of compute_it execute in parallel, wait completes after both of them do,
and then the two instances of cat execute sequentially.
Quick Quiz 4.2: But this silly shell script isn’t a real parallel program! Why bother
with such trivia???
Quick Quiz 4.3: Is there a simpler way to create a parallel shell script? If so, how?
If not, why not?
For another example, the make software-build scripting language provides a -j
option that specifies how much parallelism should be introduced into the build process.
For example, typing make -j4 when building a Linux kernel specifies that up to four
parallel compiles be carried out concurrently.
It is hoped that these simple examples convince you that parallel programming need
not always be complex or difficult.
Quick Quiz 4.4: But if script-based parallel programming is so easy, why bother
with anything else?
1 pid = fork();
2 if (pid == 0) {
3 /* child */
4 } else if (pid < 0) {
5 /* parent, upon error */
6 perror("fork");
7 exit(-1);
8 } else {
9 /* parent, pid == child ID */
10 }
1 void waitall(void)
2 {
3 int pid;
4 int status;
5
6 for (;;) {
7 pid = wait(&status);
8 if (pid == -1) {
9 if (errno == ECHILD)
10 break;
11 perror("wait");
12 exit(-1);
13 }
14 }
15 }
The fork() primitive may be used as shown in Figure 4.2 (forkjoin.c). Line 1 executes the fork() primitive, and saves its return
value in local variable pid. Line 2 checks to see if pid is zero, in which case, this is the
child, which continues on to execute line 3. As noted earlier, the child may terminate via
the exit() primitive. Otherwise, this is the parent, which checks for an error return
from the fork() primitive on line 4, and prints an error and exits on lines 5-7 if so.
Otherwise, the fork() has executed successfully, and the parent therefore executes
line 9 with the variable pid containing the process ID of the child.
The parent process may use the wait() primitive to wait for its children to com-
plete. However, use of this primitive is a bit more complicated than its shell-script
counterpart, as each invocation of wait() waits for but one child process. It is there-
fore customary to wrap wait() into a function similar to the waitall() function
shown in Figure 4.3 (api-pthread.h), with this waitall() function having se-
mantics similar to the shell-script wait command. Each pass through the loop spanning
lines 6-15 waits on one child process. Line 7 invokes the wait() primitive, which
blocks until a child process exits, and returns that child’s process ID. If the process ID
is instead −1, this indicates that the wait() primitive was unable to wait on a child. If
so, line 9 checks for the ECHILD errno, which indicates that there are no more child
processes, so that line 10 exits the loop. Otherwise, lines 11 and 12 print an error and
exit.
Quick Quiz 4.5: Why does this wait() primitive need to be so complicated? Why
not just make it work like the shell-script wait does?
It is critically important to note that the parent and child do not share memory. This
is illustrated by the program shown in Figure 4.4 (forkjoinvar.c), in which the
child sets a global variable x to 1 on line 6, prints a message on line 7, and exits on
line 8. The parent continues at line 14, where it waits on the child, and on line 15 finds
that its copy of the variable x is still zero. The output is thus as follows:
Child process set x=1
Parent process sees x=0
1 int x = 0;
2 int pid;
3
4 pid = fork();
5 if (pid == 0) { /* child */
6 x = 1;
7 printf("Child process set x=1\n");
8 exit(0);
9 }
10 if (pid < 0) { /* parent, upon error */
11 perror("fork");
12 exit(-1);
13 }
14 waitall();
15 printf("Parent process sees x=%d\n", x);
Quick Quiz 4.6: Isn’t there a lot more to fork() and wait() than discussed
here?
The finest-grained parallelism requires shared memory, and this is covered in Sec-
tion 4.2.2. That said, shared-memory parallelism can be significantly more complex
than fork-join parallelism.
Note that this program, shown next, carefully makes sure that only one of the threads
stores a value to variable x at a time.
1 int x = 0;
2
3 void *mythread(void *arg)
4 {
5 x = 1;
6 printf("Child process set x=1\n");
7 return NULL;
8 }
9
10 int main(int argc, char *argv[])
11 {
12 pthread_t tid;
13 void *vp;
14
15 if (pthread_create(&tid, NULL,
16 mythread, NULL) != 0) {
17 perror("pthread_create");
18 exit(-1);
19 }
20 if (pthread_join(tid, &vp) != 0) {
21 perror("pthread_join");
22 exit(-1);
23 }
24 printf("Parent process sees x=%d\n", x);
25 return 0;
26 }
Any situation in which one thread might be storing a value to a given variable while
some other thread either loads from or stores to that same variable is termed a “data
race”. Because the C language makes no guarantee that the results of a data race will
be in any way reasonable, we need some way of safely accessing and modifying data
concurrently, such as the locking primitives discussed in the following section.
Quick Quiz 4.8: If the C language makes no guarantees in the presence of a data race,
then why does the Linux kernel have so many data races? Are you trying to tell me that
the Linux kernel is completely broken???
Figure 4.6 demonstrates POSIX locking: its opening lines define a pair of POSIX locks,
lock_a and lock_b, together with the shared variable x.
Lines 5-28 define a function lock_reader(), which repeatedly reads the shared
variable x while holding the lock specified by arg. Line 10 casts arg to a pointer to a
pthread_mutex_t, as required by the pthread_mutex_lock() and pthread_
mutex_unlock() primitives.
Quick Quiz 4.10: Why not simply make the argument to lock_reader() on
line 5 of Figure 4.6 be a pointer to a pthread_mutex_t?
Lines 12-15 acquire the specified pthread_mutex_t, checking for errors and
exiting the program if any occur. Lines 16-23 repeatedly check the value of x, printing
the new value each time that it changes. Line 22 sleeps for one millisecond, which
allows this demonstration to run nicely on a uniprocessor machine. Lines 24-27 release
the pthread_mutex_t, again checking for errors and exiting the program if any
occur. Finally, line 28 returns NULL, again to match the function type required by
pthread_create().
Quick Quiz 4.11: Writing four lines of code for each acquisition and release of a
pthread_mutex_t sure seems painful! Isn’t there a better way?
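The four-line acquisition and release sequences that the text (and Quick Quiz 4.11)
refer to look roughly like the following sketch, which wraps them in a hypothetical
run_locked() helper:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;

/* Run func() while holding lock_a, aborting on any pthread error,
 * in the style used by lock_reader() and lock_writer(). */
void run_locked(void (*func)(void))
{
        if (pthread_mutex_lock(&lock_a) != 0) {
                perror("pthread_mutex_lock");
                exit(-1);
        }
        func();  /* the critical section */
        if (pthread_mutex_unlock(&lock_a) != 0) {
                perror("pthread_mutex_unlock");
                exit(-1);
        }
}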
Lines 31-49 of Figure 4.6 show lock_writer(), which periodically updates
the shared variable x while holding the specified pthread_mutex_t. As with
lock_reader(), line 34 casts arg to a pointer to pthread_mutex_t, lines 36-
39 acquire the specified lock, and lines 44-47 release it. While holding the lock,
lines 40-43 increment the shared variable x, sleeping for five milliseconds between each
increment.
Figure 4.7 shows a code fragment that runs lock_reader() and lock_writer()
as threads using the same lock, namely, lock_a. Lines 2-6 create a thread running
lock_reader(), and then lines 7-11 create a thread running lock_writer().
Lines 12-19 wait for both threads to complete. The output of this code fragment is as
follows:
Creating two threads using same lock:
lock_reader(): x = 0
Because both threads are using the same lock, the lock_reader() thread cannot
see any of the intermediate values of x produced by lock_writer() while holding
the lock.
Quick Quiz 4.12: Is “x = 0” the only possible output from the code fragment shown
in Figure 4.7? If so, why? If not, what other output could appear, and why?
Figure 4.8 shows a similar code fragment, but this time using different locks: lock_
a for lock_reader() and lock_b for lock_writer(). The output of this code
fragment is as follows:
Creating two threads w/different locks:
lock_reader(): x = 0
lock_reader(): x = 1
lock_reader(): x = 2
lock_reader(): x = 3
Because the two threads are using different locks, they do not exclude each other,
and can run concurrently. The lock_reader() function can therefore see the inter-
mediate values of x stored by lock_writer().
Quick Quiz 4.13: Using different locks could cause quite a bit of confusion, what
with threads seeing each others’ intermediate states. So should well-written parallel
programs restrict themselves to using a single lock in order to avoid this kind of
confusion?
Quick Quiz 4.14: In the code shown in Figure 4.8, is lock_reader() guaran-
teed to see all the values produced by lock_writer()? Why or why not?
Quick Quiz 4.15: Wait a minute here!!! Figure 4.7 didn’t initialize shared variable
x, so why does it need to be initialized in Figure 4.8?
Although there is quite a bit more to POSIX exclusive locking, these primitives
provide a good start and are in fact sufficient in a great many situations. The next section
takes a brief look at POSIX reader-writer locking.
[Figure 4.10: Reader-Writer Lock Scalability. The y axis shows performance relative to ideal (0 to 1.1), the x axis the number of CPUs (threads) from 0 to 140, with one curve per lock-hold duration (labeled 1K, 10K, 100K, 1M, and 10M in the visible portion) plus an “ideal” line.]
This variable is initially set to GOFLAG_INIT, then set to GOFLAG_RUN after all the
reader threads have started, and finally set to GOFLAG_STOP to terminate the test run.
Lines 12-41 define reader(), which is the reader thread. Line 18 atomically
increments the nreadersrunning variable to indicate that this thread is now running,
and lines 19-21 wait for the test to start. The READ_ONCE() primitive forces the
compiler to fetch goflag on each pass through the loop—the compiler would otherwise
be within its rights to assume that the value of goflag would never change.
Quick Quiz 4.16: Instead of using READ_ONCE() everywhere, why not just
declare goflag as volatile on line 10 of Figure 4.9?
Quick Quiz 4.17: READ_ONCE() only affects the compiler, not the CPU. Don’t we
also need memory barriers to make sure that the change in goflag’s value propagates
to the CPU in a timely fashion in Figure 4.9?
Quick Quiz 4.18: Would it ever be necessary to use READ_ONCE() when access-
ing a per-thread variable, for example, a variable declared using the gcc __thread
storage class?
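A minimal sketch of the wait-to-start loop that the preceding paragraphs describe, using
the READ_ONCE() primitive (the GOFLAG_* values shown are illustrative assumptions;
only their distinctness matters):

#define GOFLAG_INIT 0
#define GOFLAG_RUN  1
#define GOFLAG_STOP 2

int goflag = GOFLAG_INIT;

/* Each reader thread spins here until the parent sets goflag to
 * GOFLAG_RUN.  READ_ONCE() forces goflag to be re-fetched on every
 * pass through the loop. */
static void wait_for_go(void)
{
        while (READ_ONCE(goflag) == GOFLAG_INIT)
                continue;
}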
The loop spanning lines 22-38 carries out the performance test. Lines 23-26 acquire
the lock, lines 27-29 hold the lock for the specified duration (and the barrier()
directive prevents the compiler from optimizing the loop out of existence), lines 30-33
release the lock, and lines 34-36 wait for the specified duration before re-acquiring the
lock. Line 37 counts this lock acquisition.
Line 39 moves the lock-acquisition count to this thread’s element of the readcounts[]
array, and line 40 returns, terminating this thread.
Figure 4.10 shows the results of running this test on a 64-core Power-5 system
with two hardware threads per core for a total of 128 software-visible CPUs. The
thinktime parameter was zero for all these tests, and the holdtime parameter set
to values ranging from one thousand (“1K” on the graph) to 100 million (“100M” on
the graph). The actual value plotted is:
L_N / (N L_1)    (4.1)
where N is the number of threads, L_N is the number of lock acquisitions by N threads,
and L_1 is the number of lock acquisitions by a single thread.
The first compare-and-swap variant returns 1 if the operation succeeded and 0 if it
failed, for example, if the prior value was not equal to the specified old value. The
second variant returns the prior value of the location, which, if equal to the specified
old value, indicates that the operation succeeded. Either of these compare-and-swap
operations is “universal” in the sense that any atomic operation on a single location
can be implemented in terms of compare-and-swap, though the earlier
operations are often more efficient where they apply. The compare-and-swap operation
is also capable of serving as the basis for a wider set of atomic operations, though
the more elaborate of these often suffer from complexity, scalability, and performance
problems [Her90].
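As a concrete illustration of this universality, the following sketch (not taken from this
book's CodeSamples) builds an atomic "maximum" operation, which gcc does not provide
as a single __sync primitive, out of a compare-and-swap loop:

/* Atomically advance *p to at least v, returning the value of *p
 * observed just before the operation took effect. */
static unsigned long atomic_max(unsigned long *p, unsigned long v)
{
        unsigned long old;
        unsigned long newv;

        do {
                old = *p;            /* sample the current value */
                if (old >= v)
                        return old;  /* already at least v: nothing to do */
                newv = v;
        } while (__sync_val_compare_and_swap(p, old, newv) != old);
        return old;
}

The compare-and-swap either installs the new maximum or reports the interfering value,
in which case the loop simply tries again.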
The __sync_synchronize() primitive issues a “memory barrier”, which con-
strains both the compiler’s and the CPU’s ability to reorder operations, as discussed in
Section 14.2. In some cases, it is sufficient to constrain the compiler’s ability to reorder
operations, while allowing the CPU free rein, in which case the barrier() primitive
may be used, as it in fact was on line 28 of Figure 4.9. In some cases, it is only necessary
to ensure that the compiler avoids optimizing away a given memory read, in which case
the READ_ONCE() primitive may be used, as it was on line 17 of Figure 4.6. Similarly,
the WRITE_ONCE() primitive may be used to prevent the compiler from optimizing
away a given memory write. These last two primitives are not provided directly by gcc,
but may be implemented straightforwardly as follows:
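One straightforward implementation (a sketch; the exact definitions in the CodeSamples
tree may differ) forces each access through a volatile-qualified pointer:

#define READ_ONCE(x) \
        ({ typeof(x) ___x = *(volatile typeof(x) *)&(x); ___x; })
#define WRITE_ONCE(x, val) \
        do { *(volatile typeof(x) *)&(x) = (val); } while (0)

The volatile cast tells the compiler that each use really must result in a load or store,
while the statement-expression form of READ_ONCE() allows it to be used wherever an
ordinary rvalue is expected.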
Quick Quiz 4.24: Given that these atomic operations will often be able to generate
single atomic instructions that are directly supported by the underlying instruction set,
shouldn’t they be the fastest possible way to get things done?
int smp_thread_id(void)
thread_id_t create_thread(void *(*func)(void *), void *arg)
for_each_thread(t)
for_each_running_thread(t)
void *wait_thread(thread_id_t tid)
void wait_all_threads(void)
The Linux kernel uses kthread_create() to create kernel threads, kthread_should_stop() to externally suggest that they stop (which has no POSIX equivalent), kthread_stop() to wait for them to stop,
and schedule_timeout_interruptible() for a timed wait. There are quite
a few additional kthread-management APIs, but this provides a good start, as well as
good search terms.
The CodeSamples API focuses on “threads”, which are a locus of control.2 Each
such thread has an identifier of type thread_id_t, and no two threads running at a
given time will have the same identifier. Threads share everything except for per-thread
local state,3 which includes program counter and stack.
The thread API is shown in Figure 4.11, and members are described in the following
sections.
4.3.2.1 create_thread()
The create_thread() primitive creates a new thread, starting the new thread’s
execution at the function func specified by create_thread()’s first argument,
and passing it the argument specified by create_thread()’s second argument.
This newly created thread will terminate when it returns from the starting function
specified by func. The create_thread() primitive returns the thread_id_t
corresponding to the newly created child thread.
This primitive will abort the program if more than NR_THREADS threads are created,
counting the one implicitly created by running the program. NR_THREADS is a compile-
time constant that may be modified, though some systems may have an upper bound for
the allowable number of threads.
4.3.2.2 smp_thread_id()
Because the thread_id_t returned from create_thread() is system-dependent,
the smp_thread_id() primitive returns a thread index corresponding to the thread
making the request. This index is guaranteed to be less than the maximum number of
threads that have been in existence since the program started, and is therefore useful for
bitmasks, array indices, and the like.
4.3.2.3 for_each_thread()
The for_each_thread() macro loops through all threads that exist, including all
threads that would exist if created. This macro is useful for handling per-thread variables
as will be seen in Section 4.2.7.
2There are many other names for similar software constructs, including “process”, “task”, “fiber”,
“event”, and so on. Similar design principles apply to all of them.
3 How is that for a circular definition?
4.3.2.4 for_each_running_thread()
As its name suggests, the for_each_running_thread() macro loops through only
those threads that are currently running, rather than through all threads that have ever
existed.
4.3.2.5 wait_thread()
The wait_thread() primitive waits for completion of the thread specified by the
thread_id_t passed to it. This in no way interferes with the execution of the
specified thread; instead, it merely waits for it. Note that wait_thread() returns the
value that was returned by the corresponding thread.
4.3.2.6 wait_all_threads()
The wait_all_threads() primitive waits for all currently running threads to
complete.
Figure 4.12 shows an example hello-world-like child thread. As noted earlier, each
thread is allocated its own stack, so each thread has its own private arg argument
and myarg variable. Each child simply prints its argument and its smp_thread_
id() before exiting. Note that the return statement on line 7 terminates the thread,
returning a NULL to whoever invokes wait_thread() on this thread.
The parent program is shown in Figure 4.13. It invokes smp_init() to initialize
the threading system on line 6, parses arguments on lines 7-14, and announces its
presence on line 15. It creates the specified number of child threads on lines 16-17, and
waits for them to complete on line 18. Note that wait_all_threads() discards
the threads' return values, as in this case they are all NULL, which is not very interesting.
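Figures 4.12 and 4.13 themselves are not reproduced here; the following sketch shows
the same pattern using this API (argument parsing is omitted, and the CodeSamples API
header is assumed to be included):

#include <stdio.h>
#include <stdint.h>

/* Child thread: print its argument and its thread index. */
void *thread_test(void *arg)
{
        int myarg = (int)(intptr_t)arg;

        printf("child thread %d: smp_thread_id() = %d\n",
               myarg, smp_thread_id());
        return NULL;
}

/* Parent: initialize the threading system, create nkids children,
 * then wait for all of them to complete. */
void run_test(int nkids)
{
        int i;

        smp_init();
        printf("Parent spawning %d threads.\n", nkids);
        for (i = 0; i < nkids; i++)
                create_thread(thread_test, (void *)(intptr_t)i);
        wait_all_threads();
        printf("All threads completed.\n");
}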
Quick Quiz 4.25: What happened to the Linux-kernel equivalents to fork() and
wait()?
4.3.3 Locking
A good starting subset of the Linux kernel’s locking API is shown in Figure 4.14, each
API element being described in the following sections. This book’s CodeSamples
locking API closely follows that of the Linux kernel.
4.3.3.1 spin_lock_init()
The spin_lock_init() primitive initializes the specified spinlock_t variable,
and must be invoked before this variable is passed to any other spinlock primitive.
4.3.3.2 spin_lock()
The spin_lock() primitive acquires the specified spinlock, if necessary, waiting
until the spinlock becomes available. In some environments, such as the Linux kernel,
this waiting will involve “spinning”, while in others, such as pthread environments, it
will involve blocking.
The key point is that only one thread may hold a spinlock at any given time.
4.3.3.3 spin_trylock()
The spin_trylock() primitive acquires the specified spinlock, but only if it is
immediately available. It returns true if it was able to acquire the spinlock and false
otherwise.
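For example, spin_trylock() supports opportunistic processing, as in this sketch
(maintenance_lock and do_maintenance() are hypothetical, and the lock is
assumed to have already been passed to spin_lock_init()):

spinlock_t maintenance_lock;

/* Do the maintenance work only if no other thread is already doing it. */
void maybe_do_maintenance(void)
{
        if (!spin_trylock(&maintenance_lock))
                return;                  /* another thread holds the lock: skip */
        do_maintenance();                /* we hold the lock: do the work */
        spin_unlock(&maintenance_lock);
}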
4.3.3.4 spin_unlock()
The spin_unlock() primitive releases the specified spinlock, allowing other threads
to acquire it.
For example, given a spinlock_t named mutex, a shared counter can be protected
as follows:
spin_lock(&mutex);
counter++;
spin_unlock(&mutex);
Quick Quiz 4.26: What problems could occur if the variable counter were
incremented without the protection of mutex?
However, the spin_lock() and spin_unlock() primitives do have perfor-
mance consequences, as will be seen in Section 4.3.6.
Quick Quiz 4.27: How could you work around the lack of a per-thread-variable
API on systems that do not provide it?
4 You could instead use __thread or _Thread_local.
4.3.5.1 DEFINE_PER_THREAD()
The DEFINE_PER_THREAD() primitive defines a per-thread variable. Unfortunately,
it is not possible to provide an initializer in the way permitted by the Linux kernel’s
DEFINE_PER_CPU() primitive, but there is an init_per_thread() primi-
tive that permits easy runtime initialization.
4.3.5.2 DECLARE_PER_THREAD()
The DECLARE_PER_THREAD() primitive is a declaration in the C sense, as opposed
to a definition. Thus, a DECLARE_PER_THREAD() primitive may be used to access a
per-thread variable defined in some other file.
4.3.5.3 per_thread()
The per_thread() primitive accesses the specified thread’s variable.
4.3.5.4 __get_thread_var()
The __get_thread_var() primitive accesses the current thread’s variable.
4.3.5.5 init_per_thread()
The init_per_thread() primitive sets all threads’ instances of the specified vari-
able to the specified value. The Linux kernel accomplishes this via normal C initializa-
tion, relying on clever use of linker scripts and code executed during the CPU-online
process.
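For example, a per-thread statistical counter, in which each thread increments only its
own instance, might be defined and initialized as follows (a sketch; the
init_per_thread() argument order is assumed from its description above):

/* One counter instance per thread. */
DEFINE_PER_THREAD(long, counter);

/* Zero every thread's instance at program start. */
void count_init(void)
{
        init_per_thread(counter, 0);
}

/* Each thread increments only its own instance. */
void inc_count(void)
{
        __get_thread_var(counter)++;
}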
The value of such a counter is then the sum of its per-thread instances, and a snapshot
of the value of the counter can thus be collected as follows:
for_each_thread(i)
sum += per_thread(counter, i);
Again, it is possible to gain a similar effect using other mechanisms, but per-thread
variables combine convenience and high performance.
4.3.6 Performance
It is instructive to compare the performance of the locked increment shown in Sec-
tion 4.3.4 to that of per-CPU (or per-thread) variables (see Section 4.3.5), as well as to
conventional increment (as in “counter++”).
The difference in performance is quite large, to put it mildly. The purpose of this
book is to help you write SMP programs, perhaps with realtime response, while avoiding
such performance pitfalls. Chapter 5 starts this process by describing a few parallel
counting algorithms.
Of course, the actual overheads will depend not only on your hardware, but most
critically on the manner in which you use the primitives. In particular, randomly hacking
multi-threaded code is a spectacularly bad idea, especially given that shared-memory
parallel systems use your own intelligence against you: The smarter you are, the deeper
a hole you will dig for yourself before you realize that you are in trouble [Pok16].
Therefore, it is necessary to make the right design choices as well as the correct choice
of individual primitives, as is discussed at length in subsequent chapters.
As easy as 1, 2, 3!
Unknown
Chapter 5
Counting
Counting is perhaps the simplest and most natural thing a computer can do. However,
counting efficiently and scalably on a large shared-memory multiprocessor can be quite
challenging. Furthermore, the simplicity of the underlying concept of counting allows
us to explore the fundamental issues of concurrency without the distractions of elaborate
data structures or complex synchronization primitives. Counting therefore provides an
excellent introduction to parallel programming.
This chapter covers a number of special cases for which there are simple, fast, and
scalable counting algorithms. But first, let us find out how much you already know
about concurrent counting.
Quick Quiz 5.1: Why on earth should efficient and scalable counting be hard? After
all, computers have special hardware for the sole purpose of doing counting, addition,
subtraction, and lots more besides, don’t they???
Quick Quiz 5.2: Network-packet counting problem. Suppose that you need
to collect statistics on the number of networking packets (or total number of bytes)
transmitted and/or received. Packets might be transmitted or received by any CPU on
the system. Suppose further that this large machine is capable of handling a million
packets per second, and that there is a systems-monitoring package that reads out the
count every five seconds. How would you implement this statistical counter?
Quick Quiz 5.3: Approximate structure-allocation limit problem. Suppose
that you need to maintain a count of the number of structures allocated in order to
fail any allocations once the number of structures in use exceeds a limit (say, 10,000).
Suppose further that these structures are short-lived, that the limit is rarely exceeded,
and that a “sloppy” approximate limit is acceptable.
Quick Quiz 5.4: Exact structure-allocation limit problem. Suppose that you
need to maintain a count of the number of structures allocated in order to fail any
allocations once the number of structures in use exceeds an exact limit (again, say
10,000). Suppose further that these structures are short-lived, and that the limit is rarely
exceeded, that there is almost always at least one structure in use, and suppose further
still that it is necessary to know exactly when this counter reaches zero, for example, in
order to free up some memory that is not required unless there is at least one structure
in use.
Quick Quiz 5.5: Removable I/O device access-count problem. Suppose that
you need to maintain a reference count on a heavily used removable mass-storage device,
so that you can tell the user when it is safe to remove the device. This device follows
the usual removal procedure where the user indicates a desire to remove the device, and
the system tells the user when it is safe to do so.
1 long counter = 0;
2
3 void inc_count(void)
4 {
5 counter++;
6 }
7
8 long read_count(void)
9 {
10 return counter;
11 }
1 Interestingly enough, a pair of threads non-atomically incrementing a counter will cause the counter to
increase more quickly than a pair of threads atomically incrementing the counter. Of course, if your only goal
is to make the counter increase quickly, an easier approach is to simply assign a large value to the counter.
Nevertheless, there is likely to be a role for algorithms that use carefully relaxed notions of correctness in
order to gain greater performance and scalability.
[Figure: counter performance plotted against the number of CPUs (threads) from 1 to 8; the y axis runs from 0 to 800, presumably in nanoseconds per operation.]
This poor performance should not be a surprise, given the discussion in Chapter 3,
nor should it be a surprise that the performance of atomic increment gets slower as
the number of CPUs and threads increase, as shown in Figure 5.3. In this figure, the
horizontal dashed line resting on the x axis is the ideal performance that would be
achieved by a perfectly scalable algorithm: with such an algorithm, a given increment
would incur the same overhead that it would in a single-threaded program. Atomic
increment of a single global variable is clearly decidedly non-ideal, and gets worse as
you add CPUs.
Quick Quiz 5.8: Why doesn’t the dashed line on the x axis meet the diagonal line
at x = 1?
Quick Quiz 5.9: But atomic increment is still pretty fast. And incrementing a single
variable in a tight loop sounds pretty unrealistic to me, after all, most of the program’s
execution should be devoted to actually doing work, not accounting for the work it has
done! Why should I care about making this go faster?
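The atomic-increment implementation whose performance Figure 5.3 reports is not
reproduced here; a minimal sketch using gcc's __sync_fetch_and_add() builtin
(the book's own code may use different primitives) would be:

/* Single global counter, incremented atomically by all threads. */
static unsigned long counter;

static void inc_count(void)
{
        __sync_fetch_and_add(&counter, 1);
}

static unsigned long read_count(void)
{
        /* Adding zero yields an atomic read: heavyweight, but simple. */
        return __sync_fetch_and_add(&counter, 0);
}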
For another perspective on global atomic increment, consider Figure 5.4. In order
for each CPU to get a chance to increment a given global variable, the cache line
containing that variable must circulate among all the CPUs, as shown by the red arrows.
Such circulation will take significant time, resulting in the poor performance seen in
Figure 5.3, which might be thought of as shown in Figure 5.5.
The following sections discuss high-performance counting, which avoids the delays
inherent in such circulation.
5.2.1 Design
Statistical counting is typically handled by providing a counter per thread (or CPU,
when running in the kernel), so that each thread updates its own counter. The aggregate
value of the counters is read out by simply summing up all of the threads’ counters,
relying on the commutative and associative properties of addition. This is an example
of the data-ownership pattern that will be introduced in Section 6.3.4.
1 DEFINE_PER_THREAD(long, counter);
2
3 void inc_count(void)
4 {
5 __get_thread_var(counter)++;
6 }
7
8 long read_count(void)
9 {
10 int t;
11 long sum = 0;
12
13 for_each_thread(t)
14 sum += per_thread(counter, t);
15 return sum;
16 }
Quick Quiz: Suppose that the counter is being incremented at rate r counts per unit time, and that read_
count()’s execution consumes ∆ units of time. What is the expected error in the
return value?
However, this excellent update-side scalability comes at great read-side expense for
large numbers of threads. The next section shows one way to reduce read-side expense
while still retaining the update-side scalability.
This approach gives extremely fast counter read-out while still supporting linear
counter-update performance. However, this excellent read-side performance and update-
side scalability comes at the cost of the additional thread running eventual().
Quick Quiz 5.17: Why doesn’t inc_count() in Figure 5.8 need to use atomic
instructions? After all, we now have multiple threads accessing the per-thread counters!
Quick Quiz 5.18: Won’t the single global thread in the function eventual() of
Figure 5.8 be just as severe a bottleneck as a global lock would be?
Quick Quiz 5.19: Won’t the estimate returned by read_count() in Figure 5.8
become increasingly inaccurate as the number of threads rises?
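Figure 5.8 (count_stat_eventual.c) is not reproduced here; the following sketch
conveys the idea, using the READ_ONCE() and WRITE_ONCE() primitives from
Chapter 4 (the NR_THREADS bound, the int argument to inc_count(), and the
one-millisecond sleep are illustrative assumptions):

#include <poll.h>

#define NR_THREADS 64

static unsigned long counter[NR_THREADS];  /* one instance per thread */
static unsigned long global_count;         /* eventually consistent aggregate */
static int stopflag;

/* Update path: thread "me" increments only its own counter instance. */
static void inc_count(int me)
{
        WRITE_ONCE(counter[me], READ_ONCE(counter[me]) + 1);
}

/* Read path: return the (possibly slightly stale) aggregate. */
static unsigned long read_count(void)
{
        return READ_ONCE(global_count);
}

/* The eventual() thread periodically folds the per-thread counters
 * into global_count, keeping it eventually consistent. */
static void *eventual(void *arg)
{
        int t;
        unsigned long sum;

        while (!READ_ONCE(stopflag)) {
                sum = 0;
                for (t = 0; t < NR_THREADS; t++)
                        sum += READ_ONCE(counter[t]);
                WRITE_ONCE(global_count, sum);
                poll(NULL, 0, 1);  /* wait roughly one millisecond */
        }
        return NULL;
}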
(Section 5.2.4's per-thread-variable-based implementation, not reproduced here, additionally handles retrieving a given counter's contribution after its owning thread exits.)
Quick Quiz 5.25: Fine, but the Linux kernel doesn’t have to acquire a lock when
reading out the aggregate value of per-CPU counters. So why should user-space code
need to do this???
5.2.5 Discussion
These three implementations show that it is possible to obtain uniprocessor performance
for statistical counters, despite running on a parallel machine.
Quick Quiz 5.26: What fundamental difference is there between counting packets
and counting the total number of bytes in the packets, given that the packets vary in
size?
Quick Quiz 5.27: Given that the reader must sum all the threads’ counters, this
could take a long time given large numbers of threads. Is there any way that the
increment operation can remain fast and scalable while allowing readers to also enjoy
reasonable performance and scalability?
Given what has been presented in this section, you should now be able to answer the
Quick Quiz about statistical counters for networking near the beginning of this chapter.
5.3.1 Design
One possible design for limit counters is to divide the limit of 10,000 by the number
of threads, and give each thread a fixed pool of structures. For example, given 100
threads, each thread would manage its own pool of 100 structures. This approach is
simple, and in some cases works well, but it does not handle the common case where
a given structure is allocated by one thread and freed by another [MS93]. On the one
hand, if a given thread takes credit for any structures it frees, then the thread doing
most of the allocating runs out of structures, while the threads doing most of the freeing
have lots of credits that they cannot use. On the other hand, if freed structures are
credited to the CPU that allocated them, it will be necessary for CPUs to manipulate
each others’ counters, which will require expensive atomic instructions or other means
of communicating between threads.2
In short, for many important workloads, we cannot fully partition the counter.
Given that partitioning the counters was what brought the excellent update-side perfor-
mance for the three schemes discussed in Section 5.2, this might be grounds for some
pessimism. However, the eventually consistent algorithm presented in Section 5.2.3 pro-
vides an interesting hint. Recall that this algorithm kept two sets of books, a per-thread
counter variable for updaters and a global_count variable for readers, with an
eventual() thread that periodically updated global_count to be eventually con-
sistent with the values of the per-thread counter. The per-thread counter perfectly
partitioned the counter value, while global_count kept the full value.
2 That said, if each structure will always be freed by the same CPU (or thread) that allocated it, then this simple partitioning approach works extremely well.
For limit counters, we can use a variation on this theme, in that we partially partition
the counter. For example, each of four threads could have a per-thread counter, but
each could also have a per-thread maximum value (call it countermax).
But then what happens if a given thread needs to increment its counter, but
counter is equal to its countermax? The trick here is to move half of that thread’s
counter value to a globalcount, then increment counter. For example, if a
given thread’s counter and countermax variables were both equal to 10, we do
the following:
1. Acquire a global lock.
2. Add five to globalcount.
3. To balance out the addition, subtract five from this thread’s counter.
4. Release the global lock.
5. Increment this thread’s counter, resulting in a value of six.
Although this procedure still requires a global lock, that lock need only be ac-
quired once for every five increment operations, greatly reducing that lock’s level of
contention. We can reduce this contention as low as we wish by increasing the value
of countermax. However, the corresponding penalty for increasing the value of
countermax is reduced accuracy of globalcount. To see this, note that on a
four-CPU system, if countermax is equal to ten, globalcount will be in error by
at most 40 counts. In contrast, if countermax is increased to 100, globalcount
might be in error by as much as 400 counts.
This raises the question of just how much we care about globalcount’s de-
viation from the aggregate value of the counter, where this aggregate value is the
sum of globalcount and each thread’s counter variable. The answer to this
question depends on how far the aggregate value is from the counter’s limit (call it
globalcountmax). The larger the difference between these two values, the larger
countermax can be without risk of exceeding the globalcountmax limit. This
means that the value of a given thread’s countermax variable can be set based on this
difference. When far from the limit, the countermax per-thread variables are set to
large values to optimize for performance and scalability, while when close to the limit,
these same variables are set to small values to minimize the error in the checks against
the globalcountmax limit.
This design is an example of parallel fastpath, which is an important design pattern
in which the common case executes with no expensive instructions and no interactions
between threads, but where occasional use is also made of a more conservatively
designed (and higher overhead) global algorithm. This design pattern is covered in more
detail in Section 6.4.
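Figure 5.12 (the simple limit counter) is discussed in detail in the next section but is
not reproduced here; the following simplified sketch of add_count() (with
sub_count(), read_count(), and the thread-registration machinery omitted, and
the slowpath bookkeeping inlined) illustrates the fastpath/slowpath split just described:

#include <pthread.h>

#define NR_THREADS 4    /* illustrative */

static __thread unsigned long counter;     /* this thread's local count */
static __thread unsigned long countermax;  /* this thread's local headroom */
static unsigned long globalcountmax = 10000;
static unsigned long globalcount;          /* global portion of the count */
static unsigned long globalreserve;        /* sum of all countermax values */
static pthread_mutex_t gblcnt_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Add delta to the aggregate counter, returning 1 on success and 0 if
 * doing so might exceed globalcountmax. */
int add_count(unsigned long delta)
{
        if (countermax - counter >= delta) {
                counter += delta;          /* fastpath: no global accesses */
                return 1;
        }
        pthread_mutex_lock(&gblcnt_mutex); /* slowpath */
        globalcount += counter;            /* globalize this thread's count */
        globalreserve -= countermax;
        counter = 0;
        countermax = 0;
        if (globalcountmax - globalcount - globalreserve < delta) {
                pthread_mutex_unlock(&gblcnt_mutex);
                return 0;                  /* would exceed the limit */
        }
        globalcount += delta;
        /* Re-enable the fastpath by granting this thread fresh headroom. */
        countermax = (globalcountmax - globalcount - globalreserve) / NR_THREADS;
        globalreserve += countermax;
        counter = countermax / 2;
        if (counter > globalcount)
                counter = globalcount;
        globalcount -= counter;
        pthread_mutex_unlock(&gblcnt_mutex);
        return 1;
}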
[Figure 5.11: relationships among the limit-counter variables, showing each thread's counter and countermax values ("counter 0" through "counter 3", "countermax 0" through "countermax 3") together with globalcount, globalreserve, and globalcountmax.]
The globalcountmax variable on line 3 contains the upper bound for the aggregate
counter, and the globalcount variable on line 4 is the global counter. The sum of
globalcount and each thread’s counter gives the aggregate value of the overall
counter. The globalreserve variable on line 5 is the sum of all of the per-thread
countermax variables. The relationship among these variables is shown by Fig-
ure 5.11:
1. The sum of globalcount and globalreserve must be less than or equal to globalcountmax.
2. The sum of all threads' countermax values must be less than or equal to globalreserve.
3. Each thread’s counter must be less than or equal to that thread’s countermax.
If the test on line 3 fails, we must access global variables, and thus must acquire
gblcnt_mutex on line 7, which we release on line 11 in the failure case or on line 16
in the success case. Line 8 invokes globalize_count(), shown in Figure 5.13,
which clears the thread-local variables, adjusting the global variables as needed, thus
simplifying global processing. (But don’t take my word for it, try coding it yourself!)
Lines 9 and 10 check to see if addition of delta can be accommodated, with the
meaning of the expression preceding the less-than sign shown in Figure 5.11 as the
difference in height of the two red (leftmost) bars. If the addition of delta cannot be
accommodated, then line 11 (as noted earlier) releases gblcnt_mutex and line 12
returns indicating failure.
Otherwise, we take the slowpath. Line 14 adds delta to globalcount, and then
line 15 invokes balance_count() (shown in Figure 5.13) in order to update both the
global and the per-thread variables. This call to balance_count() will usually set
this thread’s countermax to re-enable the fastpath. Line 16 then releases gblcnt_
mutex (again, as noted earlier), and, finally, line 17 returns indicating success.
Quick Quiz 5.30: Why does globalize_count() zero the per-thread variables,
only to later call balance_count() to refill them in Figure 5.12? Why not just leave
the per-thread variables non-zero?
Lines 20-36 show sub_count(), which subtracts the specified delta from the
counter. Line 22 checks to see if the per-thread counter can accommodate this subtrac-
tion, and, if so, line 23 does the subtraction and line 24 returns success. These lines
form sub_count()’s fastpath, and, as with add_count(), this fastpath executes
no costly operations.
If the fastpath cannot accommodate subtraction of delta, execution proceeds to
the slowpath on lines 26-35. Because the slowpath must access global state, line 26
acquires gblcnt_mutex, which is released either by line 29 (in case of failure) or
by line 34 (in case of success). Line 27 invokes globalize_count(), shown in
Figure 5.13, which again clears the thread-local variables, adjusting the global variables
as needed. Line 28 checks to see if the counter can accommodate subtracting delta,
and, if not, line 29 releases gblcnt_mutex (as noted earlier) and line 30 returns
failure.
Quick Quiz 5.31: Given that globalreserve counted against us in add_
count(), why doesn’t it count for us in sub_count() in Figure 5.12?
Quick Quiz 5.32: Suppose that one thread invokes add_count() shown in
Figure 5.12, and then another thread invokes sub_count(). Won’t sub_count()
return failure even though the value of the counter is non-zero?
If, on the other hand, line 28 finds that the counter can accommodate subtracting
delta, we complete the slowpath. Line 32 does the subtraction and then line 33
invokes balance_count() (shown in Figure 5.13) in order to update both global
and per-thread variables (hopefully re-enabling the fastpath). Then line 34 releases
gblcnt_mutex, and line 35 returns success.
Quick Quiz 5.33: Why have both add_count() and sub_count() in Fig-
ure 5.12? Why not simply pass a negative number to add_count()?
Lines 38-50 show read_count(), which returns the aggregate value of the
counter. It acquires gblcnt_mutex on line 43 and releases it on line 48, excluding
global operations from add_count() and sub_count(), and, as we will see, also
excluding thread creation and exit. Line 44 initializes local variable sum to the value of
globalcount, and then the loop spanning lines 45-47 sums the per-thread counter
variables. Line 49 then returns the sum.
Figure 5.13 shows a number of utility functions used by the add_count(), sub_
count(), and read_count() primitives shown in Figure 5.12.
Lines 1-7 show globalize_count(), which zeros the current thread’s per-
thread counters, adjusting the global variables appropriately. It is important to note that
this function does not change the aggregate value of the counter, but instead changes how
the counter’s current value is represented. Line 3 adds the thread’s counter variable to
globalcount, and line 4 zeroes counter. Similarly, line 5 subtracts the per-thread
countermax from globalreserve, and line 6 zeroes countermax. It is helpful
to refer to Figure 5.11 when reading both this function and balance_count(),
which is next.
Lines 9-19 show balance_count(), which is roughly speaking the inverse of
globalize_count(). This function’s job is to set the current thread’s countermax
variable to the largest value that avoids the risk of the counter exceeding the globalcountmax
limit. Changing the current thread’s countermax variable of course requires corre-
sponding adjustments to counter, globalcount and globalreserve, as can
be seen by referring back to Figure 5.11. By doing this, balance_count() max-
imizes use of add_count()’s and sub_count()’s low-overhead fastpaths. As
with globalize_count(), balance_count() is not permitted to change the
aggregate value of the counter.
Lines 11-13 compute this thread’s share of that portion of globalcountmax that
is not already covered by either globalcount or globalreserve, and assign the
computed quantity to this thread’s countermax. Line 14 makes the corresponding ad-
justment to globalreserve. Line 15 sets this thread’s counter to the middle of the
range from zero to countermax. Line 16 checks to see whether globalcount can
in fact accommodate this value of counter, and, if not, line 17 decreases counter
accordingly. Finally, in either case, line 18 makes the corresponding adjustment to
globalcount.
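Continuing the sketch shown at the end of Section 5.3.1's design discussion (same
variables), globalize_count() and balance_count() factor out the bookkeeping
just described; num_online_threads() is a hypothetical helper returning the number
of currently registered threads:

/* Fold this thread's local state into the global variables, leaving
 * the aggregate value of the counter unchanged. */
static void globalize_count(void)
{
        globalcount += counter;
        counter = 0;
        globalreserve -= countermax;
        countermax = 0;
}

/* Grant this thread as large a countermax as can safely be spared,
 * again without changing the aggregate value of the counter. */
static void balance_count(void)
{
        countermax = globalcountmax - globalcount - globalreserve;
        countermax /= num_online_threads();
        globalreserve += countermax;
        counter = countermax / 2;
        if (counter > globalcount)
                counter = globalcount;
        globalcount -= counter;
}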
Quick Quiz 5.34: Why set counter to countermax / 2 in line 15 of Fig-
ure 5.13? Wouldn’t it be simpler to just take countermax counts?
It is helpful to look at a schematic depicting how the relationship of the coun-
ters changes with the execution of first globalize_count() and then balance_
count(), as shown in Figure 5.14. Time advances from left to right, with the leftmost
configuration roughly that of Figure 5.11. The center configuration shows the rela-
tionship of these same counters after globalize_count() is executed by thread 0.
As can be seen from the figure, thread 0’s counter (“c 0” in the figure) is added
to globalcount, while the value of globalreserve is reduced by this same
amount. Both thread 0’s counter and its countermax (“cm 0” in the figure) are
reduced to zero. The other three threads’ counters are unchanged. Note that this
change did not affect the overall value of the counter, as indicated by the bottommost
dotted line connecting the leftmost and center configurations. In other words, the
sum of globalcount and the four threads’ counter variables is the same in both
configurations. Similarly, this change did not affect the sum of globalcount and
globalreserve, as indicated by the upper dotted line.
The rightmost configuration shows the relationship of these counters after balance_
count() is executed, again by thread 0. One-quarter of the remaining count, denoted
by the vertical line extending up from all three configurations, is added to thread 0’s
countermax and half of that to thread 0’s counter. The amount added to thread 0’s
counter is also subtracted from globalcount in order to avoid changing the
overall value of the counter (which is again the sum of globalcount and the four
threads’ counter variables), again as indicated by the lowermost of the two dotted
lines connecting the center and rightmost configurations.
[Figure 5.14: the counter variables in three configurations, from left to right: the initial state, the state after thread 0 executes globalize_count(), and the state after thread 0 executes balance_count(). Each configuration shows the per-thread counter ("c 0" through "c 3") and countermax ("cm 0" through "cm 3") values along with globalreserve and globalcount.]
The globalreserve vari-
able is also adjusted so that this variable remains equal to the sum of the four threads’
countermax variables. Because thread 0’s counter is less than its countermax,
thread 0 can once again increment the counter locally.
Quick Quiz 5.35: In Figure 5.14, even though a quarter of the remaining count up
to the limit is assigned to thread 0, only an eighth of the remaining count is consumed,
as indicated by the uppermost dotted line connecting the center and the rightmost
configurations. Why is that?
Lines 21-28 show count_register_thread(), which sets up state for newly
created threads. This function simply installs a pointer to the newly created thread’s
counter variable into the corresponding entry of the counterp[] array under the
protection of gblcnt_mutex.
Finally, lines 30-38 show count_unregister_thread(), which tears down
state for a soon-to-be-exiting thread. Line 34 acquires gblcnt_mutex and line 37
releases it. Line 35 invokes globalize_count() to clear out this thread’s counter
state, and line 36 clears this thread’s entry in the counterp[] array.
Even when the limit is intended to be an approximate limit, there is usually a limit to exactly how much approximation can
be tolerated. One way to limit the degree of approximation is to impose an upper limit
on the value of the per-thread countermax instances. This task is undertaken in the
next section.
One way to do this is to use atomic instructions. Of course, atomic instructions will slow
down the fastpath, but on the other hand, it would be silly not to at least give them a try.
Lines 1-22 of Figure 5.21 show the code for balance_count(), which refills
the calling thread’s local ctrandmax variable. This function is quite similar to that
of the preceding algorithms, with changes required to handle the merged ctrandmax
variable. Detailed analysis of the code is left as an exercise for the reader, as it is with
the count_register_thread() function starting on line 24 and the count_unregister_thread() function that follows it.
The theft state machine of Figure 5.22 starts out in the IDLE state, and when add_count() or sub_count() find that the combination
of the local thread’s count and the global count cannot accommodate the request, the
corresponding slowpath sets each thread’s theft state to REQ (unless that thread has
no count, in which case it transitions directly to READY). Only the slowpath, which
holds the gblcnt_mutex lock, is permitted to transition from the IDLE state, as
indicated by the green color.3 The slowpath then sends a signal to each thread, and the
corresponding signal handler checks the corresponding thread’s theft and counting
variables. If the theft state is not REQ, then the signal handler is not permitted to
change the state, and therefore simply returns. Otherwise, if the counting variable is
set, indicating that the current thread’s fastpath is in progress, the signal handler sets the
theft state to ACK, otherwise to READY.
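A sketch of the signal handler just described, using the READ_ONCE() and
WRITE_ONCE() primitives from Chapter 4 (the THEFT_* encodings and the per-thread
theft and counting variables are modeled on, but not guaranteed identical to, those
in count_lim_sig.c, and memory barriers are omitted):

#define THEFT_IDLE  0
#define THEFT_REQ   1
#define THEFT_ACK   2
#define THEFT_READY 3

static __thread int theft = THEFT_IDLE;
static __thread int counting;

/* Runs in the context of the thread whose count the slowpath wants
 * to flush. */
static void flush_local_count_sig(int unused)
{
        if (READ_ONCE(theft) != THEFT_REQ)
                return;                          /* no request: do nothing */
        if (READ_ONCE(counting))
                WRITE_ONCE(theft, THEFT_ACK);    /* fastpath in progress */
        else
                WRITE_ONCE(theft, THEFT_READY);  /* count may be stolen now */
}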
If the theft state is ACK, only the fastpath is permitted to change the theft
state, as indicated by the blue color. When the fastpath completes, it sets the theft
state to READY.
Once the slowpath sees a thread's theft state is READY, the slowpath is permitted
to steal that thread's count. The slowpath then sets that thread's theft state to IDLE.
3 For those with black-and-white versions of this book, IDLE and READY are green, REQ is red, and
ACK is blue.
[Figure 5.22: Theft State Machine, with states IDLE, REQ, ACK, and READY; the transition labels visible in the original figure include "flush count", "counting", "!counting", "done counting", and "need no flushed count".]
Quick Quiz 5.46: In Figure 5.22, why is the REQ theft state colored red?
Quick Quiz 5.47: In Figure 5.22, what is the point of having separate REQ and
ACK theft states? Why not simplify the state machine by collapsing them into a
single REQACK state? Then whichever of the signal handler or the fastpath gets there
first could set the state to READY.
Detailed analysis of add_count() is likewise left as an exercise for the reader. Similarly, the structure of sub_count() in Figure 5.26 is the same as that
of add_count(), so the analysis of sub_count() is also left as an exercise for the
reader, as is the analysis of read_count() in Figure 5.27.
1 void count_init(void)
2 {
3 struct sigaction sa;
4
5 sa.sa_handler = flush_local_count_sig;
6 sigemptyset(&sa.sa_mask);
7 sa.sa_flags = 0;
8 if (sigaction(SIGUSR1, &sa, NULL) != 0) {
9 perror("sigaction");
10 exit(-1);
11 }
12 }
13
14 void count_register_thread(void)
15 {
16 int idx = smp_thread_id();
17
18 spin_lock(&gblcnt_mutex);
19 counterp[idx] = &counter;
20 countermaxp[idx] = &countermax;
21 theftp[idx] = &theft;
22 spin_unlock(&gblcnt_mutex);
23 }
24
25 void count_unregister_thread(int nthreadsexpected)
26 {
27 int idx = smp_thread_id();
28
29 spin_lock(&gblcnt_mutex);
30 globalize_count();
31 counterp[idx] = NULL;
32 countermaxp[idx] = NULL;
33 theftp[idx] = NULL;
34 spin_unlock(&gblcnt_mutex);
35 }
The signal-theft implementation runs more than twice as fast as the atomic implementa-
tion on my Intel Core Duo laptop. Is it always preferable?
The signal-theft implementation would be vastly preferable on Pentium-4 systems,
given their slow atomic instructions, but the old 80386-based Sequent Symmetry sys-
tems would do much better with the shorter path length of the atomic implementation.
However, this increased update-side performance comes at the price of higher read-side
overhead: Those POSIX signals are not free. If ultimate performance is of the essence,
you will need to measure them both on the system that your application is to be deployed
on.
Quick Quiz 5.53: Not only are POSIX signals slow, sending one to each thread
simply does not scale. What would you do if you had (say) 10,000 threads and needed
the read side to be fast?
This is but one reason why high-quality APIs are so important: they permit imple-
mentations to be changed as required by ever-changing hardware performance charac-
teristics.
Quick Quiz 5.54: What if you want an exact limit counter to be exact only for its
lower limit, but to allow the upper limit to be inexact?
Although the exact limit counter implementations in Section 5.4 can be very useful, they
are not much help if the counter’s value remains near zero at all times, as it might when
counting the number of outstanding accesses to an I/O device. The high overhead of
such near-zero counting is especially painful given that we normally don’t care how
many references there are. As noted in the removable I/O device access-count problem
posed by Quick Quiz 5.5, the number of accesses is irrelevant except in those rare cases
when someone is actually trying to remove the device.
One simple solution to this problem is to add a large “bias” (for example, one
billion) to the counter in order to ensure that the value is far enough from zero that
the counter can operate efficiently. When someone wants to remove the device, this
bias is subtracted from the counter value. Counting the last few accesses will be quite
inefficient, but the important point is that the many prior accesses will have been counted
at full speed.
Quick Quiz 5.55: What else had you better have done when using a biased counter?
Although a biased counter can be quite helpful and useful, it is only a partial
solution to the removable I/O device access-count problem called out on page 55. When
attempting to remove a device, we must not only know the precise number of current
I/O accesses, we also need to prevent any future accesses from starting. One way to
accomplish this is to read-acquire a reader-writer lock when updating the counter, and to
write-acquire that same reader-writer lock when checking the counter. Code for doing
I/O might be as follows:
1 read_lock(&mylock);
2 if (removing) {
3 read_unlock(&mylock);
4 cancel_io();
5 } else {
6 add_count(1);
7 read_unlock(&mylock);
8 do_io();
9 sub_count(1);
10 }
Line 1 read-acquires the lock, and either line 3 or 7 releases it. Line 2 checks to
see if the device is being removed, and, if so, line 3 releases the lock and line 4 cancels
the I/O, or takes whatever action is appropriate given that the device is to be removed.
Otherwise, line 6 increments the access count, line 7 releases the lock, line 8 performs
the I/O, and line 9 decrements the access count.
Quick Quiz 5.56: This is ridiculous! We are read-acquiring a reader-writer lock to
update the counter? What are you playing at???
The code to remove the device might be as follows:
1 write_lock(&mylock);
2 removing = 1;
3 sub_count(mybias);
4 write_unlock(&mylock);
5 while (read_count() != 0) {
6 poll(NULL, 0, 1);
7 }
8 remove_device();
Line 1 write-acquires the lock and line 4 releases it. Line 2 notes that the device is
being removed, and the loop spanning lines 5-7 waits for any I/O operations to complete.
Finally, line 8 does any additional processing needed to prepare for device removal.
Quick Quiz 5.57: What other issues would need to be accounted for in a real
system?
Table 5.1: Statistical counter performance (update overhead, and read overhead on 1 core and on 32 cores)

Algorithm               Section   Updates   Reads (1 Core)   Reads (32 Cores)
count_stat.c            5.2.2     11.5 ns   408 ns           409 ns
count_stat_eventual.c   5.2.3     11.6 ns   1 ns             1 ns
count_end.c             5.2.4     6.3 ns    389 ns           51,200 ns
count_end_rcu.c         13.3.1    5.7 ns    354 ns           501 ns

Table 5.2: Limit counter performance (update overhead, and read overhead on 1 core and on 64 cores)

Algorithm               Section   Exact?   Updates   Reads (1 Core)   Reads (64 Cores)
count_lim.c             5.3.2     N        3.6 ns    375 ns           50,700 ns
count_lim_app.c         5.3.4     N        11.7 ns   369 ns           51,000 ns
count_lim_atomic.c      5.4.1     Y        51.4 ns   427 ns           49,400 ns
count_lim_sig.c         5.4.4     Y        10.2 ns   370 ns           54,000 ns
consider the C-language ++ operator. The fact is that it does not work in general, only
for a restricted range of numbers. If you need to deal with 1,000-digit decimal numbers,
the C-language ++ operator will not work for you.
Quick Quiz 5.62: The ++ operator works just fine for 1,000-digit numbers! Haven’t
you heard of operator overloading???
This problem is not specific to arithmetic. Suppose you need to store and query
data. Should you use an ASCII file? XML? A relational database? A linked list? A
dense array? A B-tree? A radix tree? Or one of the plethora of other data structures and
environments that permit data to be stored and queried? It depends on what you need
to do, how fast you need it done, and how large your data set is—even on sequential
systems.
Similarly, if you need to count, your solution will depend on how large the numbers
you need to work with are, how many CPUs need to be manipulating a given number
concurrently, how the number is to be used, and what level of performance and scalability
you will need.
Nor is this problem specific to software. The design for a bridge meant to allow
people to walk across a small brook might be as simple as a single wooden plank. But
you would probably not use a plank to span the kilometers-wide mouth of the Columbia
River, nor would such a design be advisable for bridges carrying concrete trucks. In
short, just as bridge design must change with increasing span and load, so must software
design change as the number of CPUs increases. That said, it would be good to automate
this process, so that the software adapts to changes in hardware configuration and in
workload. There has in fact been some research into this sort of automation [AHS+ 03,
SAH+ 03], and the Linux kernel does some boot-time reconfiguration, including limited
binary rewriting. This sort of adaptation will become increasingly important as the
number of CPUs on mainstream systems continues to increase.
In short, as discussed in Chapter 3, the laws of physics constrain parallel software
just as surely as they constrain mechanical artifacts such as bridges. These constraints
force specialization, though in the case of software it might be possible to automate the
choice of specialization to fit the hardware and workload in question.
Of course, even generalized counting is quite specialized. We need to do a great
number of other things with computers. The next section relates what we have learned
from counters to topics taken up later in this book.
The partially partitioned counting algorithms used locking to guard the global data,
and locking is the subject of Chapter 7. In contrast, the partitioned data tended to be fully
under the control of the corresponding thread, so that no synchronization whatsoever
was required. This data ownership will be introduced in Section 6.3.4 and discussed in
more detail in Chapter 8.
Because integer addition and subtraction are extremely cheap operations compared
to typical synchronization operations, achieving reasonable scalability requires synchro-
nization operations be used sparingly. One way of achieving this is to batch the addition
and subtraction operations, so that a great many of these cheap operations are handled
by a single synchronization operation. Batching optimizations of one sort or another are
used by each of the counting algorithms listed in Tables 5.1 and 5.2.
Finally, the eventually consistent statistical counter discussed in Section 5.2.3
showed how deferring activity (in that case, updating the global counter) can pro-
vide substantial performance and scalability benefits. This approach allows common
case code to use much cheaper synchronization operations than would otherwise be
possible. Chapter 9 will examine a number of additional ways that deferral can improve
performance, scalability, and even real-time response.
Summarizing the summary:
2. Partial partitioning, that is, partitioning applied only to common code paths, works
almost as well.
3. Partial partitioning can be applied to code (as in Section 5.2’s statistical counters’
partitioned updates and non-partitioned reads), but also across time (as in Sec-
tion 5.3’s and Section 5.4’s limit counters running fast when far from the limit,
but slowly when close to the limit).
4. Partitioning across time often batches updates locally in order to reduce the num-
ber of expensive global operations, thereby decreasing synchronization overhead,
in turn improving performance and scalability. All the algorithms shown in
Tables 5.1 and 5.2 make heavy use of batching.
8. Different levels of performance and scalability will affect algorithm and data-
structure design, as do a large number of other factors. Figure 5.3 illustrates this
point: Atomic increment might be completely acceptable for a two-CPU system,
but be completely inadequate for an eight-CPU system.
Summarizing still further, we have the “big three” methods of increasing perfor-
mance and scalability, namely (1) partitioning over CPUs or threads, (2) batching
so that more work can be done by each expensive synchronization operation, and
(3) weakening synchronization operations where feasible. As a rough rule of thumb, you
should apply these methods in this order, as was noted earlier in the discussion of Fig-
ure 2.6 on page 19. The partitioning optimization applies to the “Resource Partitioning
and Replication” bubble, the batching optimization to the “Work Partitioning” bubble,
and the weakening optimization to the “Parallel Access Control” bubble, as shown in
Figure 5.29. Of course, if you are using special-purpose hardware such as digital signal
processors (DSPs), field-programmable gate arrays (FPGAs), or general-purpose graph-
ical processing units (GPGPUs), you may need to pay close attention to the “Interacting
With Hardware” bubble throughout the design process. For example, the structure of a
GPGPU’s hardware threads and memory connectivity might richly reward very careful
partitioning and batching design decisions.
In short, as noted at the beginning of this chapter, the simplicity of counting has
allowed us to explore many fundamental concurrency issues without the distraction of
complex synchronization primitives or elaborate data structures. Such synchronization
primitives and data structures are covered in later chapters.
Divide and rule.
Philip II of Macedon
Chapter 6
Partitioning and
Synchronization Design
This chapter describes how to design software to take advantage of the multiple CPUs
that are increasingly appearing in commodity systems. It does this by presenting a
number of idioms, or “design patterns” [Ale79, GHJV95, SSRB00] that can help you
balance performance, scalability, and response time. As noted in earlier chapters, the
most important decision you will make when creating parallel software is how to carry
out the partitioning. Correctly partitioned problems lead to simple, scalable, and high-
performance solutions, while poorly partitioned problems result in slow and complex
solutions. This chapter will help you design partitioning into your code, with some
discussion of batching and weakening as well. The word “design” is very important:
You should partition first, batch second, weaken third, and code fourth. Changing this
order often leads to poor performance and scalability along with great frustration.
To this end, Section 6.1 presents partitioning exercises, Section 6.2 reviews partition-
ability design criteria, Section 6.3 discusses selecting an appropriate synchronization
granularity, Section 6.4 gives an overview of important parallel-fastpath designs that
provide speed and scalability in the common case with a simpler but less-scalable
fallback “slow path” for unusual situations, and finally Section 6.5 takes a brief look
beyond partitioning.
[Figure: Dining Philosophers Problem — five philosophers (P1–P5) seated around a table]
1 Readers who have difficulty imagining a food that requires two forks are invited to instead think in
terms of chopsticks.
2 It is all too easy to denigrate Dijkstra from the viewpoint of the year 2012, more than 40 years after the
fact. If you still feel the need to denigrate Dijkstra, my advice is to publish something, wait 40 years, and then
see how your words stood the test of time.
[Figure: Dining Philosophers Problem, textbook solution — philosophers P1–P5 with forks numbered 1 through 5]
the highest-numbered fork. The philosopher sitting in the uppermost position in the
diagram thus picks up the leftmost fork first, then the rightmost fork, while the rest of the
philosophers instead pick up their rightmost fork first. Because two of the philosophers
will attempt to pick up fork 1 first, and because only one of those two philosophers will
succeed, there will be five forks available to four philosophers. At least one of these
four will be guaranteed to have two forks, and thus be able to proceed eating.
This general technique of numbering resources and acquiring them in numerical
order is heavily used as a deadlock-prevention technique. However, it is easy to imagine
a sequence of events that will result in only one philosopher eating at a time even though
all are hungry:
2. P3 picks up fork 2.
3. P4 picks up fork 3.
4. P5 picks up fork 4.
In short, this algorithm can result in only one philosopher eating at a given time,
even when all five philosophers are hungry, despite the fact that there are more than
enough forks for two philosophers to eat concurrently.
Please think about ways of partitioning the Dining Philosophers Problem before
reading further.
[Figure 6.4: Dining Philosophers Problem, partitioned — four philosophers sharing two bundled pairs of forks]
One approach is shown in Figure 6.4, which includes four philosophers rather than
five to better illustrate the partition technique. Here the upper and rightmost philosophers
share a pair of forks, while the lower and leftmost philosophers share another pair of
forks. If all philosophers are simultaneously hungry, at least two will always be able to
eat concurrently. In addition, as shown in the figure, the forks can now be bundled so
that the pair are picked up and put down simultaneously, simplifying the acquisition and
release algorithms.
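A minimal pthreads sketch of this partitioned approach might look as follows; the philosopher-to-pair mapping and all names are illustrative rather than taken from the book's CodeSamples.

/* Partitioned Dining Philosophers: four philosophers, two bundled
 * fork pairs, one mutex per pair.  Philosophers 0 and 1 share pair 0,
 * philosophers 2 and 3 share pair 1, so at least two philosophers can
 * always eat concurrently. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fork_pair[2] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

static void *philosopher(void *arg)
{
        long id = (long)arg;
        pthread_mutex_t *pair = &fork_pair[id / 2];
        int i;

        for (i = 0; i < 3; i++) {
                pthread_mutex_lock(pair);       /* Pick up both forks at once. */
                printf("Philosopher %ld is eating\n", id);
                pthread_mutex_unlock(pair);     /* Put both forks down. */
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[4];
        long i;

        for (i = 0; i < 4; i++)
                pthread_create(&tid[i], NULL, philosopher, (void *)i);
        for (i = 0; i < 4; i++)
                pthread_join(tid[i], NULL);
        return 0;
}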
Quick Quiz 6.1: Is there a better solution to the Dining Philosophers Problem?
This is an example of “horizontal parallelism” [Inm85] or “data parallelism”, so
named because there is no dependency among the pairs of philosophers. In a horizontally
parallel data-processing system, a given item of data would be processed by only one of
a replicated set of software components.
Quick Quiz 6.2: And in just what sense can this “horizontal parallelism” be said to
be “horizontal”?
[Figures: Double-ended queue with left-hand and right-hand locks (Lock L, Lock R) in states with zero through four elements, and the resulting compound arrangement of DEQ L and DEQ R]
four elements on the list. This overlap is due to the fact that removing any given element
affects not only that element, but also its left- and right-hand neighbors. These domains
are indicated by color in the figure, with blue with downward stripes indicating the
domain of the left-hand lock, red with upward stripes indicating the domain of the
right-hand lock, and purple (with no stripes) indicating overlapping domains. Although
it is possible to create an algorithm that works this way, the fact that it has no fewer than
five special cases should raise a big red flag, especially given that concurrent activity at
the other end of the list can shift the queue from one special case to another at any time.
It is far better to consider other designs.
[Figure 6.7: Hashed double-ended queue in its initial state, with left- and right-hand indexes and locks]
1. If holding the right-hand lock, release it and acquire the left-hand lock.
[Figure 6.8: Hashed double-ended queue after a series of right-enqueue and left-enqueue operations]
incremented to reference hash chain 2. The middle portion of this same figure shows
the state after three more elements have been right-enqueued. As you can see, the
indexes are back to their initial states (see Figure 6.7), however, each hash chain is
now non-empty. The lower portion of this figure shows the state after three additional
elements have been left-enqueued and an additional element has been right-enqueued.
From the last state shown in Figure 6.8, a left-dequeue operation would return
element “L−2 ” and leave the left-hand index referencing hash chain 2, which would
then contain only a single element (“R2 ”). In this state, a left-enqueue running concur-
rently with a right-enqueue would result in lock contention, but the probability of such
contention can be reduced to arbitrarily low levels by using a larger hash table.
Figure 6.9 shows how 16 elements would be organized in a four-hash-bucket parallel
double-ended queue. Each underlying single-lock double-ended queue holds a one-
quarter slice of the full parallel double-ended queue.
[Figure 6.9: Hashed double-ended queue with 16 elements spread across four hash buckets]
Figure 6.10 shows the corresponding C-language data structure, assuming an existing
struct deq that provides a trivially locked double-ended-queue implementation.
This data structure contains the left-hand lock on line 2, the left-hand index on line 3,
the right-hand lock on line 4 (which is cache-aligned in the actual implementation),
the right-hand index on line 5, and, finally, the hashed array of simple lock-based
double-ended queues on line 6. A high-performance implementation would of course
use padding or special alignment directives to avoid false sharing.
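A sketch of such a structure, based solely on the description above, might look like this; pthread_spinlock_t stands in for the spinlock_t used in the book's listings, struct deq is the assumed underlying locked deque, and the bucket count and field names are illustrative.

#include <pthread.h>

#define PDEQ_N_BKTS 4                   /* Number of hash buckets (assumed). */

struct deq;                             /* Existing trivially locked double-ended queue (assumed). */

struct pdeq {
        pthread_spinlock_t llock;       /* Left-hand lock. */
        int lidx;                       /* Left-hand index. */
        pthread_spinlock_t rlock;       /* Right-hand lock (cache-aligned in a real implementation). */
        int ridx;                       /* Right-hand index. */
        struct deq *bkt[PDEQ_N_BKTS];   /* Hashed array of simple lock-based deques. */
};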
Figure 6.11 (lockhdeq.c) shows the implementation of the enqueue and de-
queue functions.3 Discussion will focus on the left-hand operations, as the right-hand
operations are trivially derived from them.
Lines 1-13 show pdeq_pop_l(), which left-dequeues and returns an element if
possible, returning NULL otherwise. Line 6 acquires the left-hand spinlock, and line 7
computes the index to be dequeued from. Line 8 dequeues the element, and, if line 9
finds the result to be non-NULL, line 10 records the new left-hand index. Either way,
line 11 releases the lock, and, finally, line 12 returns the element if there was one, or
NULL otherwise.
Lines 29-38 show pdeq_push_l(), which left-enqueues the specified element.
Line 33 acquires the left-hand lock, and line 34 picks up the left-hand index. Line 35 left-
enqueues the specified element onto the double-ended queue indexed by the left-hand
index. Line 36 then updates the left-hand index and line 37 releases the lock.
As noted earlier, the right-hand operations are completely analogous to their left-
handed counterparts, so their analysis is left as an exercise for the reader.
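Continuing the struct pdeq sketch, the left-hand operations described above might be written as follows; deq_pop_l() and deq_push_l() are the assumed operations of the underlying locked deque, struct element is a hypothetical payload type, and the real lockhdeq.c may use a different index discipline.

#define MOVE_LEFT(i)    (((i) - 1) & (PDEQ_N_BKTS - 1))
#define MOVE_RIGHT(i)   (((i) + 1) & (PDEQ_N_BKTS - 1))

struct element;                                 /* Hypothetical queue payload. */
struct element *deq_pop_l(struct deq *q);       /* Assumed underlying operations. */
void deq_push_l(struct element *e, struct deq *q);

struct element *pdeq_pop_l(struct pdeq *d)
{
        struct element *e;
        int i;

        pthread_spin_lock(&d->llock);           /* Serialize left-hand operations. */
        i = MOVE_RIGHT(d->lidx);                /* Compute the chain to dequeue from. */
        e = deq_pop_l(d->bkt[i]);               /* Try the underlying locked deque. */
        if (e != NULL)
                d->lidx = i;                    /* Record the new left-hand index. */
        pthread_spin_unlock(&d->llock);
        return e;                               /* NULL if nothing was available. */
}

void pdeq_push_l(struct element *e, struct pdeq *d)
{
        int i;

        pthread_spin_lock(&d->llock);
        i = d->lidx;                            /* Chain indexed by the left-hand index. */
        deq_push_l(e, d->bkt[i]);               /* Left-enqueue onto that chain. */
        d->lidx = MOVE_LEFT(d->lidx);           /* The next left-enqueue moves one chain left. */
        pthread_spin_unlock(&d->llock);
}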
Quick Quiz 6.4: Is the hashed double-ended queue a good solution? Why or why
not?
3 One could easily create a polymorphic implementation in any number of languages, but doing so is left as an exercise for the reader.
The compound implementation is somewhat more complex than the hashed variant
presented in Section 6.1.2.3, but is still reasonably simple. Of course, a more intelligent
rebalancing scheme could be arbitrarily complex, but the simple scheme shown here
has been shown to perform well compared to software alternatives [DCW+ 11] and even
compared to algorithms using hardware assist [DLM+ 10]. Nevertheless, the best we
can hope for from such a scheme is 2x scalability, as at most two threads can be holding
the dequeue’s locks concurrently. This limitation also applies to algorithms based on
non-blocking synchronization, such as the compare-and-swap-based dequeue algorithm
of Michael [Mic03].4
Quick Quiz 6.9: Why are there not one but two solutions to the double-ended queue
problem?
In fact, as noted by Dice et al. [DLM+ 10], an unsynchronized single-threaded
double-ended queue significantly outperforms any of the parallel implementations they
studied. Therefore, the key point is that there can be significant overhead enqueuing to
or dequeuing from a shared queue, regardless of implementation. This should come as
no surprise given the material in Chapter 3 and the strict FIFO nature of these queues.
Furthermore, these strict FIFO queues are strictly FIFO only with respect to linearization points [HW90]5 that are not visible to the caller. In fact, in these examples,
the linearization points are buried in the lock-based critical sections. These queues
are not strictly FIFO with respect to (say) the times at which the individual operations
started [HKLP12]. This indicates that the strict FIFO property is not all that valuable in
concurrent programs, and in fact, Kirsch et al. present less-strict queues that provide
improved performance and scalability [KLP12].6 All that said, if you are pushing all
the data used by your concurrent program through a single queue, you really need to
rethink your overall design.
not needed for lock-free implementations of double-ended queues. Instead, the common compare-and-swap
(e.g., x86 cmpxchg) suffices.
5 In short, a linearization point is a single point within a given function where that function can be said
to have taken effect. In this lock-based implementation, the linearization points can be said to be anywhere
within the critical section that does the work.
6 Nir Shavit produced relaxed stacks for roughly the same reasons [Sha11]. This situation leads some to
believe that the linearization points are useful to theorists rather than developers, and leads others to wonder
to what extent the designers of such data structures and algorithms were considering the needs of their users.
other than microscopically tiny, the space of possible parallel programs is so huge that
convergence is not guaranteed in the lifetime of the universe. Besides, what exactly is
the “best possible parallel program”? After all, Section 2.2 called out no fewer than
three parallel-programming goals of performance, productivity, and generality, and
the best possible performance will likely come at a cost in terms of productivity and
generality. We clearly need to be able to make higher-level choices at design time in
order to arrive at an acceptably good parallel program before that program becomes
obsolete.
However, more detailed design criteria are required to actually produce a real-world
design, a task taken up in this section. This being the real world, these criteria often
conflict to a greater or lesser degree, requiring that the designer carefully balance the
resulting tradeoffs.
As such, these criteria may be thought of as the “forces” acting on the design, with
particularly good tradeoffs between these forces being called “design patterns” [Ale79,
GHJV95].
The design criteria for attaining the three parallel-programming goals are speedup,
contention, overhead, read-to-write ratio, and complexity:
These criteria will act together to enforce a maximum speedup. The first three criteria are
deeply interrelated, so the remainder of this section analyzes these interrelationships.8
Note that these criteria may also appear as part of the requirements specification.
For example, speedup may act as a relative desideratum (“the faster, the better”) or as
an absolute requirement of the workload (“the system must support at least 1,000,000
web hits per second”). Classic design pattern languages describe relative desiderata as
forces and absolute requirements as context.
An understanding of the relationships between these design criteria can be very
helpful when identifying appropriate design tradeoffs for a parallel program.
1. The less time a program spends in critical sections, the greater the potential
speedup. This is a consequence of Amdahl’s Law [Amd67] and of the fact that
only one CPU may execute within a given critical section at a given time.
More specifically, the fraction of time that the program spends in a given exclusive
critical section must be much less than the reciprocal of the number of CPUs for
the actual speedup to approach the number of CPUs. For example, a program
running on 10 CPUs must spend much less than one tenth of its time in the
most-restrictive critical section if it is to scale at all well.
2. Contention effects will consume the excess CPU and/or wallclock time should
the actual speedup be less than the number of available CPUs. The larger the
gap between the number of CPUs and the actual speedup, the less efficiently the
CPUs will be used. Similarly, the greater the desired efficiency, the smaller the
achievable speedup.
4. If the critical sections have high overhead compared to the primitives guarding
them, the best way to improve speedup is to increase parallelism by moving to
reader/writer locking, data locking, asymmetric primitives, or data ownership.
8 A real-world parallel system will be subject to many additional design criteria, such as data-structure
layout, memory size, memory-hierarchy latencies, bandwidth limitations, and I/O issues.
[Figure: Design patterns and lock granularity — Sequential Program, Code Locking, Data Locking, and Data Ownership, linked by Partition/Batch and Own/Disown transitions]
5. If the critical sections have high overhead compared to the primitives guarding
them and the data structure being guarded is read much more often than modi-
fied, the best way to increase parallelism is to move to reader/writer locking or
asymmetric primitives.
6. Many changes that improve SMP performance, for example, reducing lock con-
tention, also improve real-time latencies [McK05c].
Quick Quiz 6.12: Don’t all these problems with critical sections mean that we
should just always use non-blocking synchronization [Her90], which doesn't have critical
sections?
instructions per clock, and MIPS for older CPUs requiring multiple clocks to execute even the simplest
[Figure: CPU performance trend, 1975–2015 (log scale)]
result in single chips with thousands of CPUs will not be settled soon, but given that
Paul is typing this sentence on a dual-core laptop, the age of SMP does seem to be upon
us. It is also important to note that Ethernet bandwidth is continuing to grow, as shown
in Figure 6.15. This growth will motivate multithreaded servers in order to handle the
communications load.
Please note that this does not mean that you should code each and every program in
a multi-threaded manner. Again, if a program runs quickly enough on a single processor,
spare yourself the overhead and complexity of SMP synchronization primitives. The
simplicity of the hash-table lookup code in Figure 6.16 underscores this point.10 A key
point is that speedups due to parallelism are normally limited to the number of CPUs.
In contrast, speedups due to sequential optimizations, for example, careful choice of
data structure, can be arbitrarily large.
On the other hand, if you are not in this happy situation, read on!
instruction. The reason for taking this approach is that the newer CPUs’ ability to retire multiple instructions
per clock is typically limited by memory-system performance.
10 The examples in this section are taken from Hart et al. [HMB06], adapted for clarity by gathering
instances, you are instead using “data locking”, described in Section 6.3.3.
[Figure 6.15: Ethernet bandwidth vs. Intel x86 CPU performance, 1970–2015 (relative performance, log scale)]
1 struct hash_table
2 {
3 long nbuckets;
4 struct node **buckets;
5 };
6
7 typedef struct node {
8 unsigned long key;
9 struct node *next;
10 } node_t;
11
12 int hash_search(struct hash_table *h, long key)
13 {
14 struct node *cur;
15
16 cur = h->buckets[key % h->nbuckets];
17 while (cur != NULL) {
18 if (cur->key >= key) {
19 return (cur->key == key);
20 }
21 cur = cur->next;
22 }
23 return 0;
24 }
In these cases, code locking will provide a relatively simple program that is very similar
to its sequential counterpart, as can be seen in Figure 6.17. However, note that the
simple return of the comparison in hash_search() in Figure 6.16 has now become
three statements due to the need to release the lock before returning.
1 spinlock_t hash_lock;
2
3 struct hash_table
4 {
5 long nbuckets;
6 struct node **buckets;
7 };
8
9 typedef struct node {
10 unsigned long key;
11 struct node *next;
12 } node_t;
13
14 int hash_search(struct hash_table *h, long key)
15 {
16 struct node *cur;
17 int retval;
18
19 spin_lock(&hash_lock);
20 cur = h->buckets[key % h->nbuckets];
21 while (cur != NULL) {
22 if (cur->key >= key) {
23 retval = (cur->key == key);
24 spin_unlock(&hash_lock);
25 return retval;
26 }
27 cur = cur->next;
28 }
29 spin_unlock(&hash_lock);
30 return 0;
31 }
always translates into increased performance and scalability. For this reason, data
locking was heavily used by Sequent in both its DYNIX and DYNIX/ptx operating
systems [BK85, Inm85, Gar90, Dov90, MD92, MG92, MS93].
However, as those who have taken care of small children can again attest, even
providing enough to go around is no guarantee of tranquillity. The analogous situation
can arise in SMP programs. For example, the Linux kernel maintains a cache of files
and directories (called “dcache”). Each entry in this cache has its own lock, but the
entries corresponding to the root directory and its direct descendants are much more
likely to be traversed than are more obscure entries. This can result in many CPUs
contending for the locks of these popular entries, resulting in a situation not unlike that
shown in Figure 6.21.
In many cases, algorithms can be designed to reduce the incidence of data skew, and
in some cases eliminate it entirely (as appears to be possible with the Linux kernel’s
dcache [MSS04]). Data locking is often used for partitionable data structures such as
hash tables, as well as in situations where multiple entities are each represented by an
instance of a given data structure. The task list in version 2.6.17 of the Linux kernel is
an example of the latter, each task structure having its own proc_lock.
A key challenge with data locking on dynamically allocated structures is ensuring
that the structure remains in existence while the lock is being acquired. The code in
Figure 6.19 finesses this challenge by placing the locks in the statically allocated hash
buckets, which are never freed. However, this trick would not work if the hash table
were resizeable, so that the locks were now dynamically allocated. In this case, there
would need to be some means to prevent the hash bucket from being freed during the
time that its lock was being acquired.
Quick Quiz 6.13: What are some ways of preventing a structure from being freed
while its lock is being acquired?
1 struct hash_table
2 {
3 long nbuckets;
4 struct bucket **buckets;
5 };
6
7 struct bucket {
8 spinlock_t bucket_lock;
9 node_t *list_head;
10 };
11
12 typedef struct node {
13 unsigned long key;
14 struct node *next;
15 } node_t;
16
17 int hash_search(struct hash_table *h, long key)
18 {
19 struct bucket *bp;
20 struct node *cur;
21 int retval;
22
23 bp = h->buckets[key % h->nbuckets];
24 spin_lock(&bp->bucket_lock);
25 cur = bp->list_head;
26 while (cur != NULL) {
27 if (cur->key >= key) {
28 retval = (cur->key == key);
29 spin_unlock(&bp->bucket_lock);
30 return retval;
31 }
32 cur = cur->next;
33 }
34 spin_unlock(&bp->bucket_lock);
35 return 0;
36 }
happens to be that owned by a single CPU, that CPU will be a “hot spot”, sometimes
with results resembling that shown in Figure 6.21. However, in situations where no
sharing is required, data ownership achieves ideal performance, and with code that can
be as simple as the sequential-program case shown in Figure 6.16. Such situations
are often referred to as “embarrassingly parallel”, and, in the best case, resemble the
situation previously shown in Figure 6.20.
Another important instance of data ownership occurs when the data is read-only, in
which case, all threads can “own” it via replication.
Data ownership will be presented in more detail in Chapter 8.
tion was zero, and ignoring the fact that CPUs must wait on each other to complete
their synchronization operations, in other words, µ can be roughly thought of as the
synchronization overhead in absence of contention. For example, suppose that each
synchronization operation involves an atomic increment instruction, and that a computer
system is able to do an atomic increment every 25 nanoseconds on each CPU to a private
variable.12 The value of µ is therefore about 40,000,000 atomic increments per second.
Of course, the value of λ increases with increasing numbers of CPUs, as each CPU
is capable of processing transactions independently (again, ignoring synchronization):
λ = nλ0 (6.1)
where n is the number of CPUs and λ0 is the transaction-processing capability of a
single CPU. Note that the expected time for a single CPU to execute a single transaction
is 1/λ0 .
Because the CPUs have to “wait in line” behind each other to get their chance to
increment the single shared variable, we can use the M/M/1 queueing-model expression
for the expected total waiting time:
T = 1 / (µ − λ) (6.2)
Substituting the above value of λ:
T = 1 / (µ − nλ0) (6.3)
Now, the efficiency is just the ratio of the time required to process a transaction
in absence of synchronization (1/λ0 ) to the time required including synchronization
(T + 1/λ0 ):
12 Of course, if there are 8 CPUs all incrementing the same shared variable, then each CPU must wait
at least 175 nanoseconds for each of the other CPUs to do its increment before consuming an additional 25
nanoseconds doing its own increment. In actual fact, the wait will be longer due to the need to move the
variable from one CPU to another.
[Figure 6.22: Synchronization efficiency vs. number of CPUs (threads), for overhead ratios f = 10, 25, 50, 75, and 100]
e = (1/λ0) / (T + 1/λ0) (6.4)
Substituting the above value for T and simplifying:
e = (µ/λ0 − n) / (µ/λ0 − (n − 1)) (6.5)
But the value of µ/λ0 is just the ratio of the time required to process the transaction
(absent synchronization overhead) to that of the synchronization overhead itself (absent
contention). If we call this ratio f , we have:
e = (f − n) / (f − (n − 1)) (6.6)
Figure 6.22 plots the synchronization efficiency e as a function of the number of
CPUs/threads n for a few values of the overhead ratio f . For example, again using the
25-nanosecond atomic increment, the f = 10 line corresponds to each CPU attempting
an atomic increment every 250 nanoseconds, and the f = 100 line corresponds to each
CPU attempting an atomic increment every 2.5 microseconds, which in turn corresponds
to several thousand instructions. Given that each trace drops off sharply with increasing
numbers of CPUs or threads, we can conclude that synchronization mechanisms based
on atomic manipulation of a single global shared variable will not scale well if used
heavily on current commodity hardware. This is a mathematical depiction of the forces
leading to the parallel counting algorithms that were discussed in Chapter 5.
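As a quick sanity check, the short program below (an illustrative sketch rather than part of the CodeSamples) tabulates Equation 6.6 for the same overhead ratios f that appear in Figure 6.22.

#include <stdio.h>

/* Synchronization efficiency from Equation 6.6: e = (f - n) / (f - (n - 1)),
 * valid only while n < f (otherwise the M/M/1 queue saturates). */
static double sync_efficiency(double f, double n)
{
        return (f - n) / (f - (n - 1.0));
}

int main(void)
{
        double fvals[] = { 10, 25, 50, 75, 100 };
        int nvals[] = { 1, 2, 5, 10, 25, 50, 100 };
        int i, j;

        for (i = 0; i < 5; i++) {
                printf("f = %3.0f:", fvals[i]);
                for (j = 0; j < 7; j++)
                        if (nvals[j] < fvals[i])
                                printf("  e(%3d) = %4.2f", nvals[j],
                                       sync_efficiency(fvals[i], nvals[j]));
                printf("\n");
        }
        return 0;
}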
The concept of efficiency is useful even in cases having little or no formal synchro-
nization. Consider for example a matrix multiply, in which the columns of one matrix
are multiplied (via “dot product”) by the rows of another, resulting in an entry in a
third matrix. Because none of these operations conflict, it is possible to partition the
columns of the first matrix among a group of threads, with each thread computing the
corresponding columns of the result matrix. The threads can therefore operate entirely
independently, with no synchronization overhead whatsoever, as is done in matmul.c.
One might therefore expect a parallel matrix multiply to have a perfect efficiency of 1.0.
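For concreteness, a column-partitioned multiply in the spirit of matmul.c might be structured as follows; this is an illustrative sketch rather than the book's actual matmul.c, and the matrix size and thread count are arbitrary.

#include <pthread.h>

#define N 256
#define NTHREADS 4

static double a[N][N], b[N][N], c[N][N];

struct colrange {
        int lo;
        int hi;
};

static void *matmul_worker(void *arg)
{
        struct colrange *r = arg;
        int i, j, k;

        for (j = r->lo; j < r->hi; j++)         /* Result columns owned by this thread. */
                for (i = 0; i < N; i++) {
                        double sum = 0.0;

                        for (k = 0; k < N; k++)
                                sum += a[i][k] * b[k][j];
                        c[i][j] = sum;          /* No other thread writes column j. */
                }
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        struct colrange r[NTHREADS];
        int t;

        for (t = 0; t < NTHREADS; t++) {        /* No synchronization beyond create/join. */
                r[t].lo = t * N / NTHREADS;
                r[t].hi = (t + 1) * N / NTHREADS;
                pthread_create(&tid[t], NULL, matmul_worker, &r[t]);
        }
        for (t = 0; t < NTHREADS; t++)
                pthread_join(tid[t], NULL);
        return 0;
}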
However, Figure 6.23 tells a different story, especially for a 64-by-64 matrix multiply,
which never gets above an efficiency of about 0.7, even when running single-threaded.
The 512-by-512 matrix multiply’s efficiency is measurably less than 1.0 on as few as 10
threads, and even the 1024-by-1024 matrix multiply deviates noticeably from perfection
at a few tens of threads. Nevertheless, this figure clearly demonstrates the performance
and scalability benefits of batching: If you must incur synchronization overhead, you
may as well get your money’s worth.
Quick Quiz 6.14: How can a single-threaded 64-by-64 matrix multiply possibly
have an efficiency of less than 1.0? Shouldn’t all of the traces in Figure 6.23 have
efficiency of exactly 1.0 when running on only one thread?
Given these inefficiencies, it is worthwhile to look into more-scalable approaches
such as the data locking described in Section 6.3.3 or the parallel-fastpath approach
discussed in the next section.
Quick Quiz 6.15: How are data-parallel techniques going to help with matrix
multiply? It is already data parallel!!!
[Figure: Parallel fastpath and its children — Reader/Writer Locking, RCU, Hierarchical Locking, and Allocator Caches]
4. Resource Allocator Caches ([McK96a, MS93]). See Section 6.4.3 for more detail.
1 rwlock_t hash_lock;
2
3 struct hash_table
4 {
5 long nbuckets;
6 struct node **buckets;
7 };
8
9 typedef struct node {
10 unsigned long key;
11 struct node *next;
12 } node_t;
13
14 int hash_search(struct hash_table *h, long key)
15 {
16 struct node *cur;
17 int retval;
18
19 read_lock(&hash_lock);
20 cur = h->buckets[key % h->nbuckets];
21 while (cur != NULL) {
22 if (cur->key >= key) {
23 retval = (cur->key == key);
24 read_unlock(&hash_lock);
25 return retval;
26 }
27 cur = cur->next;
28 }
29 read_unlock(&hash_lock);
30 return 0;
31 }
Quick Quiz 6.16: In what situation would hierarchical locking work well?
The basic problem facing a parallel memory allocator is the tension between the need to
provide extremely fast memory allocation and freeing in the common case and the need
to efficiently distribute memory in the face of unfavorable allocation and freeing patterns.
To see this tension, consider a straightforward application of data ownership to this
problem—simply carve up memory so that each CPU owns its share. For example,
suppose that a system with two CPUs has two gigabytes of memory (such as the one that
I am typing on right now). We could simply assign each CPU one gigabyte of memory,
and allow each CPU to access its own private chunk of memory, without the need for
locking and its complexities and overheads. Unfortunately, this simple scheme breaks
down if an algorithm happens to have CPU 0 allocate all of the memory and CPU 1
free it, as would happen in a simple producer-consumer workload.
The other extreme, code locking, suffers from excessive lock contention and over-
head [MS93].
1 struct hash_table
2 {
3 long nbuckets;
4 struct bucket **buckets;
5 };
6
7 struct bucket {
8 spinlock_t bucket_lock;
9 node_t *list_head;
10 };
11
12 typedef struct node {
13 spinlock_t node_lock;
14 unsigned long key;
15 struct node *next;
16 } node_t;
17
18 int hash_search(struct hash_table *h, long key)
19 {
20 struct bucket *bp;
21 struct node *cur;
22 int retval;
23
24 bp = h->buckets[key % h->nbuckets];
25 spin_lock(&bp->bucket_lock);
26 cur = bp->list_head;
27 while (cur != NULL) {
28 if (cur->key >= key) {
29 spin_lock(&cur->node_lock);
30 spin_unlock(&bp->bucket_lock);
31 retval = (cur->key == key);
32 spin_unlock(&cur->node_lock);
33 return retval;
34 }
35 cur = cur->next;
36 }
37 spin_unlock(&bp->bucket_lock);
38 return 0;
39 }
[Figure: Allocator cache schematic — a code-locked Global Pool backing CPU 0 and CPU 1 pools, with Overflow and Empty transitions and per-CPU Allocate/Free]
13 Both pool sizes (TARGET_POOL_SIZE and GLOBAL_POOL_SIZE) are unrealistically small, but
this small size makes it easier to single-step the program in order to get a feel for its operation.
on line 9 and released on line 16. Lines 10-14 move blocks from the global to the
per-thread pool until either the local pool reaches its target size (half full) or the global
pool is exhausted, and line 15 sets the per-thread pool’s count to the proper value.
In either case, line 18 checks for the per-thread pool still being empty, and if not,
lines 19-21 remove a block and return it. Otherwise, line 23 tells the sad tale of memory
exhaustion.
and 14 acquiring and releasing the spinlock. Lines 9-12 implement the loop moving
blocks from the local to the global pool, and line 13 sets the per-thread pool’s count to
the proper value.
In either case, line 16 then places the newly freed block into the per-thread pool.
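A much-simplified sketch of the allocator just described appears below; this is not the book's smpalloc.c, the function and variable names are assumed, and the pool sizes simply echo the values mentioned in the surrounding text.

#include <pthread.h>
#include <stddef.h>

#define TARGET_POOL_SIZE 3              /* Per-thread target: half of the pool maximum. */
#define GLOBAL_POOL_SIZE 40

struct memblock;                        /* Opaque memory block (hypothetical). */

static pthread_mutex_t globalmem_lock = PTHREAD_MUTEX_INITIALIZER;
static struct memblock *globalmem[GLOBAL_POOL_SIZE];
static int globalmem_cnt;               /* Blocks currently in the global pool. */

static __thread struct memblock *perthreadmem[2 * TARGET_POOL_SIZE];
static __thread int perthreadmem_cnt;

struct memblock *memblock_alloc(void)
{
        if (perthreadmem_cnt == 0) {    /* Slowpath: refill from the code-locked global pool. */
                pthread_mutex_lock(&globalmem_lock);
                while (perthreadmem_cnt < TARGET_POOL_SIZE && globalmem_cnt > 0)
                        perthreadmem[perthreadmem_cnt++] = globalmem[--globalmem_cnt];
                pthread_mutex_unlock(&globalmem_lock);
        }
        if (perthreadmem_cnt == 0)
                return NULL;            /* Memory exhausted. */
        return perthreadmem[--perthreadmem_cnt];        /* Fastpath: no locking. */
}

void memblock_free(struct memblock *p)
{
        if (perthreadmem_cnt >= 2 * TARGET_POOL_SIZE) { /* Slowpath: spill down to the target size. */
                pthread_mutex_lock(&globalmem_lock);
                while (perthreadmem_cnt > TARGET_POOL_SIZE)
                        globalmem[globalmem_cnt++] = perthreadmem[--perthreadmem_cnt];
                pthread_mutex_unlock(&globalmem_lock);
        }
        perthreadmem[perthreadmem_cnt++] = p;           /* Fastpath: no locking. */
}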
6.4.3.6 Performance
Rough performance results14 are shown in Figure 6.32, running on a dual-core Intel
x86 running at 1GHz (4300 bogomips per CPU) with at most six blocks allowed in
each CPU’s cache. In this micro-benchmark, each thread repeatedly allocates a group
of blocks and then frees all the blocks in that group, with the number of blocks in the
group being the “allocation run length” displayed on the x-axis. The y-axis shows the
number of successful allocation/free pairs per microsecond—failed allocations are not
counted. The “X”s are from a two-thread run, while the “+”s are from a single-threaded
run.
Note that run lengths up to six scale linearly and give excellent performance, while
run lengths greater than six show poor performance and almost always also show nega-
tive scaling. It is therefore quite important to size TARGET_POOL_SIZE sufficiently
large, which fortunately is usually quite easy to do in actual practice [MSK01], espe-
cially given today’s large memories. For example, in most systems, it is quite reasonable
to set TARGET_POOL_SIZE to 100, in which case allocations and frees are guaranteed
to be confined to per-thread pools at least 99% of the time.
As can be seen from the figure, the situations where the common-case data-ownership
applies (run lengths up to six) provide greatly improved performance compared to the
cases where locks must be acquired. Avoiding synchronization in the common case will
be a recurring theme through this book.
Quick Quiz 6.17: In Figure 6.32, there is a pattern of performance rising with
increasing run length in groups of three samples, for example, for run lengths 10, 11,
and 12. Why?
Quick Quiz 6.18: Allocation failures were observed in the two-thread tests at run
lengths of 19 and greater. Given the global-pool size of 40 and the per-thread target
pool size s of three, number of threads n equal to two, and assuming that the per-thread
14 This data was not collected in a statistically meaningful way, and therefore should be viewed with great
skepticism and suspicion. Good data-collection and -reduction practice is discussed in Chapter 11. That said,
repeated runs gave similar results, and these results match more careful evaluations of similar algorithms.
[Figure 6.32: Allocator cache performance — successful allocations/frees per microsecond vs. allocation run length]
pools are initially empty with none of the memory in use, what is the smallest allocation
run length m at which failures can occur? (Recall that each thread repeatedly allocates
m blocks of memory, and then frees the m blocks of memory.) Alternatively, given n
threads each with pool size s, and where each thread repeatedly first allocates m blocks
of memory and then frees those m blocks, how large must the global pool size be? Note:
Obtaining the correct answer will require you to examine the smpalloc.c source
code, and very likely single-step it as well. You have been warned!
The toy parallel resource allocator was quite simple, but real-world designs expand on
this approach in a number of ways.
First, real-world allocators are required to handle a wide range of allocation sizes,
as opposed to the single size shown in this toy example. One popular way to do this is
to offer a fixed set of sizes, spaced so as to balance external and internal fragmentation,
such as in the late-1980s BSD memory allocator [MK88]. Doing this would mean that
the “globalmem” variable would need to be replicated on a per-size basis, and that the
associated lock would similarly be replicated, resulting in data locking rather than the
toy program’s code locking.
Second, production-quality systems must be able to repurpose memory, meaning
that they must be able to coalesce blocks into larger structures, such as pages [MS93].
This coalescing will also need to be protected by a lock, which again could be replicated
on a per-size basis.
Third, coalesced memory must be returned to the underlying memory system, and
pages of memory must also be allocated from the underlying memory system. The
locking required at this level will depend on that of the underlying memory system, but
could well be code locking. Code locking can often be tolerated at this level, because
this level is so infrequently reached in well-designed systems [MSK01].
Despite this real-world design’s greater complexity, the underlying idea is the same—
repeated application of parallel fastpath, as shown in Table 6.1.
[Figure: Cell-number solution tracking in a small maze]
[Figure: CDF of solution times (ms) for SEQ and PWQ]
algorithm: At most one thread may be making progress along the solution path at any
given time. This weakness is addressed in the next section.
[Figure 6.39: CDF of Solution Times For SEQ, PWQ, and PART]
shown in Figure 6.38. Lines 8-9 check to see if the cells are connected, returning failure
if not. The loop spanning lines 11-18 attempts to mark the new cell visited. Line 13
checks to see if it has already been visited, in which case line 16 returns failure, but only
after line 14 checks to see if we have encountered the other thread, in which case line 15
indicates that the solution has been located. Line 19 updates to the new cell, lines 20
and 21 update this thread’s visited array, and line 22 returns success.
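A rough sketch of that logic follows; it is not the book's Figure 6.38, and the maze representation, helper functions, thread IDs, and return conventions are all assumptions made for illustration (the book's version loops over a compare-and-swap, whereas a single CAS suffices for this sketch).

#include <stdatomic.h>
#include <stdbool.h>

struct maze;                            /* Maze representation (assumed). */
struct cell_id { int row; int col; };

struct maze_thread {
        struct maze *mp;
        int myid;                       /* Nonzero visitor ID: 1 or 2. */
        struct cell_id cur;             /* Current cell. */
        struct cell_id *visited;        /* This thread's visit log. */
        int nvisited;
};

/* Assumed helpers: wall check and access to a cell's visitor field (0 if unvisited). */
bool maze_cells_connected(struct maze *mp, struct cell_id a, struct cell_id b);
atomic_int *maze_cell_visitor(struct maze *mp, struct cell_id c);

enum visit_result { VISIT_FAIL, VISIT_OK, VISIT_SOLVED };

enum visit_result maze_try_visit_cell(struct maze_thread *t, struct cell_id next)
{
        atomic_int *vp = maze_cell_visitor(t->mp, next);
        int expected = 0;

        if (!maze_cells_connected(t->mp, t->cur, next))
                return VISIT_FAIL;                      /* A wall is in the way. */
        if (!atomic_compare_exchange_strong(vp, &expected, t->myid)) {
                if (expected != t->myid)
                        return VISIT_SOLVED;            /* Other thread got here first: frontiers met. */
                return VISIT_FAIL;                      /* This thread already visited the cell. */
        }
        t->cur = next;                                  /* Move to the newly claimed cell... */
        t->visited[t->nvisited++] = next;               /* ...and record it in the visited array. */
        return VISIT_OK;
}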
Performance testing revealed a surprising anomaly, shown in Figure 6.39. The
median solution time for PART (17 milliseconds) is more than four times faster than
that of SEQ (79 milliseconds), despite running on only two threads. The next section
analyzes this anomaly.
[Figure 6.40: CDF of SEQ/PWQ and SEQ/PART solution-time ratios (speedup relative to SEQ)]
then run all solvers on that maze. It therefore makes sense to plot the CDF of the ratios
of solution times for each generated maze, as shown in Figure 6.40, greatly reducing
the CDFs’ overlap. This plot reveals that for some mazes, PART is more than forty
times faster than SEQ. In contrast, PWQ is never more than about two times faster than
SEQ. A forty-times speedup on two threads demands explanation. After all, this is
not merely embarrassingly parallel, where partitionability means that adding threads
does not increase the overall computational cost. It is instead humiliatingly parallel:
Adding threads significantly reduces the overall computational cost, resulting in large
algorithmic superlinear speedups.
Further investigation showed that PART sometimes visited fewer than 2% of the
maze’s cells, while SEQ and PWQ never visited fewer than about 9%. The reason for
this difference is shown by Figure 6.41. If the thread traversing the solution from the
upper left reaches the circle, the other thread cannot reach the upper-right portion of
the maze. Similarly, if the other thread reaches the square, the first thread cannot reach
the lower-left portion of the maze. Therefore, PART will likely visit a small fraction of
the non-solution-path cells. In short, the superlinear speedups are due to threads getting
in each others’ way. This is a sharp contrast with decades of experience with parallel
programming, where workers have struggled to keep threads out of each others’ way.
Figure 6.42 confirms a strong correlation between cells visited and solution time
for all three methods. The slope of PART’s scatterplot is smaller than that of SEQ,
indicating that PART’s pair of threads visits a given fraction of the maze faster than can
SEQ’s single thread. PART’s scatterplot is also weighted toward small visit percentages,
[Figure 6.42: Solution time (ms) vs. percent of maze cells visited for SEQ, PWQ, and PART]
confirming that PART does less total work, hence the observed humiliating parallelism.
The fraction of cells visited by PWQ is similar to that of SEQ. In addition, PWQ’s
solution time is greater than that of PART, even for equal visit fractions. The reason for
this is shown in Figure 6.43, which has a red circle on each cell with more than two
neighbors. Each such cell can result in contention in PWQ, because one thread can enter
but two threads can exit, which hurts performance, as noted earlier in this chapter. In
contrast, PART can incur such contention but once, namely when the solution is located.
Of course, SEQ never contends.
Although PART’s speedup is impressive, we should not neglect sequential optimiza-
tions. Figure 6.44 shows that SEQ, when compiled with -O3, is about twice as fast as
unoptimized PWQ, approaching the performance of unoptimized PART. Compiling all
three algorithms with -O3 gives results similar to (albeit faster than) those shown in
Figure 6.40, except that PWQ provides almost no speedup compared to SEQ, in keeping
with Amdahl’s Law [Amd67]. However, if the goal is to double performance compared
to unoptimized SEQ, as opposed to achieving optimality, compiler optimizations are
quite attractive.
Cache alignment and padding often improves performance by reducing false sharing.
However, for these maze-solution algorithms, aligning and padding the maze-cell array
degrades performance by up to 42% for 1000x1000 mazes. Cache locality is more
important than avoiding false sharing, especially for large mazes. For smaller 20-by-
20 or 50-by-50 mazes, aligning and padding can produce up to a 40% performance
improvement for PART, but for these small sizes, SEQ performs better anyway because
there is insufficient time for PART to make up for the overhead of thread creation and destruction.
[Figure 6.44: CDF of speedup relative to SEQ, showing SEQ -O3, PWQ, and PART]
[Figure: CDF of speedup relative to SEQ (-O3) for PWQ, PART, and COPART]
In short, the partitioned parallel maze solver is an interesting example of an algo-
rithmic superlinear speedup. If “algorithmic superlinear speedup” causes cognitive
dissonance, please proceed to the next section.
[Figure: Speedup relative to SEQ vs. maze size for PART and PWQ]
[Figure 6.47: Speedup relative to COPART (-O3) vs. maze size for PART and PWQ]
[Figure: Mean speedup relative to COPART vs. number of threads for PART and PWQ]
efficiency breakeven is within the 90% confidence interval for seven and eight threads.
The reasons for the peak at two threads are (1) the lower complexity of termination
detection in the two-thread case and (2) the fact that there is a lower probability of the
third and subsequent threads making useful forward progress: Only the first two threads
are guaranteed to start on the solution line. This disappointing performance compared
to results in Figure 6.47 is due to the less-tightly integrated hardware available in the
larger and older Xeon® system running at 2.66GHz.
that this experience will motivate work on parallelism as a first-class design-time whole-
application optimization technique, rather than as a grossly suboptimal after-the-fact
micro-optimization to be retrofitted into existing programs.
Chapter 7
Locking
In recent concurrency research, the role of villain is often played by locking. In many
papers and presentations, locking stands accused of promoting deadlocks, convoying,
starvation, unfairness, data races, and all manner of other concurrency sins. Interestingly
enough, the role of workhorse in production-quality shared-memory parallel software is
played by, you guessed it, locking. This chapter will look into this dichotomy between
villain and hero, as fancifully depicted in Figures 7.1 and 7.2.
There are a number of reasons behind this Jekyll-and-Hyde dichotomy:
1. Many of locking’s sins have pragmatic design solutions that work well in most
cases, for example:
2. Some of locking’s sins are problems only at high levels of contention, levels
reached only by poorly designed programs.
4. Until quite recently, almost all large shared-memory parallel programs were
developed in secret, so that it was difficult for most researchers to learn of these
pragmatic solutions.
5. Locking works extremely well for some software artifacts and extremely poorly
for others. Developers who have worked on artifacts for which locking works
well can be expected to have a much more positive opinion of locking than those
who have worked on artifacts for which locking works poorly, as will be discussed
in Section 7.5.
6. All good stories need a villain, and locking has a long and honorable history
serving as a research-paper whipping boy.
Quick Quiz 7.1: Just how can serving as a whipping boy be considered to be in any
way honorable???
This chapter will give an overview of a number of ways to avoid locking’s more
serious sins.
[Figure 7.3: Deadlock cycle — Threads A, B, and C, and Locks 1 through 4]
7.1.1 Deadlock
Deadlock occurs when each of a group of threads is holding at least one lock while at
the same time waiting on a lock held by a member of that same group.
Without some sort of external intervention, deadlock is forever. No thread can
acquire the lock it is waiting on until that lock is released by the thread holding it, but
the thread holding it cannot release it until the holding thread acquires the lock that it is
waiting on.
We can create a directed-graph representation of a deadlock scenario with nodes for
threads and locks, as shown in Figure 7.3. An arrow from a lock to a thread indicates
that the thread holds the lock, for example, Thread B holds Locks 2 and 4. An arrow
from a thread to a lock indicates that the thread is waiting on the lock, for example,
Thread B is waiting on Lock 3.
A deadlock scenario will always contain at least one deadlock cycle. In Figure 7.3,
this cycle is Thread B, Lock 3, Thread C, Lock 4, and back to Thread B.
Quick Quiz 7.2: But the definition of deadlock only said that each thread was
holding at least one lock and waiting on another lock that was held by some thread.
How do you know that there is a cycle?
Although there are some software environments such as database systems that can
repair an existing deadlock, this approach requires either that one of the threads be
killed or that a lock be forcibly stolen from one of the threads. This killing and forcible
stealing can be appropriate for transactions, but is often problematic for kernel and
application-level use of locking: dealing with the resulting partially updated structures
can be extremely complex, hazardous, and error-prone.
Kernels and applications therefore work to avoid deadlocks rather than to recover
from them. There are a number of deadlock-avoidance strategies, including locking
hierarchies (Section 7.1.1.1), local locking hierarchies (Section 7.1.1.2), layered locking
hierarchies (Section 7.1.1.3), strategies for dealing with APIs containing pointers to
locks (Section 7.1.1.4), conditional locking (Section 7.1.1.5), acquiring all needed locks
first (Section 7.1.1.6), single-lock-at-a-time designs (Section 7.1.1.7), and strategies for
signal/interrupt handlers (Section 7.1.1.8). Although there is no deadlock-avoidance
strategy that works perfectly for all situations, there is a good selection of deadlock-
avoidance tools to choose from.
Locking hierarchies order the locks and prohibit acquiring locks out of order. In
Figure 7.3, we might order the locks numerically, so that a thread was forbidden from
acquiring a given lock if it already held a lock with the same or a higher number.
Thread B has violated this hierarchy because it is attempting to acquire Lock 3 while
holding Lock 4, which permitted the deadlock to occur.
Again, to apply a locking hierarchy, order the locks and prohibit out-of-order
lock acquisition. In large programs, it is wise to use tools to enforce your locking
hierarchy [Cor06a].
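For example, when no natural numbering is available, a pair of locks can simply be ordered by address, as in the following illustrative pthreads helpers.

#include <pthread.h>
#include <stdint.h>

/* Acquire two locks in a globally agreed-upon order (here, by address),
 * so that no thread can ever hold the "higher" lock while waiting for
 * the "lower" one. */
static void lock_pair_ordered(pthread_mutex_t *a, pthread_mutex_t *b)
{
        if ((uintptr_t)a < (uintptr_t)b) {
                pthread_mutex_lock(a);
                pthread_mutex_lock(b);
        } else {
                pthread_mutex_lock(b);
                pthread_mutex_lock(a);
        }
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
        pthread_mutex_unlock(a);        /* Release order does not affect deadlock. */
        pthread_mutex_unlock(b);
}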
However, the global nature of locking hierarchies makes them difficult to apply to library
functions. After all, the program using a given library function has not even been written
yet, so how can the poor library-function implementor possibly hope to adhere to the
yet-to-be-written program’s locking hierarchy?
One special case that is fortunately the common case is when the library function
does not invoke any of the caller’s code. In this case, the caller’s locks will never be
acquired while holding any of the library’s locks, so that there cannot be a deadlock
cycle containing locks from both the library and the caller.
Quick Quiz 7.3: Are there any exceptions to this rule, so that there really could be
a deadlock cycle containing locks from both the library and the caller, even given that
the library code never invokes any of the caller’s functions?
But suppose that a library function does invoke the caller’s code. For example,
the qsort() function invokes a caller-provided comparison function. A concurrent
implementation of qsort() likely uses locking, which might result in deadlock in
the perhaps-unlikely case where the comparison function is a complicated function
that also acquires locks. How can the library function avoid deadlock?
The golden rule in this case is “Release all locks before invoking unknown code.”
To follow this rule, the qsort() function must release all locks before invoking the
comparison function.
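The same golden rule can be sketched with a hypothetical callback registry: the library snapshots what it needs while holding its lock, but drops that lock before calling out to caller-supplied code. The names below are illustrative only.

#include <pthread.h>
#include <stddef.h>

#define NCLIENTS 16

static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;
static void (*registry[NCLIENTS])(int);         /* Caller-registered callbacks. */

void lib_notify_all(int event)
{
        int i;

        pthread_mutex_lock(&registry_lock);
        for (i = 0; i < NCLIENTS; i++) {
                void (*cb)(int) = registry[i];  /* Snapshot under the library's lock... */

                if (cb == NULL)
                        continue;
                pthread_mutex_unlock(&registry_lock);   /* ...but release it before... */
                cb(event);                      /* ...invoking unknown caller code. */
                pthread_mutex_lock(&registry_lock);
        }
        pthread_mutex_unlock(&registry_lock);
}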
Quick Quiz 7.4: But if qsort() releases all its locks before invoking the compar-
ison function, how can it protect against races with other qsort() threads?
To see the benefits of local locking hierarchies, compare Figures 7.4 and 7.5. In
both figures, application functions foo() and bar() invoke qsort() while holding
Locks A and B, respectively. Because this is a parallel implementation of qsort(), it
acquires Lock C. Function foo() passes function cmp() to qsort(), and cmp()
acquires Lock B. Function bar() passes a simple integer-comparison function (not
shown) to qsort(), and this simple function does not acquire any locks.
Now, if qsort() holds Lock C while calling cmp() in violation of the golden
release-all-locks rule above, as shown in Figure 7.4, deadlock can occur. To see this,
suppose that one thread invokes foo() while a second thread concurrently invokes
bar(). The first thread will acquire Lock A and the second thread will acquire Lock B.
If the first thread’s call to qsort() acquires Lock C, then it will be unable to acquire
Lock B when it calls cmp(). But the first thread holds Lock C, so the second thread’s
[Figures 7.4 and 7.5: The library's qsort() invoking cmp() while holding Lock C, versus releasing Lock C before invoking cmp()]
call to qsort() will be unable to acquire it, and thus unable to release Lock B,
resulting in deadlock.
In contrast, if qsort() releases Lock C before invoking the comparison function
(which is unknown code from qsort()'s perspective), then deadlock is avoided as
shown in Figure 7.5.
If each module releases all locks before invoking unknown code, then deadlock is
avoided if each module separately avoids deadlock. This rule therefore greatly simplifies
deadlock analysis and greatly improves modularity.
[Figure 7.6: Layered locking hierarchy — the application's foo() and bar() hold Locks A and B, the library's qsort() holds Lock C, and cmp() uses a new Lock D]
construct a layered locking hierarchy, as shown in Figure 7.6. Here, the cmp() function uses a new Lock D that is acquired after all of Locks A, B, and C, avoiding deadlock. We therefore have three layers to the global deadlock hierarchy, the first containing Locks A and B, the second containing Lock C, and the third containing Lock D.
Please note that it is not typically possible to mechanically change cmp() to use
the new Lock D. Quite the opposite: It is often necessary to make profound design-level
modifications. Nevertheless, the effort required for such modifications is normally a
small price to pay in order to avoid deadlock.
For another example where releasing all locks before invoking unknown code is
impractical, imagine an iterator over a linked list, as shown in Figure 7.7 (locked_
list.c). The list_start() function acquires a lock on the list and returns the
first element (if there is one), and list_next() either returns a pointer to the next
element in the list or releases the lock and returns NULL if the end of the list has been
reached.
Figure 7.8 shows how this list iterator may be used. Lines 1-4 define the list_
ints element containing a single integer, and lines 6-17 show how to iterate over the
list. Line 11 locks the list and fetches a pointer to the first element, line 13 provides a
pointer to our enclosing list_ints structure, line 14 prints the corresponding integer,
and line 15 moves to the next element. This is quite simple, and hides all of the locking.
That is, the locking remains hidden as long as the code processing each list element
does not itself acquire a lock that is held across some other call to list_start() or
list_next(), which results in deadlock. We can avoid the deadlock by layering the
locking hierarchy to take the list-iterator locking into account.
This layered approach can be extended to an arbitrarily large number of layers, but
1 struct locked_list {
2 spinlock_t s;
3 struct list_head h;
4 };
5
6 struct list_head *list_start(struct locked_list *lp)
7 {
8 spin_lock(&lp->s);
9 return list_next(lp, &lp->h);
10 }
11
12 struct list_head *list_next(struct locked_list *lp,
13 struct list_head *np)
14 {
15 struct list_head *ret;
16
17 ret = np->next;
18 if (ret == &lp->h) {
19 spin_unlock(&lp->s);
20 ret = NULL;
21 }
22 return ret;
23 }
1 struct list_ints {
2 struct list_head n;
3 int a;
4 };
5
6 void list_print(struct locked_list *lp)
7 {
8 struct list_head *np;
9 struct list_ints *ip;
10
11 np = list_start(lp);
12 while (np != NULL) {
13 ip = list_entry(np, struct list_ints, n);
14 printf("\t%d\n", ip->a);
15 np = list_next(lp, np);
16 }
17 }
1 spin_lock(&lock2);
2 layer_2_processing(pkt);
3 nextlayer = layer_1(pkt);
4 spin_lock(&nextlayer->lock1);
5 layer_1_processing(pkt);
6 spin_unlock(&lock2);
7 spin_unlock(&nextlayer->lock1);
each added layer increases the complexity of the locking design. Such increases in
complexity are particularly inconvenient for some types of object-oriented designs, in
which control passes back and forth among a large group of objects in an undisciplined
manner.1 This mismatch between the habits of object-oriented design and the need to
avoid deadlock is an important reason why parallel programming is perceived by some
to be so difficult.
Some alternatives to highly layered locking hierarchies are covered in Chapter 9.
1 retry:
2 spin_lock(&lock2);
3 layer_2_processing(pkt);
4 nextlayer = layer_1(pkt);
5 if (!spin_trylock(&nextlayer->lock1)) {
6 spin_unlock(&lock2);
7 spin_lock(&nextlayer->lock1);
8 spin_lock(&lock2);
9 if (layer_1(pkt) != nextlayer) {
10 spin_unlock(&nextlayer->lock1);
11 spin_unlock(&lock2);
12 goto retry;
13 }
14 }
15 layer_1_processing(pkt);
16 spin_unlock(&lock2);
17 spin_unlock(&nextlayer->lock1);
Figure 7.10. Instead of unconditionally acquiring the layer-1 lock, line 5 conditionally
acquires the lock using the spin_trylock() primitive. This primitive acquires the
lock immediately if the lock is available (returning non-zero), and otherwise returns
zero without acquiring the lock.
If spin_trylock() was successful, line 15 does the needed layer-1 processing.
Otherwise, line 6 releases the lock, and lines 7 and 8 acquire them in the correct order.
Unfortunately, there might be multiple networking devices on the system (e.g., Ethernet
and WiFi), so that the layer_1() function must make a routing decision. This
decision might change at any time, especially if the system is mobile.2 Therefore, line 9
must recheck the decision, and if it has changed, must release the locks and start over.
Quick Quiz 7.7: Can the transformation from Figure 7.9 to Figure 7.10 be applied
universally?
Quick Quiz 7.8: But the complexity in Figure 7.10 is well worthwhile given that it
avoids deadlock, right?
on the ability to abort transactions, although this can be simplified by avoiding making
any changes to shared data until all needed locks are acquired. Livelock and deadlock
are issues in such systems, but practical solutions may be found in any of a number of
database textbooks.
In some cases, it is possible to avoid nesting locks, thus avoiding deadlock. For example,
if a problem is perfectly partitionable, a single lock may be assigned to each partition.
Then a thread working on a given partition need only acquire the one corresponding
lock. Because no thread ever holds more than one lock at a time, deadlock is impossible.
However, there must be some mechanism to ensure that the needed data structures
remain in existence during the time that neither lock is held. One such mechanism is
discussed in Section 7.4 and several others are presented in Chapter 9.
Deadlocks involving signal handlers are often quickly dismissed by noting that it is
not legal to invoke pthread_mutex_lock() from within a signal handler [Ope97].
However, it is possible (though almost always unwise) to hand-craft locking primitives
that can be invoked from signal handlers. Besides which, almost all operating-system
kernels permit locks to be acquired from within interrupt handlers, which are the kernel
analog to signal handlers.
The trick is to block signals (or disable interrupts, as the case may be) when acquiring
any lock that might be acquired within an interrupt handler. Furthermore, if holding
such a lock, it is illegal to attempt to acquire any lock that is ever acquired outside of a
signal handler without blocking signals.
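One way to sketch this discipline uses a hand-crafted test-and-set lock (recall that pthread_mutex_lock() may not legally be invoked from a signal handler) together with an arbitrarily chosen SIGUSR1; the names and the choice of signal are illustrative only.

#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>

static atomic_flag handler_lock = ATOMIC_FLAG_INIT;     /* Hand-crafted lock shared with the handler. */

static void handler_lock_acquire(void)
{
        while (atomic_flag_test_and_set_explicit(&handler_lock, memory_order_acquire))
                continue;                               /* Spin until the lock is available. */
}

static void handler_lock_release(void)
{
        atomic_flag_clear_explicit(&handler_lock, memory_order_release);
}

void update_shared_with_handler(void)
{
        sigset_t mask, omask;

        sigemptyset(&mask);
        sigaddset(&mask, SIGUSR1);                      /* The signal whose handler also takes this lock. */
        pthread_sigmask(SIG_BLOCK, &mask, &omask);      /* Block it before acquiring the lock... */
        handler_lock_acquire();
        /* ... update data shared with the SIGUSR1 handler ... */
        handler_lock_release();
        pthread_sigmask(SIG_SETMASK, &omask, NULL);     /* ...and restore the old mask afterwards. */
}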
Quick Quiz 7.10: Why is it illegal to acquire a Lock A that is acquired outside of a
signal handler without blocking signals while holding a Lock B that is acquired within a
signal handler?
If a lock is acquired by the handlers for several signals, then each and every one of
these signals must be blocked whenever that lock is acquired, even when that lock is
acquired within a signal handler.
Quick Quiz 7.11: How can you legally block signals within a signal handler?
Unfortunately, blocking and unblocking signals can be expensive in some operating
systems, notably including Linux, so performance concerns often mean that locks
acquired in signal handlers are only acquired in signal handlers, and that lockless
synchronization mechanisms are used to communicate between application code and
signal handlers.
Or that signal handlers are avoided completely except for handling fatal errors.
Quick Quiz 7.12: If acquiring locks in signal handlers is such a bad idea, why even
discuss ways of making it safe?
7.1.1.9 Discussion
1 void thread1(void)
2 {
3 retry:
4 spin_lock(&lock1);
5 do_one_thing();
6 if (!spin_trylock(&lock2)) {
7 spin_unlock(&lock1);
8 goto retry;
9 }
10 do_another_thing();
11 spin_unlock(&lock2);
12 spin_unlock(&lock1);
13 }
14
15 void thread2(void)
16 {
17 retry:
18 spin_lock(&lock2);
19 do_a_third_thing();
20 if (!spin_trylock(&lock1)) {
21 spin_unlock(&lock2);
22 goto retry;
23 }
24 do_a_fourth_thing();
25 spin_unlock(&lock1);
26 spin_unlock(&lock2);
27 }
tool in their toolbox: locking is a powerful concurrency tool, but there are jobs better
addressed with other tools.
Quick Quiz 7.13: Given an object-oriented application that passes control freely
among a group of objects such that there is no straightforward locking hierarchy,3
layered or otherwise, how can this application be parallelized?
Nevertheless, the strategies described in this section have proven quite useful in
many settings.
1 void thread1(void)
2 {
3 unsigned int wait = 1;
4 retry:
5 spin_lock(&lock1);
6 do_one_thing();
7 if (!spin_trylock(&lock2)) {
8 spin_unlock(&lock1);
9 sleep(wait);
10 wait = wait << 1;
11 goto retry;
12 }
13 do_another_thing();
14 spin_unlock(&lock2);
15 spin_unlock(&lock1);
16 }
17
18 void thread2(void)
19 {
20 unsigned int wait = 1;
21 retry:
22 spin_lock(&lock2);
23 do_a_third_thing();
24 if (!spin_trylock(&lock1)) {
25 spin_unlock(&lock2);
26 sleep(wait);
27 wait = wait << 1;
28 goto retry;
29 }
30 do_a_fourth_thing();
31 spin_unlock(&lock1);
32 spin_unlock(&lock2);
33 }
Quick Quiz 7.14: How can the livelock shown in Figure 7.11 be avoided?
Livelock can be thought of as an extreme form of starvation where a group of threads
starve, rather than just one of them.4
Livelock and starvation are serious issues in software transactional memory implementations, and so the concept of a contention manager has been introduced to encapsulate these issues. In the case of locking, simple exponential backoff can often address
livelock and starvation. The idea is to introduce exponentially increasing delays before
each retry, as shown in Figure 7.12.
Quick Quiz 7.15: What problems can you spot in the code in Figure 7.12?
However, for better results, the backoff should be bounded, and even better high-
contention results have been obtained via queued locking [And90], which is discussed
more in Section 7.3.2. Of course, best of all is to use a good parallel design so that lock
contention remains low.
7.1.3 Unfairness
Unfairness can be thought of as a less-severe form of starvation, where a subset of
threads contending for a given lock are granted the lion’s share of the acquisitions. This
can happen on machines with shared caches or NUMA characteristics, for example, as
4 Try not to get too hung up on the exact definitions of terms like livelock, starvation, and unfairness.
Anything that causes a group of threads to fail to make adequate forward progress is a problem that needs to
be fixed, regardless of what name you choose for it.
[Figure 7.13: system architecture — CPUs 0-7, with pairs of CPUs sharing caches and interconnects.]
shown in Figure 7.13. If CPU 0 releases a lock that all the other CPUs are attempting to
acquire, the interconnect shared between CPUs 0 and 1 means that CPU 1 will have an
advantage over CPUs 2-7. Therefore CPU 1 will likely acquire the lock. If CPU 1 holds
the lock long enough for CPU 0 to be requesting the lock by the time CPU 1 releases it
and vice versa, the lock can shuttle between CPUs 0 and 1, bypassing CPUs 2-7.
Quick Quiz 7.16: Wouldn’t it be better just to use a good parallel design so that
lock contention was low enough to avoid unfairness?
7.1.4 Inefficiency
Locks are implemented using atomic instructions and memory barriers, and often involve
cache misses. As we saw in Chapter 3, these instructions are quite expensive, roughly
two orders of magnitude greater overhead than simple instructions. This can be a serious
problem for locking: If you protect a single instruction with a lock, you will increase the
overhead by a factor of one hundred. Even assuming perfect scalability, one hundred
CPUs would be required to keep up with a single CPU executing the same code without
locking.
This situation underscores the synchronization-granularity tradeoff discussed in
Section 6.3, especially Figure 6.22: Too coarse a granularity will limit scalability, while
too fine a granularity will result in excessive synchronization overhead.
That said, once a lock is held, the data protected by that lock can be accessed by
the lock holder without interference. Acquiring a lock might be expensive, but once
held, the CPU’s caches are an effective performance booster, at least for large critical
sections.
Quick Quiz 7.17: How might the lock holder be interfered with?
7.2 Types of Locks
This section discusses exclusive locks (Section 7.2.1), reader-writer locks (Section 7.2.2), multi-role locks (Section 7.2.3), and scoped locking
(Section 7.2.4).
VAX/VMS DLM lock-mode compatibility (. = compatible, X = incompatible;
NL = Null/Not Held, CR = Concurrent Read, CW = Concurrent Write,
PR = Protected Read, PW = Protected Write, EX = Exclusive):

                      NL   CR   CW   PR   PW   EX
  Null (Not Held)      .    .    .    .    .    .
  Concurrent Read      .    .    .    .    .    X
  Concurrent Write     .    .    .    X    X    X
  Protected Read       .    .    X    .    X    X
  Protected Write      .    .    X    X    X    X
  Exclusive            .    X    X    X    X    X
The VAX/VMS DLM uses six modes. For purposes of comparison, exclusive locks
use two modes (not held and held), while reader-writer locks use three modes (not held,
read held, and write held).
The first mode is null, or not held. This mode is compatible with all other modes,
which is to be expected: If a thread is not holding a lock, it should not prevent any other
thread from acquiring that lock.
The second mode is concurrent read, which is compatible with every other mode ex-
cept for exclusive. The concurrent-read mode might be used to accumulate approximate
statistics on a data structure, while permitting updates to proceed concurrently.
The third mode is concurrent write, which is compatible with null, concurrent read,
and concurrent write. The concurrent-write mode might be used to update approximate
statistics, while still permitting reads and concurrent updates to proceed concurrently.
The fourth mode is protected read, which is compatible with null, concurrent read,
and protected read. The protected-read mode might be used to obtain a consistent
snapshot of the data structure, while permitting reads but not updates to proceed concur-
rently.
The fifth mode is protected write, which is compatible with null and concurrent
read. The protected-write mode might be used to carry out updates to a data structure
that could interfere with protected readers but which could be tolerated by concurrent
readers.
The sixth and final mode is exclusive, which is compatible only with null. The
exclusive mode is used when it is necessary to exclude all other accesses.
It is interesting to note that exclusive locks and reader-writer locks can be emulated
by the VAX/VMS DLM. Exclusive locks would use only the null and exclusive modes,
while reader-writer locks might use the null, protected-read, and protected-write modes.
Quick Quiz 7.19: Is there any other way for the VAX/VMS DLM to emulate a
reader-writer lock?
Although the VAX/VMS DLM policy has seen widespread production use for dis-
tributed databases, it does not appear to be used much in shared-memory applications.
One possible reason for this is that the greater communication overheads of distributed
databases can hide the greater overhead of the VAX/VMS DLM’s more-complex admis-
sion policy.
Nevertheless, the VAX/VMS DLM is an interesting illustration of just how flexible
the concepts behind locking can be. It also serves as a very simple introduction to the
locking schemes used by modern DBMSes, which can have more than thirty locking
modes, compared to VAX/VMS’s six.
finally.
6 My later work with parallelism at Sequent Computer Systems very quickly disabused me of this
misguided notion.
[Figure 7.14: Locking Hierarchy — a root rcu_node structure with leaf rcu_node structures beneath it, each leaf covering a contiguous range of CPUs.]
(lines 7-8) spins until the lock is available, at which point the outer loop makes another
attempt to acquire the lock.
Quick Quiz 7.23: Why bother with the inner loop on lines 7-8 of Figure 7.16? Why
not simply repeatedly do the atomic exchange operation on line 6?
Lock release is carried out by the xchg_unlock() function shown on lines 12-15.
Line 14 atomically exchanges the value zero (“unlocked”) into the lock, thus marking it
as having been released.
Quick Quiz 7.24: Why not simply store zero into the lock word on line 14 of
Figure 7.16?
This lock is a simple example of a test-and-set lock [SR84], but very similar mecha-
nisms have been used extensively as pure spinlocks in production.
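Figure 7.16 itself is not reproduced in this excerpt, so the following is only a sketch of the exchange-based lock described above, written with C11 atomics rather than the book's primitives; xchg_lock() and xchg_unlock() follow the names used in the text, while xchg_spinlock_t is invented here.

#include <stdatomic.h>

typedef atomic_int xchg_spinlock_t;

static xchg_spinlock_t demo_lock;	/* zero-initialized: unlocked */

void xchg_lock(xchg_spinlock_t *lp)
{
	/* Outer loop: atomically exchange 1 into the lock word; the old
	 * value is zero only if the lock was free. */
	while (atomic_exchange(lp, 1)) {
		/* Inner loop (corresponding to the lines 7-8 discussed
		 * above): spin with plain loads until the lock appears
		 * free, then retry the exchange. */
		while (atomic_load(lp))
			continue;
	}
}

void xchg_unlock(xchg_spinlock_t *lp)
{
	/* Exchange (or simply store) zero, marking the lock released. */
	atomic_exchange(lp, 0);
}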
7 Besides, the best way of handling high lock contention is to avoid it in the first place! However, there
are some situation where high lock contention is the lesser of the available evils, and in any case, studying
schemes that deal with high levels of contention is good mental exercise.
More recent queued-lock implementations also take the system’s architecture into
account, preferentially granting locks locally, while also taking steps to avoid starva-
tion [SSVM02, RH03, RH02, JMRR02, MCM02]. Many of these can be thought of as
analogous to the elevator algorithms traditionally used in scheduling disk I/O.
Unfortunately, the same scheduling logic that improves the efficiency of queued
locks at high contention also increases their overhead at low contention. Beng-Hong Lim
and Anant Agarwal therefore combined a simple test-and-set lock with a queued lock,
using the test-and-set lock at low levels of contention and switching to the queued lock at
high levels of contention [LA94], thus getting low overhead at low levels of contention
and getting fairness and high throughput at high levels of contention. Browning et
al. took a similar approach, but avoided the use of a separate flag, so that the test-and-
set fast path uses the same sequence of instructions that would be used in a simple
test-and-set lock [BMMM05]. This approach has been used in production.
Another issue that arises at high levels of contention is delay of the lock holder, especially delay due to preemption. This can result in priority inversion, where a low-priority thread holding a lock is preempted by a medium-priority CPU-bound thread, so that a high-priority process blocks while attempting to acquire the lock. The result is that the CPU-bound medium-priority
process is preventing the high-priority process from running. One solution is priority
inheritance [LR80], which has been widely used for real-time computing [SRL90a,
Cor06b], despite some lingering controversy over this practice [Yod04a, Loc02].
Another way to avoid priority inversion is to prevent preemption while a lock is
held. Because preventing preemption while locks are held also improves throughput,
most proprietary UNIX kernels offer some form of scheduler-conscious synchronization
mechanism [KWS97], largely due to the efforts of a certain sizable database vendor.
These mechanisms usually take the form of a hint that preemption would be inappro-
priate. These hints frequently take the form of a bit set in a particular machine register,
which enables extremely low per-lock-acquisition overhead for these mechanisms. In
contrast, Linux avoids these hints, instead getting similar results from a mechanism
called futexes [FRK02, Mol06, Ros06, Dre11].
Interestingly enough, atomic instructions are not strictly needed to implement
locks [Dij65, Lam74]. An excellent exposition of the issues surrounding locking imple-
mentations based on simple loads and stores may be found in Herlihy’s and Shavit’s
textbook [HS08]. The main point echoed here is that such implementations currently
have little practical application, although a careful study of them can be both entertaining
and enlightening. Nevertheless, with one exception described below, such study is left
as an exercise for the reader.
Gamsa et al. [GKAS99, Section 5.3] describe a token-based mechanism in which a
token circulates among the CPUs. When the token reaches a given CPU, it has exclusive
access to anything protected by that token. There are any number of schemes that may
be used to implement the token-based mechanism, for example:
1. Maintain a per-CPU flag, which is initially zero for all but one CPU. When a
CPU’s flag is non-zero, it holds the token. When it finishes with the token, it
zeroes its flag and sets the flag of the next CPU to one (or to any other non-zero
value).
2. Maintain a per-CPU counter, initially set to the corresponding CPU's number. When a CPU's counter is greater than that of the next
CPU (taking counter wrap into account), the first CPU holds the token. When it
is finished with the token, it sets the next CPU's counter to a value one greater
than its own counter. (A sketch of this scheme appears below.)
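Here is a minimal sketch (not from the original text) of the counter-based scheme in the second item above, using C11 atomics; NR_CPUS, token_ctr, and the function names are invented, and counter wrap is handled by examining the sign of the unsigned difference.

#include <stdatomic.h>

#define NR_CPUS 4

/* One counter per CPU, initialized to the CPU's number. */
static atomic_ulong token_ctr[NR_CPUS];

void token_init(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		atomic_store(&token_ctr[i], (unsigned long)i);
}

/* A CPU holds the token when its counter is greater than that of the
 * next CPU, where "greater" tolerates wrap by checking the sign of
 * the unsigned difference. */
int token_held(int cpu)
{
	unsigned long mine = atomic_load(&token_ctr[cpu]);
	unsigned long next = atomic_load(&token_ctr[(cpu + 1) % NR_CPUS]);

	return (long)(mine - next) > 0;
}

/* Pass the token: set the next CPU's counter to one greater than ours. */
void token_release(int cpu)
{
	unsigned long mine = atomic_load(&token_ctr[cpu]);

	atomic_store(&token_ctr[(cpu + 1) % NR_CPUS], mine + 1);
}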
Quick Quiz 7.25: How can you tell if one counter is greater than another, while
accounting for counter wrap?
Quick Quiz 7.26: Which is better, the counter approach or the flag approach?
This lock is unusual in that a given CPU cannot necessarily acquire it immediately,
even if no other CPU is using it at the moment. Instead, the CPU must wait until the
token comes around to it. This is useful in cases where CPUs need periodic access
to the critical section, but can tolerate variances in token-circulation rate. Gamsa et
al. [GKAS99] used it to implement a variant of read-copy update (see Section 9.5), but
it could also be used to protect periodic per-CPU operations such as flushing per-CPU
caches used by memory allocators [MS93], garbage-collecting per-CPU data structures,
or flushing per-CPU data to shared storage (or to mass storage, for that matter).
As increasing numbers of people gain familiarity with parallel hardware and paral-
lelize increasing amounts of code, we can expect more special-purpose locking primi-
tives to appear. Nevertheless, you should carefully consider this important safety tip:
Use the standard synchronization primitives whenever humanly possible. The big ad-
vantage of the standard synchronization primitives over roll-your-own efforts is that the
standard primitives are typically much less bug-prone.8
1. Global variables and static local variables in the base module will exist as long as
the application is running.
2. Global variables and static local variables in a loaded module will exist as long as
that module remains loaded.
8 And yes, I have done at least my share of roll-your-own synchronization primitives. However, you will
notice that my hair is much greyer than it was before I started doing that sort of work. Coincidence? Maybe.
But are you really willing to risk your own hair turning prematurely grey?
3. A module will remain loaded as long as at least one of its functions has an active
instance.
4. A given function instance’s on-stack variables will exist until that instance returns.
5. If you are executing within a given function or have been called (directly or
indirectly) from that function, then the given function has an active instance.
lock is running in the parent but not the child, if the child calls your library function,
deadlock will ensue.
The following strategies may be used to avoid deadlock problems in these cases:
Let the caller control synchronization. This works extremely well when the library
functions are operating on independent caller-visible instances of a data structure, each
of which may be synchronized separately. For example, if the library functions operate
on a search tree, and if the application needs a large number of independent search trees,
then the application can associate a lock with each tree. The application then acquires
and releases locks as needed, so that the library need not be aware of parallelism at all.
Instead, the application controls the parallelism, so that locking can work very well, as
was discussed in Section 7.5.1.
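A minimal sketch of caller-controlled synchronization, assuming a hypothetical single-threaded library type lib_tree and lookup function; everything here is invented for illustration:

#include <pthread.h>

/* Hypothetical single-threaded library interface. */
struct lib_tree;
long lib_tree_lookup(struct lib_tree *tree, long key);

/* The application owns one lock per tree instance and brackets every
 * library call with it; the library contains no synchronization. */
struct app_tree {
	pthread_mutex_t lock;
	struct lib_tree *tree;
};

long app_tree_lookup(struct app_tree *t, long key)
{
	long ret;

	pthread_mutex_lock(&t->lock);
	ret = lib_tree_lookup(t->tree, key);
	pthread_mutex_unlock(&t->lock);
	return ret;
}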
However, this strategy fails if the library implements a data structure that requires
internal concurrency, for example, a hash table or a parallel sort. In this case, the library
absolutely must control its own synchronization.
The idea here is to add arguments to the library’s API to specify which locks to acquire,
how to acquire and release them, or both. This strategy allows the application to take on
the global task of avoiding deadlock by specifying which locks to acquire (by passing in
pointers to the locks in question) and how to acquire them (by passing in pointers to lock
acquisition and release functions), but also allows a given library function to control its
own concurrency by deciding where the locks should be acquired and released.
In particular, this strategy allows the lock acquisition and release functions to block
signals as needed without the library code needing to be concerned with which signals
need to be blocked by which locks. The separation of concerns used by this strategy can
be quite effective, but in some cases the strategies laid out in the following sections can
work better.
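One possible shape for such an API, purely as an invented sketch (struct locking_ops, lib_list_push(), and so on are not from any real library):

#include <pthread.h>

/* The caller supplies the lock and the functions used to acquire and
 * release it, so the application keeps control of deadlock avoidance
 * and signal blocking, while the library decides where to lock. */
struct locking_ops {
	void *lock;			/* caller's lock, opaque to the library */
	void (*acquire)(void *lock);
	void (*release)(void *lock);
};

struct lib_node {
	struct lib_node *next;
};

struct lib_list {
	struct lib_node *head;
};

void lib_list_push(struct lib_list *list, struct lib_node *node,
		   const struct locking_ops *ops)
{
	ops->acquire(ops->lock);	/* library chooses *where* to lock... */
	node->next = list->head;
	list->head = node;
	ops->release(ops->lock);	/* ...caller chose *how* */
}

A caller might point acquire and release at thin wrappers around pthread_mutex_lock() and pthread_mutex_unlock(), or at variants that also block the relevant signals.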
That said, passing explicit pointers to locks to external APIs must be very carefully
considered, as discussed in Section 7.1.1.4. Although this practice is sometimes the
right thing to do, you should do yourself a favor by looking into alternative designs first.
The basic rule behind this strategy was discussed in Section 7.1.1.2: “Release all locks
before invoking unknown code.” This is usually the best approach because it allows
the application to ignore the library’s locking hierarchy: the library remains a leaf or
isolated subtree of the application’s overall locking hierarchy.
In cases where it is not possible to release all locks before invoking unknown code,
the layered locking hierarchies described in Section 7.1.1.3 can work well. For example,
if the unknown code is a signal handler, this implies that the library function block
signals across all lock acquisitions, which can be complex and slow. Therefore, in
cases where signal handlers (probably unwisely) acquire locks, the strategies in the next
section may prove helpful.
1. If the application invokes the library function from within a signal handler, then
that signal must be blocked every time that the library function is invoked from
outside of a signal handler.
2. If the application invokes the library function while holding a lock acquired within
a given signal handler, then that signal must be blocked every time that the library
function is called outside of a signal handler.
These rules can be enforced by using tools similar to the Linux kernel’s lockdep
lock dependency checker [Cor06a]. One of the great strengths of lockdep is that it is
not fooled by human intuition [Ros11].
1. The data structures protected by that lock are likely to be in some intermedi-
ate state, so that naively breaking the lock might result in arbitrary memory
corruption.
2. If the child creates additional threads, two threads might break the lock concur-
rently, with the result that both threads believe they own the lock. This could
again result in arbitrary memory corruption.
The pthread_atfork() function is provided to help deal with these situations. The idea is
to register a triplet of functions, one to be called by the parent before the fork(), one
to be called by the parent after the fork(), and one to be called by the child after the
fork(). Appropriate cleanups can then be carried out at these three points.
Be warned, however, that coding of pthread_atfork() handlers is quite subtle in general.
The cases where pthread_atfork() works best are cases where the data structure in question
can simply be re-initialized by the child.
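A minimal sketch (not from the original text) of the registration pattern, assuming a library-private lock named lib_lock; whether the child really can get away with unlocking and re-initializing its data is exactly the subtlety warned about above.

#include <pthread.h>

static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

/* Before fork(): acquire the lock so it is quiescent when the child's
 * memory image is created. */
static void lib_prepare(void)
{
	pthread_mutex_lock(&lib_lock);
}

/* In the parent after fork(): simply release the lock. */
static void lib_parent(void)
{
	pthread_mutex_unlock(&lib_lock);
}

/* In the child after fork(): release the lock (the forking thread is
 * the one that survives, so this is the lock's owner) and re-initialize
 * any data protected by lib_lock. */
static void lib_child(void)
{
	pthread_mutex_unlock(&lib_lock);
	/* ... re-initialize data protected by lib_lock here ... */
}

/* Call once, for example from the library's initialization function. */
void lib_install_atfork_handlers(void)
{
	pthread_atfork(lib_prepare, lib_parent, lib_child);
}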
These flaws and the consequences for locking are discussed in the following sections.
1. Determining when to resize the hash table. In this case, an approximate count
should work quite well. It might also be useful to trigger the resizing operation
from the length of the longest chain, which can be computed and maintained in a
nicely partitioned per-chain manner.
2. Producing an estimate of the time required to traverse the entire hash table. An
approximate count works well in this case, also.
3. For diagnostic purposes, for example, to check for items being lost when trans-
ferring them to and from the hash table. This clearly requires an exact count.
However, given that this usage is diagnostic in nature, it might suffice to maintain
the lengths of the hash chains, then to infrequently sum them up while locking
out addition and deletion operations.
It turns out that there is now a strong theoretical basis for some of the constraints that
performance and scalability place on a parallel library’s APIs [AGH+ 11a, AGH+ 11b,
McK11b]. Anyone designing a parallel library needs to pay close attention to those
constraints.
Although it is all too easy to blame locking for what are really problems due to a
concurrency-unfriendly API, doing so is not helpful. On the other hand, one has little
choice but to sympathize with the hapless developer who made this choice in (say)
1985. It would have been a rare and courageous developer to anticipate the need for
parallelism at that time, and it would have required an even more rare combination of
brilliance and luck to actually arrive at a good parallel-friendly API.
Times change, and code must change with them. That said, there might be a huge
number of users of a popular library, in which case an incompatible change to the API
would be quite foolish. Adding a parallel-friendly API to complement the existing
heavily used sequential-only API is probably the best course of action in this situation.
Nevertheless, human nature being what it is, we can expect our hapless developer
to be more likely to complain about locking than about his or her own poor (though
understandable) API design choices.
Sections 7.1.1.2, 7.1.1.3, and 7.5.2 described how undisciplined use of callbacks can
result in locking woes. These sections also described how to design your library function
to avoid these problems, but it is unrealistic to expect a 1990s programmer with no
experience in parallel programming to have followed such a design. Therefore, someone
attempting to parallelize an existing callback-heavy single-threaded library will likely
have many opportunities to curse locking’s villainy.
If there are a very large number of uses of a callback-heavy library, it may be wise to
again add a parallel-friendly API to the library in order to allow existing users to convert
their code incrementally. Alternatively, some advocate use of transactional memory in
these cases. While the jury is still out on transactional memory, Section 17.2 discusses
its strengths and weaknesses. It is important to note that hardware transactional memory
(discussed in Section 17.3) cannot help here unless the hardware transactional memory
implementation provides forward-progress guarantees, which few do. Other alternatives
that appear to be quite practical (if less heavily hyped) include the methods discussed in
Sections 7.1.1.5 and 7.1.1.6, as well as those that will be discussed in Chapters 8 and 9.
is worth some time spent thinking about not only alternative ways to accomplish that
particular task, but also alternative tasks that might better solve the problem at hand.
7.6 Summary
Locking is perhaps the most widely used and most generally useful synchronization
tool. However, it works best when designed into an application or library from the
beginning. Given the large quantity of pre-existing single-threaded code that might
need to one day run in parallel, locking should therefore not be the only tool in your
parallel-programming toolbox. The next few chapters will discuss other tools, and how
they can best be used in concert with locking and with each other.
It is mine, I tell you. My own. My precious. Yes, my
precious.
Chapter 8
Data Ownership
One of the simplest ways to avoid the synchronization overhead that comes with locking
is to parcel the data out among the threads (or, in the case of kernels, CPUs) so that a
given piece of data is accessed and modified by only one of the threads. Interestingly
enough, data ownership covers each of the “big three” parallel design techniques: It
partitions over threads (or CPUs, as the case may be), it batches all local operations, and
its elimination of synchronization operations is weakening carried to its logical extreme.
It should therefore be no surprise that data ownership is used extremely heavily; indeed,
it is one usage pattern that even novices use almost instinctively. It is used so
heavily that this chapter will not introduce any new examples, but will instead reference
examples from previous chapters.
Quick Quiz 8.1: What form of data ownership is extremely difficult to avoid when
creating shared-memory parallel programs (for example, using pthreads) in C or C++?
There are a number of approaches to data ownership. Section 8.1 presents the
logical extreme in data ownership, where each thread has its own private address space.
Section 8.2 looks at the opposite extreme, where the data is shared, but different threads
own different access rights to the data. Section 8.3 describes function shipping, which
is a way of allowing other threads to have indirect access to data owned by a particular
thread. Section 8.4 describes how designated threads can be assigned ownership of a
specified function and the related data. Section 8.5 discusses improving performance
by transforming algorithms with shared data to instead use data ownership. Finally,
Section 8.6 lists a few software environments that feature data ownership as a first-class
citizen.
is owned by that process, so that almost the entirety of data in the above example
is owned. This approach almost entirely eliminates synchronization overhead. The
resulting combination of extreme simplicity and optimal performance is obviously quite
attractive.
Quick Quiz 8.2: What synchronization remains in the example shown in Sec-
tion 8.1?
Quick Quiz 8.3: Is there any shared data in the example shown in Section 8.1?
This same pattern can be written in C as well as in sh, as illustrated by Figures 4.2
and 4.3.
The next section discusses use of data ownership in shared-memory parallel pro-
grams.
8.5 Privatization
One way of improving the performance and scalability of a shared-memory parallel
program is to transform it so as to convert shared data to private data that is owned by a
particular thread.
An excellent example of this is shown in the answer to one of the Quick Quizzes in
Section 6.1.1, which uses privatization to produce a solution to the Dining Philosophers
problem with much better performance and scalability than that of the standard textbook
solution. The original problem has five philosophers sitting around the table with one
fork between each adjacent pair of philosophers, which permits at most two philosophers
to eat concurrently.
We can trivially privatize this problem by providing an additional five forks, so
that each philosopher has his or her own private pair of forks. This allows all five
philosophers to eat concurrently, and also offers a considerable reduction in the spread
of certain types of disease.
In other cases, privatization imposes costs. For example, consider the simple
limit counter shown in Figure 5.12 on page 67. This is an example of an algorithm
where threads can read each others’ data, but are only permitted to update their own
data. A quick review of the algorithm shows that the only cross-thread accesses are
in the summation loop in read_count(). If this loop is eliminated, we move to
the more-efficient pure data ownership, but at the cost of a less-accurate result from
read_count().
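This is not the Figure 5.12 code, but a stripped-down sketch of the same read-others/update-own pattern; count_register_thread(), the fixed NR_THREADS, and the plain (non-atomic) cross-thread reads are simplifications for illustration.

#define NR_THREADS 8

static __thread unsigned long counter;
static unsigned long *counterp[NR_THREADS];

/* Each thread registers its own counter once, then updates only it. */
void count_register_thread(int idx)
{
	counterp[idx] = &counter;
}

static inline void inc_count(void)
{
	counter++;	/* owner-only update: no locking, no atomics */
}

/* The summation loop is the only cross-thread access; a production
 * version would use READ_ONCE()-style accesses, and the result is
 * inherently approximate. */
unsigned long read_count(void)
{
	unsigned long sum = 0;

	for (int t = 0; t < NR_THREADS; t++)
		if (counterp[t])
			sum += *counterp[t];
	return sum;
}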
Quick Quiz 8.7: Is it possible to obtain greater accuracy while still maintaining full
privacy of the per-thread data?
In short, privatization is a powerful tool in the parallel programmer’s toolbox, but it
must nevertheless be used with care. Just like every other synchronization primitive, it
has the potential to increase complexity while decreasing performance and scalability.
Violet Fane
Chapter 9
Deferred Processing
The strategy of deferring work goes back before the dawn of recorded history. It
has occasionally been derided as procrastination or even as sheer laziness. However,
in the last few decades workers have recognized this strategy’s value in simplifying
and streamlining parallel algorithms [KL80, Mas92]. Believe it or not, “laziness”
in parallel programming often outperforms and out-scales industriousness! These
performance and scalability benefits stem from the fact that deferring work often enables
weakening of synchronization primitives, thereby reducing synchronization overhead.
General approaches of work deferral include reference counting (Section 9.2), hazard
pointers (Section 9.3), sequence locking (Section 9.4), and RCU (Section 9.5). Finally,
Section 9.6 describes how to choose among the work-deferral schemes covered in this
chapter and Section 9.7 discusses the role of updates. But first we will introduce an
example algorithm that will be used to compare and contrast these approaches.
2 Weizenbaum discusses reference counting as if it was already well-known, so it likely dates back to
the 1950s and perhaps even to the 1940s. And perhaps even further. People repairing and maintaining large
machines have long used a mechanical reference-counting technique, where each worker had a padlock.
1 struct route_entry {
2 struct cds_list_head re_next;
3 unsigned long addr;
4 unsigned long iface;
5 };
6 CDS_LIST_HEAD(route_list);
7
8 unsigned long route_lookup(unsigned long addr)
9 {
10 struct route_entry *rep;
11 unsigned long ret;
12
13 cds_list_for_each_entry(rep,
14 &route_list, re_next) {
15 if (rep->addr == addr) {
16 ret = rep->iface;
17 return ret;
18 }
19 }
20 return ULONG_MAX;
21 }
22
23 int route_add(unsigned long addr,
24 unsigned long interface)
25 {
26 struct route_entry *rep;
27
28 rep = malloc(sizeof(*rep));
29 if (!rep)
30 return -ENOMEM;
31 rep->addr = addr;
32 rep->iface = interface;
33 cds_list_add(&rep->re_next, &route_list);
34 return 0;
35 }
36
37 int route_del(unsigned long addr)
38 {
39 struct route_entry *rep;
40
41 cds_list_for_each_entry(rep,
42 &route_list, re_next) {
43 if (rep->addr == addr) {
44 cds_list_del(&rep->re_next);
45 free(rep);
46 return 0;
47 }
48 }
49 return -ENOENT;
50 }
finally lines 34-35 invoke re_free() if the new value of the reference count is zero.
Quick Quiz 9.2: Why doesn’t route_del() in Figure 9.4 use reference counts
to protect the traversal to the element to be freed?
Figure 9.5 shows the performance and scalability of reference counting on a read-
only workload with a ten-element list running on a single-socket four-core hyperthreaded
2.5GHz x86 system. The “ideal” trace was generated by running the sequential code
shown in Figure 9.2, which works only because this is a read-only workload. The
reference-counting performance is abysmal and its scalability even more so, with the
“refcnt” trace dropping down onto the x-axis. This should be no surprise in view of
Chapter 3: The reference-count acquisitions and releases have added frequent shared-
memory writes to an otherwise read-only workload, thus incurring severe retribution
from the laws of physics. As well it should, given that all the wishful thinking in the
world is not going to increase the speed of light or decrease the size of the atoms used
[Figure 9.5: lookups per millisecond versus number of CPUs for the reference-counted routing table, compared to the ideal (sequential) trace; the refcnt trace lies near the x-axis.]
2. Thread B invokes route_del() in Figure 9.4 to delete the route entry for
address 42. It completes successfully, and because this entry’s ->re_refcnt
field was equal to the value one, it invokes re_free() to set the ->re_freed
field and to free the entry.
The problem is that the reference count is located in the object to be protected, but
that means that there is no protection during the instant in time when the reference
count itself is being acquired! This is the reference-counting counterpart of a locking
issue noted by Gamsa et al. [GKAS99]. One could imagine using a global lock or
reference count to protect the per-route-entry reference-count acquisition, but this
would result in severe contention issues. Although algorithms exist that allow safe
reference-count acquisition in a concurrent environment [Val95], they are not only
extremely complex and error-prone [MS95], but also provide terrible performance and
scalability [HMBW07].
In short, concurrency has most definitely reduced the usefulness of reference count-
ing!
Quick Quiz 9.5: If concurrency has “most definitely reduced the usefulness of
reference counting”, why are there so many reference counters in the Linux kernel?
That said, sometimes it is necessary to look at a problem in an entirely different way
in order to successfully solve it. The next section describes what could be thought of as
an inside-out reference count that provides decent performance and scalability.
1 struct route_entry {
2 struct hazptr_head hh;
3 struct route_entry *re_next;
4 unsigned long addr;
5 unsigned long iface;
6 int re_freed;
7 };
8 struct route_entry route_list;
9 DEFINE_SPINLOCK(routelock);
10 hazard_pointer __thread *my_hazptr;
11
12 unsigned long route_lookup(unsigned long addr)
13 {
14 int offset = 0;
15 struct route_entry *rep;
16 struct route_entry **repp;
17
18 retry:
19 repp = &route_list.re_next;
20 do {
21 rep = ACCESS_ONCE(*repp);
22 if (rep == NULL)
23 return ULONG_MAX;
24 if (rep == (struct route_entry *)HAZPTR_POISON)
25 goto retry;
26 my_hazptr[offset].p = &rep->hh;
27 offset = !offset;
28 smp_mb();
29 if (ACCESS_ONCE(*repp) != rep)
30 goto retry;
31 repp = &rep->re_next;
32 } while (rep->addr != addr);
33 if (ACCESS_ONCE(rep->re_freed))
34 abort();
35 return rep->iface;
36 }
[Figure 9.9: lookups per millisecond versus number of CPUs for the hazard-pointer routing table, again compared to the ideal trace.]
The Pre-BSD routing example can use hazard pointers as shown in Figure 9.7
for data structures and route_lookup(), and in Figure 9.8 for route_add()
and route_del() (route_hazptr.c). As with reference counting, the hazard-
pointers implementation is quite similar to the sequential algorithm shown in Figure 9.2
on page 171, so only differences will be discussed.
Starting with Figure 9.7, line 2 shows the ->hh field used to queue objects pending
hazard-pointer free, line 6 shows the ->re_freed field used to detect use-after-free
bugs, and lines 24-30 attempt to acquire a hazard pointer, branching to line 18’s retry
label on failure.
In Figure 9.8, line 11 initializes ->re_freed, lines 32 and 33 poison the ->re_
next field of the newly removed object, and line 35 passes that object to the hazard
pointers’s hazptr_free_later() function, which will free that object once it is
safe to do so. The spinlocks work the same as in Figure 9.4.
Figure 9.9 shows the hazard-pointers-protected Pre-BSD routing algorithm’s per-
formance on the same read-only workload as for Figure 9.5. Although hazard pointers
scale much better than does reference counting, they still require readers
to do writes to shared memory (albeit with much improved locality of reference), and
also require a full memory barrier and retry check for each object traversed. Therefore,
hazard pointers' performance falls far short of ideal. On the other hand, hazard pointers
do operate correctly for workloads involving concurrent updates.
Quick Quiz 9.10: The paper “Structured Deferral: Synchronization via Procrasti-
nation” [McK13] shows that hazard pointers have near-ideal performance. Whatever
happened in Figure 9.9???
The next section attempts to improve on hazard pointers by using sequence locks,
which avoid both read-side writes and per-object memory barriers.
9.4 Sequence Locks
As suggested by Figure 9.10, it is important to design code using sequence locks so that readers very
rarely need to retry.
Quick Quiz 9.11: Why isn’t this sequence-lock discussion in Chapter 7, you know,
the one on locking?
The key component of sequence locking is the sequence number, which has an even
value in the absence of updaters and an odd value if there is an update in progress.
Readers can then snapshot the value before and after each access. If either snapshot has
an odd value, or if the two snapshots differ, there has been a concurrent update, and the
reader must discard the results of the access and then retry it. Readers therefore use
the read_seqbegin() and read_seqretry() functions shown in Figure 9.11
when accessing data protected by a sequence lock. Writers must increment the value
before and after each update, and only one writer is permitted at a given time. Writers
therefore use the write_seqlock() and write_sequnlock() functions shown
in Figure 9.12 when updating data protected by a sequence lock.
As a result, sequence-lock-protected data can have an arbitrarily large number of
concurrent readers, but only one writer at a time. Sequence locking is used in the Linux
kernel to protect calibration quantities used for timekeeping. It is also used in pathname
traversal to detect concurrent rename operations.
A simple implementation of sequence locks is shown in Figure 9.13 (seqlock.h).
The seqlock_t data structure is shown on lines 1-4, and contains the sequence
number along with a lock to serialize writers. Lines 6-10 show seqlock_init(), which initializes the sequence number to zero and initializes the spinlock.
1 do {
2 seq = read_seqbegin(&test_seqlock);
3 /* read-side access. */
4 } while (read_seqretry(&test_seqlock, seq));
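Figure 9.12 does not appear in this excerpt; given the write_seqlock() and write_sequnlock() functions described here, the writer-side usage is presumably just:

write_seqlock(&test_seqlock);
/* update-side access. */
write_sequnlock(&test_seqlock);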
1 typedef struct {
2 unsigned long seq;
3 spinlock_t lock;
4 } seqlock_t;
5
6 static void seqlock_init(seqlock_t *slp)
7 {
8 slp->seq = 0;
9 spin_lock_init(&slp->lock);
10 }
11
12 static unsigned long read_seqbegin(seqlock_t *slp)
13 {
14 unsigned long s;
15
16 s = ACCESS_ONCE(slp->seq);
17 smp_mb();
18 return s & ~0x1UL;
19 }
20
21 static int read_seqretry(seqlock_t *slp,
22 unsigned long oldseq)
23 {
24 unsigned long s;
25
26 smp_mb();
27 s = ACCESS_ONCE(slp->seq);
28 return s != oldseq;
29 }
30
31 static void write_seqlock(seqlock_t *slp)
32 {
33 spin_lock(&slp->lock);
34 ++slp->seq;
35 smp_mb();
36 }
37
38 static void write_sequnlock(seqlock_t *slp)
39 {
40 smp_mb();
41 ++slp->seq;
42 spin_unlock(&slp->lock);
43 }
1 struct route_entry {
2 struct route_entry *re_next;
3 unsigned long addr;
4 unsigned long iface;
5 int re_freed;
6 };
7 struct route_entry route_list;
8 DEFINE_SEQ_LOCK(sl);
9
10 unsigned long route_lookup(unsigned long addr)
11 {
12 struct route_entry *rep;
13 struct route_entry **repp;
14 unsigned long ret;
15 unsigned long s;
16
17 retry:
18 s = read_seqbegin(&sl);
19 repp = &route_list.re_next;
20 do {
21 rep = ACCESS_ONCE(*repp);
22 if (rep == NULL) {
23 if (read_seqretry(&sl, s))
24 goto retry;
25 return ULONG_MAX;
26 }
27 repp = &rep->re_next;
28 } while (rep->addr != addr);
29 if (ACCESS_ONCE(rep->re_freed))
30 abort();
31 ret = rep->iface;
32 if (read_seqretry(&sl, s))
33 goto retry;
34 return ret;
35 }
Quick Quiz 9.15: What prevents sequence-locking updaters from starving readers?
Lines 31-36 show write_seqlock(), which simply acquires the lock, incre-
ments the sequence number, and executes a memory barrier to ensure that this in-
crement is ordered before the caller’s critical section. Lines 38-43 show write_
sequnlock(), which executes a memory barrier to ensure that the caller’s critical
section is ordered before the increment of the sequence number on line 41, then releases
the lock.
Quick Quiz 9.16: What if something else serializes writers, so that the lock is not
needed?
Quick Quiz 9.17: Why isn’t seq on line 2 of Figure 9.13 unsigned rather than
unsigned long? After all, if unsigned is good enough for the Linux kernel,
shouldn’t it be good enough for everyone?
So what happens when sequence locking is applied to the Pre-BSD routing table?
Figure 9.14 shows the data structures and route_lookup(), and Figure 9.15 shows
route_add() and route_del() (route_seqlock.c). This implementation
is once again similar to its counterparts in earlier sections, so only the differences will
be highlighted.
In Figure 9.14, line 5 adds ->re_freed, which is checked on lines 29 and 30.
Line 8 adds a sequence lock, which is used by route_lookup() on lines 18, 23,
and 32, with lines 24 and 33 branching back to the retry label on line 17. The effect
is to retry any lookup that runs concurrently with an update.
[Figure 9.16: lookups per millisecond versus number of CPUs for the sequence-locked routing table, again compared to the ideal trace.]
In Figure 9.15, lines 12, 15, 24, and 40 acquire and release the sequence lock, while
lines 11, 33, and 44 handle ->re_freed. This implementation is therefore quite
straightforward.
It also performs better on the read-only workload, as can be seen in Figure 9.16,
though its performance is still far from ideal.
Unfortunately, it also suffers use-after-free failures. The problem is that the reader
might encounter a segmentation violation due to accessing an already-freed structure
before it comes to the read_seqretry().
Quick Quiz 9.18: Can this bug be fixed? In other words, can you use sequence locks
as the only synchronization mechanism protecting a linked list supporting concurrent
addition, deletion, and lookup?
Both the read-side and write-side critical sections of a sequence lock can be thought
of as transactions, and sequence locking therefore can be thought of as a limited form
of transactional memory, which will be discussed in Section 17.2. The limitations of
sequence locking are: (1) Sequence locking restricts updates and (2) sequence locking
does not permit traversal of pointers to objects that might be freed by updaters. These
limitations are of course overcome by transactional memory, but can also be overcome
by combining other synchronization primitives with sequence locking.
Sequence locks allow writers to defer readers, but not vice versa. This can result
in unfairness and even starvation in writer-heavy workloads. On the other hand, in the
absence of writers, sequence-lock readers are reasonably fast and scale linearly. It is only
human to want the best of both worlds: fast readers without the possibility of read-side
failure, let alone starvation. In addition, it would also be nice to overcome sequence
locking’s limitations with pointers. The following section presents a synchronization
mechanism with exactly these properties.
9.5 Read-Copy Update (RCU)
The hazard pointers covered by Section 9.3 use implicit counters in the guise of per-thread lists of
pointers. This avoids read-side contention, but requires full memory barriers in read-side
primitives. The sequence lock presented in Section 9.4 also avoids read-side contention,
but does not protect pointer traversals and, like hazard pointers, requires full memory
barriers in read-side primitives. These schemes’ shortcomings raise the question of
whether it is possible to do better.
This section introduces read-copy update (RCU), which provides an API that allows
delays to be identified in the source code, rather than as expensive updates to shared data.
The remainder of this section examines RCU from a number of different perspectives.
Section 9.5.1 provides the classic introduction to RCU, Section 9.5.2 covers fundamental
RCU concepts, Section 9.5.3 introduces some common uses of RCU, Section 9.5.4
presents the Linux-kernel API, Section 9.5.5 covers a sequence of “toy” implementations
of user-level RCU, and finally Section 9.5.6 provides some RCU exercises.
4 On many computer systems, simple assignment is insufficient due to interference from both the compiler
and, on DEC Alpha systems, the CPU as well. This will be covered in Section 9.5.2.
6 And yet again, this approximates reality, which will be expanded on in Section 9.5.2.
[Figure 9.17: insertion with concurrent readers — (1) gptr initially empty; (2) kmalloc() allocates an element with ->addr and ->iface uninitialized; (3) initialization sets ->addr=42 and ->iface=1; (4) gptr = p (almost) publishes the element to readers.]
shows that this can also result in long delays, just as can the locking and sequence-
locking approaches that we already rejected.
Let’s consider the logical extreme where the readers do absolutely nothing to
announce their presence. This approach clearly allows optimal performance for readers
(after all, free is a very good price), but leaves open the question of how the updater can
possibly determine when all the old readers are done. We clearly need some additional
constraints if we are to provide a reasonable answer to this question.
One constraint that fits well with some operating-system kernels is to consider the
case where threads are not subject to preemption. In such non-preemptible environments,
each thread runs until it explicitly and voluntarily blocks. This means that an infinite
loop without blocking will render a CPU useless for any other purpose from the start of
the infinite loop onwards.7 Non-preemptibility also requires that threads be prohibited
from blocking while holding spinlocks. Without this prohibition, all CPUs might be
consumed by threads spinning attempting to acquire a spinlock held by a blocked thread.
The spinning threads will not relinquish their CPUs until they acquire the lock, but
the thread holding the lock cannot possibly release it until one of the spinning threads
relinquishes a CPU. This is a classic deadlock situation.
Let us impose this same constraint on reader threads traversing the linked list:
7 In contrast, an infinite loop in a preemptible environment might be preempted. This infinite loop might
still waste considerable CPU time, but the CPU in question would nevertheless be able to do other work.
such threads are not allowed to block until after completing their traversal.
[Figure 9.18: deletion from a linked list with concurrent readers — (1) list A→B→C, one version; (2) after list_del() (almost), pre-existing readers may still hold references to B, so there are two versions; (3) once all pre-existing readers complete, one version remains; (4) free() leaves A→C.]
to the second row of Figure 9.18, where the updater has just completed executing
list_del(), imagine that CPU 0 executes a context switch. Because readers are
not permitted to block while traversing the linked list, we are guaranteed that all prior
readers that might have been running on CPU 0 will have completed. Extending this
line of reasoning to the other CPUs, once each CPU has been observed executing a
context switch, we are guaranteed that all prior readers have completed, and that there
are no longer any reader threads referencing element B. The updater can then safely
free element B, resulting in the state shown at the bottom of Figure 9.18.
This approach is termed quiescent state based reclamation (QSBR) [HMB06]. A
QSBR schematic is shown in Figure 9.19, with time advancing from the top of the figure
to the bottom.
Although production-quality implementations of this approach can be quite complex,
a toy implementation is exceedingly simple:
1 for_each_online_cpu(cpu)
2 run_on(cpu);
The for_each_online_cpu() primitive iterates over all CPUs, and the run_
on() function causes the current thread to execute on the specified CPU, which forces
the destination CPU to execute a context switch. Therefore, once the for_each_
online_cpu() has completed, each CPU has executed a context switch, which in
turn guarantees that all pre-existing reader threads have completed.
[Figure 9.19: QSBR schematic — the grace period following list_del() extends until each CPU has passed through a context switch (a quiescent state), after which free() may safely be invoked; pre-existing readers complete within the grace period.]
Please note that this approach is not production quality. Correct handling of a
number of corner cases and the need for a number of powerful optimizations mean that
production-quality implementations have significant additional complexity. In addition,
RCU implementations for preemptible environments require that readers actually do
something. However, this simple non-preemptible approach is conceptually complete,
and forms a good initial basis for understanding the RCU fundamentals covered in the
following section.
1 struct foo {
2 int a;
3 int b;
4 int c;
5 };
6 struct foo *gp = NULL;
7
8 /* . . . */
9
10 p = kmalloc(sizeof(*p), GFP_KERNEL);
11 p->a = 1;
12 p->b = 2;
13 p->c = 3;
14 gp = p;
possibly work). This section addresses these questions from a fundamental viewpoint; later
sections look at them from usage and from API viewpoints, and the last of these also includes
a list of references.
RCU is made up of three fundamental mechanisms, the first being used for insertion,
the second being used for deletion, and the third being used to allow readers to tolerate
concurrent insertions and deletions. Section 9.5.2.1 describes the publish-subscribe
mechanism used for insertion, Section 9.5.2.2 describes how waiting for pre-existing
RCU readers enables deletion, and Section 9.5.2.3 discusses how maintaining multiple
versions of recently updated objects permits concurrent insertions and deletions. Finally,
Section 9.5.2.4 summarizes RCU fundamentals.
Although this code fragment might well seem immune to misordering, unfortunately,
the DEC Alpha CPU [McK05a, McK05b] and value-speculation compiler optimizations
can, believe it or not, cause the values of p->a, p->b, and p->c to be fetched before
the value of p. This is perhaps easiest to see in the case of value-speculation compiler
optimizations, where the compiler guesses the value of p, fetches p->a, p->b, and
p->c, and then fetches the actual value of p in order to check whether its guess was correct.
This sort of optimization is quite aggressive, perhaps insanely so, but does actually
occur in the context of profile-driven optimization.
Clearly, we need to prevent this sort of skullduggery on the part of both the compiler
and the CPU. The rcu_dereference() primitive uses whatever memory-barrier
instructions and compiler directives are required for this purpose:8
1 rcu_read_lock();
2 p = rcu_dereference(gp);
3 if (p != NULL) {
4 do_something_with(p->a, p->b, p->c);
5 }
6 rcu_read_unlock();
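The update-side counterpart, whose discussion falls outside this excerpt, is rcu_assign_pointer(), which supplies the ordering needed to publish the structure safely; the earlier assignment to gp would presumably become:

p = kmalloc(sizeof(*p), GFP_KERNEL);
p->a = 1;
p->b = 2;
p->c = 3;
rcu_assign_pointer(gp, p);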
a memory barrier instruction. In the C11 and C++11 standards, memory_order_consume is intended
to provide longer-term support for rcu_dereference(), but no compilers implement this natively yet.
(They instead strengthen memory_order_consume to memory_order_acquire, thus emitting a
needless memory-barrier instruction on weakly ordered systems.)
1 struct foo {
2 struct list_head list;
3 int a;
4 int b;
5 int c;
6 };
7 LIST_HEAD(head);
8
9 /* . . . */
10
11 p = kmalloc(sizeof(*p), GFP_KERNEL);
12 p->a = 1;
13 p->b = 2;
14 p->c = 3;
15 list_add_rcu(&p->list, &head);
1 struct foo {
2 struct hlist_node list;
3 int a;
4 int b;
5 int c;
6 };
7 HLIST_HEAD(head);
8
9 /* . . . */
10
11 p = kmalloc(sizeof(*p), GFP_KERNEL);
12 p->a = 1;
13 p->b = 2;
14 p->c = 3;
15 hlist_add_head_rcu(&p->list, &head);
The set of RCU publish and subscribe primitives is shown in Table 9.1, along with
additional primitives to “unpublish”, or retract.
Note that the list_replace_rcu(), list_del_rcu(), hlist_replace_
rcu(), and hlist_del_rcu() APIs add a complication. When is it safe to free
up the data element that was replaced or removed? In particular, how can we possibly
know when all the readers have released their references to that data element?
These questions are addressed in the following section.
[Figure: RCU grace period — readers overlap the removal and reclamation phases; the grace period extends as needed until all pre-existing readers have completed, with time advancing to the right.]
1. Remove pointers to a data structure, so that subsequent readers cannot gain a reference to it.
2. Wait for all pre-existing RCU read-side critical sections to completely finish (for
example, by using the synchronize_rcu() primitive or its asynchronous
counterpart, call_rcu(), which invokes a specified function at the end of a
future grace period). The key observation here is that subsequent RCU read-side
critical sections have no way to gain a reference to the newly removed element.
3. Clean up, for example, free the element that was replaced above.
The code fragment shown in Figure 9.27, adapted from those in Section 9.5.2.1,
demonstrates this process, with field a being the search key.
Lines 19, 20, and 21 implement the three steps called out above. Lines 16-19 give
RCU (“read-copy update”) its name: while permitting concurrent reads, line 16 copies
and lines 17-19 do an update.
As discussed in Section 9.5.1, the synchronize_rcu() primitive can be quite
simple (see Section 9.5.5 for additional “toy” RCU implementations). However,
production-quality implementations must deal with difficult corner cases and also incor-
porate powerful optimizations, both of which result in significant complexity. Although
it is good to know that there is a simple conceptual implementation of synchronize_
rcu(), other questions remain. For example, what exactly do RCU readers see when
1 struct foo {
2 struct list_head list;
3 int a;
4 int b;
5 int c;
6 };
7 LIST_HEAD(head);
8
9 /* . . . */
10
11 p = search(head, key);
12 if (p == NULL) {
13 /* Take appropriate action, unlock, & return. */
14 }
15 q = kmalloc(sizeof(*p), GFP_KERNEL);
16 *q = *p;
17 q->b = 2;
18 q->c = 3;
19 list_replace_rcu(&p->list, &q->list);
20 synchronize_rcu();
21 kfree(p);
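The short deletion fragment that the following walkthrough refers to is not reproduced in this excerpt; judging from the references to list_del_rcu() on line 3 and synchronize_rcu() on line 4, it is presumably along these lines (with search() as in the listing above):

1 p = search(head, key);
2 if (p != NULL) {
3 	list_del_rcu(&p->list);
4 	synchronize_rcu();
5 	kfree(p);
6 }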
This code will update the list as shown in Figure 9.28. The triples in each element
represent the values of fields a, b, and c, respectively. The red-shaded elements indicate
that RCU readers might be holding references to them, so in the initial state at the
top of the diagram, all elements are shaded red. Please note that we have omitted the
backwards pointers and the link from the tail of the list to the head for clarity.
After the list_del_rcu() on line 3 has completed, the 5,6,7 element has
been removed from the list, as shown in the second row of Figure 9.28. Since readers do
not synchronize directly with updaters, readers might be concurrently scanning this list.
These concurrent readers might or might not see the newly removed element, depending
on timing. However, readers that were delayed (e.g., due to interrupts, ECC memory
errors, or, in CONFIG_PREEMPT_RT kernels, preemption) just after fetching a pointer
to the newly removed element might see the old version of the list for quite some time
after the removal.
[Figure 9.28: list states during deletion — the list 1,2,3 → 5,6,7 → 11,4,8 after list_del_rcu() (two versions while pre-existing readers may still reference 5,6,7), after synchronize_rcu() (one version), and after kfree() (1,2,3 → 11,4,8).]
Therefore, we now have two versions of the list, one with element
5,6,7 and one without. The 5,6,7 element in the second row of the figure is now
shaded yellow, indicating that old readers might still be referencing it, but that new
readers cannot obtain a reference to it.
Please note that readers are not permitted to maintain references to element 5,6,7
after exiting from their RCU read-side critical sections. Therefore, once the synchronize_
rcu() on line 4 completes, so that all pre-existing readers are guaranteed to have
completed, there can be no more readers referencing this element, as indicated by its
green shading on the third row of Figure 9.28. We are thus back to a single version of
the list.
At this point, the 5,6,7 element may safely be freed, as shown on the final row
of Figure 9.28. At this point, we have completed the deletion of element 5,6,7. The
following example covers replacement.
The initial state of the list, including the pointer p, is the same as for the deletion
example, as shown on the first row of Figure 9.29.
As before, the triples in each element represent the values of fields a, b, and c, respectively.
[Figure 9.29: list states during replacement — a new element is allocated (?,?,?), the 5,6,7 element is copied into it, the copy is updated to 5,2,3, list_replace_rcu() links the copy into the list in place of the old element, synchronize_rcu() waits for pre-existing readers, and kfree() frees the old element.]
Discussion These examples assumed that a mutex was held across the entire update
operation, which would mean that there could be at most two versions of the list active
at a given time.
Quick Quiz 9.21: How would you modify the deletion example to permit more
than two versions of the list to be active?
Quick Quiz 9.22: How many RCU versions of a given list can be active at any
given time?
This sequence of events shows how RCU updates use multiple versions to safely
carry out changes in the presence of concurrent readers. Of course, some algorithms cannot
gracefully handle multiple versions. There are techniques for adapting such algorithms
to RCU [McK04], but these are beyond the scope of this section.
Quick Quiz 9.23: How can RCU updaters possibly delay RCU readers, given
that the rcu_read_lock() and rcu_read_unlock() primitives neither spin
nor block?
These three RCU components allow data to be updated in face of concurrent readers,
and can be combined in different ways to implement a surprising variety of different
types of RCU-based algorithms, some of which are described in the following section.
1 struct route_entry {
2 struct rcu_head rh;
3 struct cds_list_head re_next;
4 unsigned long addr;
5 unsigned long iface;
6 int re_freed;
7 };
8 CDS_LIST_HEAD(route_list);
9 DEFINE_SPINLOCK(routelock);
10
11 unsigned long route_lookup(unsigned long addr)
12 {
13 struct route_entry *rep;
14 unsigned long ret;
15
16 rcu_read_lock();
17 cds_list_for_each_entry_rcu(rep, &route_list,
18 re_next) {
19 if (rep->addr == addr) {
20 ret = rep->iface;
21 if (ACCESS_ONCE(rep->re_freed))
22 abort();
23 rcu_read_unlock();
24 return ret;
25 }
26 }
27 rcu_read_unlock();
28 return ULONG_MAX;
29 }
The answer to this is shown in Figure 9.33, which shows the RCU QSBR results as the
trace between the RCU and the ideal traces. RCU QSBR's performance and scalability
are very nearly those of an ideal synchronization-free workload, as desired.
Quick Quiz 9.24: Why doesn’t RCU QSBR give exactly ideal results?
Quick Quiz 9.25: Given RCU QSBR’s read-side performance, why bother with
any other flavor of userspace RCU?
[Figure 9.33: lookups per millisecond versus number of CPUs (threads 1-8); traces from top to bottom: ideal, RCU, seqlock, hazptr, refcnt.]
[Figure residue: three plots of read-side overhead (nanoseconds) for rwlock versus rcu — two as a function of the number of CPUs (one log-scale, one linear) and one as a function of critical-section duration (microseconds); the rcu trace lies well below the rwlock trace in each.]
Note that do_update() is executed under the protection of the lock and under
RCU read-side protection.
Another interesting consequence of RCU’s deadlock immunity is its immunity to a
large class of priority inversion problems. For example, low-priority RCU readers cannot
prevent a high-priority RCU updater from acquiring the update-side lock. Similarly, a
low-priority RCU updater cannot prevent high-priority RCU readers from entering an
RCU read-side critical section.
Quick Quiz 9.29: Immunity to both deadlock and priority inversion??? Sounds too
good to be true. Why should I believe that this is even possible?
Realtime Latency Because RCU read-side primitives neither spin nor block, they
offer excellent realtime latencies. In addition, as noted earlier, this means that they are
immune to priority inversion involving the RCU read-side primitives and locks.
However, RCU is susceptible to more subtle priority-inversion scenarios, for exam-
ple, a high-priority process blocked waiting for an RCU grace period to elapse can be
blocked by low-priority RCU readers in -rt kernels. This can be solved by using RCU
priority boosting [McK07c, GMTW08].
RCU Readers and Updaters Run Concurrently Because RCU readers neither spin
nor block, and because updaters are not subject to any sort of rollback or abort semantics,
RCU readers and updaters must necessarily run concurrently. This means that RCU
readers might access stale data, and might even see inconsistencies, either of which can
render conversion from reader-writer locking to RCU non-trivial.
However, in a surprisingly large number of situations, inconsistencies and stale data
are not problems. The classic example is the networking routing table. Because routing
updates can take considerable time to reach a given system (seconds or even minutes),
the system will have been sending packets the wrong way for quite some time when
the update arrives. It is usually not a problem to continue sending packets the wrong
way for a few additional milliseconds. Furthermore, because RCU updaters can make
changes without waiting for RCU readers to finish, the RCU readers might well see the
change more quickly than would batch-fair reader-writer-locking readers, as shown in
Figure 9.37.
Once the update is received, the rwlock writer cannot proceed until the last reader
completes, and subsequent readers cannot proceed until the writer completes. However,
these subsequent readers are guaranteed to see the new value, as indicated by the green
shading of the rightmost boxes. In contrast, RCU readers and updaters do not block
each other, which permits the RCU readers to see the updated values sooner. Of course,
because their execution overlaps that of the RCU updater, all of the RCU readers might
well see updated values, including the three readers that started before the update.
Nevertheless only the green-shaded rightmost RCU readers are guaranteed to see the
updated values.
Reader-writer locking and RCU simply provide different guarantees. With reader-
writer locking, any reader that begins after the writer begins is guaranteed to see new
values, and any reader that attempts to begin while the writer is spinning might or
might not see new values, depending on the reader/writer preference of the rwlock
implementation in question. In contrast, with RCU, any reader that begins after the
updater completes is guaranteed to see new values, and any reader that completes after
the updater begins might or might not see new values, depending on timing.
The key point here is that, although reader-writer locking does indeed guarantee
consistency within the confines of the computer system, there are situations where this
consistency comes at the price of increased inconsistency with the outside world. In
other words, reader-writer locking obtains internal consistency at the price of silently
stale data with respect to the outside world.
Nevertheless, there are situations where inconsistency and stale data within the
confines of the system cannot be tolerated. Fortunately, there are a number of approaches
that avoid inconsistency and stale data [McK04, ACMS03], and some methods based
on reference counting are discussed in Section 9.2.
RCU Grace Periods Extend for Many Milliseconds With the exception of QRCU
and several of the “toy” RCU implementations described in Section 9.5.5, RCU grace
periods extend for multiple milliseconds. Although there are a number of techniques to
render such long delays harmless, including use of the asynchronous interfaces where
available (call_rcu() and call_rcu_bh()), this situation is a major reason for
the rule of thumb that RCU be used in read-mostly situations.
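For example, reclamation via the asynchronous call_rcu() interface might look roughly
like the following kernel-style sketch; struct bar and the helper names are hypothetical.

/* Hypothetical structure with an embedded rcu_head. */
struct bar {
        struct rcu_head rh;
        /* Other data fields. */
};

/* RCU callback invoked after a grace period elapses. */
static void bar_free_cb(struct rcu_head *rhp)
{
        kfree(container_of(rhp, struct bar, rh));
}

/* Returns immediately; p is freed only after a later grace period. */
void bar_retire(struct bar *p)
{
        call_rcu(&p->rh, bar_free_cb);
}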
Comparison of Reader-Writer Locking and RCU Code In the best case, the con-
version from reader-writer locking to RCU is quite simple, as shown in Figures 9.38,
9.39, and 9.40, all taken from Wikipedia [MPA+06].
Data structures (Figure 9.38), reader-writer-locking version:

struct el {
  struct list_head lp;
  long key;
  spinlock_t mutex;
  int data;
  /* Other data fields */
};
DEFINE_RWLOCK(listmutex);
LIST_HEAD(head);

The RCU version is identical except that listmutex is declared with
DEFINE_SPINLOCK() rather than DEFINE_RWLOCK().

Search function (Figure 9.39), reader-writer-locking version:

int search(long key, int *result)
{
  struct el *p;

  read_lock(&listmutex);
  list_for_each_entry(p, &head, lp) {
    if (p->key == key) {
      *result = p->data;
      read_unlock(&listmutex);
      return 1;
    }
  }
  read_unlock(&listmutex);
  return 0;
}

Search function (Figure 9.39), RCU version:

int search(long key, int *result)
{
  struct el *p;

  rcu_read_lock();
  list_for_each_entry_rcu(p, &head, lp) {
    if (p->key == key) {
      *result = p->data;
      rcu_read_unlock();
      return 1;
    }
  }
  rcu_read_unlock();
  return 0;
}
More-elaborate cases of replacing reader-writer locking with RCU are beyond the
scope of this document.
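For reference, a hedged sketch of the RCU version of the corresponding delete()
function, in the spirit of the Wikipedia-derived listings above (the actual Figure 9.40
may differ in detail):

int delete(long key)
{
        struct el *p;

        spin_lock(&listmutex);
        list_for_each_entry(p, &head, lp) {
                if (p->key == key) {
                        list_del_rcu(&p->lp);
                        spin_unlock(&listmutex);
                        synchronize_rcu();  /* Wait for pre-existing readers. */
                        kfree(p);
                        return 1;
                }
        }
        spin_unlock(&listmutex);
        return 0;
}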
The assignment to head prevents any future references to p from being acquired,
and the synchronize_rcu() waits for any previously acquired references to be
released.
Quick Quiz 9.30: But wait! This is exactly the same code that might be used when
thinking of RCU as a replacement for reader-writer locking! What gives?
Of course, RCU can also be combined with traditional reference counting, as
discussed in Section 13.2.
But why bother? Again, part of the answer is performance, as shown in Figure 9.41,
again showing data taken on a 16-CPU 3GHz Intel x86 system.
Quick Quiz 9.31: Why the dip in refcnt overhead near 6 CPUs?
[Figure 9.41: Overhead (nanoseconds, log scale) versus number of CPUs (0-16), comparing refcnt and rcu.]
[Figure 9.42: Overhead (nanoseconds) versus critical-section duration (microseconds, 0-10), comparing refcnt and rcu.]
And, as with reader-writer locking, the performance advantages of RCU are most
pronounced for short-duration critical sections, as shown in Figure 9.42 for a 16-CPU
system. In addition, as with reader-writer locking, many system calls (and thus any
RCU read-side critical sections that they contain) complete in a few microseconds.
However, the restrictions that go with RCU can be quite onerous. For example, in
many cases, the prohibition against sleeping while in an RCU read-side critical section
would defeat the entire purpose. The next section looks at ways of addressing this
problem, while also reducing the complexity of traditional reference counting, at least
in some cases.
all pre-existing RCU read-side critical sections to complete, line 19 frees the newly
removed element, and line 20 indicates success. If the element is no longer the one we
want, line 22 releases the lock, line 23 leaves the RCU read-side critical section, and
line 24 indicates failure to delete the specified key.
Quick Quiz 9.33: Why is it OK to exit the RCU read-side critical section on line 15
of Figure 9.43 before releasing the lock on line 17?
Quick Quiz 9.34: Why not exit the RCU read-side critical section on line 23 of
Figure 9.43 before releasing the lock on line 22?
Alert readers will recognize this as only a slight variation on the original “RCU
is a way of waiting for things to finish” theme, which is addressed in Section 9.5.3.8.
They might also note the deadlock-immunity advantages over the lock-based existence
guarantees discussed in Section 7.4.
prevent any data from a SLAB_DESTROY_BY_RCU slab ever being returned to the
system, possibly resulting in OOM events?
These algorithms typically use a validation step that checks to make sure that the
newly referenced data structure really is the one that was requested [LS86, Section 2.5].
These validation checks require that portions of the data structure remain untouched by
the free-reallocate process. Such validation checks are usually very hard to get right,
and can hide subtle and difficult bugs.
Therefore, although type-safety-based lockless algorithms can be extremely helpful
in a very few difficult situations, you should instead use existence guarantees where
possible. Simpler is after all almost always better!
1. Make a change, for example, to the way that the OS reacts to an NMI.
2. Wait for all pre-existing read-side critical sections to completely finish (for ex-
ample, by using the synchronize_sched() primitive). The key observation
here is that subsequent RCU read-side critical sections are guaranteed to see
whatever change was made.
3. Clean up, for example, return status indicating that the change was successfully
made.
The remainder of this section presents example code adapted from the Linux ker-
nel. In this example, the timer_stop function uses synchronize_sched() to
ensure that all in-flight NMI notifications have completed before freeing the associated
resources. A simplified version of this code is shown in Figure 9.44.
Lines 1-4 define a profile_buffer structure, containing a size and an indefinite
array of entries. Line 5 defines a pointer to a profile buffer, which is presumably
initialized elsewhere to point to a dynamically allocated region of memory.
Lines 7-16 define the nmi_profile() function, which is called from within an
NMI handler. As such, it cannot be preempted, nor can it be interrupted by a normal
interrupt handler. However, it is still subject to delays due to cache misses, ECC errors,
and cycle stealing by other hardware threads within the same core. Line 9 gets a local
pointer to the profile buffer using the rcu_dereference() primitive to ensure
memory ordering on DEC Alpha, and lines 11 and 12 exit from this function if there is
no profile buffer currently allocated, while lines 13 and 14 exit from this function if the
1 struct profile_buffer {
2 long size;
3 atomic_t entry[0];
4 };
5 static struct profile_buffer *buf = NULL;
6
7 void nmi_profile(unsigned long pcvalue)
8 {
9 struct profile_buffer *p = rcu_dereference(buf);
10
11 if (p == NULL)
12 return;
13 if (pcvalue >= p->size)
14 return;
15 atomic_inc(&p->entry[pcvalue]);
16 }
17
18 void nmi_stop(void)
19 {
20 struct profile_buffer *p = buf;
21
22 if (p == NULL)
23 return;
24 rcu_assign_pointer(buf, NULL);
25 synchronize_sched();
26 kfree(p);
27 }
In the meantime, Figure 9.45 shows some rough rules of thumb on where RCU is
most helpful.
As shown in the blue box at the top of the figure, RCU works best if you have
read-mostly data where stale and inconsistent data is permissible (but see below for
more information on stale and inconsistent data). The canonical example of this case
in the Linux kernel is routing tables. Because it may have taken many seconds or
even minutes for the routing updates to propagate across the Internet, the system has been
sending packets the wrong way for quite some time. Having some small probability of
continuing to send some of them the wrong way for a few more milliseconds is almost
never a problem.
If you have a read-mostly workload where consistent data is required, RCU works
well, as shown by the green “read-mostly, need consistent data” box. One example
of this case is the Linux kernel’s mapping from user-level System-V semaphore IDs
to the corresponding in-kernel data structures. Semaphores tend to be used far more
frequently than they are created and destroyed, so this mapping is read-mostly. However,
it would be erroneous to perform a semaphore operation on a semaphore that has
already been deleted. This need for consistency is handled by using the lock in the
in-kernel semaphore data structure, along with a “deleted” flag that is set when deleting
a semaphore. If a user ID maps to an in-kernel data structure with the “deleted” flag set,
the data structure is ignored, so that the user ID is flagged as invalid.
Although this requires that the readers acquire a lock for the data structure repre-
senting the semaphore itself, it allows them to dispense with locking for the mapping
data structure. The readers therefore locklessly traverse the tree used to map from ID to
data structure, which in turn greatly improves performance, scalability, and real-time
response.
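A hedged sketch of this lookup-then-lock-then-check pattern follows; struct sem,
sem_tree_lookup(), and do_sem_op() are hypothetical stand-ins rather than the actual
Linux-kernel ipc code.

/* Hypothetical in-kernel semaphore structure. */
struct sem {
        spinlock_t lock;
        int deleted;          /* Set under ->lock when the semaphore is deleted. */
        /* Other fields. */
};

/* Perform an operation on the semaphore with the given user-level ID. */
int sem_op_by_id(int id)
{
        struct sem *s;
        int ret = -EINVAL;    /* Treat missing or deleted semaphores as invalid. */

        rcu_read_lock();
        s = sem_tree_lookup(id);     /* Hypothetical lockless ID-to-structure mapping. */
        if (s != NULL) {
                spin_lock(&s->lock);
                if (!s->deleted)             /* Consistency check against deletion. */
                        ret = do_sem_op(s);  /* Hypothetical operation under ->lock. */
                spin_unlock(&s->lock);
        }
        rcu_read_unlock();
        return ret;
}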
As indicated by the yellow “read-write” box, RCU can also be useful for read-write
workloads where consistent data is required, although usually in conjunction with a
number of other synchronization primitives. For example, the directory-entry cache in
recent Linux kernels uses RCU in conjunction with sequence locks, per-CPU locks, and
per-data-structure locks to allow lockless traversal of pathnames in the common case.
Although RCU can be very beneficial in this read-write case, such use is often more
complex than that of the read-mostly cases.
Finally, as indicated by the red box at the bottom of the figure, update-mostly
workloads requiring consistent data are rarely good places to use RCU, though there are
some exceptions [DMS+12]. In addition, as noted in Section 9.5.3.7, within the Linux
kernel, the SLAB_DESTROY_BY_RCU slab-allocator flag provides type-safe memory
to RCU readers, which can greatly simplify non-blocking synchronization and other
lockless algorithms.
In short, RCU is an API that includes a publish-subscribe mechanism for adding
new data, a way of waiting for pre-existing RCU readers to finish, and a discipline of
maintaining multiple versions to allow updates to avoid harming or unduly delaying
concurrent RCU readers. This RCU API is best suited for read-mostly situations,
especially if stale and inconsistent data can be tolerated by the application.
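Schematically, these three components fit together roughly as follows. This is a minimal
kernel-style sketch with a hypothetical RCU-protected pointer gp and a hypothetical
do_something_with() helper; updaters are assumed to be serialized by some external means.

struct foo *gp;                          /* RCU-protected global pointer. */

void reader(void)
{
        struct foo *p;

        rcu_read_lock();                 /* Begin read-side critical section. */
        p = rcu_dereference(gp);         /* Subscribe to the current version. */
        if (p != NULL)
                do_something_with(p);    /* Read-only access. */
        rcu_read_unlock();               /* End read-side critical section. */
}

void updater(struct foo *newp)
{
        struct foo *oldp = gp;

        rcu_assign_pointer(gp, newp);    /* Publish the new version. */
        synchronize_rcu();               /* Wait for pre-existing readers. */
        kfree(oldp);                     /* Old version is no longer referenced. */
}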
1. RCU BH: read-side critical sections must guarantee forward progress against
everything except for NMI and interrupt handlers, but not including software-
interrupt (softirq) handlers. RCU BH is global in scope.
2. RCU Sched: read-side critical sections must guarantee forward progress against
everything except for NMI and irq handlers, including softirq handlers. RCU
Sched is global in scope.
3. RCU (both classic and real-time): read-side critical sections must guarantee
forward progress against everything except for NMI handlers, irq handlers,
softirq handlers, and (in the real-time case) higher-priority real-time tasks.
RCU is global in scope.
4. SRCU: read-side critical sections need not guarantee forward progress unless
some other task is waiting for the corresponding grace period to complete, in
which case these read-side critical sections should complete in no more than a
few seconds (and preferably much more quickly).10 SRCU’s scope is defined by
the use of the corresponding srcu_struct.
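For example, a hedged sketch of SRCU usage in the Linux kernel follows; the srcu_struct
name mydomain is hypothetical, and DEFINE_STATIC_SRCU() is assumed to be available
(init_srcu_struct() may be used instead).

DEFINE_STATIC_SRCU(mydomain);            /* Scope limited to this srcu_struct. */

void mydomain_reader(void)
{
        int idx;

        idx = srcu_read_lock(&mydomain); /* SRCU readers may block. */
        /* ... access mydomain-protected data ... */
        srcu_read_unlock(&mydomain, idx);
}

void mydomain_updater(void)
{
        /* ... unlink an element from a mydomain-protected structure ... */
        synchronize_srcu(&mydomain);     /* Waits only for mydomain's readers. */
        /* ... free the unlinked element ... */
}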
[Figure 9.46: RCU API usage constraints: the read-side primitives may be used in NMI, irq, and process context; rcu_assign_pointer() and call_rcu() may be used in irq and process context; synchronize_rcu() may be used only in process context.]
Quick Quiz 9.48: Are there any downsides to the fact that these traversal and update
primitives can be used with any of the RCU API family members?
Figure 9.46 shows which APIs may be used in which in-kernel environments. The
RCU read-side primitives may be used in any environment, including NMI, the RCU
mutation and asynchronous grace-period primitives may be used in any environment
other than NMI, and, finally, the RCU synchronous grace-period primitives may be used
only in process context. The RCU list-traversal primitives include list_for_each_
entry_rcu(), hlist_for_each_entry_rcu(), etc. Similarly, the RCU list-
mutation primitives include list_add_rcu(), hlist_del_rcu(), etc.
Note that primitives from other families of RCU may be substituted, for example,
srcu_read_lock() may be used in any context in which rcu_read_lock()
may be used.
At its core, RCU is nothing more nor less than an API that supports publication and
subscription for insertions, waiting for all RCU readers to complete, and maintenance
of multiple versions. That said, it is possible to build higher-level constructs on top of
RCU, including the reader-writer-locking, reference-counting, and existence-guarantee
constructs listed in Section 9.5.3. Furthermore, I have no doubt that the Linux com-
munity will continue to find interesting new uses for RCU, just as they do for any of a
number of synchronization primitives throughout the kernel.
Of course, a more-complete view of RCU would also include all of the things you
can do with these APIs.
However, for many people, a complete view of RCU must include sample RCU
implementations. The next section therefore presents a series of “toy” RCU implemen-
tations of increasing complexity and capability.
1 atomic_t rcu_refcnt;
2
3 static void rcu_read_lock(void)
4 {
5 atomic_inc(&rcu_refcnt);
6 smp_mb();
7 }
8
9 static void rcu_read_unlock(void)
10 {
11 smp_mb();
12 atomic_dec(&rcu_refcnt);
13 }
14
15 void synchronize_rcu(void)
16 {
17 smp_mb();
18 while (atomic_read(&rcu_refcnt) != 0) {
19 poll(NULL, 0, 10);
20 }
21 smp_mb();
22 }
However, this implementation still has some serious shortcomings. First, the
atomic operations in rcu_read_lock() and rcu_read_unlock() are still quite
heavyweight, with read-side overhead ranging from about 100 nanoseconds on a single
Power5 CPU up to almost 40 microseconds on a 64-CPU system. This means that
the RCU read-side critical sections have to be extremely long in order to get any real
read-side parallelism. On the other hand, in the absence of readers, grace periods elapse
in about 40 nanoseconds, many orders of magnitude faster than production-quality
implementations in the Linux kernel.
Quick Quiz 9.55: How can the grace period possibly elapse in 40 nanoseconds
when synchronize_rcu() contains a 10-millisecond delay?
Second, if there are many concurrent rcu_read_lock() and rcu_read_unlock()
operations, there will be extreme memory contention on rcu_refcnt, resulting in
expensive cache misses.
1 DEFINE_SPINLOCK(rcu_gp_lock);
2 atomic_t rcu_refcnt[2];
3 atomic_t rcu_idx;
4 DEFINE_PER_THREAD(int, rcu_nesting);
5 DEFINE_PER_THREAD(int, rcu_read_idx);
Design It is the two-element rcu_refcnt[] array that provides the freedom from
starvation. The key point is that synchronize_rcu() is only required to wait for
pre-existing readers. If a new reader starts after a given instance of synchronize_
rcu() has already begun execution, then that instance of synchronize_rcu()
need not wait on that new reader. At any given time, when a given reader enters its RCU
read-side critical section via rcu_read_lock(), it increments the element of the
rcu_refcnt[] array indicated by the rcu_idx variable. When that same reader
exits its RCU read-side critical section via rcu_read_unlock(), it decrements
whichever element it incremented, ignoring any possible subsequent changes to the
rcu_idx value.
This arrangement means that synchronize_rcu() can avoid starvation by
complementing the value of rcu_idx, as in rcu_idx = !rcu_idx. Suppose that
the old value of rcu_idx was zero, so that the new value is one. New readers that arrive
after the complement operation will increment rcu_refcnt[1], while the old readers that
previously incremented rcu_refcnt[0] will decrement rcu_refcnt[0] when they exit
their RCU read-side critical sections. This means that the value of rcu_refcnt[0] will
no longer be incremented, and thus will be monotonically decreasing. This means that
all that synchronize_rcu() need do is wait for the value of rcu_refcnt[0] to
reach zero.
With this background, we are ready to look at the implementation of the actual
primitives.
1 void synchronize_rcu(void)
2 {
3 int i;
4
5 smp_mb();
6 spin_lock(&rcu_gp_lock);
7 i = atomic_read(&rcu_idx);
8 atomic_set(&rcu_idx, !i);
9 smp_mb();
10 while (atomic_read(&rcu_refcnt[i]) != 0) {
11 poll(NULL, 0, 10);
12 }
13 smp_mb();
14 atomic_set(&rcu_idx, i);
15 smp_mb();
16 while (atomic_read(&rcu_refcnt[!i]) != 0) {
17 poll(NULL, 0, 10);
18 }
19 spin_unlock(&rcu_gp_lock);
20 smp_mb();
21 }
Discussion There are still some serious shortcomings. First, the atomic operations
in rcu_read_lock() and rcu_read_unlock() are still quite heavyweight. In
fact, they are more complex than those of the single-counter variant shown in Figure 9.49,
with the read-side primitives consuming about 150 nanoseconds on a single Power5 CPU
and almost 40 microseconds on a 64-CPU system. The update-side synchronize_
rcu() primitive is more costly as well, ranging from about 200 nanoseconds on a
single Power5 CPU to more than 40 microseconds on a 64-CPU system. This means
that the RCU read-side critical sections have to be extremely long in order to get any
real read-side parallelism.
Second, if there are many concurrent rcu_read_lock() and rcu_read_
unlock() operations, there will be extreme memory contention on the rcu_refcnt
elements, resulting in expensive cache misses. This further extends the RCU read-side
critical-section duration required to provide parallel read-side access. These first two
shortcomings defeat the purpose of RCU in most situations.
Third, the need to flip rcu_idx twice imposes substantial overhead on updates,
1 DEFINE_SPINLOCK(rcu_gp_lock);
2 DEFINE_PER_THREAD(int [2], rcu_refcnt);
3 atomic_t rcu_idx;
4 DEFINE_PER_THREAD(int, rcu_nesting);
5 DEFINE_PER_THREAD(int, rcu_read_idx);
1 DEFINE_SPINLOCK(rcu_gp_lock);
2 DEFINE_PER_THREAD(int [2], rcu_refcnt);
3 long rcu_idx;
4 DEFINE_PER_THREAD(int, rcu_nesting);
5 DEFINE_PER_THREAD(int, rcu_read_idx);
Figure 9.56: RCU Read-Side Using Per-Thread Reference-Count Pair and Shared
Update Data
1 static void rcu_read_lock(void)
2 {
3 int i;
4 int n;
5
6 n = __get_thread_var(rcu_nesting);
7 if (n == 0) {
8 i = ACCESS_ONCE(rcu_idx) & 0x1;
9 __get_thread_var(rcu_read_idx) = i;
10 __get_thread_var(rcu_refcnt)[i]++;
11 }
12 __get_thread_var(rcu_nesting) = n + 1;
13 smp_mb();
14 }
15
16 static void rcu_read_unlock(void)
17 {
18 int i;
19 int n;
20
21 smp_mb();
22 n = __get_thread_var(rcu_nesting);
23 if (n == 1) {
24 i = __get_thread_var(rcu_read_idx);
25 __get_thread_var(rcu_refcnt)[i]--;
26 }
27 __get_thread_var(rcu_nesting) = n - 1;
28 }
Figure 9.57: RCU Read-Side Using Per-Thread Reference-Count Pair and Shared
Update
increases linearly with the number of threads, imposing substantial overhead on applica-
tions with large numbers of threads.
Third, as before, although concurrent RCU updates could in principle be satisfied
by a common grace period, this implementation serializes grace periods, preventing
grace-period sharing.
Finally, as noted in the text, the need for per-thread variables and for enumerating
threads may be problematic in some software environments.
That said, the read-side primitives scale very nicely, requiring about 115 nanoseconds
regardless of whether running on a single-CPU or a 64-CPU Power5 system. As noted
above, the synchronize_rcu() primitive does not scale, ranging in overhead from
almost a microsecond on a single Power5 CPU up to almost 200 microseconds on a
64-CPU system. This implementation could conceivably form the basis for a production-
quality user-level RCU implementation.
The next section describes an algorithm permitting more efficient concurrent RCU
updates.
grace periods. The main difference from the earlier implementation shown in Fig-
ure 9.54 is that rcu_idx is now a long that counts freely, so that line 8 of Figure 9.57
must mask off the low-order bit. We also switched from using atomic_read() and
atomic_set() to using ACCESS_ONCE(). The data is also quite similar, as shown
in Figure 9.56, with rcu_idx now being a long instead of an atomic_t.
Figure 9.58 (rcu_rcpls.c) shows the implementation of synchronize_rcu()
and its helper function flip_counter_and_wait(). These are similar to those in
Figure 9.55. The differences in flip_counter_and_wait() include:
1. Line 6 uses ACCESS_ONCE() instead of atomic_set(), and increments rcu_idx
rather than complementing it.
2. A new line 7 masks the counter down to its bottom bit.
The changes to synchronize_rcu() are more pervasive:
1. There is a new oldctr local variable that captures the pre-lock-acquisition value
of rcu_idx on line 23.
2. Line 26 uses ACCESS_ONCE() instead of atomic_read().
3. Lines 27-30 check to see if at least three counter flips were performed by other
threads while the lock was being acquired, and, if so, release the lock, execute
a memory barrier, and return. In this case, there were two full waits for the
counters to go to zero, so those other threads already did all the required work.
1 DEFINE_SPINLOCK(rcu_gp_lock);
2 long rcu_gp_ctr = 0;
3 DEFINE_PER_THREAD(long, rcu_reader_gp);
4 DEFINE_PER_THREAD(long, rcu_reader_gp_snap);
Figure 9.60 (rcu.h and rcu.c) shows an RCU implementation based on a single
global free-running counter that takes on only even-numbered values, with data shown in
Figure 9.59. The resulting rcu_read_lock() implementation is extremely straight-
forward. Lines 3 and 4 simply add one to the global free-running rcu_gp_ctr variable
and store the resulting odd-numbered value into the rcu_reader_gp per-thread
variable. Line 5 executes a memory barrier to prevent the content of the subsequent
RCU read-side critical section from “leaking out”.
The rcu_read_unlock() implementation is similar. Line 10 executes a mem-
ory barrier, again to prevent the prior RCU read-side critical section from “leaking out”.
Lines 11 and 12 then copy the rcu_gp_ctr global variable to the rcu_reader_gp
per-thread variable, leaving this per-thread variable with an even-numbered value so
that a concurrent instance of synchronize_rcu() will know to ignore it.
Quick Quiz 9.63: If any even value is sufficient to tell synchronize_rcu() to
ignore a given task, why don’t lines 10 and 11 of Figure 9.60 simply assign zero to
rcu_reader_gp?
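For concreteness, a sketch of the read-side primitives consistent with this description
follows; Figure 9.60 itself may differ in detail.

static void rcu_read_lock(void)
{
        /* Odd value: this thread is in an RCU read-side critical section. */
        __get_thread_var(rcu_reader_gp) = rcu_gp_ctr + 1;
        smp_mb();   /* Keep the critical section from leaking out. */
}

static void rcu_read_unlock(void)
{
        smp_mb();   /* Keep the critical section from leaking out. */
        /* Even value: synchronize_rcu() will ignore this thread. */
        __get_thread_var(rcu_reader_gp) = rcu_gp_ctr;
}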
Thus, synchronize_rcu() could wait for all of the per-thread rcu_reader_
gp variables to take on even-numbered values. However, it is possible to do much better
than that because synchronize_rcu() need only wait on pre-existing RCU read-
side critical sections. Line 19 executes a memory barrier to prevent prior manipulations
of RCU-protected data structures from being reordered (by either the CPU or the
compiler) to follow the increment on line 21. Line 20 acquires the rcu_gp_lock
(and line 30 releases it) in order to prevent multiple synchronize_rcu() instances
from running concurrently. Line 21 then increments the global rcu_gp_ctr variable
1 DEFINE_SPINLOCK(rcu_gp_lock);
2 #define RCU_GP_CTR_SHIFT 7
3 #define RCU_GP_CTR_BOTTOM_BIT (1 << RCU_GP_CTR_SHIFT)
4 #define RCU_GP_CTR_NEST_MASK (RCU_GP_CTR_BOTTOM_BIT - 1)
5 long rcu_gp_ctr = 0;
6 DEFINE_PER_THREAD(long, rcu_reader_gp);
by two, so that all pre-existing RCU read-side critical sections will have corresponding
per-thread rcu_reader_gp variables with values less than that of rcu_gp_ctr,
modulo the machine’s word size. Recall also that threads with even-numbered values
of rcu_reader_gp are not in an RCU read-side critical section, so that lines 23-29
scan the rcu_reader_gp values until they all are either even (line 24) or are greater
than the global rcu_gp_ctr (lines 25-26). Line 27 blocks for a short period of time
to wait for a pre-existing RCU read-side critical section, but this can be replaced with a
spin-loop if grace-period latency is of the essence. Finally, the memory barrier at line 31
ensures that any subsequent destruction will not be reordered into the preceding loop.
Quick Quiz 9.64: Why are the memory barriers on lines 19 and 31 of Figure 9.60
needed? Aren’t the memory barriers inherent in the locking primitives on lines 20
and 30 sufficient?
This approach achieves much better read-side performance, incurring roughly
63 nanoseconds of overhead regardless of the number of Power5 CPUs. Updates
incur more overhead, ranging from about 500 nanoseconds on a single Power5 CPU to
more than 100 microseconds on 64 such CPUs.
Quick Quiz 9.65: Couldn’t the update-side batching optimization described in
Section 9.5.5.6 be applied to the implementation shown in Figure 9.60?
This implementation suffers from some serious shortcomings in addition to the high
update-side overhead noted earlier. First, it is no longer permissible to nest RCU read-
side critical sections, a topic that is taken up in the next section. Second, if a reader is
preempted at line 3 of Figure 9.60 after fetching from rcu_gp_ctr but before storing
to rcu_reader_gp, and if the rcu_gp_ctr counter then runs through more than
half but less than all of its possible values, then synchronize_rcu() will ignore
the subsequent RCU read-side critical section. Third and finally, this implementation
requires that the enclosing software environment be able to enumerate threads and
maintain per-thread variables.
Quick Quiz 9.66: Is the possibility of readers being preempted in lines 3-4 of
Figure 9.60 a real problem, in other words, is there a real sequence of events that could
lead to failure? If not, why not? If so, what is the sequence of events, and how can the
failure be addressed?
1 DEFINE_SPINLOCK(rcu_gp_lock);
2 long rcu_gp_ctr = 0;
3 DEFINE_PER_THREAD(long, rcu_reader_qs_gp);
1 void synchronize_rcu(void)
2 {
3 int t;
4
5 smp_mb();
6 spin_lock(&rcu_gp_lock);
7 rcu_gp_ctr += 2;
8 smp_mb();
9 for_each_thread(t) {
10 while (rcu_gp_ongoing(t) &&
11 ((per_thread(rcu_reader_qs_gp, t) -
12 rcu_gp_ctr) < 0)) {
13 poll(NULL, 0, 10);
14 }
15 }
16 spin_unlock(&rcu_gp_lock);
17 smp_mb();
18 }
rcu(), and then acquire that same lock within an RCU read-side critical section? This
should be a deadlock, but how can a primitive that generates absolutely no code possibly
participate in a deadlock cycle?
In addition, this implementation does not permit concurrent calls to synchronize_
rcu() to share grace periods. That said, one could easily imagine a production-quality
RCU implementation based on this version of RCU.
Quick Quiz 9.75: Given that grace periods are prohibited within RCU read-side
critical sections, how can an RCU data structure possibly be updated while in an RCU
read-side critical section?
This section is organized as a series of Quick Quizzes that invite you to apply RCU
to a number of examples earlier in this book. The answer to each Quick Quiz gives
some hints, and also contains a pointer to a later section where the solution is explained at
length. The rcu_read_lock(), rcu_read_unlock(), rcu_dereference(),
rcu_assign_pointer(), and synchronize_rcu() primitives should suffice
for most of these exercises.
Quick Quiz 9.76: The statistical-counter implementation shown in Figure 5.9
(count_end.c) used a global lock to guard the summation in read_count(),
which resulted in poor performance and negative scalability. How could you use RCU
to provide read_count() with excellent performance and good scalability? (Keep in
mind that read_count()’s scalability will necessarily be limited by its need to scan
all threads’ counters.)
Quick Quiz 9.77: Section 5.5 showed a fanciful pair of code fragments that dealt
with counting I/O accesses to removable devices. These code fragments suffered from
high overhead on the fastpath (starting an I/O) due to the need to acquire a reader-writer
lock. How would you use RCU to provide excellent performance and scalability? (Keep
in mind that the performance of the common-case first code fragment that does I/O
accesses is much more important than that of the device-removal code fragment.)
We have already seen one situation featuring high performance and scalability
for writers, namely the counting algorithms surveyed in Chapter 5. These algorithms
featured partially partitioned data structures so that updates can operate locally, while the
more-expensive reads must sum across the entire data structure. Silas Boyd-Wickizer
has generalized this notion to produce OpLog, which he has applied to Linux-kernel
pathname lookup, VM reverse mappings, and the stat() system call [BW14].
Another approach, called “Disruptor”, is designed for applications that process
high-volume streams of input data. The approach is to rely on single-producer-single-
consumer FIFO queues, minimizing the need for synchronization [Sut13]. For Java
applications, Disruptor also has the virtue of minimizing use of the garbage collector.
And of course, where feasible, fully partitioned or “sharded” systems provide
excellent performance and scalability, as noted in Chapter 6.
The next chapter will look at updates in the context of several types of data struc-
tures.
Bad programmers worry about the code. Good
programmers worry about data structures and their
relationships.
Linus Torvalds
Chapter 10
Data Structures
Births, captures, and purchases result in insertions, while deaths, releases, and sales
result in deletions. Because Schrödinger’s zoo contains a large quantity of short-lived
animals, including mice and insects, the database must be able to support a high update
rate.
Those interested in Schrödinger's animals can query them; however, Schrödinger
has noted extremely high rates of queries for his cat, so much so that he suspects that
his mice might be using the database to check up on their nemesis. This means that
Schrödinger’s application must be able to support a high rate of queries to a single data
element.
Please keep this application in mind as various data structures are presented.
1 struct ht_elem {
2 struct cds_list_head hte_next;
3 unsigned long hte_hash;
4 };
5
6 struct ht_bucket {
7 struct cds_list_head htb_head;
8 spinlock_t htb_lock;
9 };
10
11 struct hashtab {
12 unsigned long ht_nbuckets;
13 struct ht_bucket ht_bkt[0];
14 };
[Figure 10.2: Hash-table schematic: a struct hashtab with ->ht_nbuckets = 4; bucket 0's ->htb_head list holds two ht_elem structures, bucket 2's holds one, and buckets 1 and 3 are empty. Each bucket also contains an ->htb_lock, and each ht_elem carries ->hte_next and ->hte_hash fields.]
Each ht_elem structure caches the corresponding element's hash value in the ->hte_hash field. The ht_
elem structure would be included in the larger structure being placed in the hash table,
and this larger structure might contain a complex key.
The diagram shown in Figure 10.2 has bucket 0 with two elements and bucket 2
with one.
Figure 10.3 shows mapping and locking functions. Lines 1 and 2 show the macro
HASH2BKT(), which maps from a hash value to the corresponding ht_bucket
structure. This macro uses a simple modulus: if more aggressive hashing is required, the
1 #define HASH2BKT(htp, h) \
2 (&(htp)->ht_bkt[h % (htp)->ht_nbuckets])
3
4 static void hashtab_lock(struct hashtab *htp,
5 unsigned long hash)
6 {
7 spin_lock(&HASH2BKT(htp, hash)->htb_lock);
8 }
9
10 static void hashtab_unlock(struct hashtab *htp,
11 unsigned long hash)
12 {
13 spin_unlock(&HASH2BKT(htp, hash)->htb_lock);
14 }
1 struct ht_elem *
2 hashtab_lookup(struct hashtab *htp,
3 unsigned long hash,
4 void *key,
5 int (*cmp)(struct ht_elem *htep,
6 void *key))
7 {
8 struct ht_bucket *htb;
9 struct ht_elem *htep;
10
11 htb = HASH2BKT(htp, hash);
12 cds_list_for_each_entry(htep,
13 &htb->htb_head,
14 hte_next) {
15 if (htep->hte_hash != hash)
16 continue;
17 if (cmp(htep, key))
18 return htep;
19 }
20 return NULL;
21 }
caller needs to implement it when mapping from key to hash value. The remaining two
functions acquire and release the ->htb_lock corresponding to the specified hash
value.
Figure 10.4 shows hashtab_lookup(), which returns a pointer to the element
with the specified hash and key if it exists, or NULL otherwise. This function takes both
a hash value and a pointer to the key because this allows users of this function to use
arbitrary keys and arbitrary hash functions, with the key-comparison function passed
in via cmp(), in a manner similar to qsort(). Line 11 maps from the hash value
to a pointer to the corresponding hash bucket. Each pass through the loop spanning
lines 12-19 examines one element of the bucket’s hash chain. Line 15 checks to see if
the hash values match, and if not, line 16 proceeds to the next element. Line 17 checks
to see if the actual key matches, and if so, line 18 returns a pointer to the matching
element. If no element matches, line 20 returns NULL.
Quick Quiz 10.2: But isn’t the double comparison on lines 15-18 in Figure 10.4
inefficient in the case where the key fits into an unsigned long?
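For example, a caller might embed ht_elem in its own structure and supply a
key-comparison function along the following lines; struct zoo_he, zoo_cmp(), and
zoo_hash() are hypothetical.

/* Hypothetical caller-side structure: the complex key lives here. */
struct zoo_he {
        struct ht_elem zhe_e;
        char name[32];
};

/* Key-comparison callback matching hashtab_lookup()'s cmp() argument. */
static int zoo_cmp(struct ht_elem *htep, void *key)
{
        struct zoo_he *zhep = caa_container_of(htep, struct zoo_he, zhe_e);

        return strcmp(zhep->name, (char *)key) == 0;
}

/* Report whether an animal with the given name is present. */
int zoo_exists(struct hashtab *htp, char *name)
{
        unsigned long hash = zoo_hash(name);   /* Hypothetical hash function. */
        int present;

        hashtab_lock(htp, hash);
        present = hashtab_lookup(htp, hash, name, zoo_cmp) != NULL;
        hashtab_unlock(htp, hash);
        return present;
}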
Figure 10.5 shows the hashtab_add() and hashtab_del() functions that
add and delete elements from the hash table, respectively.
The hashtab_add() function simply sets the element’s hash value on line 6, then
adds it to the corresponding bucket on lines 7 and 8. The hashtab_del() function
simply removes the specified element from whatever hash chain it is on, courtesy of the
1 struct hashtab *
2 hashtab_alloc(unsigned long nbuckets)
3 {
4 struct hashtab *htp;
5 int i;
6
7 htp = malloc(sizeof(*htp) +
8 nbuckets *
9 sizeof(struct ht_bucket));
10 if (htp == NULL)
11 return NULL;
12 htp->ht_nbuckets = nbuckets;
13 for (i = 0; i < nbuckets; i++) {
14 CDS_INIT_LIST_HEAD(&htp->ht_bkt[i].htb_head);
15 spin_lock_init(&htp->ht_bkt[i].htb_lock);
16 }
17 return htp;
18 }
19
20 void hashtab_free(struct hashtab *htp)
21 {
22 free(htp);
23 }
[Plot: total lookups per millisecond versus number of CPUs (threads, 1-8), compared against the ideal trace.]
doubly linked nature of the hash-chain lists. Before calling either of these two functions,
the caller is required to ensure that no other thread is accessing or modifying this same
bucket, for example, by invoking hashtab_lock() beforehand.
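Although Figure 10.5 itself is not reproduced here, a sketch consistent with the
description above might look as follows (hedged; the actual listing may differ in detail).
Note again that the caller must already hold the corresponding bucket lock.

void hashtab_add(struct hashtab *htp, unsigned long hash,
                 struct ht_elem *htep)
{
        htep->hte_hash = hash;    /* Cache the hash value in the element. */
        cds_list_add(&htep->hte_next, &HASH2BKT(htp, hash)->htb_head);
}

void hashtab_del(struct ht_elem *htep)
{
        cds_list_del(&htep->hte_next);   /* Doubly linked list: no bucket needed. */
}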