
Geotechnical, Geological and Earthquake Engineering

Atilla Ansal Editor

Perspectives
on European
Earthquake
Engineering and
Seismology
Volume 2
Perspectives on European Earthquake Engineering
and Seismology
GEOTECHNICAL, GEOLOGICAL AND
EARTHQUAKE ENGINEERING
Volume 39

Series Editor
Atilla Ansal, School of Engineering, Özyeğin University, Istanbul, Turkey

Editorial Advisory Board


Julian Bommer, Imperial College London, U.K.
Jonathan D. Bray, University of California, Berkeley, U.S.A.
Kyriazis Pitilakis, Aristotle University of Thessaloniki, Greece
Susumu Yasuda, Tokyo Denki University, Japan

More information about this series at http://www.springer.com/series/6011


Atilla Ansal
Editor

Perspectives on European
Earthquake Engineering
and Seismology
Volume 2
Editor
Atilla Ansal
School of Engineering
Özyeǧin University
Istanbul, Turkey

ISSN 1573-6059 ISSN 1872-4671 (electronic)


Geotechnical, Geological and Earthquake Engineering
ISBN 978-3-319-16963-7 ISBN 978-3-319-16964-4 (eBook)
DOI 10.1007/978-3-319-16964-4

Library of Congress Control Number: 2014946618

Springer Cham Heidelberg New York Dordrecht London


© The Editor(s) and if applicable the Author(s) 2015. The book is published with open access at
http://link.springer.com
Open Access This book is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction in any
medium, provided the original author(s) and source are credited.
All commercial rights are reserved by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication
of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the
Publisher’s location, in its current version, and permission for commercial use must always be obtained
from Springer. Permissions for commercial use may be obtained through RightsLink at the Copyright
Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media (www.springer.com)
Preface

The collection of chapter contributions compiled in this second volume of Per-
spectives on European Earthquake Engineering and Seismology is composed of
4 keynote and 15 theme lectures presented during the Second European Conference
on Earthquake Engineering and Seismology (2ECEES) held in Istanbul, Turkey,
from August 24 to 29, 2014. Since the Conference was a joint event of European
Association of Earthquake Engineering (EAEE) and the European Seismological
Commission (ESC), the chapter contributions cover the major topics of earthquake
engineering and seismology along with priority issues of global importance.
On the occasion of the 50th anniversary of the establishment of the European
Association of Earthquake Engineering, and for the first time in the book series
“Geotechnical, Geological, and Earthquake Engineering”, we are publishing an
Open Access book that can be downloaded freely by anybody interested in these
topics. We believe that this option, adopted by the Advisory Committee of 2ECEES,
will enable wide distribution and readability of the contributions presented by
very prominent researchers in Europe.
The chapters in this second volume are composed of four keynote lectures, the first
of which is given by Shamita Das, the recipient of the first Inge Lehmann Lecture
Award. Her lecture is entitled “Supershear Earthquake Ruptures – Theory,
Methods, Laboratory Experiments and Fault Superhighways: An Update”. The
other three keynote lectures are “Civil Protection Achievements and Critical Issues
in Seismology and Earthquake Engineering Research” by Mauro Dolce and
Daniela Di Bucci, “Earthquake Risk Assessment: Certitudes, Fallacies, Uncer-
tainties and the Quest for Soundness” by Kyriazis Pitilakis and “Variability and
Uncertainty in Empirical Ground-Motion Prediction for Probabilistic Hazard and
Risk Analyses” by Peter J. Stafford.
The next nine chapters are the EAEE Theme Lectures: “Seismic Code Develop-
ments for Steel and Composite Structures” by Ahmed Y. Elghazouli; “Seismic
Analysis and Design of Foundation Soil-Structure Interaction” by Alain Pecker;
“Performance-Based Seismic Design and Assessment of Bridges” by Andreas
Kappos; “An Algorithm to Justify the Design of Single Story Precast Structures”


by H.F. Karadoğan, I.E. Bal, E. Yüksel, S.Z. Yüce, Y. Durgun, and C. Soydan;
“Developments in Seismic Design of Tall Buildings: Preliminary Design of Coupled
Core Wall Systems” by M. Nuray Aydınoğlu and Eren Vuran; “Seismic Response of
Underground Lifeline Systems” by Selçuk Toprak, Engin Nacaroğlu, and A. Cem
Koç; “Seismic Performance of Historical Masonry Structures Through Pushover
and Nonlinear Dynamic Analyses” by Sergio Lagomarsino and Serena Cattari;
“Developments in Ground Motion Predictive Models and Accelerometric Data
Archiving in the Broader European Region” by Sinan Akkar and Özkan Kale;
and “Towards the ‘Ultimate Earthquake-Proof’ Building: Development of an Inte-
grated Low-Damage System” by Stefano Pampanin.
The remaining six chapters are the ESC Theme Lectures “Archive of Historical
Earthquake Data for the European-Mediterranean Area” by Andrea Rovida and
Mario Locati; “A Review and Some New Issues on the Theory of the H/V Technique
for Ambient Vibrations” by Enrico Lunedei and Peter Malischewsky;
“Macroseismic Intervention Group: The Necessary Field Observation” by Chris-
tophe Sira; “Bridging the Gap Between Nonlinear Seismology as Reality and
Earthquake Engineering” by Gheorghe Marmureanu, Carmen-Ortanza Cioflan,
Alexandru Marmureanu, Constantin Ionescu, and Elena-Florinela Manea; “The
Influence of Earthquake Magnitude on Hazard Related to Induced Seismicity” by
Benjamin Edwards; and “On the Origin of Mega-Thrust Earthquakes” by Kuvvet
Atakan.
The Editor and the Advisory Committee of the Second European Conference on
Earthquake Engineering and Seismology appreciate the support given by the
Istanbul Governorship, Istanbul Project Coordination Unit, for the publication of
the Perspectives on European Earthquake Engineering and Seismology volumes as
Open Access books.

Istanbul, Turkey A. Ansal


Contents

1 Supershear Earthquake Ruptures – Theory,
Methods, Laboratory Experiments and Fault Superhighways:
An Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Shamita Das
2 Civil Protection Achievements and Critical Issues
in Seismology and Earthquake Engineering Research . . . . . . . . . . 21
Mauro Dolce and Daniela Di Bucci
3 Earthquake Risk Assessment: Certitudes, Fallacies,
Uncertainties and the Quest for Soundness . . . . . . . . . . . . . . . . . . . 59
Kyriazis Pitilakis
4 Variability and Uncertainty in Empirical Ground-Motion
Prediction for Probabilistic Hazard and Risk Analyses . . . . . . . . . 97
Peter J. Stafford
5 Seismic Code Developments for Steel
and Composite Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Ahmed Y. Elghazouli
6 Seismic Analyses and Design of Foundation
Soil Structure Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Alain Pecker
7 Performance-Based Seismic Design and Assessment
of Bridges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Andreas J. Kappos
8 An Algorithm to Justify the Design of Single Story
Precast Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
H.F. Karadoğan, I.E. Bal, E. Yüksel, S. Ziya Yüce,
Y. Durgun, and C. Soydan


9 Developments in Seismic Design of Tall Buildings:
Preliminary Design of Coupled Core Wall Systems . . . . . . . . . . . . 227
M. Nuray Aydınoğlu and Eren Vuran
10 Seismic Response of Underground Lifeline Systems . . . . . . . . . . . . 245
Selçuk Toprak, Engin Nacaroğlu, and A. Cem Koç
11 Seismic Performance of Historical Masonry Structures
Through Pushover and Nonlinear Dynamic Analyses . . . . . . . . . . . 265
Sergio Lagomarsino and Serena Cattari
12 Developments in Ground Motion Predictive
Models and Accelerometric Data Archiving in the Broader
European Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Sinan Akkar and Özkan Kale
13 Towards the “Ultimate Earthquake-Proof” Building:
Development of an Integrated Low-Damage System . . . . . . . . . . . . 321
Stefano Pampanin
14 Archive of Historical Earthquake Data
for the European-Mediterranean Area . . . . . . . . . . . . . . . . . . . . . . 359
Andrea Rovida and Mario Locati
15 A Review and Some New Issues on the Theory
of the H/V Technique for Ambient Vibrations . . . . . . . . . . . . . . . . 371
Enrico Lunedei and Peter Malischewsky
16 Macroseismic Intervention Group: The Necessary
Field Observation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Christophe Sira
17 Bridging the Gap Between Nonlinear Seismology as Reality
and Earthquake Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Gheorghe Marmureanu, Carmen-Ortanza Cioflan, Alexandru
Marmureanu, Constantin Ionescu, and Elena-Florinela Manea
18 The Influence of Earthquake Magnitude on Hazard
Related to Induced Seismicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Benjamin Edwards
19 On the Origin of Mega-thrust Earthquakes . . . . . . . . . . . . . . . . . . 443
Kuvvet Atakan
Chapter 1
Supershear Earthquake Ruptures – Theory,
Methods, Laboratory Experiments and Fault
Superhighways: An Update

Shamita Das

Abstract The occurrence of earthquakes propagating at speeds not only exceeding
the shear wave speed of the medium (~3 km/s in the Earth’s crust), but even
reaching compressional wave speeds of nearly 6 km/s is now well established. In
this paper, the history of development of ideas since the early 1970s is given first.
The topic is then discussed from the point of view of theoretical modelling. A brief
description of a method for analysing seismic waveform records to obtain earth-
quake rupture speed information is given. Examples of earthquakes known to have
propagated at supershear speed are listed. Laboratory experiments in which such
speeds have been measured, both in rocks as well as on man-made materials, are
discussed. Finally, faults worldwide which have the potential to propagate for
long distances (> about 100 km) at supershear speeds are identified (“fault
superhighways”).

1.1 Introduction

Seismologists now know that one of the important parameters controlling earth-
quake damage is the fault rupture speed, and changes in this rupture speed
(Madariaga 1977, 1983). The changes in rupture speed generate high-frequency
damaging waves. Thus, the knowledge of how this rupture speed changes during
earthquakes and its maximum possible value are essential for reliable earthquake
hazard assessment. But how high this rupture speed can be has been understood
only relatively recently. In the 1950–1960s, it was believed that earthquake ruptures
could only reach the Rayleigh wave speed. This was based partly on very idealized
models of fracture mechanics, originating from results on tensile crack propagation
velocities which cannot exceed the Rayleigh wave speed and which were simply

S. Das (*)
Department of Earth Sciences, University of Oxford, Oxford OX1 3AN, UK
e-mail: [email protected]

© The Author(s) 2015


A. Ansal (ed.), Perspectives on European Earthquake Engineering and Seismology,
Geotechnical, Geological and Earthquake Engineering 39,
DOI 10.1007/978-3-319-16964-4_1

transferred to shear cracks. But more importantly, seismologists estimated the
average rupture speed for several earthquakes by studying the directivity effects
and spectra of seismic waves. The first was for the 1952 Ms ~7.6 Kern County,
California earthquake. Benioff (1955) concluded that “the progression speed is in
the neighborhood of speed of Rayleigh waves” using body wave studies. Similar
conclusions were reached for several great earthquakes, including the 1960 great
Chile earthquake (Press et al. 1961), the 1957 Mongolian earthquake
(Ben-Menahem and Toksöz 1962), the 1958 Alaska earthquake (Brune 1961,
1962; Ben-Menahem and Toksöz 1963a) and the 1952 Kamchatka earthquake
(Ben-Menahem and Toksöz 1963b) by studying directivity effects and/or spectra
of very long wave length surface waves.
In the early 1970s, Wu et al. (1972) conducted laboratory experiments on plastic
polymer, under very low normal stresses, and found supershear rupture speeds. This
was considered unrealistic for real earthquakes, both the material and the low
normal stress, and the results were ignored. Soon after, Burridge (1973) demon-
strated that faults with friction but without cohesion across the fault faces could
exceed the shear wave speed and even reach the compressional wave speed of the
medium. But since such faults are unrealistic for actual earthquakes, the results
were again not taken seriously. In the mid- to late 1970s the idea that for in-plane
shear faults with cohesion, terminal speeds not only exceeding the Rayleigh wave
speed but even as high as the compressional-wave speed were possible finally
started to be accepted, based on the work of Hamano (1974), Andrews (1976), Das
(1976), and Das and Aki (1977). Once the theoretical result was established,
scientists interpreting observations became more inclined to believe results show-
ing supershear fault rupture speeds, and at the same time the data quality and the
increase in the number of broadband seismometers worldwide, required to obtain
detailed information on fault rupture started becoming available. Thus, the theory
spurred the search for supershear earthquake ruptures.
The first earthquake for which supershear wave rupture speed was inferred was
the 1979 Imperial Valley, California earthquake which had a moment-magnitude
(Mw) of 6.5, studied by Archuleta (1984), and by Spudich and Cranswick (1984)
using strong motion accelerograms. But since the distance for which the earthquake
propagated at the high speed was not long, the idea was still not accepted univer-
sally. And then for nearly 25 years there were no further developments, perhaps
because earthquakes which attain supershear speeds are rare, and none are known to
have occurred in that period. This provided ammunition to those who resisted the idea of super-
sonic earthquake rupture speeds being possible.
Then, in the late 1990 to early 2000s, there were two major developments.
Firstly, a group at Caltech, led by Rosakis, measured earthquake speeds in the
laboratory, not only exceeding the shear wave speed (Rosakis et al. 1999; Xia
et al. 2004) but even reaching the compressional wave speed (Xia et al. 2005).
Secondly, several earthquakes with supershear wave rupture speeds actually
occurred, with one even reaching the compressional wave speed. The first of
these was the strike-slip earthquake of 1999 with Mw 7.6 in Izmit, Turkey
(Bouchon et al. 2000, 2001), with a total rupture length of ~150 km, and with the

length of the section rupturing at supershear speeds being about 45 km. This study
was based on two components of near-fault accelerograms recorded at one station
(SKR). Then two larger supershear earthquakes occurred, namely, the 2001 Mw 7.8
Kunlun, Tibet earthquake (Bouchon and Vallée 2003; Antolik et al. 2004; Robinson
et al. 2006b; Vallée et al. 2008; Walker and Shearer 2009), and the 2002 Mw 7.9
Denali, Alaska earthquake (Dunham and Archuleta 2004; Ellsworth et al. 2004;
Frankel 2004; Ozacar and Beck 2004; Walker and Shearer 2009). Both were very
long, narrow intra-plate strike-slip earthquakes, with significantly long sections of
the faults propagating at supershear speeds. At last, clear evidence of supershear
rupture speeds was available. Moreover, by analysing body wave seismograms very
carefully, Robinson et al. (2006b) showed that not only did the rupture speed
exceed the shear wave speed of the medium; it reached the compressional wave
speed, which is about 70 % higher than the shear wave speed in crustal rocks.
Once convincing examples of supershear rupture speeds started to be found,
theoretical calculations were carried out (Bernard and Baumont 2005; Dunham and
Bhat 2008) and these suggested that the resulting ground shaking can be much
higher for such rapid ruptures, due to the generation of Mach wave fronts. Such
wave fronts, analogous to the “sonic boom” from supersonic jets, are characteristic of supershear ruptures,
and their amplitudes decrease much more slowly with distance than usual spherical
waves do. Of course, much work still remains to be done in this area. Figure 1.1
shows a schematic illustrating that formulae from acoustics cannot be directly
transferred to seismology. The reason is that many regions of the fault area are
simultaneously moving at these high speeds, each point generating a Mach cone,

Fig. 1.1 Schematic representation of the leading edges of the multiple S-wave Mach cones
generated by a planar fault spreading out in many directions, along the black arrows, from the
hypocenter (star). The pink shaded region is the region of supershear rupture. The thick black
arrows show the direction of the applied tectonic stress across the x–y plane. Supershear speeds
cannot be reached in the y- direction (that is, by the Mode III or the anti-plane shear mode).
The higher the rupture speed, the narrower each cone would be. Dunham and Bhat (2008) showed
that additional Rayleigh wave Mach fronts would be generated along the Earth’s surface during
supershear earthquake ruptures

and resulting in a Mach surface. Moreover, different parts of the fault could
move at different supershear speeds, again introducing complexity into the shape
and amplitudes of the Mach surface. Finally, accounting for the heterogeneity of the
medium surrounding the fault through which these Mach fronts propagate would
further modify the Mach surface. There could be special situations where the
individual Mach fronts comprising the Mach surface could interfere to even
lower, rather than raise, the resulting ground shaking. Such studies would be of
great interest to the earthquake engineering community.
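
For orientation, the opening half-angle θ of an individual S-wave Mach cone follows the standard Mach-angle relation of wave physics (a textbook result, not one derived in the studies cited above):

\[
\sin\theta = \frac{\beta}{v_r},
\]

where β is the shear wave speed and v_r > β is the rupture speed. For a crustal β of ~3.4 km/s and a rupture propagating near the compressional wave speed, v_r ≈ 5.8 km/s, this gives θ ≈ 36°; the faster the rupture, the narrower the cone, as noted in the caption of Fig. 1.1.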

1.2 Theory

Since damaging high-frequency waves are generated when faults change speed
(Madariaga 1977, 1983), the details of how faults start from rest and move at
increasing speeds are very important. Though in-plane shear faults (primarily strike–
slip earthquakes) can not only exceed the shear wave speed of the medium, but can
even reach the compressional wave speed, steady-state (constant speed) calcula-
tions on singular cracks (with infinite stress at the fault edges) had shown that
speeds between the Rayleigh and shear wave speeds were not possible, due to the
fact that in such a case there is negative energy flux into the fault edge from the
surrounding medium, that is, such a fault would not absorb elastic strain-energy but
generate it (Broberg 1989, 1994, 1999). Theoretical studies by Andrews (1976) and
Burridge et al. (1979) using the non-singular slip-weakening model (Fig. 1.2),
introduced by Ida (1972) suggested that even for such 2-D in-plane faults which
start from rest and accelerate to some terminal velocity, such a forbidden zone does
exist.

Fig. 1.2 The linear “slip-weakening model”, relating
the fault slip to the stress at
the edge of the fault. The
region between 0 and do is
called the “break-down”
zone, where the earthquake
stress release occurs. Cruz-
Atienza and Olsen (2010)
estimated do to be ~2 m for
the 1999 Izmit, Turkey and
2002 Denali, Alaska
earthquakes
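
In symbols, the linear law sketched in Fig. 1.2 can be written in the standard form going back to Ida (1972); the symbols τ_u for the yield stress and τ_f for the final frictional stress are introduced here for illustration:

\[
\tau(\Delta u) =
\begin{cases}
\tau_u - (\tau_u - \tau_f)\,\dfrac{\Delta u}{d_o}, & 0 \le \Delta u \le d_o,\\[4pt]
\tau_f, & \Delta u > d_o,
\end{cases}
\]

so that the fault strength falls linearly from τ_u to τ_f as the slip Δu accumulates over the break-down distance d_o.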

Recent work of Bizzarri and Das (2012) showed that for the 3-D mixed in-plane
and anti-plane shear mode fault, propagating under this slip-weakening law, the
rupture front actually does pass smoothly through this forbidden zone, but very fast.
The width of the cohesive zone initially decreases, then increases as the rupture
exceeds the shear wave speed and finally again decreases as the rupture accelerates
to a speed of ~90 % of the compressional wave speed. The penetration of the
‘forbidden zone’ has very recently also been confirmed for the 2-D in-plane shear
fault for the same linear slip-weakening model by Liu et al. (2014). To reiterate, this
is important as this smooth transition from sub- to super- shear wave speeds would
reduce damage.

1.3 Seismic Data Analysis

The inverse problem of earthquake source mechanics consists of analysing
seismograms to obtain the details of the earthquake rupture process. This problem
is known to be unstable (Kostrov 1975; Olson and Apsel 1982; Kostrov and Das
1988; Das and Kostrov 1990) and requires additional constraints to stabilize it. In
order to demonstrate the basic ideas involved, we follow the formulation of Das and
Kostrov (1990, 1994) here.
By modifying the representation theorem (e.g., equation (3.2) of Aki and
Richards (1980, 2002); equation (3.2.18) of Kostrov and Das (1988)), the displace-
ment at a seismic station can be written as the convolution of the components of the
slip rate on the fault with a step-function response of the medium. Note that the
usual formulation convolves the slip with the delta function response of the
medium, but since moving the time derivative from one term of the convolution
to the other does not change the value of the integral, Das and Kostrov’s formula-
tion uses the slip rate on the fault convolved with a singular term but with a weaker
integrable singularity, making the problem mathematically more tractable and more
stable. The convolution extends over the fault area and the time over which the fault
slips. Full details can be found in Das and Kostrov (1990, 1994). The resulting
integral equation is of the first kind and known to be unstable. Thus, these authors
stabilized the equations by adding physically-based additional constraints, the most
important of these being that the slip-rate on the fault is non-negative, called the “no-
backslip constraint”. Numerical modelling of ruptures shows that this is very likely
for large earthquakes. To solve the integral equation numerically, it must be
discretized. For this, the fault area is divided into a number of rectangular cells
and the slip-rate is approximated within each cell by linear functions in time and
along strike and by a constant along dip. The time at the source is discretized by
choosing a fixed time step, and assuming that the slip rate during the time step
varies linearly with time. The Heaviside kernel is then integrated over each cell
analytically, and the integrals over the fault area and over time are replaced by
sums. The optimal size of the spatial cells and the time steps should be determined

by some synthetic tests, as discussed, for example by Das and Suhadolc (1996), Das
et al. (1996), and Sarao et al. (1998) for inversions using strong ground motion data
and by Henry et al. (2000, 2002) for teleseismic data inversions. The fault area and
the total source duration are not assigned a priori but determined as part of the
inversion process. An initial fault area is assigned based on the aftershock area and
then refined. An initial value of the finite source duration is estimated, based on the
fault size and a range of average rupture speeds, and it cannot be longer than the
longest record used. The integral equation then takes the form of a system of linear
equations A x ≈ b, where A is the kernel matrix obtained by integrating it over each
cell, each column of A corresponding to different cells and time instants of the
source duration, ordered in the same way as the vector of observed seismograms b,
and x is vector of unknown slip rates on the different cells on the fault at different
source time-steps. The no back-slip constraint then becomes x ≥ 0. In order to
reduce the number of unknowns, a very weak causality condition could be intro-
duced, for example, x’s beyond the first compressional wave from the hypocenter
could be set to 0. If desired, the seismic moment could be required to be equal to
that obtained, say, from the centroid-moment tensor (CMT) solution. With the high-
quality of broadband data now available, this constraint is not necessary and it is
found that when stations are well distributed in azimuth around the earthquake, the
seismic moment obtained by the solution is close to the CMT moment. In addition,
Das and Kostrov (1990, 1994) permitted the entire fault behind the rupture front to
slip, if the data required it, unlike studies where slipping is confined only to the
vicinity of the rupture front. If there is slippage well behind the main rupture front
in some earthquake, then this method would find it whereas others would not. Such
a case was found by Robinson et al. (2006a) for the 2001 Mw 8.4 Peru
earthquake.
Thus, the inverse problem is the solution of the linear system of equations under
one or more constraints, in which the number of equations m is equal to the total
number of samples taken from all the records involved and the number of unknowns
n is equal to the number of spatial cells on the fault times the number of time
steps at the source. Taking m > n, the linear system is overdetermined and a
solution x which provides a best fit to the observations is obtained. It is well
known that the matrix A is often ill-conditioned, which implies that the linear
system admits more than one solution, equally well fitting the observations. The
introduction of the constraints reduces the set of permissible (feasible) solutions.
Even when a unique solution does exist, there may be many other solutions that
almost satisfy the equations. Since the data used in geophysical applications often
contain experimental noise and the models used are themselves approximations to
reality, solutions almost satisfying the data are also of great interest.
Finally, for the system of equations together with the constraints to comprise a
complete mathematical problem, the exact form of what the “best fit” to observa-
tions means has to be stated. For this problem, we have to minimize the vector of
residuals, r = b − A x, and some norm of the vector r must be adopted. One may
choose to minimize the ℓ1, the ℓ2 or the ℓ∞ norm (see Tarantola 1987 for a
discussion of different norms), all three being equivalent in the sense that they tend

to zero simultaneously. Das and Kostrov (1990, 1994) used the linear programming
method to solve the linear system and minimized the ℓ1 norm subject to the
positivity constraint, using programs modified from Press et al. (1986). In various
studies, they have evaluated the other two norms of the solution to investigate how
they behave, and find that when the data is fitted well, the other two norms are also
small. A method with many similarities to that of Das and Kostrov (1990, 1994)
was developed by Hartzell and Heaton (1983). Hartzell et al. (1991) also carried out
a comprehensive study comparing the results of using different norms in the
inversion. Parker (1994) has discussed the positivity constraint in detail.
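
To make the linear-programming step concrete, the following is a minimal, hypothetical sketch (in Python, using SciPy’s linprog; the function name and the choice of SciPy are ours, and this is not the authors’ code, which was modified from Press et al. (1986)) of minimizing the ℓ1 norm of the residuals subject to the no-backslip constraint x ≥ 0. The standard trick is to introduce one auxiliary variable per data sample bounding each residual from above and below:

import numpy as np
from scipy.optimize import linprog

def l1_slip_rate_inversion(A, b):
    """Minimize ||A x - b||_1 subject to x >= 0 (no back-slip).

    Posed as the linear program
        minimize   sum(t)
        subject to A x - t <= b,  -A x - t <= -b,  x >= 0,  t >= 0,
    where t bounds the absolute residuals.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # objective: sum of t
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])           # encodes |A x - b| <= t
    b_ub = np.concatenate([b, -b])
    bounds = [(0, None)] * (n + m)                 # x >= 0 and t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], res.fun                      # slip rates, l1 misfit

Further linear constraints, such as fixing the seismic moment to the CMT value or the weak causality condition zeroing the unknowns ahead of the first compressional arrival, can be appended as extra rows or tightened bounds within the same framework.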
In order to confirm that the solution obtained is reliable, Das and Kostrov (1994)
introduced additional levels of optimization. For example, if a region with high or
low slip was found, fitting the data by lowering or raising the slip in that region was
attempted, to see if the data could still be fitted well. If it could not, then the feature was
considered robust. If high rupture speed was found in some portion of the fault, its
robustness was treated similarly. All features interpreted geophysically can be
tested in this way. Some examples can be found in Das and Kostrov (1994),
Henry et al. (2000), Henry and Das (2002), Robinson et al. (2006a, b).

1.4 A Case Study of a Supershear Earthquake

1.4.1 The 2001 Mw 7.8 Kunlun, Tibet Earthquake

This >400 km long earthquake was, at the time of its occurrence, the longest known
strike-slip earthquake, on land or underwater, since the 1906 California earthquake.
The earthquake occurred on a left-lateral fault, propagating unilaterally from west
to east, on one of the great strike-slip faults of Tibet, along which some of the
northward motion of the Indian plate under Tibet is accommodated by lateral
extrusion of the Tibetan crust. It produced surface ruptures, reported from field
observations, with displacements as high as 7–8 m (Xu et al. 2002), [initially even
larger values were estimated by Lin et al. (2002) but these were later revised down],
this large value being supported by interferometric synthetic aperture radar
(InSAR) measurements (Lasserre et al. 2005), as well as the seismic body wave
studies referred to below. Bouchon and Vallée (2003) used mainly Love waves
from regional seismograms to show that the average rupture speed was ~3.9 km/s,
exceeding the shear wave speed of the crustal rocks, and P-wave body wave studies
confirmed this (Antolik et al. 2004; Ozacar and Beck 2004). More detailed analysis
of SH body wave seismograms, using the inversion method of Das and Kostrov
(1990, 1994), showed that the rupture speed on the Kunlun fault during this
earthquake was highly variable and the rupture process consisted of three stages
(Robinson et al. 2006b). First, the rupture accelerated from rest to an average
speed of 3.3 km/s over a distance of 120 km. The rupture then propagated for
another 150 km at an apparent rupture speed exceeding the P wave speed, the

Fig. 1.3 Schematic showing the final slip distribution for the 2001 Kunlun, Tibet earthquake, with
the average rupture speeds in 3 segments marked. Relocated aftershocks for the 6 month period
following the earthquake (Robinson et al. 2006a, b) are shown as red dots, with the symbol size
scaling with earthquake magnitude. The maximum slip is ~6.95 m. The centroid-moment tensor
solution for the main shock (star denotes the epicenter, its CMT is in red) and those available for the
larger aftershocks (CMTs in black) are shown. The longitude (E) and latitude (N) are marked. The
impressive lack of aftershocks, both in number and in size, for such a large earthquake was shown
by Robinson et al. (2006b)

longest known segment propagating at such a high speed for any earthquake fault
(Fig. 1.3). Finally, the fault bifurcated and bent, the rupture front slowed down, and
came to a stop at another sharper bend, as shown in Robinson et al. (2006b). The
region of the highest rupture velocity coincided with the region of highest fault slip,
highest fault slip rate, highest stress drop (stress drop is what drives the earthquake
rupture), the longest fault slipping duration and had the greatest concentration of
aftershocks. The location of the region of the large displacement has been inde-
pendently confirmed from satellite measurements (Lasserre et al. 2005). The fault
width (in the depth direction) for this earthquake is variable, being no more than
10 km in most places and about 20 km in the region of highest slip.
Field observations, made several months later, showed a ~25 km wide region to
the south of the fault in the region of supershear rupture speed, with many off-fault
open (tensile) cracks. These open cracks are confined to the off-fault region of the
high speed portion of the fault, and were not seen off-fault of the lower rupture
speed portions of the fault, though those regions were also visited by the scientists
(Bhat et al. 2007). Theoretical results show that as the rupture moves from sub- to
super- shear speeds, large normal stresses develop in the off-fault regions close to
the fault, as the Mach front passes through. Das (2007) has suggested that obser-
vations of such off-fault open cracks could be used as an independent diagnostic
tool for identifying the occurrence of supershear rupture and it would be useful to
search for and document them in the field for large strike-slip earthquakes.
The special faulting characteristics (Bouchon et al. 2010) and the special pattern
of aftershocks for this and other supershear earthquakes (Bouchon and Karabulut
2008) have recently been noted.

1.5 Conditions Necessary for Supershear Rupture

A striking observation for the 2001 Kunlun earthquake is that the portion of the
fault where rupture propagated at supershear speeds is very long and very straight.
Bouchon et al. (2001) showed that for the 1999 Izmit, Turkey earthquake fault the
supershear eastern segment of the fault was very straight and very simple, with no
changes in fault strike, say, jogs, bends, step-overs, branching etc. Examination of
the 2002 Denali, Alaska earthquake fault shows the portion of the fault identified by
Walker and Shearer (2009) as having supershear rupture speeds is also long and
straight. The Kunlun earthquake showed that a change in fault strike direction slows
the fault down, and a large variation in strike stops the earthquake (Robinson
et al. 2006b). Based on these, we can say that necessary (though not sufficient)
conditions for supershear rupture to continue for significant distances are: (i) the
strike-slip fault must be very straight; and (ii) the longer the straight section, the more
likely is supershear speed, provided (a) fault friction is low and (b) no other impedi-
ments or barriers exist on the fault. Of course, very short local sections could
reach supershear speeds, but the resulting Mach fronts would be small and local,
and thus less damaging. It is the sustained supershear wave speed over long
distances that would create large Mach fronts.

1.6 Laboratory Experiments

Important support, and an essential tool in the understanding of supershear rupture
speeds in earthquakes, comes from laboratory experiments on fracture. As men-
tioned in the introduction, the first time supershear rupture speeds were ever
mentioned with respect to earthquakes was the experiment of Wu et al. (1972).
The pioneering work led by Rosakis at Caltech, starting in the late 1990s, finally
convinced scientists that such earthquake rupture speeds were possible. Though
these experiments were carried out on man-made material (Homalite), and the
rupture and wave fronts were photographed, they revolutionised our way of think-
ing. More recently, Passelègue et al. (2013) at the École Normale Supérieure in
Paris obtained supershear rupture speeds in laboratory experiments on rock samples
(Westerly granite). The rupture front position was obtained by analysis of acoustic
high-frequency recordings on a multistation array. This is clearly very close to the
situation in seismology, where the rupture details are obtained by seismogram
(time-series) analysis, as discussed earlier. However, in the real Earth, the earth-
quake ruptures propagate through material at higher temperatures and pressures
than those in these experiments. Future plans by the Paris group include upgrading
their equipment to first study the samples at higher pressures, and then to move
on to higher temperatures as well, a more technologically challenging problem.

1.7 Potential Supershear Earthquake Hazards

Earthquakes start from rest and need to propagate for some distance to reach their
maximum speed (Kostrov 1966). Once the maximum speed is reached, the earth-
quake could continue at this speed, provided the fault is straight, and no other
barriers exist on it, as mentioned above. Faults with many large changes in strike, or
large step-overs, would thus be less likely to reach very high rupture speeds as this
would cause rupture on such faults to repeatedly slow down, before speeding up
again, if the next segment is long enough. The distance necessary for ruptures to
propagate in order to attain supershear speeds is called the transition distance and is
currently still a topic of vigorous research and depends on many physical param-
eters of the fault, such as the fault strength to stress-drop ratio, the critical fault
length required to reach supershear speeds, etc. (Andrews 1976; Dunham 2007;
Bizzarri and Das 2012; Liu et al. 2014).
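
The strength to stress-drop ratio just mentioned is commonly expressed through the seismic S ratio; with the notation of Fig. 1.2 plus τ_0 for the initial shear stress (the threshold values quoted below are the ones commonly cited from these papers):

\[
S = \frac{\tau_u - \tau_0}{\tau_0 - \tau_f},
\]

with the supershear transition possible for 2-D in-plane rupture only for S ≲ 1.77 (Andrews 1976), the threshold dropping to S ≲ 1.19 in 3-D (Dunham 2007); the transition distance grows rapidly as S approaches these limits.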
Motivated by the observation that the rare earthquakes which propagated for
significant distances at supershear speeds occurred on very long straight segments
of faults, we examined every known major active strike-slip fault system on land
worldwide and identified those with long (>100 km) straight portions capable not
only of sustained supershear rupture speeds but having the potential to reach
compressional wave speeds over significant distances, and call them “fault super-
highways”. Detailed criteria for each fault chosen to be considered a superhighway
are discussed in Robinson et al. (2010), including when a fault segment is consid-
ered to be straight. Every fault selected, except one portion of the Red River fault
and the Dead Sea Fault, has had earthquakes of magnitude >7 on it in the last
150 years. These superhighways, listed in Table 1.3, include portions of the
1,000 km long Red River fault in China and Vietnam passing through Hanoi, the
1,050 km long San Andreas fault in California passing close to Los Angeles, Santa
Barbara and San Francisco, the 1,100 km long Chaman fault system in Pakistan
north of Karachi, the 700 km long Sagaing fault connecting the first and second
cities of Burma (Rangoon and Mandalay), the 1,600 km Great Sumatra fault, and
the 1,000 km Dead Sea fault. Of the 11 faults classified as ‘superhighways’, 9 are in
Asia and 2 in North America, with 7 located near areas of very dense population.
Based on the population distribution within 50 km of each fault superhighway,
obtained from the United Nations database for the Year 2005 (Gridded Population
of the World 2007), we find that more than 60 million people today have increased
seismic hazards due to such faults. The main aim of this work was to identify those
sections of faults where additional studies should be targeted for better understand-
ing of earthquake hazard for these regions. Figure 1.4 shows the world map, with
the locations of the superhighways marked, and the world population density.
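
As a toy illustration of the kind of screening involved (not the actual, more careful fault-specific criteria of Robinson et al. (2010)), the following Python sketch returns the length of the longest section of a digitized fault trace whose local strike stays within a given tolerance of the section’s mean strike; the function name and tolerance are hypothetical:

import numpy as np

def longest_straight_section(lon, lat, tol_deg=5.0):
    """Length (km) of the longest run of trace segments whose strike
    stays within +/- tol_deg of the run's mean strike.

    Crude flat-earth geometry; ignores azimuth wrap-around near +/-180 deg.
    """
    lon, lat = np.asarray(lon), np.asarray(lat)
    km_per_deg = 111.2
    dx = np.diff(lon) * km_per_deg * np.cos(np.deg2rad(lat[:-1]))
    dy = np.diff(lat) * km_per_deg
    strike = np.degrees(np.arctan2(dx, dy))  # segment azimuths from north
    seg_len = np.hypot(dx, dy)
    best = 0.0
    for i in range(len(strike)):
        for j in range(i, len(strike)):
            dev = strike[i:j + 1] - strike[i:j + 1].mean()
            if np.abs(dev).max() > tol_deg:
                break
            best = max(best, seg_len[i:j + 1].sum())
    return best

A trace passing such a screen over more than 100 km would merely flag a candidate; fault friction, segmentation and step-overs would still have to be assessed, as discussed above.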

Fig. 1.4 Location of earthquake superhighways worldwide, shown as green stars, numbered as in
Table 1.3. The world population (Gridded Population of the World 2007), in inhabitants per
30′ × 30′ cell, is coloured as per the key. The zigzag band has no superhighways in it

1.7.1 The Red River Fault, Vietnam/China

Since we consider this to be the most dangerous fault in the world (Robinson
et al. 2010), as well as one less well studied compared to some other faults,
particularly the San Andreas fault, it is discussed here in detail, in order to
encourage more detailed studies there. The Red River fault runs for about
1,000 km, through one of the most densely populated regions of the world, from
the south-eastern part of Tibet through Yunnan and North Vietnam to the South
China Sea. Controversy exists regarding total geological offsets, timing of initiation
and depth of the Red River fault. Many authors propose that it was a long-lasting
plate boundary (between Indochina and South China ‘blocks’) initiated ~35 Ma
ago, accommodating between 500 and 1,050 km of left-lateral offset, and extending
down into the mantle. Many others propose that it is only a crustal scale fault,
~29–22 My old. Although mylonites along the metamorphic complexes show ubiq-
uitous left-lateral shear fabrics, geodetic data confirm that recent motion has been
right-lateral. Seismic sections across the Red River delta in the Gulf of Tonkin
clearly suggest that at least offshore of Vietnam the fault is no longer active.
Although the Red River fault system is highly complex, Robinson et al. (2010)
were able to identify three sections of it as having potential for supershear rupture
(Fig. 1.5). In Vietnam, the Red River fault branches into numerous strands as it runs
through the thick sediments of the Red River delta near Hanoi. Although there is no
known record of recent major earthquakes on the main Red River fault in Vietnam
(Utsu 2002), two sub-parallel strands of this fault near Hanoi appear remarkably
straight, hence we identify two ~250 km sections here as being superhighways. The
consequences of a long supershear rupture in this area would be catastrophic. A
second, 280 km long, segment is identified in the Chuxiong Basin section of the

Fig. 1.5 Map of southeastern China, Vietnam and Myanmar showing the 700 km superhighway
of the 1,000 km long Sagaing fault, Myanmar, and the 280 and 250 km superhighways of the
800 km Red River (Honghe) fault. Known faults (Yeats et al. 1997) are shown as white lines, with
superhighways shown in black. The world population (Gridded Population of the World),
in inhabitants per 30′ × 30′ cell, is shown according to the colour key of Fig. 1.4, with
populations less than 100 people per cell shown as transparent, overlain on a digital elevation
map of the region. Locations of known large earthquakes on these faults (Table 1.3) are marked

fault, where it appears to be straight and simple. This area has a long history of
documented significant earthquakes on nearby faults (Yeats et al. 1997; Fig. 8.12 of
Yeats 2012).

1.7.2 The Sagaing Fault, Burma

The second-most dangerous superhighway in Table 1.3 is the San Andreas fault in
California but since it has been very heavily discussed in the literature we do not
discuss it here. Instead, we discuss the third-most dangerous superhighway. This
1,100 km long right-lateral strike-slip fault in Myanmar (Burma) forms the present-
day eastern plate boundary of India (Fig. 1.5). Estimates of long-term geological
offsets along the fault range from 100 to 150 km to ~450 km, motion along the
Sagaing Fault probably initiating ~22 Ma. The Sagaing fault is very continuous
between Mandalay and Rangoon, with the central 700 km (from 17° to 23° N) being
“remarkably linear” (Vigny et al. 2003). It is the longest, continuous linear strike-
slip fault identified globally. North of 23° N, the fault begins to curve slightly but it

is still possible that supershear rupture could proceed for a considerable distance.
We have identified about 700 km of this fault as having the potential for sustained
supershear rupture (Fig. 1.5). There were large earthquakes on the fault in 1839,
1929, 1931, and 1946, and two in 1930 (Yeats et al. 1997). With the cities of Rangoon
(Yangon) (population exceeding five million) and Mandalay (population
approaching one million) at, respectively, the southern and northern ends of this
straight portion, supershear earthquakes propagating either northwards or south-
wards could focus energy on these cities. In addition, the highly populated off-fault
regions would have increased vulnerability due to the passing Mach fronts, thereby
exacerbating the hazard.

1.8 Discussion

Tables 1.1 and 1.2 show that it is only in the last 2 years that we have found the first
examples of under-water earthquakes reaching supershear speeds, showing that
this is even rarer for marine earthquakes than ones on continents. Very recently, a
deep earthquake at ~650 km depth has been inferred to have had supershear speed
(Zhan et al. 2014).
Sometimes earthquakes in very different parts of the world in very different
tectonic regimes have remarkable similarities. Das (2007) has compared the 2001
Tibet earthquake and the 1906 California earthquake, the repeat of which would be
a far greater disaster, certainly in financial terms, than the 2004 Sumatra-Andaman
earthquake and tsunami! They are both vertical strike-slip faults, have similar Mw,
fault length and width, and hence similar average slip and average stress drop. The
right-lateral 1906 earthquake rupture started south of San Francisco, and propa-
gated bilaterally, both to the northwest and to the southeast. Geodetic measure-
ments showed that the largest displacements were on the segment to the north of
San Francisco, which is in agreement with results obtained by inversion of the very
few available seismograms. It has recently been suggested that this northern
segment may have reached supershear rupture speeds (Song et al. 2008). The fact
that the high fault displacement region is where the fault is very straight would
provide additional support to this, if the 1906 and the 2001 earthquakes behaved
similarly. Unfortunately, due to heavy rains and rapid rebuilding following the
1906 earthquake, no information is available on whether or not off-fault cracks
appeared in this region. The cold desert climate of Tibet had preserved the off-fault
open cracks from the 2001 earthquake, un-eroded during the winter months, till the
scientists visited in the following spring. Similar considerations deserve to be made
for other great strike-slip faults around the world, for example, along the
Himalayan-Alpine seismic belt, New Zealand, Venezuela, and others, some of
which are discussed next.

Table 1.1 Recent large strike-slip earthquakes without supershear rupture speed

Date   Location          Mw    Fault length (km)   On land or underwater   References
1989   Macquarie Ridge   8.0   200                 Underwater              Das (1992, 1993)
1998   Antarctic Ocean   8.1   140, 60a            Underwater              Henry et al. (2000)
2000   Wharton Basin     7.8   80                  Underwater              Robinson et al. (2001)
2004   Tasman Sea        8.1   160, 100a           Underwater              Robinson (2011)

a Two sub-events

Table 1.2 Strike-slip earthquakes known to have reached supershear rupture speeds

Year   Mw    Location                      Supershear segment   Type of data used      Land     Reference
                                           length (km)          to study the quake     or sea
1979   6.5   Imperial Valley, California   35                   Strong ground motion   Land     Archuleta (1984), Spudich and Cranswick (1984)
1999   7.6   Izmit, Turkey                 45                   Strong ground motion   Land     Bouchon et al. (2002)
1999   7.2   Duzce, Turkey                 40                   Strong ground motion   Land     Bouchon et al. (2001)
2001   7.8   Kunlun, Tibet                 >400                 Teleseismic            Land     Robinson et al. (2006a, b)
2002   7.9   Denali, Alaska                340                  Teleseismic            Land     Walker and Shearer (2009)
2012   8.6   N. Sumatra                    200, 400, 400        Teleseismic            Sea      Wang et al. (2012)
2013   7.5   Craig, Alaska                 100                  Teleseismic            Sea      Yue et al. (2013)

1.9 Future Necessary Investigations

There are several other faults with shorter straight segments, which may or may not
be long enough to reach supershear speeds. Although we do not identify them as
fault superhighways, they merit mention. Of these, the 1,400 km long North
Anatolian fault in Turkey is particularly noteworthy, since supershear
(though not near-compressional wave speed) rupture has actually been inferred to
have occurred on it (Bouchon et al. 2001). The fault is characterized by periods of
quiescence (of about 75–150 years) followed by a rapid succession of earthquakes,
the most famous of which is the “unzipping” of the fault starting in 1939. For the
most part the surface expression of the fault is complex, with many segments and
en-echelon faults. It seems that large earthquakes (e.g., 1939, 1943, 1944) are able
to rupture multiple segments of these faults but it is unlikely that in jumping from
one segment to another, they will be able to sustain rupture velocities in excess of

the shear wave velocity. The longest “straight, continuous” portion of the North
Anatolian Fault lies in the rupture area of the 1939 Erzincan earthquake, to the west
of its epicenter, just prior to a sharp bend of the fault trace to the south (Yeats
et al. 1997). This portion of the fault is approximately 80 km long. Additionally, the
branch which continues in the direction of Ankara (the Sungurlu fault zone) appears
to be very straight. However, the Sungurlu fault zone is characterized by very low
seismicity and is difficult to map due to its segmentation. Thus it is unlikely that
supershear rupture speeds could be maintained on this fault for a significant
distance. Since the North Anatolian fault runs close to Ankara and Istanbul, it is a
candidate for further very detailed in-depth studies.
Another noteworthy fault is the Wairarapa fault in New Zealand, which is
reported to have the largest measured coseismic strike-slip offset worldwide during
the 1855 earthquake, with an average offset of ~16 m (Rodgers and Little 2006), but
this high displacement is estimated over only 16 km of its length. Although a
~120 km long fault scarp was produced in the 1855 earthquake, the Wairarapa fault
is complex for much of its length as a series of splay faults branch off it. One
straight, continuous, portion of the fault is seen in the Southern Wairarapa valley,
but this is only ~40 km long. Thus it is less likely that this fault could sustain
supershear rupture over a considerable distance.
It is interesting to note that since the mid-1970s, when very accurate magnitudes
of earthquakes became available, no strike-slip earthquake on land appears to have
Mw >7.9 (two earthquakes in Mongolia in 1905 are supposed to have been >8, but
the magnitudes of such old earthquakes are not reliably known), even some with
rupture lengths >400 km. Yet they can produce surprisingly large damage. Perhaps
this could be explained by the multiple shock waves, carrying large ground veloc-
ities and accelerations, generated by supershear ruptures. A good example is the
1812 Caracas, Venezuela earthquake, described by John Milne (see Milne and Lee
1939), which devastated the city with more than 10,000 killed in 1 min. The
earthquake is believed to be of magnitude about 7.5, and to have occurred on the
Bocono fault, which is ~125 km away (Perez et al. 1997), but there is no known
local geological feature, such as a sedimentary basin, to amplify the motion. So one
could suggest either that the fault propagated further towards Caracas than previ-
ously believed, or reached supershear rupture speeds, or both.

1.10 Conclusions

Table 1.3 is ordered by the number of people expected to be affected by a fault
superhighway, and the list would look very different if it were ordered in financial
terms. In addition, faults in less populated areas could then become much more
important. The 2011 Christchurch, New Zealand earthquake with a Mw of only 6.1
led to the second largest insurance claim in history (Financial Times, London,
March 28, 2012). Even though no supershear rupture was involved in this, it shows
that financial losses depend on very different circumstances than simply the number

Table 1.3 Earthquake fault superhighways

     Fault system and location            Total length (km)   Segment lengths (km)a   Affected population (millions)b   Size and dates of past earthquakesc
1    Red River, Vietnam/China             1,000               280, 230, 290           25.7         7.7 (1733), 8.0 (1833)
2    San Andreas, California              1,050               160, 230                13.1         7.9 (1857), 7.9 (1906)
3    Sagaing, Burma                       1,000               700                     9.1          N.D. (1839), 7.3 (1930), 7.3 (1930), 7.6 (1931), 7.5 (1936), 7.4 (1946)
4    Great Sumatra                        1,600               100, 160, 220, 200      6.7          7.7 (1892), 7.6 (1909), 7.5 (1933), 7.4 (1943), 7.6 (1943)
5    Dead Sea, Jordan/Israel              1,000               100, 125                5.2          N.D. (1068), N.D. (1170), N.D. (1202)
6    Chaman/Herat, Pakistan/Afghanistan   1,100               170, 320, 210           2.5          N.D. (1892)
7    Luzon, Philippines                   1,600               130                     2.1          7.8 (1990)
8    Kunlun, Tibet                        1,600               270, 130, 180, 100      0.15         7.5 (1937), 7.8 (2001)
9    Altyn Tagh, Tibet                    1,200               100, 100, 150           0.062        7.6 (1932)
10   Bulnay, Mongolia                     300                 100, 200                0.020        7.8, 8.2 (1905)
11   Denali, Alaska                       1,400               130                     Negligible   7.8 (2002)

a Lengths of straight segments, identified as superhighways, listed from south to north
b Current population, in millions, within 50 km of the superhighways, this being the region
expected to be most damaged by earthquakes propagating along the superhighways
c Magnitudes of old earthquakes are surface wave magnitude or moment-magnitude, as available;
N.D. if unknown

of people affected. Another interesting example is the 2002 Denali, Alaska fault,
which intersects the Trans-Alaska pipeline. Due to extreme care in the original
construction (Pers. Comm., Lloyd Cluff), it was not damaged, but the environmen-
tal catastrophe of an oil spill in the pristine national park would have had indirect
financial consequences, the most important being that the pipeline might never
have been allowed to re-open. In many places of low population density, Govern-
ments may consider placing power plants (nuclear or otherwise), and such instal-
lations need to be built keeping in mind the possibility of supershear rupture on
nearby faults. Clearly, many other major strike-slip faults worldwide, not classed as
a superhighway yet, deserve much closer inspection with very detailed studies to
fully assess their potential to reach supershear rupture speeds.

Acknowledgements I would like to thank two distinguished colleagues, Raul Madariaga and
Michel Bouchon, for reading the manuscript and providing many useful comments, which
improved and clarified it.

Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.

References

Aki K, Richards P (1980) Quantitative seismology: theory and methods. WH Freeman and
Company, San Francisco
Aki K, Richards P (2002) Quantitative seismology: theory and methods. University Science,
Sausalito
Antolik M, Abercrombie RE, Ekström G (2004) The 14 November 2001 Kokoxili (Kunlunshan),
Tibet, earthquake: rupture transfer through a large extensional step-over. Bull Seismol Soc Am
94:1173–1194
Andrews DJ (1976) Rupture velocity of plane strain shear cracks. J Geophys Res 81:5679–5687
Archuleta R (1984) Faulting model for the 1979 Imperial Valley earthquake. J Geophys Res
89:4559–4585
Benioff H (1955) Mechanism and strain characteristics of the White Wolf fault as indicated by the
aftershock sequence, Earthquakes in Kern County, California during 1952. Bull Calif Div
Mines Geology 171:199–202
Ben-Menahem A, Toksöz MN (1962) Source mechanism from spectra of long-period seismic
surface waves 1. The Mongolian earthquake of December 4, 1957. J Geophys Res
67:1943–1955
Ben-Menahem A, Toksöz MN (1963a) Source mechanism from spectrums of long-period surface
waves: 2. The Kamchatka earthquake of November 4, 1952. J Geophys Res 68:5207–5222
Ben-Menahem A, Toksöz MN (1963b) Source mechanism from spectrums of long-period seismic
surface waves. Bull Seismol Soc Am 53:905–919
Bernard P, Baumont D (2005) Shear Mach wave characterization for kinematic fault rupture
models with constant supershear rupture velocity. Geophys J Int 162:431–447
Bhat HS, Dmowska R, King GCP, Klinger Y, Rice JR (2007) Off-fault damage patterns due to
supershear ruptures with application to the 2001 Mw 8.1 Kokoxili (Kunlun) Tibet earthquake. J
Geophys Res 112:B06301
Bizzari A, Das S (2012) Mechanics of 3-D shear cracks between Rayleigh and shaer wave speeds.
Earth Planet Sci Lett 357–358:397–404
Bouchon M, Toks€oz MN, Karabulut H, Bouin MP, Dietrich M, Aktar M, Edie M (2002) Space and
time evolution of rupture and faulting during the 199 Izmit (Turkey) earthquake. Bull Seismol
Soc Am 92:256–266
Bouchon M, Vallée M (2003) Observation of long supershear rupture during the magnitude 8.1
Kunlunshan earthquake. Science 301:824–826
Bouchon M et al (2010) Faulting characteristics of supershear earthquakes. Tectonophysics
493:244–253
Bouchon M, Karabulut H (2008) The aftershock signature of supershear earthquakes. Science
320:1323–1325
Bouchon M, Bouin MP, Karabulut H, Toks€ oz MN, Dietrich M, Rosakis AJ (2001) How fast is
rupture during an earthquake? New insights from the 1999 Turkey earthquakes. Geophys Res
Lett 28:2723–2726
18 S. Das

Bouchon M, Toksoz MN, Karabulut H, Bouin MP, Dietrich M, Aktar M, Edie M (2000) Seismic
imaging of the Izmit rupture inferred from the near-fault recordings. Geophys Res Lett
27:3013–3016
Broberg KB (1989) The near-tip field at high crack velocities. Int J Fract 39:1–13
Broberg KB (1994) Intersonic bilateral slip. Geophys J Int 119:706–714
Broberg KB (1999) Cracks and fracture. Academic, New York
Brune JN (1961) Radiation pattern of Rayleigh waves from the Southeast Alaska earthquake of
July 10, 1958. Publ Dom Observ 24:1
Brune JN (1962) Correction of initial phase measurements for the Southeast Alaska earthquake of
July 10, 1958, and for certain nuclear explosions. J Geophys Res 67:3463
Burridge R (1973) Admissible speeds for plane-strain self-similar shear crack with friction but
lacking cohesion. Geophys J Roy Astron Soc 35:439–455
Burridge R, Conn G, Freund LB (1979) The stability of a rapid Mode II shear crack with finite
cohesive traction. J Geophys Res 84:2210–2222
Cruz-Atienza VM, Olsen KB (2010) Supershear Mach-waves expose the fault breakdown slip.
Tectonophysics 493:285–296
Das S (2007) The need to study speed. Science 317:889–890
Das S (1992) Reactivation of an oceanic fracture by the Macquarie Ridge earthquake of 1989.
Nature 357:150–153
Das S (1993) The Macquarie Ridge earthquake of 1989. Geophys J Int 115:778–798
Das S (1976) A numerical study of rupture propagation and earthquake source mechanism DSc
thesis, Massachusetts Institute of Technology, Cambridge
Das S, Aki K (1977) A numerical study of two-dimensional rupture propagation. Geophys J Roy
Astron Soc 50:643–668
Das S, Kostrov BV (1994) Diversity of solutions of the problem of earthquake faulting inversion:
application to SH waves for the great 1989 Macquarie Ridge earthquake. Phys Earth Planet Int
85:293–318
Das S, Kostrov BV (1990) Inversion for slip rate history and distribution on fault with stabilizing
constraints – the 1986 Andreanof Islands earthquake. J Geophys Res 95:6899–6913
Das S, Suhadolc P (1996) On the inverse problem for earthquake rupture. The Haskell-type source
model. J Geophys Res 101:5725–5738
Das S, Suhadolc P, Kostrov BV (1996) Realistic inversions to obtain gross properties of the
earthquake faulting process. Tectonophysics 261:165–177. Special issue entitled Seismic
Source Parameters: from Microearthquakes to Large Events, ed. C. Trifu
Dunham EM (2007) Conditions governing the occurrence of supershear ruptures under slip-
weakening friction. J Geophys Res 112:B07302
Dunham EM, Archuleta RJ (2004) Evidence for a supershear transient during the 2002 Denali fault
earthquake. Bull Seismol Soc Am 94:S256–S268
Dunham EM, Bhat HS (2008) Attenuation of radiated ground motion and stresses from three-
dimensional supershear ruptures. J Geophys Res 113:B08319
Ellsworth WL, Celebi M, Evans JR, Jensen EG, Kayen R, Metz MC, Nyman DJ, Roddick JW,
Spudich P, Stephens CD (2004) Nearfield ground motion of the 2002 Denali Fault, Alaska,
earthquake recorded at Pump Station 10. Earthq Spectra 20:597–615
Frankel A (2004) Rupture process of the M7.9 Denali fault, Alaska, earthquake: subevents,
directivity, and scaling of high-frequency ground motion. Bull Seismol Soc Am 94:S234–S255
Gridded Population of the World, version 3 (GPWv3) (2007) Center for International Earth
Science Information Network (CIESIN), Columbia University; and Centro Internacional de
Agricultura Tropical (CIAT). 2005, Palisades. Available at http://sedac.ciesin.columbia.edu/
gpw
Hamano Y (1974) Dependence of rupture time history on the heterogeneous distribution of stress
and strength on the fault, (abstract). Transact Am Geophys Union 55:352
1 Supershear Earthquake Ruptures – Theory, Methods, Laboratory Experiments. . . 19

Hartzell SH, Heaton TH (1983) Inversion of strong ground motion and teleseismic waveform data
for the fault rupture history of the 1979 Imperial Valley, California, earthquake. Bull Seismol
Soc Am 73:1553–1583
Hartzell SH, Stewart GS, Mendoza C (1991) Comparison of L1 and L2 norms in a teleseismic
waveform inversion for the slip history of the Loma Prieta, California, earthquake. Bull
Seismol Soc Am 81:1518–1539
Ida Y (1972) Cohesive force across the tip of a longitudinal-shear crack and Griffith’s specific
surface energy. J Geophys Res 77:3796–3805
Henry C, Das S (2002) The Mw 8.2 February 17, 1996 Biak, Indonesia earthquake: rupture history,
aftershocks and fault plane properties. J Geophys Res 107:2312
Henry C, Das S, Woodhouse JH (2000) The great March 25, 1998 Antarctic Plate earthquake:
moment tensor and rupture history. J Geophys Res 105:16097–16119
Kostrov BV (1975) Mechanics of the tectonic earthquake focus (in Russian). Nauka, Moscow
Kostrov BV (1966) Unsteady propagation of longitudinal shear cracks. J Appl Math Mech
30:1241–1248
Kostrov BV, Das S (1988) Principles of earthquake source mechanics. Cambridge University
Press, New York
Lin A, Fu B, Guo J, Zeng Q, Dang G, He W, Zhao Y (2002) Co-seismic strike-slip and rupture
length produced by the 2001 Ms 8.1 Central Kunlun earthquake. Science 296:2015–2016
Liu C, Bizzari A, Das S (2014) Progression of spontaneous in-plane shear faults from
sub-Rayleigh up to compressional wave rupture speeds. J Geophys Res Solid Earth 119
(11):8331–8345
Lasserre C, Peltzer G, Cramp F, Klinger Y, Van der Woerd J, Tapponnier P (2005) Coseismic
deformation of the 2001 Mw ¼ 7.8 Kokoxili earthquake in Tibet, measured by synthetic
aperture radar interferometry. J Geophys Res 110:B12408
Madariaga R (1983) High-frequency radiation from dynamic earthquake fault models. Ann
Geophys 1:17–23
Madariaga R (1977) High-frequency radiation from crack (stress drop) models of earthquake
faulting. Geophys J Roy Astron Soc 51:625–651
Milne J, Lee AW (1939) Earthquakes and other earth movements. K Paul, Trench, Trubner and
Co., London
Olson AH, Apsel RJ (1982) Finite faults and inverse theory with applications to the 1979 Imperial
Valley earthquake. Bull Seismol Soc Am 72:1969–2001
Ozacar AA, Beck SL (2004) The 2002 Denali fault and 2001 Kunlun fault earthquakes: complex
rupture processes of two large strike-slip events. Bull Seismol Soc Am 94:S278–S292
Parker RL (1994) Geophysical inverse theory. Princeton University Press, Princeton
Passelègue FX, Schubnel A, Nielsen S, Bhat HS, Madariaga R (2013) From sub-Rayleigh to
supershear ruptures during stick-slip experiments on crustal rock. Science 340
(6137):1208–1211
Perez OJ, Sanz C, Lagos G (1997) Microseismicity, tectonics and seismic potential in southern
Caribbean and northern Venezuela. J Seismol 1:15–28
Press F, Ben-Menahem A, Toks€ oz MN (1961) Experimental determination of earthquake fault
length and rupture velocity. J Geophys Res 66:3471–3485
Press WH, Flannery BP, Teukolsky SA, Vetterling WT (1986) Numerical recipes: the art of
scientific computing. Cambridge University Press, New York
Robinson DP, Das S, Searle MP (2010) Earthquake fault superhighways. Tectonophysics
493:236–243
Robinson DP (2011) A rare great earthquake on an oceanic fossil fracture zone. Geophys J Int
186:1121–1134
Robinson DP, Das S, Watts AB (2006a) Earthquake rupture stalled by subducting fracture zone.
Science 312:1203–1205
Robinson DP, Brough C, Das S (2006b) The Mw 7.8 Kunlunshan earthquake: extreme rupture
speed variability and effect of fault geometry. J Geophys Res 111:B08303
20 S. Das

Robinson DP, Henry C, Das S, Woodhouse JH (2001) Simultaneous rupture along two conjugate
planes of the Wharton Basin earthquake. Science 292:1145–1148
Rodgers DW, Little TA (2006) World’s largest coseismic strike-slip offset: the 1855 rupture of the
Wairarapa Fault, New Zealand, and implications for displacement/length scaling of continental
earth-quakes. J Geophys Res 111:B12408
Rosakis AJ, Samudrala O, Coker D (1999) Cracks faster than the shear wave speed. Science
284:1337–1340
Sarao A, Das S, Suhadolc P (1998) A comprehensive study of the effect of non-uniform station
distribution on the inversion for seismic moment release history and distribution for a Haskell-
type rupture model. J Seismol 2:1–25
Spudich P, Cranswick E (1984) Direct observation of rupture propagation during the 1979
Imperial Valley earthquake using a short baseline accelerometer array. Bull Seismol Soc Am
74:2083–2114
Song SG, Beroza GC, Segall P (2008) A unified source model for the 1906 San Francisco
earthquake. Bull Seismol Soc Am 98:823–831
Tarantola A (1987) Inverse problem theory. Methods for data fitting and model parameter
estimation. Elsevier, New York
Utsu T (2002) A list of deadly earthquakes in the world (1500–2000). In: Lee WHK, Kanamori H,
Jennings PC, Kisslinger C (eds) International handbook of earthquake and engineering seis-
mology part A. Academic, New York, p 691
Vallée M, Landès M, Shapiro NM, Klinger Y (2008) The 14 November 2001 Kokoxili (Tibet)
earthquake: High-frequency seismic radiation originating from the transitions between
sub-Rayleigh and supershear rupture velocity regimes””. J Geophys Res 113:B07305
Vigny C, Socquet A, Rangin Chamot-Rooke N, Pubellier M, Bouin M-N, Bertrand G, Becker M
(2003) Present-day crustal deformation around Sagaing fault, Myanmar. J Geophys Res
108:2533
Walker KT, Shearer PM (2009) Illuminating the near-sonic rupture velocities of the intracon-
tinental Kokoxili Mw 7.8 and Denali fault Mw 7.9 strike-slip earthquakes with global P wave
back projection imaging. J Geophys Res 114:B02304
Wang D, Mori J, Uchide T (2012) Supershear rupture on multiple faults for the Mw 8.6 off
Northern Sumatra, Indonesia earthquake. Geophys Res Lett 39:L21307
Wu FT, Thomson KC, Kuenzler H (1972) Stick-slip propagation velocity and seismic source
mechanism. Bull Seismol Soc Am 62:1621–1628
Xia K, Rosakis AJ, Kanamori H (2004) Laboratory earthquakes: the sub-Rayleigh-to-supershear
transition. Science 303:1859–1861
Xia K, Rosakis AJ, Kanamori H, Rice JR (2005) Laboratory earthquakes along inhomogeneous
faults: directionality and supershear. Science 308:681–684
Xu X, Chen W, Ma W, Yu G, Chen G (2002) Surface rupture of the Kunlunshan earthquake (Ms
8.1), northern Tibet plateau, China. Seismol Res Lett 73:884–892
Yeats RS, Sieh K, Allen CR (1997) The geology of earthquakes. Oxford University Press,
New York
Yeats R (2012) Active faults of the world. Cambridge University Press, New York
Yue H, Lay T, Freymuller JT, Ding K, Rivera L, Ruppert NA, Koper KD (2013) Supershear
rupture of the 5 January 2013 Craig, Alaska (Mw 7.5) earthquake. J Geophys Res
118:5903–5919
Zhan Z, Helmberger DV, Kanamori H, Shearer PM (2014) Supershear rupture in a Mw 6.7
aftershock of the 2013 Sea of Okhotsk earthquake. Science 345:204–207
Chapter 2
Civil Protection Achievements and Critical Issues in Seismology and Earthquake Engineering Research

Mauro Dolce and Daniela Di Bucci

Abstract A great complexity characterizes the relationships between science and civil protection. Science attains advances that can allow civil protection organizations to make decisions and undertake actions more and more effectively. Provided that these advances are consolidated and shared by a large part of the scientific community, civil protection has to take them into account in its operational procedures and in its decision-making processes, and it has to do this while growing side by side with the scientific knowledge, avoiding any late pursuit.
The aim of the paper is to outline the general framework and the boundary conditions, to describe the overall model of such relationships and the current state-of-the-art, focusing on the major results achieved in Italy and on the many criticalities, with special regard to research on seismic risk.
Among the boundary conditions, the question of the different roles and responsibilities in the decision-making process will be addressed, dealing in particular with the contribution of scientists and decision-makers, among others, to risk management. In this frame, the different kinds of contributions that civil protection receives from the scientific community will be treated. Some of them are directly planned, requested and funded by civil protection. Other contributions come instead from research that the scientific community develops in other frameworks. All of them represent an added value from which civil protection wants to take advantage, but only after a necessary endorsement by a large part of the scientific community and an indispensable adaptation to civil protection use. This is fundamental in order to ensure that no decision, and no consequent action, which could in principle affect the lives and property of many citizens, is undertaken on the basis of unconsolidated, minor, or not widely shared scientific findings.

M. Dolce (*) • D. Di Bucci
Department of Civil Protection, Presidency of the Council of Ministers, Rome, Italy
e-mail: [email protected]; [email protected]


2.1 Introduction

In the last decade, within their activities at the Italian Department of Civil Protection (DPC), the authors have had the opportunity to contribute to developing the relationships between the “Civil Protection” and the “Scientific Community”, especially in the field of seismic and seismically induced risks.
During these years, the DPC has faced difficult circumstances, and not only in emergency situations, which have required strong and continuous interactions with the scientific community. As can be easily understood in theory, but much less easily in practice, the civil protection approach to seismic risk problems differs strongly from the research approach, although important synergies can arise from cooperation and reciprocal understanding. From the DPC point of view, there are many good reasons for a close connection between civil protection and research, e.g.: the opportunity to reach a scientific consensus on evaluations that imply wide uncertainties; a better management of resource allocation for risk mitigation; the possibility of making precise and rapid analyses for fast and effective emergency actions; and the optimization of resources and actions for overcoming the emergency. There are of course positive implications for the scientific community as well, such as, for instance: a clear finalization of the research activities; wider investigation perspectives, too often strictly focused on the achievement of specific academic advancements; and the ethical value of research that has direct and positive social implications (Dolce 2008).
Creating a fruitful connection between the two parties implies a continuous and dynamic adaptation to the different ways of thinking about how to solve problems. This involves different fields: first of all the language, including reciprocal and outward communication, then the timing of the response, the budget available, the right balance among the different stakeholders, the scientific consensus on the most significant achievements and, ultimately, the responsibilities.
A great complexity generally characterizes the relationships between science and civil protection. As will be shown in the following sections, science attains advances that can allow civil protection organizations to make decisions and undertake actions more and more effectively. Provided that these advances are consolidated and shared by a large part of the scientific community, civil protection has to take them into account in its operational procedures and in its decision-making processes, and it has to do this while growing side by side with the scientific knowledge, avoiding any late pursuit.
Such complexity is summarized in the scheme of Fig. 2.1, which also represents the backbone of this paper. The aim of the work presented here, indeed, is to outline the framework and the boundary conditions, to show the overall model of such relationships and to describe the current state-of-the-art, focusing on the major results achieved in Italy and on the many criticalities that still remain to be solved.
Fig. 2.1 Chart of the relationships between civil protection and science

Among the boundary conditions, the question of the different roles and responsibilities in the decision-making process will be addressed, dealing in particular with the contribution of scientists and decision-makers, among others, to risk management. In this frame, and given the specific organization of the civil protection system in Italy, which is the cradle of the experience presented here, the different kinds of contributions that civil protection receives from the scientific community will then be treated. The collection of these contributions follows different paths. Some of them are directly planned, requested and funded by civil protection, although with a different commitment for the scientific institutions or commissions involved, especially as regards their field of activity and the related duration over time (points i to iv in Fig. 2.1). Other contributions come instead from research that the scientific community develops in other frameworks: European projects, Regional funds, etc. (points v and vi in Fig. 2.1). All of them represent an added value from which civil protection certainly wants to take advantage, but only after a necessary endorsement by a large part of the scientific community and an indispensable adaptation to civil protection use. This is fundamental in order to ensure that no decision, and no consequent action, which could in principle affect the lives and property of many citizens, is undertaken on the basis of unconsolidated, minor, or not widely shared scientific findings.

Table 2.1 Points of view of scientists and decision-makers

Scientists | Decision-makers
Frequently model events that occurred in the past in order to understand their dynamics | Need well-tested models, which are able to describe events possibly occurring in the future
Follow a scientific approach to the risks that is often probabilistic, and always affected by uncertainties | In most cases are asked to make decisions that necessarily require a yes or no answer
Need a relatively long time for their work, in order to acquire more data trying to reduce uncertainties, preferring to wait rather than to be wrong | Are generally asked to give an immediate response, often balancing low occurrence probabilities versus envisaged catastrophic consequences
Exert the “art of doubt” | Need solutions
Estimate the costs to carry out their best research | Manage a pre-defined (often limited) budget

2.2 Roles and Responsibilities in the Decision-Making Process

2.2.1 Scientists and Decision-Makers in Risk Management

Scientists and decision-makers are often considered as two counterparts which dynamically interact in the decision-making process. As a matter of fact, within the civil protection system, they represent two different points of view that have to be continuously reconciled (Dolce and Di Bucci 2014), as summarized in Table 2.1.
A further complexity is noticeable, especially in civil protection activities: the roles and responsibilities of decision-makers at the different levels of the decisional process. One should discriminate between political decision-makers (PDMs) and technical decision-makers (TDMs). Moreover, PDMs operate in relation to either general risk management policies or specific scenarios. Indeed, a further and more subtle distinction could be made (Bretton 2014) between politicians and policy makers. Nevertheless, for the sake of simplicity, only three categories, i.e., scientists, PDMs, and TDMs, will be referred to hereinafter as the three main actors in the decisional chain.
There is no doubt that in many cases it can be hard to completely separate the contribution of each of them, since some feedback and interaction is often necessary. However, in every step of an ideal decision-making process, each of these actors should play a primary role, as summarized in Table 2.2.
These sophisticated links and interactions can obviously cause distortions in the roles to be played, and thus in the responsibilities to be taken. This can happen all the more if the participants in the decisional process do not, or cannot, accomplish their tasks or if, for various reasons, they go beyond the limits of their role.
Scientists, for instance, could:
– not provide fully quantitative evaluations;
– fail to supply scientific support in cost–benefit analyses;
– give undue advice concerning civil protection actions.

Table 2.2 Steps of an ideal decision-making process, and role virtually played by the different participants (columns: Scientists, PDMs, TDMs; X primary role, x occasional support)

1. definition of the acceptable level of risk according to established policy (i.e., in a probabilistic framework, of the acceptable probability of occurrence of quantitatively estimated consequences for lives and property) | x X
2. allocation of proper budget for risk mitigation | X x
3. quantitative evaluation of the risk (considering hazard, vulnerability, and exposure) | X x
4. identification of specific actions capable of reducing the risk to the acceptable level | X
5. cost-benefit evaluation of the possible risk-mitigating actions | X x
6. adoption of the most suitable technical solution, according to points 1, 4, and 5 | x x X
7. implementation of risk-mitigating actions | X

PDMs political decision-makers, TDMs technical decision-makers

PDMs could:
– decide not to establish the acceptable risk levels for the community they represent;
– prefer to state that a “zero” risk solution must be pursued, which is in fact a non-decision;
– not allocate an adequate budget for risk mitigation.
TDMs could tend (or could be forced, in emergency conditions) to make and implement decisions they are not in charge of, because of the lack of:
– scientific quantitative evaluations;
– acceptable risk statements (or the impossibility to obtain them);
– budget.
A number of examples of individuals usurping or infringing on roles not assigned to them in the decisional process are reported by Dolce and Di Bucci (2014).
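To make step 3 of Table 2.2 concrete, the quantitative evaluation of risk is commonly expressed as a convolution of hazard, vulnerability, and exposure, and it is what the budget decisions of steps 2 and 5 ultimately reason about. The following minimal sketch is illustrative only: all rates, damage ratios, values and costs are invented for the example.

```python
# Minimal sketch of a risk quantification (step 3 of Table 2.2) feeding a
# cost-benefit comparison (step 5). All numbers are invented for illustration.

# Hazard: annual occurrence rate of shaking at a few discrete PGA levels
# (a coarsely discretized hazard curve)
hazard = {0.1: 0.020, 0.2: 0.005, 0.3: 0.001}  # PGA (g) -> events/year

# Vulnerability: mean damage ratio of the exposed building stock per level
vulnerability = {0.1: 0.02, 0.2: 0.10, 0.3: 0.30}

exposure = 500e6  # total replacement value of the exposed stock (EUR)

# Expected annual loss (EAL): sum over levels of rate * damage * value
eal = sum(rate * vulnerability[pga] * exposure
          for pga, rate in hazard.items())

# Step 5: compare with a mitigation action that halves all damage ratios
eal_mitigated = sum(rate * 0.5 * vulnerability[pga] * exposure
                    for pga, rate in hazard.items())
annualized_mitigation_cost = 300_000.0  # EUR/year, invented

print(f"Expected annual loss:   {eal:,.0f} EUR")
print(f"EAL with mitigation:    {eal_mitigated:,.0f} EUR")
print(f"Annual benefit vs cost: {eal - eal_mitigated:,.0f} vs "
      f"{annualized_mitigation_cost:,.0f} EUR")
```

In this toy example the mitigation benefit just matches its annualized cost; in practice, such comparisons also depend on the acceptable risk level set in step 1.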

2.2.2 Other Actors in the Decision Process

Other actors, besides scientists and decision-makers, play an important role in risk cycle management; among them, the mass media, the judiciary, and citizens deserve special mention, because their behaviours can strongly affect the decision-making process.

Table 2.3 Pros and cons for civil protection in the mass media behaviour

Pros | Cons
Spreading knowledge about risks and their reduction in order to increase people’s awareness of risks | Distortion of information due to incompetence or to commercial or political purposes
Disseminating best practices on behaviours to be adopted both in ordinary and in emergency conditions | Accreditation of non-scientific ideas and non-expert opinions
Spreading civil protection alerts | Spreading false alarms

Dealing with the communication of civil protection matters to the public through the media, it is worth mentioning Franco Gabrielli, the Head of the Italian Department of Civil Protection since 2010. He well summarized the complexity of this issue when he affirmed that “We have the duty of communicating with citizens, but we are voiceless and invisible if we don’t pass through the «cultural mediation» of the information channels and their managers. Maybe we have neither analysed deeply enough the consequences of such mediation, nor we have learned well enough to avoid traps and to take the possible advantages” (Gabrielli 2013).
As a matter of fact, the importance of mass media (newspapers, radio, television, as well as the web and social networks) is quickly increasing in every field and, therefore, also in risk management. There is a great need for an effective collaboration between civil protection TDMs and the media. It can bring the advantages summarized in the left-hand side of Table 2.3 and, at the same time, could reduce some of the problems reported in the right-hand side of the same table, which are mostly induced by the media’s need to increase their audience for commercial purposes, or to support certain political orientations.
Two points, long established by the theories of mass communication, have to be carefully taken into account in civil protection activities. The first one deals with the “cause and effect” of communication, stating that “some kinds of communication, on some kinds of issues, brought to the attention of some kinds of people, under some kinds of conditions, have some kinds of effects” (Berelson 1948). The second one was expressed by Wilbur Schramm in 1954: “It is misleading to think of the communication process as starting somewhere and ending somewhere. It is really endless. We are little switchboard centres handling and rerouting the great endless current of information . . .” (Schramm 1954).
These two statements clearly demonstrate how impossible it is to establish a direct and unique link between the original message and its effects on the audience’s mind, given the complex process leading to those effects. It is of paramount importance to account for this complexity in the communication of civil protection issues, if definite effects are expected or wanted.
Concerning the judiciary, the question is multifaceted, also depending on the legal framework of each country. In general, the magistrates’ action is strictly related to the roles and specific responsibilities of the various actors in risk management. After the 2009 L’Aquila earthquake and the following legal controversies (original documents, along with comments, can be found in the following blogs: http://processoaquila.wordpress.com/, http://terremotiegrandirischi.com/ and http://eagris2014.com/), a lively discussion has opened worldwide on this theme, which has been addressed in international conferences and workshops (e.g., AGU Fall Meeting 2012; Gasparini 2013, in the Goldschmidt Conference; 2nd ECEES, Special Session “Communication of risk and uncertainty to the general public”; workshop “Who evaluates, who decides, who judges”, 2011, http://www.protezionecivile.gov.it/resources/cms/documents/locandina_incontro_di_studio.pdf; workshop “Civil protection in the society of risk: procedures, guarantees, responsibilities”, 2013, http://www.cimafoundation.org/convegno-nazionale-2013/), as well as in books and peer-reviewed papers (e.g., DPC and CIMA Ed. 2013, 2014; Alexander 2014a, b; Gabrielli and Di Bucci 2014; Mucciarelli 2014). Due to the international importance of this issue, the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD) promoted an activity, involving senior science policy officials of the OECD member countries, to study “the quality of scientific policy advice for governments and consequences on the role and responsibility of scientists” (http://www.oecd.org/sti/sci-tech/oecdglobalscienceforum.htm).
The experience gained so far in Italy, across many different kinds of risks, can be summarized by quoting the words of the Head of the Italian Department of Civil Protection: “. . . a significant increase of the judiciary actions after a disaster has occurred, to find the guilt in the behaviour of the catastrophe management actors. The investigation area is enlarged to the phase of prevision and of ‘prevision information management’ . . .” (Gabrielli 2013).
In this perspective, it can be easily understood that decisions of the judiciary can significantly affect the behaviour of the individual civil protection stakeholders, and hence of the system, as pointed out in the proceedings of one of the workshops mentioned above (DPC and CIMA 2013). Some passages in these proceedings provide the opinion of judges and experts of criminal law on the bias that can affect the legal interpretation, and on the possible consequences of a punishing approach (i.e., an approach which looks only for a guilty party after a catastrophic event) on the decision-making process. For instance, Renato Bricchetti, president of the Court of Lecco, states: “I realize . . . that most of the people feel the need to find a responsible, I don’t want to say a scapegoat, but to know who has to be blamed for what happened. And the mass media world amplifies this demand for justice”. Moreover, Francesco D’Alessandro, Professor of Criminal Law at the Università Cattolica of Milan, addresses the “Accusatory approach to the error: a scheme of analysis for which, in case of errors or incidents, the main effort is made to find who is the possible responsible for the event that occurred, in order to punish him. Whereas those elements of the organization that may have contributed to the adoption of a behaviour characterized by negligence, imprudence, incompetence, are left in the background.” He also affirms that: “As a consequence, even if you punish a specific person, the risk conditions and the possibility to commit the same error again still continue to persist.” Finally, D’Alessandro depicts the devastating effects of this approach on risk mitigation: “The accusatory approach . . . induces a feeling of fear in the operators of the possible punishment . . . and this keeps them from reporting on the near misses, thus impeding learning by the organization. This phenomenon . . . is characterized by a progressive, regular adoption of behaviours that are not aimed at better managing the risk, but rather at attempting to minimize the possibility to be personally involved in a future legal controversy.”
Dealing with the role played by citizens in a fully developed civil protection system, it has to be underlined that this role is fundamental both in ordinary and in emergency conditions.
On the one hand, in ordinary conditions, citizens should reduce as much as they can the risks threatening their lives and property, by:
– asking for and/or contributing to creating adequately safe conditions at their places of work, study, and entertainment;
– verifying that civil protection authorities have prepared in advance the preventive measures that must be adopted in case of catastrophic events, especially civil protection plans, of which citizens are the primary users;
– being more aware of the risks to which they are exposed, and having an adequate civil protection culture, which would allow them to adopt the aforementioned precautionary measures and to induce political representatives to carry out risk-prevention policies, through both their vote and their active involvement in local political activities.
On the other hand, in case (or in the imminence, when possible) of an event, citizens can undertake different actions, depending on the kind of risk and on the related forecasting probabilities:
– in the immediate aftermath of an event (or in case of an alert), they should follow and implement the civil protection plans (if available) and the correct behaviours learned;
– in case of very low occurrence probabilities, they should adopt individual behaviours, more or less cautious, calibrated on their own estimate of risk acceptability.
Finally, citizens can provide support to the civil protection system also by being part of volunteer organizations.

2.3 Civil Protection and Science

Two main aspects of the relationships between civil protection and science are relevant from the civil protection point of view:
– scientific advances can allow for more effective civil protection decisions and actions concerning the entire risk cycle;
– civil protection has to suitably re-shape its activities and operational procedures to include the scientific advances, as soon as they become available and robust.
In order to fully understand the problems and the possible solutions in the civil protection–science relationship, it is essential to explain what “having procedures” means for a civil protection system, and to provide an overview of the possible scientific products for civil protection use and of the organization of the Italian civil protection system.

2.3.1 Civil Protection Procedures

Civil protection operates following pre-defined procedures, which are needed on the one hand to improve its efficiency in decision-making and to rapidly undertake actions during a crisis or an emergency and, on the other hand, to make roles and responsibilities clear. As the procedures are defined quite rigidly and involve many actors, modifying them is often “uncomfortable”, especially on the basis of new scientific advancements that increase the uncertainties or do not quantify them.
The progressive updating of the procedures is made even more complex by the fact that civil protection organizations differ from country to country. A technical-scientific product/tool/study that is suitable for one country or for a given civil protection system can therefore turn out to be inadequate for another one. As a matter of fact, each civil protection organization has its own procedures, derived from the distillation of practical experiences and successive adjustments. These procedures are somehow “digested” by the civil protection personnel and officials, by the civil protection system and, sometimes, by the media and the population, thus creating complex interrelationships which are hard, and sometimes dangerous, to change abruptly.
Changing procedures is nevertheless inescapable; it can, however, be much more difficult and slow than making scientific advances and improving scientific tools.

2.3.2 Scientific Products for Civil Protection

Scientific products, i.e., any scientific result, tool or finding, by their intrinsic nature do not usually derive from an overall view of reality, but tend to emphasize some aspects while neglecting or oversimplifying others. Therefore, research findings can often turn out to be unreliable for practical applications, sometimes falsely precise, or tackling only part of a problem while leaving other important parts unsolved. To minimize this contingency, research activities finalized to civil protection aims should proceed in close cooperation with civil protection stakeholders, both in defining the objectives and products to achieve and in validating results and/or tools.

Generally speaking, science can, more or less effectively, contribute to civil protection in the following two ways:
1. with specific scientific products, explicitly requested (and generally funded) by civil protection and subjected to a wide consensus of the scientific community; the scientific results provided, although responding to civil protection needs, may still not be suitably shaped for a direct or immediate translation into civil protection procedures and actions, needing further adaptation and a pre-operational stage before their full operational utilization;
2. with scientific products made freely available by the scientific community, which typically pertain to one of the following three categories:
(i) many different findings on the same subject; as expected in these cases, in which the scientific community is still developing a theme and a conclusive result is still far from being reached, they can be (and often are) inconsistent or conflicting with each other;
(ii) totally new products “standing out from the crowd”; they are proposed by the authors as innovative/revolutionary/fundamental, and are often conveyed to the public through the media, claiming great usefulness for risk mitigation. In this way, these products can benefit from the favour of a large public that, however, does not have the expertise needed to evaluate the quality of their scientific content;
(iii) totally new and often scientifically valuable products; in any case they need to be adapted, if actually possible, to civil protection operability.
A more in-depth and articulated analysis of the different scientific products proposed for civil protection use is presented in Sect. 2.4.

2.3.3 The Italian National Civil Protection System

In Italy, civil protection is not just a single self-contained organization but a system, called the National Service of Civil Protection (SNPC), which operates following the idea that civil protection is not an administration or an authority, but rather a function that involves the entire society. Several individuals and organizations contribute with their own activities and competences to attain the general risk mitigation objectives of the SNPC.
The coordination of this complex system is entrusted to the National Department of Civil Protection, which acts on behalf of the Prime Minister. The SNPC’s mandate is the safeguarding of human life and health, property, national heritage, human settlements and the environment from all natural or manmade disasters.
All the ministries, with their national operational structures, including the Fire Brigades, Police, Army, Navy, Air Force, Carabinieri, State Forest Corps and Financial Police, as well as the Prefectures and the Regional and local civil protection organizations, contribute to SNPC actions. Public and private companies operating highways, roads and railways, electricity and telecommunications, as well as volunteer associations and individual citizens, are part of the system. The volunteer associations can have both general aims of assistance to the population and specific aims related to particular technical/professional skills (for instance, architects, engineers, geologists, medical doctors, etc.). Finally, an important strength of the SNPC is the full involvement of the scientific community, which enables timely translation of up-to-date scientific knowledge into operability and decision-making.
All kinds of natural and manmade risks are dealt with by the SNPC, including seismic, hydrogeological, flood, volcanic, forest fire, industrial and nuclear, technological, transport, supply network and environmental risks. Different kinds of engagement are envisaged, at different territorial levels, according to the local, regional or national level of the emergency to be faced and, more in general, to the civil protection activities to be carried out in ordinary conditions.

2.4 How Science Contributes to Civil Protection

Science can provide different kinds of contributions to civil protection. They can be distinguished and classified according to the type of relationship between the scientific contributors and the civil protection organizations. The main kinds of contributions can be categorized as follows:
(i) well-structured scientific activities, permanently performed by scientific institutions on behalf of civil protection organizations, which usually endow them;
(ii) finalized research activities carried out by scientific institutions, funded by civil protection organizations to provide results and products for general or specific purposes of civil protection;
(iii) advice regularly provided by permanent commissions or permanent consultants of civil protection organizations;
(iv) advice on specific topics, provided by temporary commissions established ad hoc by civil protection organizations;
(v) research activities developed in other frameworks and funded by other subjects (European projects, Regional funds, etc.), that achieve results of interest for civil protection organizations, especially when the latter are involved as end-users;
(vi) free-standing research works, producing results of potential interest for civil protection without any involvement of civil protection organizations.
Hereinafter, the above different kinds of scientific contributions are described and discussed in the light of the experience made by the DPC, devoting special attention to the criticalities observed.

2.4.1 Permanent (i) and Finalized Research Activities (ii) for Civil Protection – The Competence Centres

In Italy, there is a long-lasting tradition of interaction between civil protection and the scientific community on earthquake research topics. A first important link was developed after the 1976 Friuli earthquake and continued until 2002, with projects funded by the DPC and coordinated by the National Research Council, which gave a strong impulse to this research field, involving the whole scientific community. An even stronger integration between civil protection and research was then promoted in 2004, with a new organization of the relationships between the DPC and the scientific community, in which the “Competence Centres” play a primary role.
The Competence Centres (CC) of the DPC are scientific institutions which provide services, information, data, elaborations, and technical and scientific contributions on specific topics, to share the best practices in risk assessment and management. These centres are singled out by a decree of the Head of the DPC. The activities carried out by the CC are funded by the DPC through annual agreements, according to general multi-year understandings that establish the main lines of activity to be carried out in the reference period.
The interrelationships between the DPC and the CC are in many cases multifaceted, and their management therefore needs a unified view. With this aim, a DPC-CC joint committee has been established for each CC dealing with seismic risk. This committee, made up of an equal number of DPC and CC members (typically 3–4 representatives per part), manages in practice the relationships between the DPC and the CC. Ultimately, the job of the joint committee consists of acting as a sort of hinge, a functional linkage between the two worlds of civil protection and seismic risk science. This role, as interesting as it is uncomfortable, guarantees consistency in the management of all the activities concerned. In addition to the committee members, DPC representatives ensure the correct finalization for civil protection application of each activity/project developed by a CC and of the final products, directly interacting with the CC scientific managers of the activity/project. The DPC representatives in charge and the CC scientific managers report to their directors and to the DPC-CC joint committee on the regular development of the activities, on the possible needs that may arise and on the relevant decisions to be taken, according to the scheme shown in Fig. 2.2.
The three main CC for seismic risk are:
• INGV – the National Institute of Geophysics and Volcanology;
• ReLUIS – the National Network of University Laboratories of Earthquake Engineering;
• EUCENTRE – the European Centre for Training and Research in Earthquake Engineering.

Fig. 2.2 Scheme of the relationships management between the Italian Department of Civil Protection and a Competence Centre

INGV provides the DPC with scientific advice and products related to seismological (as well as volcanological, not addressed in the present work) issues, while EUCENTRE and ReLUIS operate in the field of earthquake engineering. Together they represent the reference scientific system on seismic risk for the DPC, and provide the most advanced scientific knowledge in Seismology and Earthquake Engineering. Moreover, these CC have the capability to produce considerable progress in the organisation of scientific information and to promote a strong finalisation of research towards products for civil protection purposes (Dolce 2008).

2.4.1.1 INGV

A 10-year agreement between the DPC and INGV (http://www.ingv.it/en/) was signed in 2012, for the period 2012–2021. It envisages three types of activities, described hereinafter with regard to earthquakes.
A-type: operational service activities.
Several different activities pertain to this type:
• seismic monitoring and 24/7 surveillance, through the National Earthquake Centre (INGV-CNT),
• implementation and maintenance of databases useful for civil protection purposes,
• preparedness and management of technical-scientific activities during emergencies,
• dissemination and training activities in coordination with the DPC.
B-type: development of operational service activities.
On the one hand, this type concerns the actions to be undertaken by the DPC and INGV in order to improve and develop the activities mentioned in the A-type description above. On the other hand, it deals with the pre-operational, and then operational, implementation of research achievements (C-type below) for civil protection. This occurs when validated scientific outcomes derived from C-type activities, or from other INGV research, have to be transformed into products that can be submitted to civil protection pre-operational, experimental testing. In case of a positive outcome, the scientific product/tool/study can then become part of a fully operational service among the A-type activities.
C-type: finalized research activities.
These consist of seismological-geological projects funded by the DPC that involve the entire scientific community.
Some examples of the above three types of activities are described in the following paragraphs.

“A-Type” Activities

According to a national law (D. Lgs. 381/99), INGV is in charge of the seismic (and volcanic) monitoring and surveillance of the Italian territory. It manages and maintains the velocimetric National Seismic Network (more than 300 stations), whose data are collected and processed at the INGV-CNT, providing the DPC with quasi-real-time information on the location and magnitude of Italian earthquakes, with the capability to detect M > 2 earthquakes over the whole Italian territory (Sardinia excluded, in relation to the negligible seismicity of this region) and M > 1 in many of the most hazardous regions (see Fig. 2.3).
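As a rough illustration of what these detection thresholds imply, earthquake frequency-magnitude statistics typically follow the Gutenberg-Richter relation log10 N = a − bM, with b close to 1 for tectonic seismicity; lowering the completeness magnitude of a network by one unit therefore increases the number of locatable events roughly tenfold. A minimal sketch (the a-value here is purely illustrative, not an estimate for Italy):

```python
# Gutenberg-Richter relation: log10 N(>=M) = a - b*M. The productivity
# a = 4.0 is purely illustrative; b ~ 1 is typical of tectonic seismicity.
a, b = 4.0, 1.0

def annual_rate(m: float) -> float:
    """Expected number of earthquakes per year with magnitude >= m."""
    return 10.0 ** (a - b * m)

print(annual_rate(2.0))                     # 100.0 events/yr with M >= 2
print(annual_rate(1.0))                     # 1000.0 events/yr with M >= 1
print(annual_rate(1.0) / annual_rate(2.0))  # 10.0: a tenfold increase
```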
Among the INGV A-type activities, the implementation and maintenance of databases that are important for civil protection applications deserve to be mentioned. For instance:
• DISS – The Database of Individual Seismogenic Sources (http://diss.rm.ingv.it/diss/; Basili et al. 2008; DISS Working Group 2010; Fig. 2.4) is, according to http://diss.rm.ingv.it/diss/UserManual-Intro.html, a “georeferenced repository of tectonic, fault and paleoseismological information; it includes individual, composite and debated seismogenic sources. Individual and composite seismogenic sources are two alternative seismic source models to choose from. They are tested against independent geophysical data to ensure the users about their level of reliability”. Each record in the Database is backed by a Commentary, a selection of Pictures and a list of References, as well as fault scarp or fold axis data when available (usually structural features with documented Late Pleistocene – Holocene activity). The Database can be accessed through a web browser or displayed on Google Earth. DISS was adopted as the reference catalogue of Italian seismogenic sources by the EU SHARE Project (see below).

Fig. 2.3 (a) Distribution of the Italian seismic network operated by INGV; and (b) example of magnitude detection threshold on March 16, 2015 (Data provided by INGV to DPC)

Fig. 2.4 DISS website (http://diss.rm.ingv.it/diss/; Basili et al. 2008; DISS Working Group 2010)

• ISIDe – The Italian Seismological Instrumental and parametric Data-basE (http://iside.rm.ingv.it/iside/standard/index.jsp; Fig. 2.5a) provides verified information on current seismicity as soon as it is available, once reviewed by the seismologists working at the INGV-CNT, along with updated information on past instrumental seismicity contained in the Italian Seismic Bulletin (Mele and Riposati 2007).

Fig. 2.5 Websites of the databases (a) ISIDe, and (b) ITACA

• ITACA – The ITalian ACcelerometric Archive (http://itaca.mi.ingv.it; Fig. 2.5b) contains about 7,500 processed three-component waveforms, generated by about 1,200 earthquakes with magnitude greater than 3. Most of the data have been recorded by the Italian Strong-motion Network (http://www.protezionecivile.gov.it/jcms/it/ran.wp), operated by the DPC, and also by the National Seismic Network, operated by INGV (http://itaca.mi.ingv.it/; Luzi et al. 2008; Pacor et al. 2011). Processed time-series and response spectra, as well as unprocessed time-series, are available from the download pages, where the parameters of interest can be set and specific events, stations, waveforms and related metadata can be retrieved (Fig. 2.6).

Fig. 2.6 (a) Waveforms extracted from the ITACA database, and (b) geographical distribution of the National Strong-Motion Network (RAN-DPC)
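As an illustration of what can be done with such records once downloaded, the sketch below computes two standard ground-motion parameters from an acceleration time series: the peak ground acceleration (PGA, the maximum absolute acceleration) and the Arias intensity, IA = π/(2g) ∫ a(t)² dt. The snippet is generic, not tied to the ITACA file formats, and a synthetic record stands in for a downloaded waveform:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

# Stand-in for a downloaded strong-motion record: acceleration in m/s^2
# sampled every 'dt' seconds (here a synthetic decaying oscillation).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = 0.3 * g * np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 2.0 * t)

# Peak ground acceleration: maximum absolute value of the record
pga = np.max(np.abs(acc))

# Arias intensity: (pi / 2g) * integral of a(t)^2 dt (trapezoidal rule)
arias = np.pi / (2.0 * g) * np.trapz(acc ** 2, dx=dt)

print(f"PGA = {pga / g:.3f} g")
print(f"I_A = {arias:.3f} m/s")
```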

“B-Type” Activities

Apart from the actions aimed at improving and developing the operational service activities (A-type), among the pre-operational and operational implementations of research achievements for civil protection, some recently implemented activities deserve to be mentioned.

CPS – Centre of Seismic Hazard

The Centre of Seismic Hazard (INGV-CPS) was established in 2013 (http://ingvcps.wordpress.com/chi-siamo/), promoted and co-funded by the DPC. In the current experimental phase, it works on three different time scales of seismic hazard: long-term, mid-term and short-term, for different possible applications.
For the long-term seismic hazard the time window is typically 50 years, under the basic hypothesis of time-independence of earthquake occurrence. Within this framework, the CPS aims at updating the seismic hazard model of Italy and the relevant maps according to the most recent advances in the international state-of-the-art, using the most up-to-date information contributing to the hazard assessment of the Italian territory.
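Under this time-independence hypothesis, earthquake occurrence is treated as a Poisson process, so a ground-motion level exceeded at a mean annual rate λ has probability P = 1 − exp(−λT) of being exceeded at least once in T years; the familiar 10 % in 50 years corresponds to a return period of about 475 years. A minimal sketch of this standard relation:

```python
import math

def exceedance_probability(annual_rate: float, years: float) -> float:
    """Poisson probability of at least one exceedance in 'years'."""
    return 1.0 - math.exp(-annual_rate * years)

def return_period(probability: float, years: float) -> float:
    """Return period (1/rate) matching a given probability in 'years'."""
    return -years / math.log(1.0 - probability)

print(exceedance_probability(1.0 / 475.0, 50.0))  # ~0.10
print(return_period(0.10, 50.0))                  # ~474.6 years
```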
For the mid-term seismic hazard the time window is typically years to tens of years, under some time-dependence hypothesis for modelling earthquake occurrence. In this case, the activities are aimed at producing and comparing time-dependent hazard models and maps, and at defining a consensus model or an ensemble model that can be useful to set up risk mitigation strategies for the near future.
For the short-term seismic hazard (also known in the international literature as Operational Earthquake Forecasting, OEF), which is modelled using time-dependent processes, the time window is typically days to months. About its possible outcomes, Jordan et al. (2014) explain: “We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; . . . OEF must provide a complete description of the seismic hazard—ground-motion exceedance probabilities as well as short-term rupture probabilities—in concert with the long-term forecasts of probabilistic seismic-hazard analysis (PSHA)”.
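The clustering that Jordan et al. (2014) refer to is commonly described by the modified Omori (Omori-Utsu) law, in which the aftershock rate decays with time t after a mainshock as n(t) = K/(t + c)^p. A minimal sketch, with K, c and p invented for illustration, shows how such a time-dependent rate translates into expected counts over the short windows that OEF deals with:

```python
# Modified Omori (Omori-Utsu) law: aftershock rate n(t) = K / (t + c)^p,
# t in days after the mainshock. K, c and p are invented for illustration.
K, c, p = 100.0, 0.05, 1.1

def expected_count(t1: float, t2: float) -> float:
    """Expected number of aftershocks between t1 and t2 days (p != 1)."""
    primitive = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return K * (primitive(t2) - primitive(t1))

# Expected counts in successive one-day windows: rapid decay with time
for day in range(5):
    print(f"day {day}-{day + 1}: {expected_count(day, day + 1):.1f} events")
```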
The CPS activities are carried out by a dedicated working group, which uses a new technological infrastructure for (i) the computation of the seismic hazard, integrating the most recent data and different models, (ii) the management of the available databases, and (iii) the representation of the hazard estimates, also through web applications. Moreover, IT tools are developed to facilitate the preparation, implementation and comparison of hazard models, according to standard formats and common procedures, in order to make fast checks of the sensitivity of the estimates. Synergies are pursued with some international activities, like the Collaboratory for the Study of Earthquake Predictability, CSEP (http://www.cseptesting.org/), and the Global Earthquake Model, GEM (http://www.globalquakemodel.org/), as well as with the Italian seismic hazard community.

CAT – Tsunami Alert Centre


The Tsunami Alert Centre (INGV-CAT) was established in 2013 in order to contribute to the Italian Tsunami Alert System (see Fig. 2.7). A Memorandum of Understanding was then signed on January 16th, 2014, between the DPC and INGV. This centre operates within the activities promoted by the Intergovernmental Coordination Group for the Tsunami Early Warning and Mitigation System in the North-Eastern Atlantic, the Mediterranean and connected seas (ICG/NEAMTWS). This group was formally established by the Intergovernmental Oceanographic Commission of UNESCO (IOC-UNESCO) through Resolution IOC-XXIII-14.
The Italian Tsunami Alert System deals with earthquake-induced tsunamis and encompasses different functions: event detection; alert transmission to the potentially involved areas and, more in general, to the entire civil protection system; preparedness for the operational response, by drawing up tsunami civil protection plans at different scales; and citizens’ education about the correct behaviour in case of an event. These functions are carried out by different subjects operating in close coordination. In particular, three public administrations are involved in this task, DPC, INGV and ISPRA (Italian Institute for Environmental Protection and Research), with the following roles:

Fig. 2.7 The Italian Tsunami Warning System (Michelini A, personal communication 2014)

• DPC has the role of Tsunami National Contact (TNC);


• INGV has the role of National Tsunami Warning Centre (NTWC); at national
scale, this corresponds to the INGV-CAT, which is part of the INGV-CNT;
• the Director of the INGV-CNT has the role of National Tsunami Warning Focal
Point (NTWFP);
• ISPRA guarantees sea level monitoring and surveillance, ensuring the
transmission to the INGV-CAT of the data acquired by its National
Mareographic Network (RMN). Since August 2013, ISPRA has been sending
sea level measurements recorded in real time to the INGV-CAT.
Since October 1st, 2014, the INGV-CAT has assumed the role of Candidate
Tsunami Watch Provider (CTWP) for the IOC/UNESCO member states in the
Mediterranean. Moreover, a DPC officer currently holds the ICG/NEAMTWS
Vice-Chair position.
The INGV-CAT will operate within the INGV earthquake operational room,
also with the mission of organizing the scientific and technological competences
that deal, for instance, with the physics and modelling of seismogenic and tsunami
sources, tsunami hazard, real-time seismology, and the related computer-science
applications. The strong connection with the INGV earthquake operational room
will allow the INGV-CAT to take advantage of the INGV experience in seismic
monitoring activities.
At present, the entire Italian Tsunami Alert System is undergoing a
pre-operational testing phase, which involves the operational structures of the
National Service of Civil Protection and representatives of the Regional authorities.
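As an aside on the event-detection function described above, tsunami warning centres typically map the first earthquake parameters to an alert level through a decision matrix. The sketch below implements a deliberately simplified, hypothetical matrix: the thresholds and level names are invented and do not reproduce the operational NEAMTWS or INGV-CAT criteria.

def alert_level(magnitude, depth_km, under_sea_or_near_coast):
    """Map basic earthquake parameters to a tsunami alert level.
    Thresholds are purely illustrative, not the operational ones."""
    if not under_sea_or_near_coast or depth_km > 100.0:
        return "information"   # deep or inland event: no tsunami expected
    if magnitude >= 7.0:
        return "watch"         # potentially basin-wide tsunami
    if magnitude >= 6.0:
        return "advisory"      # possible local tsunami
    return "information"

print(alert_level(7.2, 15.0, True))    # watch
print(alert_level(6.3, 25.0, True))    # advisory
print(alert_level(6.8, 250.0, True))   # information (too deep)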

“C-Type” Activities

DPC promotes a series of seismological projects organized in a research program
developed to achieve objectives of specific interest for civil protection in the field
of earthquakes. They are funded by DPC and managed by INGV within the framework
of a 10-year agreement between DPC and INGV (2012–2021; http://istituto.
ingv.it/l-ingv/progetti/progetti-finanziati-dal-dipartimento-di-protezione-civile-1/
Progetti%20DPC-INGV%20Convenzione%20C). These projects also involve
many universities and other research institutes, and in general are carried out with
the contribution of the national and international scientific community.
The ongoing research program is organized into three main projects, which are
currently coming to an end.
• Project S1 – Base-knowledge improvement for assessing the seismogenic poten-
tial of Italy.
This project is structured into three parts. Two of them address activities
related to geographical areas of interest (the Po Plain, and the area from
Sannio-Matese to the Calabria-Lucania border), whereas the third covers
activities of specific interest as special case studies or as applications of
innovative techniques. The project has been structured into sub-projects and
tasks. All sub-projects address regional-scale issues and specific targets within
a region, with one exception, aimed at promoting the optimization of techniques
used for earthquake geology and seismic monitoring.
• Project S2 – Constraining observations into seismic hazard
This project aims at comparing and ranking different hazard models,
according to openly shared and widely agreed validation rules, in order to select
the best “local” hazard assessment. The goal is to validate the hazard maps
against instrumental observations, combining expected shaking at bedrock with
site-specific information gathered at the local scale (a minimal sketch of such a
validation is given after this list).
• Project S3 – Short term earthquake forecasting
The basic aim of this project is the full exploitation of the huge amount of data
collected, with special attention to the detection of possible large-scale, short-term
(weeks to months) transient strain field variations that could be related to
incoming earthquakes. There are two study areas of major concern (the Po Plain
and the Southern Apennines). In particular, owing to the larger amount of
information available for the Po Plain (GPS, InSAR, piezometric data, etc.),
most activities are focused on this area.
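Returning to the validation goal of Project S2, a minimal sketch of what comparing a hazard map with instrumental observations can mean is given below, under a Poisson assumption: for a station with a record of a given length, the observed number of exceedances of a ground-motion threshold is compared with the number implied by the map's annual exceedance rate. All numbers are invented.

import math

def poisson_pvalue(observed, expected):
    """Probability of observing at least `observed` exceedances if the
    model is correct (Poisson with mean `expected`)."""
    return 1.0 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                     for k in range(observed))

annual_rate = 0.02   # map's annual rate of exceeding, say, 0.05 g (invented)
years = 35           # length of the instrumental record at the station
observed = 2         # exceedances actually recorded (invented)

expected = annual_rate * years
print(f"expected {expected:.2f} exceedances, observed {observed}")
print(f"P(N >= {observed} | model) = {poisson_pvalue(observed, expected):.3f}")
# here P ~ 0.16: the map cannot be rejected, which is the typical,
# inconclusive outcome with short instrumental records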
The total funding for the current 2-year set of seismological topics was 2 M€, 60 % of
which was devoted to the participation of universities and other scientific
institutions, while 40 % went to the research units of INGV. Several tens of research
units are involved in this program.

2.4.1.2 ReLUIS

DPC and ReLUIS (http://www.reluis.it/) signed a 5-year agreement for the
2014–2018 period. The agreement covers two main groups of activities carried out
for DPC in the field of earthquake engineering, namely technical-scientific support
and dissemination, and the development of knowledge.
In more detail, ReLUIS supports DPC in:
• post-earthquake technical emergency management;
• training and dissemination activities in earthquake engineering and seismic risk
(availability of teachers, organization of high-level courses, meetings and seminars,
technical-scientific dissemination, conferences);
• training of professionals in post-earthquake evaluations;
• dissemination campaigns to spread the civil protection culture.
As concerns the development of knowledge, themes of civil protection
interest are developed according to the following lines of activity:
• targeted research programs on earthquake engineering and seismic risk
mitigation;
• coordination with DPC, the CC and other technical-scientific bodies;
• implementation, revision and publication of manuals, guidelines and pre-normative
documents;
• assistance in drafting/revising technical norms.
The targeted research programs are in continuity with the previous projects,
which started in 2005 (Manfredi and Dolce 2009). For the 2014–2016 period,
they are organized according to the following general lines:
(i) General Themes, relevant to design, safety verifications and vulnerability
assessment of buildings and constructions (e.g., R/C and masonry buildings,
bridges, tanks, geotechnical works, dams, etc.);
(ii) Territorial Themes, aimed at improving the knowledge of the types of build-
ings and of their actual territorial distribution, in order to set up tools for the
improvement of the vulnerability and risk assessment at national/local scale;
(iii) Special Projects on specific topics (e.g. distribution networks and utilities,
provisional interventions, etc.) that are not dealt with in the General Themes,
or on across-the-board themes (e.g., near-source effects on structures, treat-
ment of uncertainties in the safety assessment of existing buildings).
Territorial Themes deserve special attention from the civil protection point of
view. Seismic risk evaluations at the national scale are currently based on data
derived from the national population census, which includes only rough data
on buildings (age, number of stories, type of structural material, i.e., R/C or
masonry). A new approach has been set up to improve such evaluations with
regard to the vulnerability and exposure components on a territorial basis,
trying to extract as much information as possible from the knowledge of local
experts (i.e., professionals and local administration officials) on building
characteristics. This approach takes advantage of the network organization of
ReLUIS, which involves more than 40 universities all over Italy. It is based on
the identification of the common structural and non-structural features of the
buildings pertaining to each district of a given municipality, a district being
characterized by good homogeneity in terms of age and main characteristics of
the building stock (Zuccaro et al. 2014).
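A sketch of how such district-level expert knowledge might be encoded and aggregated is given below; the district names, typology classes and shares are entirely hypothetical.

# Fraction of the building stock per vulnerability class (EMS-98-like
# labels) in each district, as estimated by local experts (invented data).
districts = {
    "historic_centre":   {"buildings": 800,  "shares": {"A": 0.6, "B": 0.3, "C": 0.1}},
    "postwar_expansion": {"buildings": 1500, "shares": {"B": 0.2, "C": 0.5, "D": 0.3}},
    "recent_suburb":     {"buildings": 600,  "shares": {"C": 0.3, "D": 0.7}},
}

# Aggregate to a municipal exposure model: expected number of buildings
# per vulnerability class.
exposure = {}
for d in districts.values():
    for vclass, share in d["shares"].items():
        exposure[vclass] = exposure.get(vclass, 0.0) + d["buildings"] * share

for vclass in sorted(exposure):
    print(f"class {vclass}: {exposure[vclass]:.0f} buildings")

Combined with population counts, a table of this kind is the sort of input that refines census-based vulnerability and exposure estimates.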

2.4.1.3 EUCENTRE

DPC and EUCENTRE (http://www.eucentre.it/) signed an agreement for the
2014–2016 period. Also in this case, as for ReLUIS, the agreement covers the two
main groups of earthquake engineering activities carried out for DPC, i.e.,
technical-scientific support and dissemination, and the development of knowledge.
In detail, EUCENTRE supports DPC in:
• training and dissemination;
• experimental laboratory testing on structural models, sub-assemblages and
elements;
• management of seismic databases;
• planning, preparing and managing technical-scientific activities in emergencies.
Of particular interest is the management of seismic databases, owing to the
implemented capability of producing risk and scenario evaluations. This management
is organized into the following lines of activity (see Fig. 2.8):
• Tool for System Integration (S.3.0 in Fig. 2.8)
• Seismic risk of the Italian dwelling buildings
• Seismic risk of the Italian schools (S.3.2 in Fig. 2.8)
• Management system of the post-event dwelling needs
• Seismic Risk of the Italian road system
• Seismic Risk of the Italian sea harbours (S.3.5 in Fig. 2.8)
• Seismic Risk of the Italian earth dams (S.3.6 in Fig. 2.8)
• Seismic Risk of the Italian airports
• Database of past earthquake damage to buildings
• Seismic vulnerability of the Italian tunnels
• WebGIS for the upgrading of private buildings funded by the State under Law
n. 77/2009, Art. 11
The activities devoted to the development of knowledge are related to the
following two themes: (1) Maps of seismic design actions at uniform risk, and
(2) Fragility curves and probability of damage state attainment of buildings
designed according to national codes. The latter theme encompasses the seismic
safety of masonry buildings (including the limited knowledge of the structure and
of the uncertainty sources, the improvement of procedures for the analysis and
verification of structures, and the fragility curves of masonry buildings), the
Displacement Based Design in low hazard zones and the relevant software
implementation DBDsoft, and the fragility curves of precast building structures.

Fig. 2.8 Examples of WEB-GIS applications by EUCENTRE
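As a minimal illustration of the second theme, the sketch below evaluates the lognormal fragility form commonly used for the probability of reaching or exceeding a damage state; the median and dispersion values are invented and are not EUCENTRE results.

import math

def fragility(im, median, beta):
    """Lognormal fragility: probability of reaching or exceeding a damage
    state given an intensity measure `im` (e.g., PGA in g); `median` is
    the IM at 50 % probability and `beta` the lognormal dispersion."""
    if im <= 0:
        return 0.0
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Invented parameters for two damage states of a masonry typology.
for pga in (0.05, 0.15, 0.30, 0.50):
    p_mod = fragility(pga, median=0.20, beta=0.6)
    p_col = fragility(pga, median=0.55, beta=0.7)
    print(f"PGA {pga:.2f} g: P(>=moderate) = {p_mod:.2f}, P(collapse) = {p_col:.2f}")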

2.4.2 Permanent Commissions – The Major Risks Commission

The National Commission for forecasting and prevention of Major Risks is the
highest-level connecting structure between the Italian civil protection system and
the scientific community. It is an independent scientific consultation body of DPC,
but it is not part of the Department itself. The Commission was established by Law
n. 225/1992. Its organization and functions were redefined in 2011 (DPCM
7 October 2011).
The Major Risks Commission provides advice on technical-scientific matters,
both autonomously and at the request of the Head of the Department of Civil
Protection, and may provide recommendations on how to improve capabilities for
the evaluation, forecasting and prevention of the various risks.
The Commission is structured in a Presidency Office and five sectors relevant to:
– seismic risk,
– volcanic risk,
– weather-hydrogeological, hydraulic and landslide risk,
– chemical, nuclear, industrial and transport risk,
– environmental and fire risk.
Each sector has a coordinator and ten to twelve members drawn from the whole
scientific community, including experts from the CC.
The term of office is 5 years. The Commission meets separately for each risk
sector, or in joint sessions for the analysis of interdisciplinary matters. It usually
meets once a year in plenary session and normally gathers on the DPC premises. In
order to obtain further scientific contributions, the President can also invite
external experts without voting rights.
As far as the formal communications of the Commission are concerned,
according to the current rules the results of each meeting have to be summarized
in minutes that are released to the Head of the Department of Civil Protection. In
case of specific communication needs, the same results can be further summarized
in a public statement, which represents the only official way to provide the opinions
of the Commission to the public.

2.4.3 Commissions on Specific Subjects

In the recent past, DPC turned to the advice of high-level international panels of
scientists to deal with specific and delicate questions of civil protection interest.
Two cases related to seismic risk are summarized in this section.

2.4.3.1 ICEF – International Commission on Earthquake Forecasting

The International Commission on Earthquake Forecasting was charged by DPC on
May 20th, 2009, after the April 6th, 2009, L’Aquila earthquake, to report on the
current state of knowledge of short-term prediction and forecasting of tectonic
earthquakes and to indicate guidelines for the utilization of possible forerunners of
large earthquakes to drive civil protection actions. The Commission worked for
4 months to first draft an Executive Summary, which was released on October 2nd,
2009. The final ICEF Report, including the state of the art, evaluations and findings,
was then completed and published in August 2011 (Jordan et al. 2011).
The Commission was composed of ten members from nine countries, namely:
T. H. Jordan, Chair – USA, Y.-T. Chen – China, P. Gasparini, Secretary – Italy,
R. Madariaga – France, I. Main – United Kingdom, W. Marzocchi – Italy,
G. Papadopoulos – Greece, G. Sobolev – Russia, K. Yamaoka – Japan, J. Zschau
– Germany.
The final ICEF report is organized into five sections, as follows.
I. Introduction: describes the charge to the Commission, the L’Aquila earthquake
context, and the Commission’s activities.
II. Science of Earthquake Forecasting and Prediction: summarizes the state of
knowledge in earthquake forecasting and prediction and discusses methods for
testing and validating forecasting models.
III. Status of Operational Earthquake Forecasting: reports on how governmental
agencies in China, Greece, Italy, Japan, Russia and the United States use
operational forecasting for earthquake risk management.
IV. Key Findings and Recommendations: states the Commission’s key findings
and makes specific recommendations on policies and actions that can be taken
by DPC to improve earthquake forecasting and its utilization in Italy.
V. Roadmap for Implementation: summarizes the DPC actions needed to imple-
ment the main recommendations in Italy.
Among the recommendations, it is worth mentioning the following:
Recommendation A: DPC should continue to track the scientific evolution of
probabilistic earthquake forecasting and deploy the infrastructure and expertise
needed to utilize probabilistic information for operational purposes.
Recommendation D: DPC should continue its directed research program on devel-
opment of time-independent and time-dependent forecasting models with the
objective of improving long-term seismic hazard maps that are operationally
oriented.
Recommendation G2: Quantitative and transparent protocols should be established
for decision-making that include mitigation actions with different impacts that
would be implemented if certain thresholds in earthquake probability are
exceeded.
Although the activities of the CC, especially of INGV, were already in line with
such recommendations, they have been partly redirected according to them.
Meanwhile, DPC is rethinking the delicate management of seismic sequences
in the light of the recent scientific advancements suggested by the ICEF
Commission. In fact, managing seismic sequences from a civil protection point of
view is a very complex question, due to the variety of situations and to the
difficulties in structuring well-defined procedures.
The main aspects are:
• the very low probabilities of a strong event during swarms and their communi-
cation to authorities and to citizens (and then to media). This information
competes with different kinds of predictions made available to the public, as
well known since the seventies: “In the 1976 . . . I warned that the next 10 years
were going to be difficult ones for us, with many ‘messy’ predictions to deal with
as we gradually developed a prediction capability. Certainly this has proved to
be the case, with many of the most difficult situations arising from predictions by
amateurs or self-proclaimed scientists who nevertheless gained public credibil-
ity through the news media” (Allen 1982). Although it is well known that the
strengthening of constructions remains by far the most effective way to mitigate
seismic risk, there is still a strong demand for predictions, or for any action that can
alleviate the worries and fears caused in citizens by the shaking during a seismic
sequence;
• the relatively high probabilities of strong aftershocks following a major event,
especially with regard to the management of civil protection activities after a
big earthquake, such as search and rescue, population assistance, damage assess-
ment, safety countermeasures, etc.
These points have to do with the short-term seismic hazard, and DPC is carefully
evaluating the possibility of using the related information, drawing on the
INGV-CPS evaluations. An in-depth analysis is under way among and within
different DPC sectors (Technical, Emergency, Communication, Press), also
involving the Major Risks Commission with regard to the accuracy of the
evaluation methods and other scientific issues. Some of the questions more strictly
related to civil protection issues concern the communication to the general public
and the media (about: delivering simplified or complete probabilistic information,
either regularly or only in the case of swarms or major events; evaluating how this
kind of communication could encourage private and public owners to undertake
the structural strengthening of their buildings, rather than discourage them;
communicating risk/loss forecasts rather than just hazard; educating the public, the
media and administrators to make good use of short-term hazard information), the
civil protection actions that can be effectively carried out, especially in relation to
the knowledge of the high probabilities of strong aftershocks, and the tasks and
responsibilities of information providers and of civil protection organizations.

2.4.3.2 ICHESE – International Commission on Hydrocarbon Exploration
and Seismicity in the Emilia Region

The need for an international commission to deal with ‘Hydrocarbon Exploration
and Seismicity in the Emilia Region’ was expressed by the President of the Emilia
Romagna Region after the 2012 Emilia earthquakes. Members of the commission
were five scientists, namely Peter Styles, Chair – UK, Paolo Gasparini, Secretary –
Italy, Ernst Huenges – Germany, Stanislaw Lasocki – Poland, Paolo Scandone –
Italy, and a representative of the Ministry of Economic Development – Franco
Terlizzese.
In February 2014, the Commission released a final report answering the
following questions, on the basis of the technical-scientific knowledge available
at the time:
1. Is it possible that the seismic crisis in Emilia has been triggered by the recent
research activities at the Rivara site, particularly in the case of invasive
research activities, such as deep drilling, fluid injections, etc.?
2. Is it possible that the Emilia seismic crisis has been triggered by activities for the
exploitation and utilization of reservoirs carried out in recent times in the close
neighbourhood of the seismic sequence of 2012?
While the answer to the first question was trivial, once it was verified that there
had been no field research activities at the Rivara site, the answer to the second
question was articulated as follows:
• the study does not indicate evidence that can associate the Emilia 2012 seismic
activity with the operational activities in the Spilamberto, Recovato, Minerbio
and Casaglia fields;
• it cannot be ruled out that the activities carried out in the Mirandola License area
have had a triggering effect;
• in any case, the whole Apennine orogen under the Po Plain is seismically active
and therefore it is essential that production activities are accompanied by
appropriate actions, which will help to manage the seismic risk associated with
these activities.
Apart from the specific findings, the importance of the Commission lies in
having addressed the induced/triggered seismicity issue in Italy, a research field
still to be thoroughly explored in this country. As can be easily understood,
however, this topic is not only of scientific interest, but also has an impact on
hydrocarbon E&P and gas storage activities, due to the increased awareness
of national policy makers, local authorities and the population (see, for a review
of the current activities on induced/triggered seismicity in Italy, D’Ambrogi et al. 2014).

2.4.4 Research Funded by Other Subjects

In the past, international research projects were seldom targeted at products for civil
protection use, and the stakeholders’ role, although somehow considered, was not
sufficiently emphasized. Looking at the research funding policy currently undertaken
by the European Union, a more active role is expected from the stakeholders (e.g.,
Horizon 2020, Work Programme 2014–15, 14. Secure societies; http://ec.europa.eu/
research/participants/data/ref/h2020/wp/2014_2015/main/h2020-wp1415-security_
en.pdf) and, among them, from civil protection organizations, as partners or end-user
advisors. Some good cases of EU-funded research projects targeted at results
potentially useful for civil protection can nevertheless be mentioned, also for the
previous EU Seventh Framework Programme. Three examples are discussed here,
to show how important the continuous interaction between the scientific community
and civil protection stakeholders is in achieving results that can be exploited
immediately or prospectively in practical situations, and how long the road is to a
good assimilation of scientific products or results within civil protection procedures.
A different case, not dealt with in detail here, is the GEM Programme, promoted
by the Global Science Forum (OECD). This is a global collaborative effort in
which science is applied to develop high-quality resources for the transparent
assessment of earthquake risk and to facilitate their application for risk management
around the globe (http://www.globalquakemodel.org/). DPC supported the
establishment of GEM in Pavia and currently funds the programme, representing
Italy in the Governing Board.

Fig. 2.9 General graphic layout of the concept and goals of SYNER-G (http://www.vce.at/
SYNER-G/files/project/proj-overview.html)

2.4.4.1 SYNER–G

Syner-G is an EU project developed within the Seventh Framework Programme,
Theme 6: Environment, focused on the systemic seismic vulnerability and risk
analysis of buildings, lifelines and infrastructures. It started in November 2009
and had a 3-year duration (Pitilakis et al. 2014a, b). Eleven partners from eight
European countries and three from outside Europe (namely the USA, Japan and
Turkey) participated in the project, which was coordinated by the Aristotle
University of Thessaloniki (Greece) (Fig. 2.9).
The main goals of Syner-G were (see http://www.vce.at/SYNER-G/files/project/
proj-overview.html):
• to elaborate, in the European context, appropriate fragility relationships for the
vulnerability analysis and loss estimation of all elements at risk,
• to develop social and economic vulnerability relationships for quantifying the
impact of earthquakes,
• to develop a unified methodology and tools for systemic vulnerability assess-
ment, accounting for all components exposed to seismic hazard, considering
interdependencies within a system unit and between systems,
• to validate the methodology and the proposed fragility functions in selected sites
(at urban scale) and systems, and to implement them in an appropriate open
source and unrestricted access software tool.
DPC acted as an end-user of this project, providing data and expertise; moreover,
one of the authors of the present paper was part of the advisory board. The
comments made in the end-user final report, summarized below, provide an
overview of the possible interactions and criticalities of this kind of project with
civil protection organizations. Among the positive aspects:
• the analysis of the systemic vulnerability and risk is a very complex task;
• considerable steps forward were made in Syner-G, both in questions not
dealt with before and in topics that were brought closer to application during the project;
• brilliant solutions have been proposed for the problems dealt with and sophis-
ticated models have been utilized;
• of great value is the coordination with other projects, especially with GEM.
It was however emphasized that:
• large gaps still exist between many scientific approaches and practical decision-
makers’ actions;
• the use of very sophisticated approaches and models has often required
neglecting some important factors affecting the real behaviour of some systems;
• when dealing with a specific civil protection issue, all important affecting factors
should be listed, without disregarding any of them, and their influence evaluated,
even if only roughly;
• a thorough and clear representation of results is critical for a correct understand-
ing by end-users;
• the calibration of models and results should refer to events at different scales, due
to the considerable differences in the system response and in the actions to be
undertaken;
• cases of induced technological risks should be considered as well, since nowa-
days the presence of dangerous technological situations is widespread in the
partner countries.

2.4.4.2 REAKT

REAKT – Strategies and tools for Real time Earthquake risK reducTion (http://
www.reaktproject.eu/) is likewise an EU project developed within the Seventh
Framework Programme, Theme 6: Environment. It started in September 2011
and had a 3-year duration. Twenty-three partners from nine European countries and
six from the rest of the world (namely Jamaica, Japan, Taiwan, Trinidad and
Tobago, Turkey and the USA) participated in the project, which was coordinated by
AMRA (Italy; http://www.amracenter.com/en/). Many different types of
stakeholders acted as end-users of the project, including the Italian DPC, represented
by the authors of this paper. DPC has actively cooperated, by making data
available and working on application examples.
Among the main objectives of REAKT, one deserves specific attention
for the scope of the present paper, namely: “the definition of a detailed method-
ology to support optimal decision making associated with earthquake early warning
systems (EEWS), with operational earthquake forecasting (OEF) and with real-time
vulnerability and loss assessment, in order to facilitate the end-users’ selection of
risk reduction countermeasures”.
In more detail, the attention is here focused on the EEWS and, specifically, on
the content of the first version of the “Final Report for Feasibility Study on the
Implementation of Hybrid EEW Approaches on Stations of RAN” (Picozzi
et al. 2014). During the project, an in-depth study was carried out on the possibility
of exploiting the National Strong-Motion Network RAN for EEW purposes. It is
worth noting that within the project, consistently with the purpose of the related
task, the attention was exclusively focused on the most challenging scientific
aspects, on which an excellent and exhaustive research work was carried out.
Summarising, the main outcomes of this work concern the reliability of the
real-time magnitude computation and the evaluation of the lead time, i.e., the time
needed to assess the magnitude of the impending earthquake and to deliver this
information to the site where some mitigating action has to be undertaken before
the strong shear waves arrive. This evaluation refers to the performance and
geographical distribution of the RAN network (see Fig. 2.6b), and to the performance
of the PRESTo algorithm (Satriano et al. 2010) for the fast evaluation of the
earthquake parameters. Knowledge of the lead time allows the so-called blind and
safe zones to be evaluated, where the “blind zone” is the area around the epicentre
where the information arrives after the strong shaking starts, while the “safe zone”
is the surrounding area where the information arrives before, and where the shaking
is still strong enough for the real-time mitigating action to be really useful.
However, neither the other technological and scientific requirements that must be
fulfilled have been analysed, nor have the other components necessary to make a
complete EEW system useful for risk mitigation been considered, many of which
deal with civil protection actions. This case therefore appears useful to show the
different points of view of science and civil protection, and to emphasize again
how important it is to consider all the main factors affecting a given problem – in
this case the feasibility and effectiveness of an EEWS – and to evaluate, even
roughly, their influence. To this aim, some of the comments made by DPC on the
first draft of the final report (Picozzi et al. 2014) are summarized below. The main
aspects dealt with concern the effectiveness of EEW systems for real-time risk
mitigation. The latter requires at least that:
• efficiency of all the scientific components is guaranteed,
• efficiency of all the technological components is guaranteed,
• targets and mitigation actions to be carried out are defined,
• time needed for the actions is added to the (scientific) lead time,
• end-users (including population) are educated and trained to receive messages
and act consequently and efficiently,
• costs and benefits of the actions are evaluated,
• infrastructures required for automatic actions are efficient,
• downtime is avoided in the links among elements of the EEW chain,
• responsibilities related to false and missed alarms and legal framework are well
defined.

Fig. 2.10 Different definitions of blind and safe zone from the scientific and the operational (civil
protection) points of view
A very important point, which is strictly related to the capability of an EEWS to
really mitigate risk in real time, is how to identify the so-called “blind zone”, where
no real-time mitigating action can be carried out, as the information about the
impending earthquake arrives too late, and, consequently, how to identify the “safe
zone”, where potentially some mitigating action can be undertaken (see Fig. 2.10).
Actually, defining the latter as a “safe” zone solely on the basis of the
above-mentioned scientific evaluations can be misleading, because the identification
of a “safe” zone should also account for the time needed to undertake a specific
“real-time” mitigation action, which obviously requires from some seconds to some
tens of seconds (Goltz 2002). When this time interval is also included in the
calculation of the “blind zone” radius, a considerable increase occurs, from
30–35 km to some 50–60 km. Unfortunately, this considerably reduces the
effectiveness of EEWS for Italian earthquakes, which are historically characterized
by magnitudes that have rarely exceeded 7.0. With these values, EEW applicability
in the severely damaged zones around the epicentral area is totally excluded, whereas
the zones of its potential utilization actually correspond to areas where the felt
intensity implies no or negligible structural damage.
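The order of magnitude of these figures can be reproduced with a simple kinematic sketch under strong simplifying assumptions (homogeneous wave speeds, a lumped processing and telemetry delay); the speeds, delays and action time below are illustrative, not the REAKT or PRESTo values.

VP = 6.0   # P-wave speed, km/s (illustrative)
VS = 3.5   # S-wave speed, km/s (illustrative)

def blind_zone_radius(detection_dist_km, processing_s, action_s=0.0):
    """Epicentral distance (km) within which the S waves arrive before the
    alert can be acted upon. `detection_dist_km` is the distance the P wave
    must travel to reach enough stations, `processing_s` lumps picking,
    magnitude estimation and telemetry, and `action_s` is the time needed
    to carry out the mitigation action itself."""
    alert_time = detection_dist_km / VP + processing_s + action_s
    return VS * alert_time

# "scientific" blind zone: detection and processing only
print(blind_zone_radius(30.0, 4.0))                # ~31 km
# "operational" blind zone: add ~8 s for the mitigation action (Goltz 2002)
print(blind_zone_radius(30.0, 4.0, action_s=8.0))  # ~59 km

Even this crude model shows how a few extra seconds of action time translate into tens of kilometres of additional blind zone.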
From a communication perspective, it has to be noted that spreading purely
scientific information that, though correct, neglects a comprehensive analysis
including civil protection issues could create undue expectations in stakeholders
and the general public, beyond the actual potential capabilities of an EEW based
on a regional approach in Italy.

2.4.4.3 SHARE

SHARE – Seismic Hazard Harmonization in Europe (http://www.share-eu.org/) is a
Collaborative Project in the Cooperation programme of the EU Seventh Framework
Programme. “SHARE’s main objective is to provide a community-based seismic
hazard model for the Euro-Mediterranean region with update mechanisms. The
project aims at establishing new standards in Probabilistic Seismic Hazard Assess-
ment (PSHA) practice by a close cooperation of leading European geologists,
seismologists and engineers. . . . SHARE produced more than 60 time-independent
European Seismic Hazard Maps, spanning spectral ordinates from 0 (PGA) to 10 s
and exceedance probabilities ranging from 10^-1 to 10^-4 yearly probability”.
Eighteen scientific partners from thirteen countries contributed to the project,
which started in September 2011 and had a 3-year duration. No stakeholder acted
as an end-user. The most renowned product of SHARE is the 475-year return period
PGA map of Europe, shown in Fig. 2.11, which reproduces the poster of the project,
entitled “European Seismic Hazard Map”.
In Italy, the official set of seismic hazard maps is a product of a DPC-INGV
project released in 2004 (http://esse1-gis.mi.ingv.it/). These maps were enforced in
2006 (OPCM 3519/2006) and they were included in the current Italian seismic code
in 2008 (DM 14 January 2008).
If one compares the two corresponding (475-year return period) PGA hazard
maps, as shown in Fig. 2.12, considerable differences in PGA can be observed, with
systematically greater values in the SHARE map. Such differences are typically in
the order of +0.10 g (up to 0.15–0.20 g, locally), resulting in percentage differences
reaching 50 %, even in high-hazard areas (Meletti et al. 2013). Based on this
comparison, one could infer that the national official map set is not only
“wrong”, assuming the most recent to be the “right” one, but also highly
non-conservative. Therefore, severe doubts about the correctness of the Italian
official hazard and classification maps could arise, along with general problems
of communication with the general public and the media.
From an engineering viewpoint, on the contrary, spectral accelerations are the
only quantities that enter the design procedures and are, therefore, much more
important than PGA for seismic risk mitigation. From this perspective, if one
looks at the hazard maps in terms of the spectral accelerations corresponding to
a T = 0.5 s vibration period, differences of only 0.05 g are typically detected
(Meletti et al. 2013). Being of opposite signs, these differences highlight that the
Italian official hazard model is not under-conservative, differently from what the
PGA maps would induce one to believe, and is instead acceptable from an
engineering point of view.

Fig. 2.11 Poster of the SHARE project, which reproduces the 475-year return period PGA map of
Europe (http://www.share-eu.org/sites/default/files/SHARE_Brochure_public.web_.pdf)

Fig. 2.12 Official (seismic code) PGA hazard map of Italy (a) vs. SHARE PGA hazard map
(b) for the same area, referred to 10 % probability in 50 years (Maps are taken, respectively, from:
http://zonesismiche.mi.ingv.it/mappa_ps_apr04/italia.html, and http://www.share-eu.org/sites/
default/files/SHARE_Brochure_public.web_.pdf)

2.4.5 Free Research Works

As anticipated in section 3, there is also a large number of scientific studies and
published papers independently produced by the scientific community, and
sometimes by inventors and amateurs, that could have repercussions on civil
protection activities. They are in many cases related to:
• drafting new hazard maps,
• making earthquake predictions (short- and medium-term),
• discovering new active faults (especially in built environments),
• inventing instruments that attempt a sort of earthquake early warning,
• conceiving new structural devices or building techniques,
• inventing antiseismic indoor shelters, like antiseismic boxes, rooms, cellules,
beds, etc.
There is a very large number of examples that could be mentioned here, but
anyone reading this paper can recall from his or her own experience some of the
above situations, which arise almost daily.
Without discussing the scientific value, sometimes high, of these freely available
products, it is quite clear that their integration into civil protection procedures or
decisional processes cannot be immediate. As a matter of fact, scientific debate on
new findings is intrinsic to research activity. Therefore, before a new scientific
product can be taken into consideration for civil protection purposes, not only does
it have to be published in peer-reviewed journals, but it also has to be widely and
publicly discussed and somehow “accepted” by a large part of the scientific
community (accepting that a 100 % consensus is practically impossible to reach).
After this prerequisite is fulfilled, these scientific results need to be incorporated
into the civil protection decisional chain (including a cost-benefit analysis), and in
most cases they need to be adapted and calibrated to civil protection operability.
Finally, a testing phase follows, aimed at verifying whether their use ultimately
brings an advantage in the achievement of the system goals. All these steps follow
from the fact that civil protection decisions and actions have a strong and direct
impact on society, and thus they have to be undertaken on well-grounded premises.
As one can imagine, this integration process takes time, and it can therefore
suffer from shortcuts taken, for instance, by individual scientists who promote the
immediate use of their results through the mass media and the political authorities,
at both national and local level. Whether or not the new findings are the outcome of
valuable research, when civil protection is improperly urged to promptly
acknowledge or adopt some specific new findings and to take any useful action to
mitigate risk based on them, this causes damage to the entire system. This problem
can be overcome only by increasing the awareness that scientists, the media, PDMs
and TDMs all compose pieces of the same puzzle, and that cooperation, interchange
and correct communication are the only way to attain the shared goal of a more
effective civil protection when working for risk mitigation.

2.5 Conclusion

The relationships between science and civil protection, as shown in this paper, are
very complex, but they can yield important synergies if correctly addressed. On the
one hand, scientific advances can allow for more effective civil protection decisions
and actions, although critical issues can arise for the civil protection system, which
has to suitably shape its activities and operational procedures according to these
advances. On the other hand, the scientific community can benefit from the
enlargement of the investigation perspectives, the clear targeting of applied
research activities and their positive social implications.
In the past decades, the main benefits from the civil protection-science interaction
in Italy were a general growth of interest in Seismology and Earthquake Engineering
and a general increase in the amount and scientific quality of research in these
fields. However, there was also a still inadequate targeting of the products, and
some inconsistencies in the results remained unsolved within and among the
research groups (i.e., a lack of consensus).
Progress recently achieved, following a reorganization effort that started
in 2004, encompasses:
• better structured scientific activities, targeted at civil protection purposes;
• an improved coordination among research units for the achievement of civil
protection objectives;
• the realization of ready-to-use products (e.g., tools for hazard analysis, databases
in a GIS environment, guidelines);
• a substantial increase in experimental investigations, data exchange and
comparisons within large groups, as well as the achievement of a consensus on
results, strictly intended for decisional purposes;
• a renewed cooperation in the dissemination activities aimed at increasing risk
awareness in the population;
• better structured advisory activities of permanent and special commissions.
While important progress has been registered, a further improvement in the
cooperation can still be pursued, and many problems also remain in the case of
non-structured interactions between civil protection and the scientific community.
For all the above reasons, a smart interface between civil protection and the
scientific community continues to be necessary (Di Bucci and Dolce 2011), in order
to identify suitable objectives for the research funded by DPC, able to respond to
civil protection needs and consistent with the international state of the art.
After the 2009 L’Aquila and 2012 Emilia earthquakes, the scientific partners
provided a considerable contribution to the National Service of Civil Protection in
Italy, not only with regard to the technical management of the emergency but also
to the dissemination campaigns for the population under DPC coordination.
However, an even more structured involvement of the CC is envisaged, including
in the emergency phase.
The authors strongly believe in the need and the opportunity for the two worlds,
the scientific community and civil protection, to carry on cooperating and developing
an interaction capability, focusing on those needs that are a priority for society and
implementing highly synergic relationships that favour an optimized use of the
limited resources available. Some positive examples come from the Italian
experience and have been described here, along with some of the difficulties
tackled. They deal with many different themes and are intended to show the
multiplicity and diversity of issues that have to be considered in the day-by-day
work of interconnection between civil protection and the scientific community.
These examples can help to reach a more in-depth mutual understanding between
these two worlds and provide some suggestions and ideas for the audience, national
and international, that forms the seismic risk world.

Acknowledgments The Authors are responsible for the contents of this work, which do not
necessarily reflect the position and official policy of the Italian Department of Civil Protection.

Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.

References

AGU Fall Meeting (2012) Lessons learned from the L’Aquila earthquake verdicts press conference. http://www.youtube.com/watch?v=xNK5nmDFgy8
Alexander DE (2014a) Communicating earthquake risk to the public: the trial of the “L’Aquila
Seven”. Nat Hazards. doi:10.1007/s11069-014-1062-2
Alexander DE (2014b) Reply to a comment by Franco Gabrielli and Daniela Di Bucci: “Commu-
nicating earthquake risk to the public: the trial of the ‘L’Aquila Seven”. Nat Hazards. doi:10.
1007/s11069-014-1323-0
Allen CR (1982) Earthquake prediction—1982 overview. Bull Seismol Soc Am 72(6B):S331–S335
Basili R, Valensise G, Vannoli P, Burrato P, Fracassi U, Mariano S, Tiberti MM, Boschi E (2008)
The Database of Individual Seismogenic Sources (DISS), version 3: summarizing 20 years of
research on Italy’s earthquake geology. Tectonophysics. http://dx.doi.org/10.1016/j.tecto.
2007.04.014
Berelson B (1948) Communication and public opinion. In: Schramm W (ed) Communication in
modern society. University of Illinois Press, Urbana
Bretton R (2014) The role of science within the rule of law. “Science, uncertainty and decision
making in the mitigation of natural risks”, Workshop of Cost Action IS1304 “Expert Judgment
Network: Bridging the Gap Between Scientific Uncertainty and Evidence-Based Decision
Making”. Rome, 8-9-10 Oct 2014. Oral presentation
D’Ambrogi C, Di Bucci D, Dolce D, Donda F, Ferri F, Improta L, Mucciarelli M, Panei L,
Scrocca D, Stabile TA, Vittori E (2014) Tavolo di Lavoro interistituzionale ISPRA. Rapporto
sullo stato delle conoscenze riguardo alle possibili relazioni tra attività antropiche e sismicità
indotta/innescata in Italia. ISPRA. http://www.isprambiente.gov.it/it/news/rapporto-sullo-
stato-delle-conoscenze-riguardo-alle-possibili-relazioni-tra-attivita-antropiche-e-sismicita-
indotta-innescata-in-italia. 27 June 2014
Di Bucci D, Dolce M (2011) Research projects in seismology funded by the Italian Department of
Civil Protection. DVD e Volume degli atti della “First Sino Italian Conference on: Advanced
Methodologies and Technologies in Geophysics, Geodynamics and Seismic Hazard Assess-
ment”. Pechino, 29–30 Marzo 2010, pp 43–45
Dipartimento della Protezione Civile and Fondazione CIMA (DPC and CIMA) (eds) (2013)
Protezione Civile e responsabilità nella società del rischio. Chi valuta, chi decide, chi giudica
(Civil protection and responsibilities in the risk society. Who evaluates, who decides, who
judges). ETS Editor, p 152
Dipartimento della Protezione Civile and Fondazione CIMA (DPC and CIMA) (eds) (2014) La
Protezione Civile nella società del rischio. Procedure, Garanzie, Responsabilità (Civil protec-
tion in the risk society. Procedures, guarantees, responsibilities) ETS Editor, p 92
DISS Working Group (2010) Database of Individual Seismogenic Sources (DISS), Version 3.1.1:
a compilation of potential sources for earthquakes larger than M 5.5 in Italy and surrounding
areas. http://diss.rm.ingv.it/diss/, © INGV 2010 – Istituto Nazionale di Geofisica e
Vulcanologia – All rights reserved; doi:10.6092/INGV.IT-DISS3.1.1
Dolce M (2008) Civil protection vs. earthquake engineering and seismological research, Proceed-
ing of 14th world conference on earthquake engineering, Oct 2008, Beijing, Keynote speech
Dolce M, Di Bucci D (2014) Risk management: roles and responsibilities in the decision-making
process. In: Peppoloni S, Wyss M (eds) Geoethics: ethical challenges and case studies in earth
science. Section IV: Communication with the public, officials and the media, Chapter 18.
Elsevier. Publication Date: 21 Nov 2014 | ISBN-10: 0127999353 | ISBN-13: 978–0127999357
| Edition: 1
Gabrielli F (2013) Preface in: Dipartimento della Protezione Civile and Fondazione CIMA (DPC
and CIMA) (eds), 2013. Protezione Civile e responsabilità nella società del rischio. Chi valuta,
chi decide, chi giudica (Civil protection and responsibilities in the risk society. Who evaluates,
who decides, who judges). ETS Editor, pp 3–10
Gabrielli F, Di Bucci D (2014) Comment on “communicating earthquake risk to the public: the
trial of the ‘L’Aquila Seven” by David E. Alexander. Nat Hazards. doi:10.1007/s11069-014-
1322-1. Published online: 19.07.2014
Gasparini P (2013) Natural hazards and scientific advice: interactions among scientists, decision
makers and the public. Plenary Lecture, 2013 Goldschmidt Conference (Florence, Italy).
Mineral Mag 77(5):1146
Goltz JD (2002) Introducing earthquake early warning in California: a summary of social science
and public policy issues, technical report, Governor’s Off. of Emergency Serv., Pasadena
HORIZON 2020, Work Programme 2014–2015. 14. Secure societies – protecting freedom and
security of Europe and its citizens. European Commission Decision C (2014) 4995 of 22 July
2014. http://ec.europa.eu/research/participants/data/ref/h2020/wp/2014_2015/main/h2020-
wp1415-security_en.pdf
Jordan T, Chen Y, Gasparini P, Madariaga R, Main I, Marzocchi W, Papadopoulos G, Sobolev G,
Yamaoka K, Zschau J (2011) Operational earthquake forecasting. State of knowledge and
guidelines for utilization. Ann Geophys 54(4). doi:10.4401/ag-5350
Jordan TH, Marzocchi W, Michael AJ, Gerstenberger MC (2014) Operational earthquake fore-
casting can enhance earthquake preparedness. Seismol Res Lett 85(5):955–959
Luzi L, Hailemikael S, Bindi DD, Pacor F, Mele F, Sabetta F (2008) ITACA (ITalian
ACcelerometric Archive): a web portal for the dissemination of Italian strong-motion data.
Seismol Res Lett 79(5):716–722. doi:10.1785/gssrl.79.5.716
Manfredi G, Dolce M (eds) (2009) The state of the art of earthquake engineering research in Italy:
the ReLUIS-DPC 2005–2008 Project, Doppiavoce, Napoli. http://www.reluis.it/CD/ReLUIS-
DPC/ReLUIS-DPC.htm
Mele F, Riposati D (2007) ISIDe, Italian Seismological Instrumental and parametric Data-basE.
GNGTS 2007
Meletti C, Rovida A, D’Amico V, Stucchi M (2013) Seismic hazard models for the Italian area:
“MPS04-S1” and “SHARE”, Progettazione Sismica – Vol. 5, N. 1, Anno 2014. doi:10.7414/
PS.5.1.15-25 – http://dx.medra.org/10.7414/PS.5.1.15-25
Mucciarelli M (2014) Some comments on the first degree sentence of the “L’Aquila trial”. In:
Peppoloni S, Wyss M (eds) Geoethics: ethical challenges and case studies in earth science.
Elsevier. Publication Date: 21 Nov 2014 | ISBN-10: 0127999353 | ISBN-13: 978–0127999357
| Edition: 1
Pacor F, Paolucci R, Luzi L, Sabetta F, Spinelli A, Gorini A, Marcucci S, Nicoletti M, Filippi L,
Dolce M (2011) Overview of the Italian strong motion database ITACA 1.0. Bull Earthq Eng 9
(6):1723–1739. doi:10.1007/s10518-011-9327-6, Springer Ltd, Dordrecht, The Netherlands
Picozzi M, Zollo A, Brondi P, Colombelli S, Elia L, Martino C (2014) Exploring the feasibility of a
nation-wide earthquake early warning system in Italy, First draft of the final report for the
REAKT Project
Pitilakis K, Crowley E, Kaynia A (eds) (2014a) SYNER-G: typology definition and fragility
functions for physical elements at seismic risk, vol 27, Geotechnical, geological and earth-
quake engineering. Springer Science + Business Media, Dordrecht. ISBN 978-94-007-7872-6
Pitilakis K, Franchin P, Khazai B, Wenzel H (eds) (2014b) SYNER-G: systemic seismic vulner-
ability and risk assessment of complex urban, utility, lifeline systems and critical facilities, vol
31, Geotechnical, geological and earthquake engineering. Springer Science + Business Media,
Dordrecht. ISBN 978-94-017-8835-9
Satriano C, Elia L, Martino C, Lancieri M, Zollo A, Iannaccone G (2010) PRESTo, the earthquake
early warning system for southern Italy: concepts, capabilities and future perspectives. Soil
Dyn Earthq Eng. doi:10.1016/j.soildyn.2010.06.008
Schramm W (1954) How communication works. In: Schramm W (ed) The process and effects of
mass communication. University of Illinois Press, Urbana
Zuccaro G, De Gregorio D, Dolce M, Speranza E, Moroni C (2014) Manuale per la compilazione
della scheda di 1 livello per la caratterizzazione tipologico-strutturale dei comparti urbani
costituiti da edifici ordinari (Manual for the compilation of the 1st level form to characterize
urban districts with respect to the structural types of ordinary building), preliminary draft.
ReLUIS
Chapter 3
Earthquake Risk Assessment: Certitudes,
Fallacies, Uncertainties and the Quest
for Soundness

Kyriazis Pitilakis
Department of Civil Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
e-mail: [email protected]

Abstract This paper addresses, from an engineering point of view, issues in seismic
risk assessment. It is more a discussion of current practice, emphasizing the
multiple uncertainties and weaknesses of the existing methods and approaches, which
make the final loss assessment a highly ambiguous problem. The paper is a modest
effort to demonstrate that, despite the important progress made over the last two
decades or so, the common formulation of hazard/risk based on the sequential
analyses of source (M, hypocenter), propagation (for one or a few IMs) and
consequences (losses) has probably reached its limits. It contains so many
uncertainties seriously affecting the final result, and the ways in which the different
communities involved, modellers and end users, approach the problem are so
scattered, that the seismological and engineering community should probably
re-think a new or an alternative paradigm.

3.1 Introduction

Seismic hazard and risk assessment are nowadays rather established sciences, in
particular in the probabilistic formulation of hazard. Long-term hazard/risk
assessments are the basis for the definition of long-term actions for risk mitigation.
However, several recent events have raised questions about the reliability of such
methods. The occurrence of relatively “unexpected” levels of hazard and loss
(e.g., Emilia, Christchurch, Tohoku) and the continuous increase of hazard with
time, basically due to the increase of seismic data, together with the increase of
exposure, make loss assessment a highly ambiguous problem.
Existing models present important discrepancies. Sometimes such discrepancies
are only apparent, since we do not always compare two “compatible” values. There
are several reasons for this. In general, it is usually statistically impossible to falsify
one model with only one (or too few) data points. Whatever the value of the probability
of such an event, a probability (interpreted as "expected annual frequency") value
greater than zero means that the occurrence of the event is possible, and we cannot
know how unlucky we have been. If the probability is interpreted as a "degree
of belief", it is instead in principle not testable.
based on “average” values, knowing that the standard deviations are high. This is
common practice, but this also means that such assessments should be compared to
the average over multiple events, instead of one single specific event. However, we
almost never have enough data to test long-term assessments. This is probably the
main reason why different alternative models exist.
Another important reason why significant discrepancies are expected is the fact
that we do know that many sources of uncertainties do exist in the whole chain from
hazard to risk assessment. However, are we propagating accurately all the known
uncertainties? Are we modelling the whole variability? The answer is that often it is
difficult to define “credible” limits and constraints to the natural variability (alea-
tory uncertainty). One of the consequences is that the “reasonable” assessments are
often based on “conservative” assumptions. However, conservative choices usually
imply subjectivity and statistical biases, and such biases are, at best, only partially
controlled. In engineering practice this is often the rule, but can this be generalized?
And if yes, how can it be achieved? Epistemic uncertainty usually offers a solution
to this point in order to constrain the limits of “subjective” and “reasonable” choices
in the absence of rigorous rules. In this case, epistemic uncertainties are intended as
the variability of results among different (but acceptable) models. But, are we really
capable of effectively accounting for and propagating epistemic uncertainties? In
modelling epistemic uncertainties, different alternative models are combined
together, often arbitrarily, assuming that one true model exists and, judging this
possibility, assigning a weight to each model based on the consensus on its
assumptions. Here, two questions are raised. First, is the consensus a good metric?
Are there any alternatives? How many? Second, does a “true” model exist? Can a
model be only “partially” true, as different models are covering different “ranges”
of applicability? To judge the “reliability” of one model, we should analyze its
coherence with a "target behaviour" that we want to analyze, which is a-priori
unknown and, more importantly, evolving with time. The model itself is a
simplification of the reality, based on the definition of the main degrees of freedom
that control such “target behaviour”.
In the definition of “target behaviour” and, consequently, in the selection of the
appropriate “degrees of freedom”, several key questions remain open. First, are we
capable of completely defining what the target of the hazard/risk assessments is?
What is “reasonable”? For example, we tend to use the same approach at different
spatiotemporal levels, which is probably wrong. Is the consideration of a “changing
or moving target” acceptable by the community? Furthermore, do we really explore
all the possible degrees of freedom to be accounted for? And if yes, are we able to
do it accurately considering the eternal lack of good and well-focused data? Are we
missing something? For example, in modelling fragility, several degrees of freedom
are missing or over-simplified (e.g., aging effects, poor modelling including the
absence of soil-structure interaction), while recent results show that this “degree of
freedom” may play a relevant role to assess the actual vulnerability of a structure.
More generally, the common formulation of hazard/risk is based on the sequential
analyses of source (M, hypocenter), propagation (for one or a few intensity measures)
and consequences (impact/losses). Is this approach effective, or is it just an easy
way to tackle the complexity of nature, since it keeps the different disciplines
(like geology, geophysics and structural engineering) separated? Regarding
“existing models”, several attempts are ongoing to better constrain the analyses
of epistemic uncertainties like critical re-analysis of the assessment of all the
principal factors of hazard/risk analysis or proposal of alternative modelling
approaches (e.g., Bayesian procedures instead of logic trees). All these follow the
conventional path. Is this enough? Wouldn't it be better to start criticizing the
whole model? Do we need a change of the paradigm? Or maybe better, can we think
of alternative paradigms? The general tendency is to complicate existing models, in
order to obtain new results, which we should admit are sometimes better correlated
with specific observations or example cases. Is this enough? Have we really deeply
thought that in this way we may build "new" science on unconsolidated roots?
Maybe it is time to re-think these roots, in order to evaluate their stability in space,
time and reliability.
The paper that follows is a modest effort to argue these issues, unfortunately
without offering any idea of what the new paradigm might be.

3.2 Modelling, Models and Modellers

3.2.1 Epistemology of Models

Seismic hazard and risk assessments are made with models. The biggest problem of
models is the fact that they are made by humans who have a limited knowledge of
the problem and tend to shape or use their models in ways that mirror their own
notion of what a desirable outcome would be. On the other hand, models are
generally addressed to end users with different levels of knowledge and perception
of the uncertainties involved. Figure 3.1 gives a good picture of the way that
different communities perceive “certainty”. It is called the “certainty trough”.
In the certainty trough diagram, users are presented as either under-critical or
over-critical, in contrast to producers, who have detailed understanding of the
technology’s strengths and weaknesses. Model producers or modellers are
a-priori aware of the uncertainties involved in their model. At least they should
be. For the end-user communities the situation is different. Experienced over-critical
users are generally in a better position to evaluate the accuracy of the model and its
uncertainties, while the alienated under-critical users have the tendency to follow
the “believe the brochures” concept. When this second category of end-users uses a
model, the uncertainties are generally increased.

Fig. 3.1 The certainty trough (after MacKenzie 1990)

The present discussion focuses on the models and modellers and less on the
end-users; however, the criticism will be more from the side of the end users.
All models are imperfect. Identifying model errors is difficult in the case of
simulations of complex and poorly understood systems, particularly when the
simulations extend to hundreds or thousands of years. Model uncertainties are a
function of a multiplicity of factors (degrees of freedom). Among the most impor-
tant are limited availability and quality of empirical-recorded data, the imperfect
understanding of the processes being modelled and, finally, the poor modelling
capacities. In the absence of well-constrained data, modellers often gauge any given
model’s accuracy by comparing it with other models. However, the different
models are generally based on the same set of data, equations and assumptions,
so that agreement among them may indicate very little about their realism.
A good model is based on a wise balance of observation and measurement of
accessible phenomena with informed judgment ("theory"), and not on convenience.
Modellers should be honestly aware of the uncertainties involved in their models
and of how the end users could make use of them. They should take the models
“seriously but not literally”, avoiding mixing up “qualitative realism” with “quan-
titative realism”. However, modellers typically identify the problem as users’
misuse of their model output, suggesting that the latter interpret the results too
uncritically.

3.2.2 Data: Blessing or Curse

It is widely accepted that science, technology, and knowledge in general, are
progressing with the accumulation of observations and data. However, it is equally
true that without proper judgment, solid theoretical background and focus, an
accumulation of data may obscure the problem and drive the scientist-modeller
in a wrong direction. The question is how aware of this the modeller is.

Une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.
(An accumulation of facts is no more a science than a heap of stones is a house.)
Jules Henri Poincaré

Historically, the accumulation of seismic and strong motion data has resulted in
higher seismic hazard estimates when seismic design motion is targeted. A typical example
is the increase of the design Peak Ground Acceleration (PGA) value in Greece since
1956 and the even further increase recently proposed in SHARE (Giardini
et al. 2013).
Data are used to propose models, for example Ground Motion Prediction
Equations (GMPEs), or improve existing ones. There is a profound belief that
more data lead to better models and deeper knowledge. This is not always true.
The majority of recording stations worldwide are not located after proper selection
of the site, and in most cases the knowledge of the parameters affecting the recorded
ground motion is poor and limited. Rather simple statistics and averaging, often of
heterogeneous data, is usually the way to produce "a model" but not "the model",
which should describe the truth. A typical example is the research on the "sigma"
of GMPEs. Important research efforts have been dedicated during the last two
decades to improving "sigma", but in general it refuses to be improved, except for a
few cases of very well constrained conditions. Sometimes fewer data of excellent
quality, well constrained in terms of all involved parameters, lead to better solutions
and models. This is true in both engineering seismology and earthquake engineer-
ing. An abundant mass of poorly constrained and mindlessly produced data is
actually a curse and will probably strangle an honest and brave modeller. Unfortunately,
this is often the case when one considers the whole chain from seismic hazard to
risk assessment.

3.2.3 Modeller: Sisyphus or Prometheus

A successful parameterization requires understanding of the phenomena being
parameterized, but such understanding is often lacking. For example, the influences
of seismic rupture and wave propagation patterns in complex media are poorly
known and poorly modelled.
When confronted with a limited understanding of how the seismic pattern,
engineering structures or human behaviours actually behave, modellers seek to make their
models comply with the expected earthquake generation, spatial distribution of
ground motion and structural response. The adjustments may "save appearances"
without integrating a precise understanding of the causal relationships the models are
intended to simulate.
A huge amount of research in seismic risk consists of modifying a subset of variables
in models developed elsewhere. This complicates clear-cut distinctions between users
and producers of models. And even more important: there is no in-depth criticism
of the paradigm used (basic concepts). Practically no scientist single-handedly
develops a complex risk model from the bottom up. He is closer to Sisyphus, while
sometimes he believes himself to be Prometheus.
Modellers are sometimes identified with their own models and become invested
in their projections, which in turn can reduce sensitivity to their inaccuracy. Users
are perhaps in the best position to identify model inaccuracies.
Model producers are not always willing, or not always able, to recognize
weaknesses in their own models, contrary to what is suggested by the certainty
trough. They spend a lot of time working on something, and they are really trying to
do their best at simulating what happens in the real world. It is easy to get caught up
in it and start to believe that what happens in the model must be what happens in the
real world. And often that is not true. The danger is that the modeller begins to lose
some objectivity on the response of the model and starts to believe that the model
really works like the real world and then he begins to take too seriously its response
to a change in forcing.
Modellers often “trust” their models and sometimes they have some degree of
“genuine confidence, maybe over-confidence” in their quantitative projections. It is
not simply a “calculating seduction” but a “sincere act of faith”!

3.2.4 Models: Truth or Heuristic Machines

Models should be perceived as "heuristic" and not as "truth machines". Unfortu-
nately, very often modellers – keen to preserve the authority of their models –
deliberately present and encourage interpretations of models as “truth machines”
when speaking to external audiences and end users. They “oversell” their products
because of potential funding considerations. The highest level of objectivity about a
given technology should be found among those who produced it, but this is not
always achieved.

3.3 Risk, Uncertainties and Decision-Making

Risk is uncertain by definition. The distinction between uncertainty and risk
remains of fundamental importance today. The scientific and engineering commu-
nities do not unanimously accept the origins of the concept of uncertainty in risk
studies. However, although permanently criticized, the concept subsequently evolved
into dominant models of decision making, upon which the dominant risk-based
theories of seismic risk assessment and policy-making were built.
The challenge is really important. Everything in our real world is formed and is
working with risk and uncertainty. Multiple conventions deserve great attention as
we seek to understand the preferences and strategies of economic and political
actors. Within this chaotic and complicated world, risk assessment and policy-
making is a real challenge.

Usually uncertainties (or variability) are classified in two categories: aleatory
variability and epistemic uncertainty. Aleatory variability is the natural-intrinsic
randomness in a phenomenon and a process. It is a result of our simplified
modelling of a complex process parameterized by probability density functions.
Epistemic uncertainty is considered as the scientific uncertainty in the simplified
model of the process and is characterized by alternative models. Usually it is related
to the lack of knowledge or the necessity to use simplified models to simulate the
nature or the elements at risk.
Uncertainty is also related to the perception of the model developer or the user.
Often these two distinct terms of uncertainty are familiar to the model devel-
opers but not to the users, for whom there is only one uncertainty, seen on a scale from
"low" to "high". A model developer probably believes that the two terms provide an
unambiguous terminology. However this is not the case for the community of users.
In most cases they cannot even understand it. So, they are often forced to “believe”
the scientists, who have or should have the “authority” of the “truth”. At least the
modellers should know better the limits of their model and the uncertainties
involved and communicate them to the end-users.
A common practice to account for the epistemic uncertainty is the use of
the "logic tree" approach. Using this approach to overcome the lack of knowledge
and the imperfection of the modelling is strongly based on subjectivity, regarding
the credibility of each model, which is not a rigorous scientific method. It may be
seen as a compromising method to smooth “fighting” among models and modellers.
Moreover, a typical error is to put aleatory variability on some of the branches of
the logic tree. The logic tree branches should be mainly relevant to the source
characterization, the GMPE used and furthermore to the fragility curves used for
the different structural typologies.
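To make the mechanics concrete: computationally, a logic-tree combination of hazard results is nothing more than a weighted average over branch models. The sketch below (in Python with NumPy) uses two invented branch hazard curves and invented weights purely for illustration; it is not taken from any of the cited studies.

```python
import numpy as np

# Two alternative hazard curves (annual probability of exceedance vs. PGA)
# from two hypothetical GMPE branches; curves and weights are invented.
pga = np.array([0.05, 0.10, 0.20, 0.40, 0.80])          # PGA levels (g)
poe_branch_a = np.array([2e-2, 8e-3, 2e-3, 4e-4, 5e-5])
poe_branch_b = np.array([3e-2, 1e-2, 3e-3, 8e-4, 1e-4])
weights = np.array([0.6, 0.4])          # subjective credibilities, must sum to 1

# Mean hazard curve: weighted average of the branch exceedance probabilities
poe_mean = weights[0] * poe_branch_a + weights[1] * poe_branch_b
for x, p in zip(pga, poe_mean):
    print(f"PGA {x:.2f} g: annual P(exceedance) = {p:.1e}")
```

The subjectivity criticized above lives entirely in the weights vector: the arithmetic is trivial, the credibilities are not.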
An important problem is then raised. Is using many alternative models for
each specific site and project a wrong or a wise approach? The question in its
simplicity seems naive and the answer obvious, but this is not true, because more
data usually lead to larger uncertainties.
For example, in a poorly known fault with few data and only one hazard study,
there will be a single model and consequently 100 % credibility. In a very well
known and studied fault, with many data, there will probably be several good or
"acceptable" models, and the user will be forced to attribute much lower credi-
bility to each one of them, which leads to the absurd situation that the poorly known
fault has lower uncertainty than the well known ones!
Over time additional hazard models are developed, but our estimates of the
epistemic uncertainty have increased, not decreased, as additional data have been
collected and new models have been developed!
Fragility curves on the other hand are based on simplified models (usually
equivalent SDOF systems), which are an oversimplification of the real world and
it is not known whether this oversimplification is on the conservative side. In any
case, the scatter among different models is so high (Pitilakis et al. 2014a) that a
logic tree approach should be recommended to treat the epistemic uncertainties
related to the selection of the fragility curves. No such approach has been used so
far. Moreover these curves, normally produced for simplified structures, are used to
estimate physical damages and implicitly the associated losses for a whole city with
a very heterogeneous fabric and typology of buildings. Then aleatory and epistemic
uncertainties are merged.
At the end of the game there is always a pending question: How can we really
differentiate the two sources of uncertainty?
Realizing the importance of all different sources of uncertainties characterizing
each step of the long process from seismic hazard to risk assessment, including all
possible consequences and impact, beyond physical damages, it is understood how
difficult it is to derive a reliable global model covering the whole chain from hazard
to risk. For the moment, scientists, engineers and policy makers are fighting with
rather simple weapons, using simple paradigms. It is time to re-think the whole
process merging their capacities and talents.

3.4 Taxonomy of Elements at Risk

The key assumption in the vulnerability assessment of buildings, infrastructures and
lifelines is that structures and components of systems, having similar structural
characteristics, and being in similar geotechnical conditions (e.g., a bridge of a
given typology), are expected to perform in the same way for a given seismic
excitation. Within this context, damage is directly related to the structural proper-
ties of the elements at risk. The hazard should be also related to the structure under
study. Taxonomy and typology are thus fundamental descriptors of a system that
are derived from the inventory of each element and system. Geometry, material
properties, morphological features, age, seismic design level, anchorage of the
equipment, soil conditions, and foundation details are among usual typology
descriptors/parameters. Reinforced concrete (RC) buildings, masonry buildings,
monuments, bridges, pipelines (gas, fuel, water, waste water), tunnels, road
embankments, harbour facilities, road and railway networks, have their own spe-
cific set of typologies and different taxonomy.
The elements at risk are commonly categorized as populations, communities,
built environment, natural environment, economic activities and services, which are
under the threat of disaster in a given area (Alexander 2000). The main elements at
risk, the damages of which affect the losses of all other elements, are the multiple
components of the built environment with all kinds of structures and infrastructures.
They are classified into four main categories: buildings, utility networks, transpor-
tation infrastructures and critical facilities. In each category, there are (or should be)
several sets of fragility curves, that have been developed considering the taxonomy
of each element and their typological characteristics. In that sense there are
numerous typologies for reinforced concrete or masonry buildings, numerous
typologies for bridges and numerous typologies for all other elements at risk of
all systems exposed to seismic hazard.

Knowledge of the inventory of structures in a specific region and the
capability to create classes of structural types (for example with respect to material,
geometry, design code level) are among the main challenges when carrying out a
general seismic risk assessment for example at a city scale, where it is practically
impossible to perform this assessment at building level. It is absolutely necessary to
classify buildings, and other elements at risk, into classes that are as homogeneous
as possible, presenting more-or-less similar response characteristics to ground shaking.
Thus, the derivation of appropriate fragility curves for any type of structure depends
entirely on the creation of a reasonable taxonomy that is able to classify the
different kinds of structures and infrastructures in any system exposed to seismic
hazard.
The development of a homogeneous taxonomy for all engineering elements at
risk exposed to seismic hazard and the recommendation of adequate fragility
functions for each one, considering also the European context, achieved in the
SYNER-G project (Pitilakis et al. 2014a), is a significant contribution to the
reduction of seismic risk in Europe and worldwide.

3.5 Intensity Measures

A main issue related to the construction and use of fragility curves is the selection of
appropriate earthquake Intensity Measures (IM) that characterize the strong ground
motion and best correlate with the response of each element at risk, for example,
building, pier bridge or pipeline. Several intensity measures of ground motion have
been proposed, each one describing different characteristics of the motion, some of
which may be more adverse for the structure or system under consideration. The use
of a particular IM in seismic risk analysis should be guided by the extent to which
the measure corresponds to damage to the components of a system or the system of
systems. Optimum intensity measures are defined in terms of practicality, effec-
tiveness, efficiency, sufficiency, robustness and computability (Cornell et al. 2002;
Mackie and Stojadinovic 2003, 2005).
Practicality refers to the recognition that the IM has some direct correlation to
known engineering quantities and that it “makes engineering sense” (Mackie and
Stojadinovic 2005; Mehanny 2009). The practicality of an IM may be verified
analytically via quantification of the dependence of the structural response on the
physical properties of the IM such as energy, response of fundamental and higher
modes, etc. It may also be verified numerically by the interpretation of the struc-
ture’s response under non-linear analysis using existing time histories.
Sufficiency describes the extent to which the IM is statistically independent of
ground motion characteristics such as magnitude and distance (Padgett et al. 2008).
A sufficient IM is the one that renders the structural demand measure conditionally
independent of the earthquake scenario. This term is more complex and is often at
odds with the need for computability of the IM. Sufficiency may be quantified via
statistical analysis of the response of a structure for a given set of records.

The effectiveness of an IM is determined by its ability to evaluate its relation
with an engineering demand parameter (EDP) in closed form (Mackie and
Stojadinovic 2003), so that the mean annual frequency of a given decision variable
exceeding a given limiting value (Mehanny 2009) can be determined analytically.
The most widely used quantitative measure from which an optimal IM can be
obtained is efficiency. This refers to the total variability of an engineering demand
parameter (EDP) for a given IM (Mackie and Stojadinovic 2003, 2005).
Robustness describes the efficiency trends of an IM-EDP pair across different
structures, and therefore different fundamental period ranges (Mackie and
Stojadinovic 2005; Mehanny 2009).
In general and in practice, IMs are grouped in two general classes: empirical
intensity measures and instrumental intensity measures. With regards to the empir-
ical IMs, different macroseismic intensity scales could be used to identify the
observed effects of ground shaking over a limited area. With the instrumental IMs,
which are by far more accurate and representative of the seismic intensity charac-
teristics, the severity of ground shaking can be expressed as an analytical value
measured by an instrument or computed by analysis of recorded accelerograms.
The selection of the intensity parameter is also related to the approach that is
followed for the derivation of fragility curves and the typology of element at risk.
The identification of the proper IM is determined by different constraints, which
are first of all related to the adopted hazard model, but also to the element at risk
under consideration and the availability of data and fragility functions for all
different exposed assets.
Empirical fragility functions are usually expressed in terms of the macroseismic
intensity defined according to different scales, namely EMS, MCS and
MM. Analytical or hybrid fragility functions are, on the contrary, related to
instrumental IMs, which are related to parameters of the ground motion (PGA,
PGV, PGD) or of the structural response of an elastic SDOF system (spectral
acceleration Sa or spectral displacement Sd for a given value of the period of
vibration T). Sometimes integral IMs, which consider a specific integration of a
motion parameter, can be useful, for example the Arias Intensity IA or a spectral value
like the Housner Intensity IH. When the vulnerability of elements due to ground
failure is examined (i.e., liquefaction, fault rupture, landslides) permanent ground
deformation (PGD) is the most appropriate IM.
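As a concrete illustration, the most common instrumental IMs are computed directly from an accelerogram. In the sketch below the "record" is synthetic noise standing in for a real recording; the Arias intensity follows its standard definition IA = (π/2g)∫a(t)² dt, and the crude cumulative-sum integration is only for illustration.

```python
import numpy as np

g = 9.81
dt = 0.01                                   # sampling interval (s)
t = np.arange(0.0, 20.0, dt)

# Placeholder "accelerogram": modulated noise standing in for a real record (m/s^2)
rng = np.random.default_rng(1)
acc = rng.normal(0.0, 1.0, t.size) * np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t)

pga = np.max(np.abs(acc))                   # peak ground acceleration (m/s^2)
vel = np.cumsum(acc) * dt                   # simple integration to velocity
pgv = np.max(np.abs(vel))                   # peak ground velocity (m/s)
arias = np.pi / (2 * g) * np.sum(acc**2) * dt   # Arias intensity IA (m/s)

print(f"PGA = {pga:.2f} m/s^2, PGV = {pgv:.3f} m/s, IA = {arias:.3f} m/s")
```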
The selection of the most adequate and realistic IMs for every asset under
consideration is still debated and a source of major uncertainties.

3.6 Fragility Curves and Vulnerability

The vulnerability of a structure is described in all engineering-relevant approaches
using vulnerability and/or fragility functions. There are a number of definitions of
vulnerability and fragility functions; one of these describes vulnerability functions
as the probability of losses (such as social or economic losses) given a level of
ground shaking, whereas fragility functions provide the probability of exceeding
different limit states (such as physical damage or injury levels) given a level of
ground shaking. Figure 3.2 shows examples of vulnerability and fragility functions.
The former relates the level of ground shaking with the mean damage ratio (e.g.,
ratio of cost of repair to cost of replacement) and the latter relates the level of
ground motion with the probability of exceeding the limit states. Vulnerability
functions can be derived from fragility functions using consequence functions,
which describe the probability of loss, conditional on the damage state.

Fig. 3.2 Examples of (a) vulnerability function and (b) fragility function
Fragility curves constitute one of the key elements of seismic risk assessment
and at the same time an important source of uncertainties. They relate the seismic
intensity to the probability of reaching or exceeding a level of damage (e.g., minor,
moderate, extensive, collapse) for the elements at risk. The level of shaking can be
quantified using different earthquake intensity parameters, including peak ground
acceleration/velocity/displacement, spectral acceleration, spectral velocity or spec-
tral displacement. They are often described by a lognormal probability distribution
function as in Eq. 3.1 although it is noted that this distribution may not always be
the best fit.

$$P_f(ds \ge ds_i \mid IM) = \Phi\left[\frac{1}{\beta_{tot}}\,\ln\left(\frac{IM}{IM_{mi}}\right)\right] \qquad (3.1)$$

where Pf(·) denotes the probability of being at or exceeding a particular damage
state, dsi, for a given seismic intensity level defined by the earthquake intensity
measure, IM (e.g., peak ground acceleration, PGA), Φ is the standard cumulative
probability function, IMmi is the median threshold value of the earthquake intensity
measure IM required to cause the ith damage state and βtot is the total standard
deviation. Therefore, the development of fragility curves according to Eq. 3.1
requires the definition of two parameters, IMmi and βtot.
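A minimal numerical sketch of Eq. 3.1 follows; the median IM values and the dispersion are invented for a hypothetical building class, scipy's standard normal CDF plays the role of Φ, and differencing two consecutive fragility curves yields the probability of being in a given damage state.

```python
import numpy as np
from scipy.stats import norm

def p_exceed(im, im_median, beta_tot):
    """Eq. 3.1: lognormal fragility, P(ds >= ds_i | IM)."""
    return norm.cdf(np.log(im / im_median) / beta_tot)

# Invented parameters for two damage states of a hypothetical building class
pga = np.linspace(0.05, 1.0, 20)                      # IM = PGA (g)
p_moderate = p_exceed(pga, im_median=0.25, beta_tot=0.6)
p_extensive = p_exceed(pga, im_median=0.55, beta_tot=0.6)

# Probability of being exactly in the "moderate" state: difference of the curves
p_in_moderate = p_moderate - p_extensive
print(p_in_moderate.round(3))
```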
There are several methods available in the literature to derive fragility functions
for different elements exposed to seismic hazard and in particular to transient
ground motion and permanent ground deformations due to ground failure. Conven-
tionally, these methods are classified into four categories: empirical, expert
elicitation, analytical and hybrid. All these approaches have their strengths and
weaknesses. However, analytical methods, when properly validated with large-
scale experimental data and observations from recent strong earthquakes, have
become more popular in recent years. The main reason is the considerable improve-
ment of computational tools, methods and skills, which allows comprehensive
parametric studies covering many possible typologies to be undertaken. Another
equally important reason is the better control of several of the associated
uncertainties.
The two most popular methods to derive fragility (or vulnerability) curves for
buildings and pier bridges are the capacity spectrum method (CSM) (ATC-40 and
FEMA273/356) with its alternatives (e.g., Fajfar 1999), and the incremental
dynamic analysis (IDA) (Vamvatsikos and Cornell 2002). Both have contributed
significantly and marked the substantial progress observed the last two decades;
however they are still simplifications of the physical problem and present several
limitations and weaknesses. The former (CSM) is approximate in nature and is
based on static loading, which ignores the higher modes of vibration and the
frequency content of the ground motion. A thorough discussion on the pushover
approach may be found in Krawinkler and Miranda (2004).
The latter (IDA) is now gaining in popularity because, among other advantages, it
offers the possibility to select the Engineering Demand Parameters (EDP) most
relevant to the structural response (inter-story drifts, component inelastic defor-
mations, floor accelerations, hysteretic energy dissipation etc.). IDA is commonly
used in probabilistic seismic assessment frameworks to produce estimates of the
dynamic collapse capacity of global structural systems. With the IDA procedure the
coupled soil-foundation-structure system is subjected to a suite of multiply scaled
real ground motion records whose intensities are “ideally?” selected to cover the
whole range from elasticity to global dynamic instability. The result is a set of
curves (IDA curves) that show the EDP plotted against the IM used to control the
increment of the ground motion amplitudes. Fragility curves for different damage
states can be estimated through statistical analysis of the IDA results (pairs of EDP
and IM) derived for a sufficiently large number of ground motions (normally
15–30). Among the weaknesses of the approach is the fact that scaling of the real
records changes the amplitude of the IMs but keeps the frequency content the same
throughout the inelastic IDA procedure. In summary both approaches introduce
several important uncertainties, both aleatory and epistemic.
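The IDA bookkeeping can be sketched as follows. The "solver" here is a toy stand-in that merely mimics record-to-record scatter with a record-specific multiplier (so that, as in real IDA scaling, each record keeps its character while its amplitude grows); a real analysis would run each scaled accelerogram through a nonlinear structural model. All numbers are invented.

```python
import numpy as np

def run_nonlinear_analysis(record_seed, im):
    """Toy stand-in for a nonlinear time-history solver: returns a synthetic
    peak inter-story drift (the EDP) that grows with the IM, with a fixed
    record-specific multiplier standing in for record-to-record variability."""
    record_rng = np.random.default_rng(record_seed)
    return 0.01 * im * record_rng.lognormal(mean=0.0, sigma=0.4)

im_levels = np.linspace(0.1, 2.0, 20)   # scaled IM amplitudes, e.g. Sa(T1) in g
record_seeds = range(20)                # stands in for a suite of 15-30 records
drift_limit = 0.02                      # hypothetical drift threshold for a damage state

# Fraction of records whose EDP exceeds the threshold at each IM level;
# a lognormal fragility curve would then be fitted to these (IM, fraction) pairs.
for im in im_levels:
    edps = [run_nonlinear_analysis(seed, im) for seed in record_seeds]
    frac = np.mean([e > drift_limit for e in edps])
    print(f"IM = {im:.2f}: fraction exceeding = {frac:.2f}")
```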
Among the most important latest developments in the field of fragility curves is
the recent publication “SYNER-G: Typology Definition and Fragility Functions for
Physical Elements at Seismic Risk”, Pitilakis K, Crowley H, Kaynia A (Eds)
(2014a).
Several uncertainties are introduced in the process of constructing a set of
fragility curves of a specific element at risk. They are associated with the parameters
describing the fragility curves, the methodology applied, as well as to the selected
damage states and the performance indicators (PI) of the element at risk. The
uncertainties may again be categorized as aleatory and epistemic. However, in
this case epistemic uncertainties are probably more pronounced, especially when
analytical methods are used to derive the fragility curves.
In general, the uncertainty of the fragility parameters is estimated through the
standard deviation, βtot that describes the total variability associated with each
fragility curve. Three primary sources of uncertainty are usually considered,
namely the definition of damage states, βDS, the response and resistance (capacity)
of the element, βC, and the earthquake input motion (demand), βD. Damage state
definition uncertainties are due to the fact that the thresholds of the damage indexes
or parameters used to define damage states are not known. Capacity uncertainty
reflects the variability of the properties of the structure as well as the fact that the
modelling procedures are not perfect. Demand uncertainty reflects the fact that IM
is not exactly sufficient, so different records of ground motion with equal IM may
have different effects on the same structure (Selva et al. 2013). The total variability
is modelled by the combination of the three contributors assuming that they are
stochastically independent and log-normally distributed random variables, which is
not always true.
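Under the stated assumptions of independence and lognormality, the combination takes the familiar square-root-of-sum-of-squares form (a sketch of the usual rule; some methodologies instead convolve the capacity and demand terms first):

$$\beta_{tot} = \sqrt{\beta_{DS}^2 + \beta_C^2 + \beta_D^2}$$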
Paolo Emilio Pinto (2014) in Pitilakis et al. (2014a) provides the general
framework of the treatment of uncertainties in the derivation of the fragility
functions. Further discussion on this issue is made in the last section of this paper.

3.7 Risk Assessment

3.7.1 Probabilistic, Deterministic and the Quest


of Reasonable

In principle, the problem of seismic risk assessment and safety is probabilistic and
several sources of uncertainties are involved. However, a full probabilistic
approach is not applied throughout the whole process. For the seismic hazard the
approach is usually probabilistic, at least partially. The deterministic approach, which is
more appreciated by engineers, is also used. Structures are traditionally analyzed in
a deterministic way with input motions estimated probabilistically. PSHA ground
motion characteristics, determined for a selected return period (e.g., 500 or 1,000
years), are traditionally used as input for the deterministic analysis of a structure
(e.g., seismic codes). On the other hand, fragility curves by definition represent the
conditional probability of the failure of a structure or equipment at a given level of
ground motion intensity measure, while seismic capacity of structures and compo-
nents is usually estimated deterministically. Finally, damages and losses are esti-
mated in a probabilistic way, mainly, if not exclusively, because of PSHA and
fragility curves used. So in the whole process of risk assessment, probabilistic and
deterministic approaches are used interchangeably, without knowing exactly what the
impact of that is and how the involved uncertainties are treated and propagated.

In the hazard assessment the main debate is whether a deterministic or a probabi-
listic approach is more adequate and provides more reasonable results for engi-
neering applications and in particular for the evaluation of the design ground
motion. In the deterministic hazard approach, individual earthquake scenarios
(i.e., Mw and location) are developed for each relevant seismic source and a
specified ground motion probability level is selected (by tradition, it is usually
either 0 or 1 standard deviation above the median). Given the magnitude, distance,
and number of standard deviations, the ground motion is then computed for each
earthquake scenario using one or several ground motion models (GMPEs) that are
based on empirical data (records). Finally, the largest ground motion from any of
the considered scenarios is used for the design.
Actually with this approach single values of the parameters (Mw, R, and ground
motion parameters with a number of standard deviations) are estimated for each
selected scenario. However, the final result regarding the ground shaking is prob-
abilistic in the sense that the ground motion has a probability of being exceeded given
that the scenario earthquake occurred.
In the probabilistic approach all possible and relevant deterministic earthquake
scenarios (e.g., all possible Mw and location combinations of physically possible
earthquakes) are considered, as well as all possible ground motion probability
levels with a range of the number of standard deviations above or below the median.
The scenarios from the deterministic analyses are all included in the full set of
scenarios from the probabilistic analysis. For each earthquake scenario, the ground
motions are computed for each possible value of the number of standard deviations
above or below the median ground motion. So the probabilistic analysis can be
considered as a large number of deterministic analyses and the chance of failure is
addressed by estimating the probability of exceeding the design ground motion.
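The "large number of deterministic analyses" view can be made explicit in a few lines: the hazard curve is a rate-weighted sum over scenarios, with the ground-motion variability entering through the normal CDF over the full range of standard deviations. The GMPE coefficients, scenario rates and PGA grid below are invented placeholders, not values from the text.

```python
import numpy as np
from scipy.stats import norm

# Toy scenario set: (magnitude, distance in km, annual rate): invented numbers
scenarios = [(5.5, 10.0, 0.02), (6.5, 20.0, 0.005), (7.0, 40.0, 0.001)]

def toy_gmpe(m, r):
    """Invented GMPE: returns median ln(PGA in g) and its standard deviation."""
    return -1.5 + 0.5 * m - 1.1 * np.log(r), 0.6

# Annual rate of exceeding each PGA level: sum over deterministic scenarios,
# integrating over the number of standard deviations via the normal CDF
pga_levels = np.array([0.05, 0.10, 0.20, 0.40])
rate = np.zeros_like(pga_levels)
for m, r, nu in scenarios:
    mu, sigma = toy_gmpe(m, r)
    rate += nu * (1.0 - norm.cdf((np.log(pga_levels) - mu) / sigma))

for x, lam in zip(pga_levels, rate):
    print(f"PGA {x:.2f} g: annual exceedance rate = {lam:.2e}")
```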
The point where the two approaches coincide is practically the choice of
the standard deviations. The deterministic approach traditionally uses at most one
standard deviation above the median for the ground motion, but in the probabilistic
approach, larger values of the number of standard deviations above the median
ground motion are considered. As a result, the worst-case ground motions will be
much larger than the 84th percentile deterministic ground motions.
Considering that in both deterministic and probabilistic approaches the design
ground motions (and in particular the largest ones) are controlled by the number of
the standard deviations above the median, which usually are different in the two
approaches, how can the design motion or the worst case scenario be estimated in a
rigorous way?
If we now bring into the game the selection of standard deviations in all other
stages of the risk assessment process, namely in the estimation of site effects, the
ground motion variability, the fragility and capacity curves, without mentioning the
necessary hypotheses regarding the intensity measures, performance indicators and
damage states to be used, it is realized that the final result is highly uncertain.
At the end of the game the quest for soundness is still illusory, and what is
reasonable is based on past experience and economic constraints considering
engineering judgment and political decision. In other words we come back to the
modeller's "authority" and the loneliness and sometimes desolation of the end-user in
the decision making procedure.

3.7.2 Spatial Correlation

Ground motion variability and spatial correlation could be attributed to several
reasons, i.e., fault rupture mechanism, complex geological features, local site
conditions, azimuth and directivity effects, basin and topographic effects and
induced phenomena like liquefaction and landslides. In practice most of these
reasons are often poorly known and poorly modelled, introducing important uncer-
tainties. The occurrence of earthquake scenarios (magnitude and location) and the
occurrence of earthquake shaking at a site are related but they are not the same.
Whether a probabilistic or a deterministic scenario is used, the ground motion at a site
should be estimated considering the variability of ground motion. However in
practice, and in particular in PSHA, this is not treated in a rigorous way, which
leads to a systematic underestimation of the hazard (Bommer and Abrahamson
2006). PSHA should always consider ground motion variability otherwise in most
cases it is incorrect (Abrahamson 2006).
With the present level of know-how, for a single earthquake scenario
representing the source and the magnitude of a single event, the estimation of the
spatial variation of the ground motion field is probably easier and in any case better
controlled. In a PSHA, which considers many sources and magnitude scenarios to
effectively sample the variability of seismogenic sources, the presently available
models to account for spatial variability are more complicated and often lead to an
underestimation of the ground motion at a given site, simply because all possible
sources and magnitudes are considered in the analysis.
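One common way to inject spatial variability into a single-scenario ground-motion field is to sample the within-event residuals from a multivariate normal distribution with a distance-dependent correlation model. The exponential correlation form, its 10 km range, the site coordinates and the median below are all illustrative assumptions, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented site coordinates (km) and an assumed common median and sigma of ln(PGA)
sites = np.array([[0.0, 0.0], [2.0, 0.0], [5.0, 3.0], [10.0, 8.0]])
ln_median, sigma = np.log(0.2), 0.5          # e.g. median PGA of 0.2 g at all sites

# Exponential decay of within-event residual correlation with separation distance
corr_range_km = 10.0
h = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
corr = np.exp(-3.0 * h / corr_range_km)

# One correlated realization of PGA across the sites
cov = sigma**2 * corr
ln_pga = rng.multivariate_normal(np.full(len(sites), ln_median), cov)
print(np.exp(ln_pga).round(3))
```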
In conclusion it should not be forgotten that seismic hazard is not a tool to
estimate a magnitude and a location but to evaluate the design motion for a specific
structure at a given site. To achieve this goal more research efforts should be
focused on better modelling of the spatial variability of ground motion considering
all possible sources for that, knowing that there are a lot of uncertainties hidden in
this game.

3.7.3 Site Effects

The important role of site effects in seismic hazard and risk assessment is now well
accepted. Their modelling has been also improved in the last two decades.
In Eurocode 8 (CEN 2004) the influence of local site conditions is reflected with
the shape of the PGA-normalized response spectra and the so-called “soil factor” S,
which represents ground motion amplification with respect to outcrop conditions.
As far as soil categorization is concerned, the main parameter used is Vs,30, i.e., the
time-based average value of shear wave velocity in the upper 30 m of the soil
profile, first proposed by Borcherdt and Glassmoyer (1992). Vs,30 has the advantage
that it can be obtained easily and at relatively low cost, since the depth of 30 m is a
typical depth of geotechnical investigations and sampling borings, and has defi-
nitely provided engineers with a quantitative parameter for site classification. The
main and important weakness is that the single knowledge of the Vs profile at the
upper 30 m cannot quantify properly the effects of the real impedance contrast,
which is one of the main sources of the soil amplification, as for example in case of
shallow (i.e., 15–20 m) loose soils on rock or deep soil profiles with variable
stiffness and contrast. Quantifying site effects with the simple use of Vs,30 intro-
duces important uncertainties in the estimated IM.

Table 3.1 Improved soil factors for EC8 soil classes (Pitilakis et al. 2012)

              Type 2 (Ms ≤ 5.5)         Type 1 (Ms > 5.5)
Soil class    Improved     EC8          Improved     EC8
B             1.40         1.35         1.30         1.20
C             2.10         1.50         1.70         1.15
D             1.80*        1.80         1.35*        1.35
E             1.60*        1.60         1.40*        1.40

* Site-specific ground response analysis required
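For reference, Vs,30 is the travel-time-based (harmonic) average of the upper 30 m, Vs,30 = 30 / Σ(hi / Vs,i). A minimal helper makes the definition explicit; the layered profile in the example is invented.

```python
def vs30(thicknesses_m, vs_m_s):
    """Time-based average shear wave velocity of the upper 30 m:
    Vs,30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thicknesses_m, vs_m_s):
        h_used = min(h, 30.0 - depth)
        travel_time += h_used / vs
        depth += h_used
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# Invented profile: 6 m at 180 m/s, 14 m at 320 m/s, halfspace at 600 m/s
print(f"Vs,30 = {vs30([6.0, 14.0, 100.0], [180.0, 320.0, 600.0]):.0f} m/s")
```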
Pitilakis et al. (2012) used an extended strong motion database compiled in the
framework of SHARE project (Giardini et al. 2013) to validate the spectral shapes
proposed in EC8 and to estimate improved soil amplification factors for the existing
soil classes of Eurocode 8 for a potential use in an EC8 update (Table 3.1). The soil
factors were estimated using a logic tree approach to account for the epistemic
uncertainties. The major differences in S factor values were found for soil category
C. For soil classes D and E, due to the insufficient datasets, the S factors of EC8
remain unchanged with a prompt for site-specific ground response analyses.
In order to further improve design spectra and soil factors Pitilakis et al. (2013)
proposed a new soil classification system that includes soil type, stratigraphy,
thickness, stiffness and fundamental period of soil deposit (T0) and average shear
wave velocity of the entire soil deposit (Vs,av). They compiled an important subset
of the SHARE database, containing records from sites that have a well-
documented soil profile concerning dynamic properties and depth up to the "seis-
mic" bedrock (Vs > 800 m/s). The soil classes of the new classification scheme are
illustrated in comparison to EC8 soil classes in Fig. 3.3.
The proposed normalized acceleration response spectra were evaluated by
fitting the general spectral equations of EC8 closer to the 84th percentile, in
order to account as much as possible for the uncertainties associated with the
nature of the problem. Figure 3.4 is a representative plot, illustrating the median,
16th and 84th percentiles, and the proposed design normalized acceleration
spectra for soil sub-class C1. It is obvious that the selection of a different
percentile would affect dramatically the proposed spectra and consequently the
demand spectra, the performance points and the damages. While there is no
rigorous argument why the median should be chosen, the 84th percentile or close to
it sounds more reasonable.

Fig. 3.3 Simplified illustration of ground types according to (a) EC8 and (b) the new classification scheme of Pitilakis et al. (2013)

Fig. 3.4 Normalized elastic acceleration response spectra for soil class C1 of the classification system of Pitilakis et al. (2013) for Type 2 seismicity (left) and Type 1 seismicity (right). Red lines represent the proposed spectra. The range of the 16th to 84th percentile is illustrated as a gray area
The proposed new elastic acceleration response spectra, normalized to the
design ground acceleration at rock-site conditions PGArock, are illustrated in
Fig. 3.5. Dividing the elastic response spectrum of each soil class by the
corresponding response spectrum for rock, period-dependent amplification factors
can be estimated.

Fig. 3.5 Type 2 (left) and Type 1 (right) elastic acceleration response spectra for the classification
system of Pitilakis et al. (2013)

3.7.4 Time Dependent Risk Assessment

Nature and earthquakes are unpredictable both in the short and the long term, especially
in the case of extreme or "rare" events. Traditionally seismic hazard is estimated as time
independent, which is probably not true. We all know that after a strong earthquake
it is rather unlikely that another strong earthquake will happen within a short time on the
same fault. Cases like the sequence of Christchurch earthquakes in
New Zealand or, more recently, in Cephalonia Island in Greece are rather exceptions
that prove the general rule, if there is any.
Exposure is certainly varying with time, normally increasing. The vulnerability
is also varying with time, increasing or decreasing (for example after mitigation
countermeasures or post earthquake retrofitting have been undertaken). On the
other hand aging effects and material degradation with time increase the vulnera-
bility (Pitilakis et al. 2014b). Consequently the risk cannot be time independent.
Figure 3.6 sketches the whole process.
For the time being, time dependent seismic hazard and risk assessment are at a
very premature stage. However, even if rigorous models are developed in the near
future, the question still remains: is it realistic to imagine that time dependent
hazard could be ever introduced in engineering practice and seismic codes? If it
ever happens, it will have a profound political, societal and economic impact.

Fig. 3.6 Schematic illustration of time dependent seismic hazard, exposure, vulnerability and risk
(After J. Douglas et al. in REAKT)

3.7.5 Performance Indicators and Resilience

In seismic risk assessment, the performance levels of a structure, for example a RC
building belonging to a specific class, can be defined through damage thresholds
called limit states. A limit state defines the boundary between two different damage
conditions often referred to as damage states. Different damage criteria have been
proposed depending on the typologies of elements at risk and the approach used for
the derivation of fragility curves. The most common way to define earthquake
consequences is a classification in terms of the following damage states: no
damage; slight/minor; moderate; extensive; complete.
This qualitative approach requires an agreement on the meaning and the content
of each damage state. The number of damage states is variable and is related to the
functionality of the components and/or the repair duration and cost. In this way the
total losses of the system (economic and functional) can be estimated.
Traditionally physical damages are related to the expected serviceability level of
the component (i.e., fully or partially operational or inoperative) and the
corresponding functionality (e.g., power availability for electric power substations,
number of available traffic lanes for roads, flow or pressure level for water system).
These correlations provide quantitative measures of the component’s performance,
and can be applied for the definition of specific Performance Indicators (PIs).
Therefore, the comparison of a demand with a capacity quantity, or the conse-
quence of a mitigation action, or the accumulated consequences of all damages (the
“impact”) can be evaluated. The restoration cost, when provided, is given as the
percentage of the replacement cost. Downtime days to identify the elastic or the
collapse limits are also purely qualitative and cannot be generalized for any
structure type. These thresholds are qualitative and are given as a general outline
(Fig. 3.7). The user could modify them accordingly, considering the particular
conditions of the structure, the network or component under study. The selection
of any value of these thresholds inevitably introduces uncertainties, which
affect the target performance and finally the estimation of damages and losses.

Fig. 3.7 Conceptual relationship between seismic hazard intensity and structural performance (From Krawinkler and Miranda (2004), courtesy W. Holmes, G. Deierlein)
Methods for deriving fragility curves generally model the damage on a discrete
damage scale. In empirical procedures, the scale is used in reconnaissance efforts to
produce post-earthquake damage statistics and is rather subjective. In analytical
procedures the scale is related to limit state mechanical properties that are described
by appropriate indices, such as for example displacement capacity (e.g., inter-story
drift) in the case of buildings or pier bridges. For other elements at risk the
definition of the performance levels or limit states may be more vague and follow
other criteria related, for example in the case of pipelines, to the limit strength
characteristics of the material used in each typology.
The definition and consequently the selection of the damage thresholds, i.e.,
limit states, are among the main sources of uncertainties because they rely on rather
subjective criteria. A considerable effort has been made in SYNER-G (Pitilakis
et al. 2014a) to homogenize the criteria as much as possible.
Measuring seismic performance (risk) through economic losses and downtime
(and business interruption) introduces the idea of measuring risk through a new,
more general concept: resilience.
Resilience, referring to a single element at risk or a system subjected to natural
and/or manmade hazards, usually denotes its capability to recover its func-
tionality after the occurrence of a disruptive event. It is affected by attributes of the
system, namely robustness (for example residual functionality right after the dis-
ruptive event), rapidity (recovery rate), resourcefulness and redundancy (Fig. 3.8).

Fig. 3.8 Schematic representation of seismic resilience concept (Bruneau et al. 2003)
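In the formulation of Bruneau et al. (2003), sketched in Fig. 3.8, the loss of resilience can be written as the area between full functionality and the actual functionality curve Q(t) (in percent) over the recovery window, from the disruptive event at t0 to full restoration at t1:

$$R_L = \int_{t_0}^{t_1} \left[100 - Q(t)\right] \, dt$$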

It is also obvious that resilience has very strong societal, economic and political
components, which amplify the uncertainties.
Accepting resilience as the way to measure and quantify performance indicators, and
implicitly fragility and vulnerability, means that we introduce a new, complicated
world of uncertainties, in particular when, from the resilience of a single asset, e.g., a
building, we integrate the risk over a whole city, with all its infrastructures, utility
systems and economic activities.

3.7.6 Margin of Confidence or Conservatism?

The use of medians is traditionally considered as a reasonably conservative
approach. Increased margin of confidence, i.e., 84th percentiles, is often viewed
as over-conservatism. Conservatism and confidence do not actually reflect the
same thing in a probabilistic process. Figures 3.9 and 3.10 illustrate in a schematic
example the estimated damages when using the median or the median ± 1 standard
deviation (depending on which one is the more "conservative" or reasonable) in all
steps of the assessment process of damages, from the estimation of UHS for rock
and the soil amplification factors to the capacity curve and the fragility curves. The
substantial differences observed in the estimated damages cannot be attributed to an
increased margin of confidence or conservatism. Considering all relevant uncer-
tainties, all assumptions are equally possible or at least “reasonable”. Who can
really define in a scientifically rigorous way the threshold between conservatism
and reasonable? Confidence is a highly subjective term varying among different
end-users and model producers.

Fig. 3.9 Schematic example of estimated damages when using the median for UHS for rock, soil
amplification factors, capacity curve and fragility curves

Fig. 3.10 Schematic example of estimated damages when using the median ± 1 standard devia-
tion (depending on which one is the more "conservative" or reasonable) for UHS for rock, soil
amplification factors, capacity curve and fragility curves

3.8 Damage Assessment: Subjectivity and Ineffectiveness in the Quest of the Reasonable

To further highlight the inevitable scatter in the current risk assessment of physical
assets, we use as an example the seismic risk assessment and the damages of the
building stock in an urban area, and in particular the city of Thessaloniki, Greece.
Thessaloniki is the second largest city in Greece with about one million inhabitants.
It has a long seismic history of devastating earthquakes, with the most recent one
occurring in 1978 (Mw = 6.5, R = 25 km). Since then a lot of studies have been
performed in the city to estimate the seismic hazard and to assess the seismic risk.
Due to the very good knowledge of the different parameters, the city has been
selected as pilot case study in several major research projects of the European
Union (SYNER-G, SHARE, RISK-UE, LessLoss etc.)

3.8.1 Background Information and Data

The study area considered in the present application (Fig. 3.11) covers the central
municipality of Thessaloniki. With a total population of 380,000 inhabitants and
about 28,000 buildings of different typologies (mainly reinforced concrete), it is
divided into 20 sub-city districts (SCD) (http://www.urbanaudit.org). Soil conditions
are very well known (e.g., Anastasiadis et al. 2001). Figures 3.12 and 3.13 illustrate
the classification of the study area based on the classification schemes of EC8 and
Pitilakis et al. (2013) respectively. The probabilistic seismic hazard (PSHA) is
estimated applying SHARE methodology (Giardini et al. 2013), with its rigorous
treatment of aleatory and epistemic uncertainties. The PSHA with a 10 % proba-
bility of exceedance in 50 years and the associated UHS have been estimated for
outcrop conditions. The estimated rock UHS has then been properly modified to
account for soil conditions applying adequate period-dependent amplification fac-
tors. Three different amplification factors have been used: the current EC8 factors
(Hazard 1), the improved ones (Pitilakis et al. 2012) (Hazard 2) and the new ones
based on a more detailed soil classification scheme (Pitilakis et al. 2013) (Hazard 3)
(see Sect. 3.7.3). Figure 3.14 presents the computed UHS for soil type C (or C1
according to the new classification scheme). Vulnerability is expressed through
appropriate fragility curves for each building typology (Pitilakis et al. 2014a). Damages, and the associated probability for a building of a specific typology to exceed a specific damage state, have been calculated with the Capacity Spectrum Method (Freeman 1998; Fajfar and Gaspersic 1996).
The detailed building inventory for the city of Thessaloniki, which includes
information about material, code level, number of storeys, structural type and
volume for each building, allows a rigorous classification into different typologies
according to SYNER-G classification and based on a Building Typologies Matrix
representing practically all common RC building types in Greece (Kappos et al. 2006).

Fig. 3.11 Municipality of Thessaloniki. Study area; red lines illustrate Urban Audit Sub-City Districts (SCDs) boundaries

Fig. 3.12 Map of EC8 soil classes (based on Vs,30) for Thessaloniki

The building inventory comprises 2,893 building blocks with 27,738
buildings, the majority of which (25,639) are reinforced concrete (RC) buildings.
The buildings are classified based on their structural system, height and level of
seismic design (Fig. 3.15). Regarding the structural system, both frame and frame-with-shear-wall (dual) systems are included, with a further distinction based on the
configuration of the infill walls. Regarding the height, three subclasses are consid-
ered (low-, medium- and high-rise). Finally, as far as the level of seismic design is
concerned, four different levels are considered:
Fig. 3.13 Map of the soil classes according to the new soil classification scheme proposed by Pitilakis et al. (2013) for Thessaloniki
Fig. 3.14 SHARE rock UHS for Thessaloniki amplified with the current EC8 soil amplification factor for soil class C (CEN 2004), the improved EC8 soil amplification factor for soil class C (Pitilakis et al. 2012) and the soil amplification factors for soil class C1 of the classification system of Pitilakis et al. (2013). All spectra refer to a mean return period T = 475 years
• No code (or pre-code): R/C buildings with very low level of seismic design and
poor quality of detailing of critical elements.
• Low code: R/C buildings with low level of seismic design.
• Medium code: R/C buildings with medium level of seismic design (roughly
corresponding to post-1980 seismic code and reasonable seismic detailing of
R/C members).
• High code: R/C buildings with enhanced level of seismic design and ductile seismic detailing of R/C members according to the new Greek Seismic Code (similar to Eurocode 8).

Fig. 3.15 Classification of the RC buildings of the study area in terms of the number of buildings per building type (Kappos et al. 2006). The first letter of each building type refers to the height of the building (L low, M medium, H high), while the second letter refers to the seismic code level of the building (N no, L low, M medium, H high)
The fragility functions used (in terms of spectral displacement Sd) were derived through classical inelastic pushover analysis. Bilinear pushover curves were constructed for each building type, so that each curve is defined by its yield and ultimate capacity. These were then transformed into capacity curves (expressing spectral acceleration versus spectral displacement). Fragility curves were finally derived from the corresponding capacity curves, by expressing the damage states in terms of displacements along the capacity curves (see Sect. 3.6 and D'Ayala et al. 2012).
Each fragility curve is defined by a median value of spectral displacement and a
standard deviation. Although the standard deviation of the curves is not constant,
for the present application a standard deviation equal to 0.4 was assigned to all
fragility curves, due to a limitation of the model used to perform the risk analyses.
This hypothesis will be further discussed later in this section.
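To make the mechanics concrete, the following minimal sketch shows how a lognormal fragility curve with a fixed β of 0.4 converts a spectral displacement at the performance point into damage-state exceedance probabilities. The median thresholds and the displacement value used below are hypothetical, chosen purely for illustration; they are not the values adopted in the Thessaloniki study.

```python
# Minimal sketch: lognormal fragility with log-standard deviation beta = 0.4.
# The median thresholds and sd_pp below are hypothetical illustration values.
from math import log
from statistics import NormalDist

def p_exceed_ds(sd, sd_median, beta=0.4):
    """P[DS >= ds | Sd = sd] for a lognormal fragility curve."""
    return NormalDist().cdf(log(sd / sd_median) / beta)

medians_cm = {"DS1": 0.9, "DS2": 2.0, "DS3": 5.0, "DS4": 9.0, "DS5": 14.0}
sd_pp = 4.2  # spectral displacement at the performance point (cm)
p_exc = {ds: p_exceed_ds(sd_pp, m) for ds, m in medians_cm.items()}
# Discrete damage-state probabilities follow by differencing,
# e.g. P[DS2] = P[DS >= DS2] - P[DS >= DS3].
```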
Five damage states were used in terms of Sd: DS1 (slight), DS2 (moderate), DS3
(substantial to heavy), DS4 (very heavy) and DS5 (collapse) (Table 3.2). According to this classification, a spectral displacement of 2 cm or even lower can bring ordinary RC structures into the moderate (DS2) damage state, which is certainly a conservative assumption and in fact penalizes, among other things, the seismic risk assessment.
The physical damages of the buildings have been estimated using the open-
source software EarthQuake Risk Model (EQRM http://sourceforge.net/projects/
eqrm, Robinson et al. 2005), developed by Geoscience Australia. The software is
based on the HAZUS methodology (FEMA and NIBS 1999; FEMA 2003) and has
Table 3.2 Damage states and spectral displacement thresholds (D'Ayala et al. 2012)

Damage state | Bare frames; infilled frames with Sdu,bare < 1.1Sdu | Infilled frames with Sdu,bare ≥ 1.1Sdu | Infilled dual – shear wall drop | Bare dual; infilled dual – infill walls strength failure
DS1 | 0.7Sdy | 0.7Sdy | 0.7Sdy | 0.7Sdy
DS2 | Sdy + 0.05(Sdu − Sdy) | Sdy + 0.05(Sdu − Sdy) | Sdy + 0.05(Sdu − Sdy) | Sdy + 0.05(Sdu − Sdy)
DS3 | Sdy + (1/3)(Sdu − Sdy) | Sdy + (1/2)(Sdu − Sdy) | Sdy + (1/2)(Sdu − Sdy) | 0.9Sdu
DS4 | Sdy + (2/3)(Sdu − Sdy) | Sdu | Sdu | Sdu,bare
DS5 | Sdu | Sdu,bare | 1.3Sdu | 1.3Sdu,bare

Sdy: spectral displacement for yield capacity; Sdu: spectral displacement for ultimate capacity
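As a hedged illustration of how Table 3.2 is applied, the short sketch below evaluates the thresholds of the first column (bare frames) from the yield and ultimate spectral displacements of a capacity curve; the input values are invented for the example.

```python
# Sketch of the Table 3.2 thresholds for the bare-frames column, given the
# yield (Sdy) and ultimate (Sdu) spectral displacements of the capacity curve.
def bare_frame_thresholds(sdy, sdu):
    return {
        "DS1": 0.7 * sdy,
        "DS2": sdy + 0.05 * (sdu - sdy),
        "DS3": sdy + (1.0 / 3.0) * (sdu - sdy),
        "DS4": sdy + (2.0 / 3.0) * (sdu - sdy),
        "DS5": sdu,
    }

print(bare_frame_thresholds(sdy=1.2, sdu=6.0))  # displacements in cm (illustrative)
```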

been properly modified so that it can be used for any region of the world (Crowley
et al. 2010). The method is based on the Capacity Spectrum Method. The so-called "performance points", after being properly adjusted to account for the elastic and hysteretic damping of each structure, have been overlaid with the relevant fragility curves in order to compute the damage probability in each of the different damage states and for each building type.
The method relies on two main parameters: the demand spectra (properly modified to account for the inelastic behaviour of the structure), which are derived from the hazard analysis, and the capacity curve. The latter is not user-defined; it is automatically estimated by the code using the building parameters supplied by the user. The capacity curve is defined by two points, the yield point (Sdy, Say) and the ultimate point (Sdu, Sau), and is composed of three parts: a straight line up to the yield point (representing the elastic response of the building), a curved part from the yield point to the ultimate point expressed by an exponential function, and a horizontal line starting from the ultimate point (Fig. 3.16). The yield point and the ultimate point are defined in terms of the building parameters (Robinson et al. 2005), inevitably introducing several extra uncertainties, especially in the case of existing buildings designed and constructed several decades ago. Overall, the following data are necessary to implement the Capacity Spectrum Method in EQRM: height of the building, natural elastic period, design strength coefficient, fraction of the building weight participating in the first mode, fraction of the effective building height to building displacement, over-strength factors, ductility factor and damping degradation factors for each building or building class. All these introduce several uncertainties, which are difficult to quantify in a rigorous way, mainly because the uncertainties are mostly related to the difference between any real RC structure belonging to a certain typology and the idealized model.
Fig. 3.16 Typical capacity curve in the EQRM software, defined by the yield point (Sdy, Say) and the ultimate point (Sdu, Sau) (Modified after Robinson et al. (2005))
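A hedged sketch of such a three-part capacity curve is given below: linear up to the yield point, a smooth transition to the ultimate point, and a plateau beyond it. The exponential transition used here is an assumed form chosen only to reproduce the shape in Fig. 3.16; the exact function implemented in EQRM is documented in Robinson et al. (2005).

```python
# Sketch of a three-part capacity curve: elastic branch, assumed exponential
# transition (illustrative, not the exact EQRM form) and post-ultimate plateau.
import math

def capacity_curve(sd, sdy, say, sdu, sau, shape=3.0):
    """Spectral acceleration Sa as a function of spectral displacement Sd."""
    if sd <= sdy:                 # straight line up to the yield point
        return say * sd / sdy
    if sd >= sdu:                 # horizontal line beyond the ultimate point
        return sau
    frac = (sd - sdy) / (sdu - sdy)
    # Exponential transition satisfying Sa(Sdy) = Say and Sa(Sdu) = Sau.
    return say + (sau - say) * (1.0 - math.exp(-shape * frac)) / (1.0 - math.exp(-shape))
```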

3.8.2 Physical Damages and Losses
For each building type in each building block, the probabilities of slight, moderate, extensive and complete damage were calculated. These probabilities were then multiplied by the total floor area of the buildings of the specific building block that are classified into the specific building type, in order to estimate the floor area of this building type that will suffer each damage state. Repeating this for all building blocks belonging to the same sub-city district (SCD) and for all building types, the total floor area of each building type that will suffer each damage state in the specific SCD can be calculated (Fig. 3.17). The total percentages of damaged floor area per damage state for all SCDs and for the three hazard analyses illustrated in the previous figures are given in Table 3.3.
The economic losses were estimated through the mean damage ratio (MDR) (Table 3.4), multiplying this value by an estimated replacement cost of 1,000 €/m2 (Table 3.5).
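The aggregation just described can be summarised in a few lines of code. The sketch below scales damage-state probabilities by floor area, sums them over a district and converts the result to losses through the MDR; the damage-ratio weights per state are illustrative assumptions, not the values used in the study.

```python
# Sketch of the floor-area aggregation and loss estimate described above.
# The per-state damage ratios are assumed here purely for illustration.
DAMAGE_RATIO = {"DS1": 0.05, "DS2": 0.20, "DS3": 0.50, "DS4": 0.80, "DS5": 1.00}
REPLACEMENT_COST = 1000.0  # euro per square metre, as assumed in the text

def district_losses(blocks):
    """blocks: list of dicts with 'floor_area' (m2) and 'p' (damage state -> probability)."""
    damaged_area = {ds: 0.0 for ds in DAMAGE_RATIO}
    for b in blocks:
        for ds, prob in b["p"].items():
            damaged_area[ds] += prob * b["floor_area"]  # expected damaged floor area
    total_area = sum(b["floor_area"] for b in blocks)
    mdr = sum(DAMAGE_RATIO[ds] * a for ds, a in damaged_area.items()) / total_area
    return damaged_area, mdr, mdr * total_area * REPLACEMENT_COST
```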

3.8.3 Discussing the Differences
The observed differences in the damage assessment and losses are primarily attributed to the numerous uncertainties associated with the hazard models, to the way the uncertainties are treated and to the number of standard deviations accepted in each step of the analysis. Higher site amplification factors, associated for example with the median value plus one standard deviation, result in increased building damages and consequently economic losses. The way inelastic demand spectra are estimated and the difference between computed UHS and real earthquake records may also affect the final result (Fig. 3.18).
Fig. 3.17 Thessaloniki. Seismic risk per Sub-City District for a mean return period of 475 years in terms of the percentage of damaged floor area per damage state for (a) Hazard 1, (b) Hazard 2 and (c) Hazard 3

Despite the important influence of the hazard parameters, there are several other sources of uncertainty, related mainly to the methods used. The effect of some of the most influential parameters involved in the methodological chain of risk assessment will be further discussed for the most common building type (RC4.2ML) located in SCD 16. In particular, the effect of the following parameters will be discussed:
Table 3.3 Percentages of damaged floor area per damage state for hazard cases 1–3, for a mean return period of 475 years

Damage state | Hazard 1 (%) | Hazard 2 (%) | Hazard 3 (%)
No damage | 7.4 | 6.4 | 4.3
Slight [D1] | 17.6 | 12.9 | 11.1
Moderate [D2] | 54.4 | 43.9 | 42.2
Extensive [D3] | 18.9 | 22.4 | 20.3
Complete [D5] | 1.7 | 14.4 | 22.1
Table 3.4 Mean damage ratios for hazard cases 1–3, for a mean return period of 475 years

 | Hazard 1 (%) | Hazard 2 (%) | Hazard 3 (%)
MDR | 7.94 | 18.28 | 23.87

Table 3.5 Economic losses for hazard cases 1–3, for a mean return period of 475 years, assuming an average replacement cost equal to 1,000 €/m2 (in billions €)

 | Hazard 1 | Hazard 2 | Hazard 3
Economic losses | 2.7 | 6.2 | 8.1
Fig. 3.18 Estimation of the performance points: demand spectra (elastic) for Hazard 1, Hazard 2 and Hazard 3, and for a real record (Northridge, 1994); mean capacity curve of the most frequent building class (RC4.2ML) and the resulting performance points
• Selection of the reduction factors for the inelastic demand spectra.
• Effect of the duration of shaking.
• Methodology for the estimation of performance (EQRM versus N2).
• Uncertainties in the fragility curves.
Reduction factors of the inelastic demand spectra
One of the most debated issues of the CSM is the estimation of the inelastic demand spectrum for the evaluation of the final performance of the structure. When buildings are subjected to ground shaking they do not remain elastic and they dissipate hysteretic energy. Hence, the elastic demand curve should be appropriately reduced in order to incorporate the inelastic energy dissipation. Reduction of the spectral values to account for the hysteretic damping associated with the inelastic behaviour of structures may be carried out using different techniques, such as the ATC-40 methodology, inelastic design spectra or equivalent elastic over-damped spectra.
In the present study the ATC-40 methodology (ATC 1996) has been used in combination with the HAZUS methodology (FEMA and NIBS 1999; FEMA 2003). More specifically, damping-based spectral reduction factors were used, assuming different reduction factors associated with different periods of the ground motion. According to this pioneering method, the effective structural damping is the sum of the elastic damping and the hysteretic one. The hysteretic damping B_h is a function of the yield point (D_y, A_y) and the ultimate point (D_u, A_u) of the capacity curve (Eq. 3.2):

$$B_h = 63.5 \cdot \kappa \cdot \left( \frac{A_y}{A_u} - \frac{D_y}{D_u} \right) \qquad (3.2)$$
where κ is a degradation factor that defines the effective amount of hysteretic damping as a function of the earthquake duration and the energy-absorption capacity of the structure during cyclic earthquake loading. This factor depends on the duration of the ground shaking, while it is also a measure of the effectiveness of the hysteresis loops. When the κ factor is equal to unity, the hysteresis loops are full and stable. On the other hand, when the κ factor is equal to 0.3, the hysteretic behaviour of the building is poor and the loop area is substantially reduced. It is evident that for a real structure the selection of the value of κ is based on limited information, and hence it practically introduces several uncontrollable uncertainties. In the present study a κ factor equal to 0.333 is applied, assuming moderate duration and poor hysteretic behaviour according to ATC-40 (ATC 1996).
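A minimal sketch of this damping calculation, using Eq. 3.2 as reconstructed above, is shown below; the capacity-curve values are invented for the example and the elastic damping of 5 % is a typical assumption for RC structures.

```python
# Sketch of the effective-damping calculation of Eq. 3.2 (as reconstructed):
# hysteretic damping degraded by kappa, added to the elastic damping.
# All damping values are in percent of critical; inputs are illustrative.
def hysteretic_damping(a_y, d_y, a_u, d_u, kappa):
    return 63.5 * kappa * (a_y / a_u - d_y / d_u)

def effective_damping(b_elastic, a_y, d_y, a_u, d_u, kappa):
    return b_elastic + hysteretic_damping(a_y, d_y, a_u, d_u, kappa)

# kappa = 0.333: moderate duration and poor hysteretic behaviour (ATC-40).
print(effective_damping(5.0, a_y=0.20, d_y=1.2, a_u=0.30, d_u=6.0, kappa=0.333))
```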
Apart from the Newmark and Hall (1982) damping-based spectral reduction factors, there are several other strength or spectral reduction factors in the literature that one can use in order to estimate inelastic strength demands from elastic strength demands (Miranda and Bertero 1994). To illustrate the effect of the selection of different methods, we compared the inelastic displacement performance used herein according to HAZUS (assuming a κ factor equal to 0.333 and to 1) with other methods, namely those proposed by Newmark and Hall (1982) (as a function of ductility), Krawinkler and Nassar (1992), Vidic et al. (1994) and Miranda and Bertero (1994). Applying the above methods to one building type (e.g., RC4.2ML) subjected to Hazard 3 (new soil classification and soil amplification factors according to Pitilakis et al. 2013), it is observed (Table 3.6) that the method used herein gives the highest displacements compared to all the other methodologies (Fig. 3.19), a fact which further explains the over-predicted damages.
Table 3.6 Inelastic displacement demand computed with different methods and total physical damages for SCD16 and Hazard 3, for a mean return period of 475 years, in terms of the percentage of damage per damage state using various methodologies for the reduction of the elastic spectrum

Method | dPP (cm) | DS1 (%) | DS2 (%) | DS3 (%) | DS4 (%) | DS5 (%)
ATC-40/Hazus, κ = 0.333 (Hazus_k = 0.333) | 8.0 | 0.00 | 0.00 | 0.94 | 35.99 | 63.08
ATC-40/Hazus, κ = 1 (Hazus_k = 1) | 4.2 | 0.00 | 0.04 | 22.85 | 66.98 | 10.13
Newmark and Hall (1982) (NH) | 2.5 | 0.02 | 1.90 | 68.95 | 28.60 | 0.53
Krawinkler and Nassar (1992) (KN) | 2.2 | 0.10 | 5.01 | 78.54 | 16.21 | 0.14
Vidic et al. (1994) (VD) | 2.2 | 0.06 | 3.83 | 76.86 | 19.06 | 0.20
Miranda and Bertero (1994) (MB) | 1.8 | 0.31 | 9.99 | 81.14 | 8.53 | 0.04
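To contrast the damping-based reduction above with the ductility-based family of methods, the sketch below implements a reduction factor in the spirit of Newmark and Hall (1982): R = 1 at very short periods, R = sqrt(2μ − 1) in the intermediate range and R = μ at long periods. The corner periods and the log-linear transition used here are assumptions for illustration; the original references give the exact construction.

```python
# Sketch of a ductility-based strength-reduction factor in the spirit of
# Newmark and Hall (1982); corner periods and transition are assumed values.
import math

def reduction_factor_nh(period, mu, t_a=0.03, t_b=0.125, t_c=0.5):
    r_b = math.sqrt(2.0 * mu - 1.0)
    if period < t_a:
        return 1.0
    if period < t_b:  # assumed log-linear interpolation between 1 and r_b
        return r_b ** (math.log(period / t_a) / math.log(t_b / t_a))
    if period < t_c:
        return r_b    # "equal energy" range
    return mu         # "equal displacement" range

# The inelastic demand spectrum follows by dividing the elastic spectrum by R.
print(reduction_factor_nh(period=0.3, mu=3.0))
```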

Duration of shaking
The effect of the duration of shaking is introduced through the κ factor. It is supposed that the shorter the duration, the higher the damping value should be. Applying this approach to the study case, it is found that the effective damping for short earthquake duration is equal to 45 % while the effective damping for moderate earthquake duration is equal to 25 %. Such large differences underline the importance of a rigorous selection of this single parameter. Figure 3.20 presents the damages for SCD16 in terms of the percentage of damage per damage state considering short, moderate or long duration of the ground shaking.
EQRM versus N2 method (Fajfar 1999)
There are various methodologies that can be used for the vulnerability assessment and thus for the building damage estimation (e.g., the Capacity Spectrum Method or the N2 Method). The CSM (ATC-40 1996), which is also utilized in EQRM, evaluates the seismic performance of structures by comparing the structural capacity with the seismic demand curves. The key to this method is the reduction of the 5 %-damped elastic response spectra of the ground motion to take into account the inelastic behaviour of the structure under consideration, using appropriate damping-based reduction factors. This is the main difference of the EQRM methodology compared to the "N2" method (Fajfar 1999, 2000), in which the inelastic demand spectrum is obtained from code-based elastic design spectra using ductility-based reduction factors. The computed damages in SCD16 for Hazard 3 using the EQRM and N2 methodologies are depicted in Fig. 3.21. It is needless to comment on the differences.
Uncertainties in the Fragility Curves
Figure 3.22 shows the influence of the beta (β) factor of the fragility curves. EQRM considers a beta factor equal to 0.4. However, the selection of a different, equally logical value results in a very different damage level.
Fig. 3.19 Seismic risk (physical damages) in SCD16 for Hazard 3 and a mean return period of 475 years in terms of the percentage of damage per damage state using (a) the ATC-40 methodology combined with Hazus for κ = 0.333, (b) the ATC-40 methodology combined with Hazus for κ = 1, (c) Newmark and Hall (1982), (d) Krawinkler and Nassar (1992), (e) Vidic et al. (1994) and (f) Miranda and Bertero (1994)
Fig. 3.20 Computed damages for SCD16 for Hazard 3 and mean return period of 475 years in terms of the percentage of damage per damage state considering (a) short (κ = 0.5), (b) moderate (κ = 0.3) and (c) long (κ = 0.1) duration of the ground shaking

Fig. 3.21 Computed damages in SCD16 for Hazard 3 and mean return period of 475 years in terms of the percentage of damage per damage state using the EQRM and N2 methodologies

Fig. 3.22 Seismic risk for SCD16 for Hazard 3 and mean return period of 475 years in terms of the percentage of damage per damage state using EQRM with different β factors (β = 0.4 and β = 0.7)

3.9 Conclusive Remarks

The main conclusion that one could draw from this short and fragmented discussion is that we need a re-thinking of the whole analysis chain, from hazard assessment to consequences and loss assessment. The uncertainties involved in every step of the process are too important, and they strongly affect the final result. It is probably time to change the paradigm, because so far we just use the same ideas and models, trying to
improve them (often making them very complex), not always satisfactorily. Considering the starting point of the various models and approaches and the huge efforts made so far, the progress globally is rather modest. More importantly, in many cases the uncertainties are increased, not decreased, a fact that has serious implications for the reliability and efficiency of the models regarding the assessment of physical damages, in particular at large scale, e.g., city scale. Alienated end-users are more prone to serious mistakes and wrong decisions; wrong in the sense of extreme conservatism, high cost or unacceptable safety margins. It should be admitted, however, that our know-how has increased considerably, and hence there is the necessary scientific maturity for a qualitative rebound towards a new global paradigm reducing partial and global uncertainties.

Acknowledgments Special acknowledgment to Dr Jacopo Selva, Dr Sotiris Argyroudis and Professor Theodoros Chatzigogos for several breakthrough discussions we had on the nature and the practical treatment of uncertainties. Also to Dr Zafeiria Roumelioti, Dr Stavroula Fotopoulou and my PhD students Evi Riga, Anna Karatzetzou and Sotiria Karapetrou for helping me in preparing this lecture and paper.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.

References
Abrahamson NA (2006) Seismic hazard assessment: problems with current practice and future
developments. Proceedings of First European Conference on Earthquake Engineering and
Seismology, Geneva, September 2006, p 17
Alexander D (2000) Confronting catastrophe: new perspectives on natural disasters. Oxford
University Press, New York, p 282
Anastasiadis A, Raptakis D, Pitilakis K (2001) Thessaloniki’s detailed microzoning: subsurface
structure as basis for site response analysis. Pure Appl Geophys 158:2597–2633
ATC-40 (1996) Seismic evaluation and retrofit of concrete buildings. Applied Technology Coun-
cil, Redwood City
Bommer JJ, Abrahamson N (2006) Review article “Why do modern probabilistic seismic hazard
analyses often lead to increased hazard estimates?”. Bull Seismol Soc Am 96:1967–1977.
doi:10.1785/0120070018
Borcherdt RD, Glassmoyer G (1992) On the characteristics of local geology and their influence on
ground motions generated by the Loma Prieta earthquake in the San Francisco Bay region,
California. Bull Seismol Soc Am 82:603–641
Bruneau M, Chang S, Eguchi R, Lee G, O’Rourke T, Reinhorn A, Shinozuka M, Tierney K,
Wallace W, Von Winterfelt D (2003) A framework to quantitatively assess and enhance the
seismic resilience of communities. EERI Spectra J 19(4):733–752
CEN (European Committee for Standardization) (2004) Eurocode 8: Design of structures for
earthquake resistance, Part 1: General rules, seismic actions and rules for buildings. EN
1998–1:2004. European Committee for Standardization, Brussels
Cornell CA, Jalayer F, Hamburger RO, Foutch DA (2002) Probabilistic basis for 2000
SAC/FEMA steel moment frame guidelines. J Struct Eng 128(4):526–533
Crowley H, Colombi M, Crempien J, Erduran E, Lopez M, Liu H, Mayfield M, Milanesi (2010)
GEM1 Seismic Risk Report Part 1, GEM Technical Report, Pavia, Italy 2010–5
D’Ayala D, Kappos A, Crowley H, Antoniadis P, Colombi M, Kishali E, Panagopoulos G, Silva V
(2012) Providing building vulnerability data and analytical fragility functions for PAGER,
Final Technical Report, Oakland, California
Fajfar P (1999) Capacity spectrum method based on inelastic demand spectra. Earthq Eng Struct
Dyn 28(9):979–993
Fajfar P (2000) A nonlinear analysis method for performance-based seismic design. Earthq
Spectra 16(3):573–592
Fajfar P, Gaspersic P (1996) The N2 method for the seismic damage analysis for RC buildings.
Earthq Eng Struct Dyn 25:23–67
FEMA, NIBS (1999) HAZUS99 User and technical manuals. Federal Emergency Management
Agency Report: HAZUS 1999, Washington DC
FEMA (2003) HAZUS-MH Technical Manual. Federal Emergency Management Agency,
Washington, DC
FEMA 273 (1996) NEHRP guidelines for the seismic rehabilitation of buildings — ballot version.
U.S. Federal Emergency Management Agency, Washington, DC
FEMA 356 (2000) Prestandard and commentary for the seismic rehabilitation of buildings.
U.S. Federal Emergency Management Agency, Washington, DC
Freeman SA (1998) The capacity spectrum method as a tool for seismic design. In: Proceedings of
the 11th European Conference on Earthquake Engineering, Paris
Giardini D, Woessner J, Danciu L, Crowley H, Cotton F, Gruenthal G, Pinho R, Valensise G,
Akkar S, Arvidsson R, Basili R, Cameelbeck T, Campos-Costa A, Douglas J, Demircioglu MB,
Erdik M, Fonseca J, Glavatovic B, Lindholm C, Makropoulos K, Meletti F, Musson R,
Pitilakis K, Sesetyan K, Stromeyer D, Stucchi M, Rovida A (2013) Seismic Hazard Harmo-
nization in Europe (SHARE): Online Data Resource. doi:10.12686/SED-00000001-SHARE
Kappos AJ, Panagopoulos G, Penelis G (2006) A hybrid method for the vulnerability assessment
of R/C and URM buildings. Bull Earthq Eng 4(4):391–413
Krawinkler H, Miranda E (2004) Performance-based earthquake engineering. In: Bozorgnia Y,
Bertero VV (eds) Earthquake engineering: from engineering seismology to performance-based
engineering, chapter 9. CRC Press, Boca Raton, pp 9.1–9.59
Krawinkler H, Nassar AA (1992) Seismic design based on ductility and cumulative damage
demands and capacities. In: Fajfar P, Krawinkler H (eds) Nonlinear seismic analysis and
design of reinforced concrete buildings. Elsevier Applied Science, New York, pp 23–40
LessLoss (2007) Risk mitigation for earthquakes and landslides, Research Project, European
Commission, GOCE-CT-2003-505448
MacKenzie D (1990) Inventing accuracy: a historical sociology of nuclear missile guidance. MIT
Press, Cambridge
Mackie K, Stojadinovic B (2003) Seismic demands for performance-based design of bridges,
PEER Report 2003/16. Pacific Earthquake Engineering Research Center, University of Cali-
fornia, Berkeley
Mackie K, Stojadinovic B (2005) Fragility basis for California highway overpass bridge seismic
decision making. Pacific Earthquake Engineering Research Center, University of California,
Berkeley
Mehanny SSF (2009) A broad-range power-law form scalar-based seismic intensity measure. Eng
Struct 31:1354–1368
Miranda E, Bertero V (1994) Evaluation of strength reduction factors for earthquake-resistant
design. Earthq Spectra 10(2):357–379
Newmark NM, Hall WJ (1982) Earthquake spectra and design. Earthquake Engineering Research
Institute, EERI, Berkeley
Padgett JE, Nielson BG, DesRoches R (2008) Selection of optimal intensity measures in proba-
bilistic seismic demand models of highway bridge portfolios. Earthq Eng Struct Dyn
37:711–725
Pinto PE (2014) Modeling and propagation of uncertainties. In: Pitilakis K, Crowley H, Kaynia A
(eds) SYNER-G: typology definition and fragility functions for physical elements at seismic
risk, vol 27, Geotechnical, geological and earthquake engineering. Springer, Dordrecht. ISBN
978-94-007-7872-6
Pitilakis K, Riga E, Anastasiadis A (2012) Design spectra and amplification factors for Eurocode
8. Bull Earthq Eng 10:1377–1400. doi:10.1007/s10518-012-9367-6
Pitilakis K, Riga E, Anastasiadis A (2013) New code site classification, amplification factors and
normalized response spectra based on a worldwide ground-motion database. Bull Earthq Eng
11(4):925–966. doi:10.1007/s10518-013-9429-4
Pitilakis K, Crowley H, Kaynia A (eds) (2014a) SYNER-G: typology definition and fragility
functions for physical elements at seismic risk, vol 27, Geotechnical, geological and earth-
quake engineering. Springer, Dordrecht. ISBN 978-94-007-7872-6
Pitilakis K, Karapetrou ST, Fotopoulou SD (2014b) Consideration of aging and SSI effects on
seismic vulnerability assessment of RC buildings. Bull Earthq Eng. doi:10.1007/s10518-013-
9575-8
REAKT (2014) Strategies and tools for real time earthquake and risk reduction. Research Project,
European Commission, Theme: ENV.2011.1.3.1-1, Grant agreement: 282862. http://www.
reaktproject.eu
RISK-UE (2004) An advanced approach to earthquake risk scenarios with applications to different
European towns. Research Project, European Commission, DG XII 2001–2004, CEC: EVK4-
CT-2000-00014
Robinson D, Fulford G, Dhu T (2005) EQRM: Geoscience Australia’s earthquake risk model
technical manual Version 3.0. Geoscience Australia Record 2005/01
Selva J, Argyroudis S, Pitilakis K (2013) Impact on loss/risk assessments of inter-model variability
in vulnerability analysis. Nat Hazards 67(2):723–746. doi:10.1007/s11069-013-0616-z
SHARE (2013) Seismic Hazard Harmonization in Europe. Research Project, European Commis-
sion, ENV.2008.1.3.1.1, Grant agreement: 226769. www.share-eu.org
SYNER-G (2013) Systemic seismic vulnerability and risk analysis for buildings, lifeline networks
and infrastructures safety gain. Research Project, European Commission, ENV-2009-1-244061
Vamvatsikos D, Cornell CA (2002) Incremental dynamic analysis. Earthq Eng Struct Dyn
31:491–514
Vidic T, Fajfar P, Fischinger M (1994) Consistent inelastic design spectra: strength and displace-
ment. Earthq Eng Struct Dyn 23:507–521
Chapter 4
Variability and Uncertainty in Empirical Ground-Motion Prediction for Probabilistic Hazard and Risk Analyses

Peter J. Stafford
Department of Civil & Environmental Engineering, Imperial College London, London, UK
e-mail: [email protected]
Abstract The terms aleatory variability and epistemic uncertainty mean different things to people who routinely use them within the fields of seismic hazard and risk analysis. This state is not helped by the repetition of loosely framed generic definitions that are actually inaccurate. The present paper takes a closer look at the components of total uncertainty that contribute to ground-motion modelling in hazard and risk applications. The sources and nature of uncertainty are discussed and it is shown that the common approach to deciding what should be included within hazard and risk integrals and what should be pushed into logic-tree formulations warrants reconsideration. In addition, it is shown that current approaches to the generation of random fields of ground motions for spatial risk analyses are incorrect and a more appropriate framework is presented.
4.1 Introduction
Over the past few decades a very large number of empirical ground-motion models
have been developed for use in seismic hazard and risk applications throughout the
world, and these contributions to engineering seismology collectively represent a
significant body of literature. However, if one were to peruse this literature it would,
perhaps, not be obvious what the actual purpose of a ground-motion model is. A
typical journal article presenting a new ground-motion model starts with a brief
introduction, proceeds to outlining the dataset that was used, presents the functional
form that is used for the regression analysis along with the results of this analysis,
shows some residual plots and comparisons with existing models and then wraps up
with some conclusions. In a small number of cases this pattern is broken by the
authors giving some attention to the representation of the standard deviation of the
model. Generally speaking, the emphasis is very much upon the development and

behaviour of the median predictions of these models and the treatment of the
standard deviation (and its various components) is very minimal in comparison.
If it is reasonable to suspect that this partitioning of effort in presenting the model
reflects the degree of effort that went into developing the model then there are two
important problems with this approach: (1) the parameters of the model for the
median predictions are intrinsically linked to the parameters that represent the
standard deviation – they cannot be decoupled; and (2) it is well known from
applications of ground-motion models in hazard and risk applications that the
standard deviation exerts at least as much influence as the median predictions for
return periods of greatest interest.
The objective of the present article is to work against this trend by focussing
almost entirely upon the uncertainty associated with ground-motion predictions.
Note that what is actually meant by ‘uncertainty’ will be discussed in detail in
subsequent sections, but the scope includes the commonly referred to components
of aleatory variability and epistemic uncertainty. Furthermore, the important con-
siderations that exist when one moves from seismic hazard analysis into seismic
risk analysis will also be discussed.
As noted in the title of the article, the focus herein is upon empirical ground-motion models; discussion of the uncertainties associated with stochastic simulation-based models or seismological models is not within the present scope. That said, some of the concepts that are dealt with herein are equally applicable to ground-motion models in a more general sense.
While at places in the article reference will be made to peak ground acceleration or spectral acceleration, the issues discussed here are not limited to these intensity measures. For the particular examples that are presented, although the extent of various effects will be tied to the choice of intensity measure, the emphasis is upon the underlying concept rather than the numerical results.

4.2 Objective of Ground-Motion Prediction
In both hazard and risk applications the objective is usually to determine how
frequently a particular state is exceeded. For hazard, this state is commonly a level
of an intensity measure at a site, while for risk applications the state could be related to a level of demand on a structure, a level of damage induced by this demand, or the
cost of this damage and its repair, among others. In order to arrive at estimates of
these rates (or frequencies) of exceedance it is not currently possible to work with
empirical data related to the state of interest as a result of insufficient empirical
constraint. For example, if one wished to compute an estimate of the annual rate at
which a level of peak ground acceleration is exceeded at a site then an option in an
ideal world would be to assume that the seismogenic process is stationary and that
what has happened in the past is representative of what might happen in the future.
On this basis, counting the number of times the state was exceeded and dividing this
by the temporal length of the observation period would provide an estimate of the
exceedance rate. Unfortunately, there is not a location on the planet for which this
approach would yield reliable estimates for return periods of common interest.
To circumvent the above problem hazard and risk analyses break down the
process of estimating rates of ground-motions into two steps: (1) estimate the
rates of occurrence of particular earthquake events; and (2) estimate the rate of
exceedance of a particular state of ground motion given this particular earthquake
event. The important point to make here is that within hazard and risk applications
the role of an empirical ground-motion model is to enable this second step in which
the rate of exceedance of a particular ground-motion level is computed for a given
earthquake scenario. The manner in which these earthquake scenarios are (or can
be) characterised has a strong impact upon how the ground-motion models can be
developed. For example, if the scenario can only be characterised by the magnitude
of the event and its distance from the site then it is only meaningful to develop the
ground-motion model as a function of these variables.
To make this point more clear, consider the discrete representation of the
standard hazard integral for a site influenced by a single seismic source:

$$\lambda_{Y>y^*} = \nu \sum_{k=1}^{K} \sum_{j=1}^{J} P\left[ Y > y^* \mid m_j, r_k \right] P\left[ M = m_j,\, R = r_k \right] \qquad (4.1)$$

where Y is a random variable representing the ground-motion measure of interest, y* is a particular value of this measure, ν is the annual rate of occurrence of earthquakes that have magnitudes greater than some minimum value of interest, and M and R generically represent magnitude and distance, respectively. If we factor out the constant parameter ν, then we have an equation in terms of probabilities and we can see that the objective is to find:

$$P[Y > y^*] = \frac{\lambda_{Y>y^*}}{\nu} = \sum_{k=1}^{K} \sum_{j=1}^{J} P\left[ Y > y^* \mid m_j, r_k \right] P\left[ M = m_j,\, R = r_k \right] = \int_{y^*}^{\infty} f_Y(y)\, dy = \iint \left[ \int_{y^*}^{\infty} f_{Y|M,R}(y \mid m, r)\, dy \right] f_{M,R}(m, r)\, dm\, dr \qquad (4.2)$$

When we discuss the uncertainty associated with ground-motion models it is important to keep this embedding framework in mind. The framework shows that the role of a ground-motion model is to define the distribution f_{Y|M,R}(y|m,r) of levels of motion that can occur for a given earthquake scenario, defined in this case by m and r. The uncertainty that is ultimately of interest to us relates to the estimate of P[Y > y*], and this depends upon the uncertainty in the ground-motion prediction as well as the uncertainty in the definition of the scenario itself.
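As a minimal sketch of this framework, the discrete form of Eq. (4.2) can be written as a weighted sum over scenarios; the function below is generic, and any ground-motion model supplying the conditional exceedance probability can be plugged in.

```python
# Sketch of the discrete form of Eq. (4.2): P[Y > y*] as a weighted sum of
# conditional exceedance probabilities over magnitude-distance scenarios.
def p_exceed(y_star, scenarios, p_exceed_given_scenario):
    """scenarios: iterable of (m, r, p_mr) with p_mr = P[M = m, R = r]."""
    return sum(p_mr * p_exceed_given_scenario(y_star, m, r)
               for m, r, p_mr in scenarios)

# The annual rate of Eq. (4.1) is then nu * p_exceed(...).
```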
For seismic hazard analysis, the ground-motion model alone is sufficient to
provide the univariate distribution of the intensity measure for a given earthquake
scenario. However, for seismic risk applications, a typical ground-motion model
may need to be coupled with a model for spatial, and potentially spectral,
correlations in order to define a multivariate conditional distribution of motions at multiple locations (and response periods) over a region.
At a given site, both in hazard and risk applications, the conditional distribution of ground motions (assuming spectral acceleration as the intensity measure) given a scenario is assumed to be lognormal and is defined as:

$$\ln Sa \sim N\!\left( \mu_{\ln Sa},\, \sigma^2_{\ln Sa} \right) \qquad (4.3)$$

where the moments of the distribution are specific to the scenario in question, i.e., μ_lnSa ≡ μ_lnSa(m, r, ...) and σ_lnSa ≡ σ_lnSa(m, r, ...). The probability of exceeding a given level of motion for a scenario is therefore defined using the cumulative standard normal distribution Φ(z):

$$P\left[ Sa > Sa^* \mid m, r, \ldots \right] = 1 - \Phi\!\left( \frac{\ln Sa^* - \mu_{\ln Sa}}{\sigma_{\ln Sa}} \right) \qquad (4.4)$$

The logarithmic mean μ_lnSa and standard deviation σ_lnSa for a scenario would differ between hazard and risk analyses: in the former case one deals with the marginal distribution of the motions conditioned upon the given scenario, while in the latter case one works with the conditional distribution of the motions, conditioned upon both the given scenario and the presence of a particular event term for the scenario. That is, in portfolio risk analysis one works at the level of inter-event and intra-event variability, while for hazard analysis one uses the total variability.
An empirical ground-motion model must provide values of both the logarithmic mean μ_lnSa and the standard deviation σ_lnSa in order to enable the probability calculations to be made, and these values must be defined in terms of the predictor variables M and R, among potentially others. Both components of the distribution directly influence the computed probabilities, but can exert greater or lesser influence upon the probability depending upon the particular value of ln Sa*.
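A sketch of Eq. (4.4) in code form makes the role of the two moments explicit; the scenario moments used in the example call are invented.

```python
# Sketch of Eq. (4.4): conditional exceedance probability from the
# scenario-specific logarithmic mean and standard deviation of Sa.
from math import log
from statistics import NormalDist

def p_sa_exceed(sa_star, mu_ln_sa, sigma_ln_sa):
    return 1.0 - NormalDist().cdf((log(sa_star) - mu_ln_sa) / sigma_ln_sa)

print(p_sa_exceed(sa_star=0.3, mu_ln_sa=log(0.1), sigma_ln_sa=0.6))  # illustrative
```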

4.3 Impact of Bias in Seismic Hazard and Risk
Equation (4.4) is useful to enable one to understand how the effects of bias in ground-motion models would influence the contributions to hazard and risk estimates. The computation of probabilities of exceedance is central to both cases. Imagine that we assume that any given ground-motion model is biased for a particular scenario, in that the predicted median spectral accelerations differ from an unknown true value by a factor γ_μ and that the estimate of the aleatory variability also differs from the true value by a factor of γ_σ. To understand the impact of these biases upon the probability computations we can express Eq. (4.4) with explicit
inclusion of these bias factors, as in Eq. (4.5). Now we recognise that the probability that we compute is an estimate and denote this as P̂.

$$\hat{P}\left[ Sa > Sa^* \mid m, r, \ldots \right] = 1 - \Phi\!\left( \frac{\ln Sa^* - \ln \gamma_\mu - \mu_{\ln Sa}}{\gamma_\sigma\, \sigma_{\ln Sa}} \right) \qquad (4.5)$$

Fig. 4.1 Illustration of the effect that a bias in the logarithmic standard deviation has upon the computation of probabilities of exceedance. The left panel corresponds to γ_σ = 2 while the right panel shows γ_σ = 1/2

This situation is actually much closer to reality than Eq. (4.4). For many scenarios the predictions of motions will be biased by some unknown degree and it is important to understand how sensitive our results are to these potential biases. The influence of the potential bias in the logarithmic standard deviation is shown in Fig. 4.1. The case shown here corresponds to an exaggerated example in which the bias factor is either γ_σ = 2 or γ_σ = 1/2.
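The epsilon-dependent behaviour of Fig. 4.1 can be reproduced with a few lines; the sketch below implements Eq. (4.5) and shows that inflating the standard deviation (γ_σ = 2) raises exceedance probabilities for positive-epsilon targets and lowers them for negative-epsilon targets. The scenario moments are illustrative.

```python
# Sketch of Eq. (4.5) with bias factors gamma_mu and gamma_sigma; the loop
# reproduces the qualitative epsilon-dependence of Fig. 4.1.
from math import exp, log
from statistics import NormalDist

def p_hat(sa_star, mu, sigma, gamma_mu=1.0, gamma_sigma=1.0):
    z = (log(sa_star) - log(gamma_mu) - mu) / (gamma_sigma * sigma)
    return 1.0 - NormalDist().cdf(z)

mu, sigma = log(0.1), 0.6                # illustrative scenario moments
for eps in (-2.0, 0.0, 2.0):             # target levels at fixed epsilon
    sa_star = exp(mu + eps * sigma)
    print(eps, p_hat(sa_star, mu, sigma), p_hat(sa_star, mu, sigma, gamma_sigma=2.0))
```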
What sort of bias could one expect to be reasonable for a given ground-motion
model? This is a very difficult question to answer in any definitive way, but one way
to get a feel for this is to compare the predictions of both median logarithmic
motions and logarithmic standard deviations for two generations of modern ground-
motion models. In particular, the very recent release of the models from the second
phase of the PEER NGA project (NGA West 2) provides one with the ability to
compare the predictions from the NGA West 1 and NGA West 2 studies.
Figures 4.2 and 4.3 show these estimates of the possible extent of bias for the
ground-motion models of Campbell and Bozorgnia (2008, 2014) and Chiou and
Youngs (2008, 2014). It should be noted that the point here is not that these models
are necessarily biased, but that it is reasonable to assume that the 2014 versions are
less biased than their 2008 counterparts. Therefore, the typical extent of bias that has existed through the use of the 2008 NGA models over the past few years can be characterised through plots like those shown in Figs. 4.2 and 4.3. However, in order to see how these differences in predicted moments translate into differences in hazard estimates, the following section develops hazard results for a simple academic example.

Fig. 4.2 Example bias factors computed as the ratios between predictions of two generations of models from the same developers. The left panel shows ratios between the medians, Sa(T = 0.01 s), of Campbell and Bozorgnia (2014, 2008) – 2014:2008, while the right panel is for Chiou and Youngs (2014, 2008) – 2014:2008

4.3.1 Probabilistic Seismic Hazard Analysis

A probabilistic seismic hazard analysis is conducted using the ground-motion
models of Campbell and Bozorgnia (2008, 2014) as well as those of Chiou and
Youngs (2008, 2014). The computations are conducted for a hypothetical case of a
site located in the centre of a circular source. The seismicity is described by a
doubly-bounded exponential distribution with a b-value of unity and minimum and
maximum magnitudes of 5 and 8 respectively. The maximum distance considered
in the hazard integrations is 100 km. For this exercise, the depths to the top of the
ruptures for events of all magnitudes are assumed to be the same and it is also
assumed that the strike is perpendicular to the line between the site and the closest
point on the ruptures. All ruptures are assumed to be for strike-slip events and the
site itself is characterised by an average shear-wave velocity over the uppermost
30 m of 350 m/s. Note that these assumptions are equivalent to ignoring finite
source dimensions and working with a point-source representation. For the
purposes of this exercise, this departure from a more realistic representation does not influence the point that is being made.

Fig. 4.3 Example bias factors for the logarithmic standard deviations. The left panel shows ratios between the σ_lnSa predictions of Campbell and Bozorgnia (2014, 2008) – 2014:2008, while the right panel shows the ratios for Chiou and Youngs (2014, 2008) – 2014:2008. The standard deviations are for a period of 0.01 s
Hazard curves for spectral acceleration at a response period of 0.01 s are
computed through the use of the standard hazard integral in Eq. (4.6).
$$\lambda_{Y>y^*} = \sum_{i=1} \nu_i \iint P\!\left[ Y > y^* \mid m, r \right] f_{M,R}(m, r)\, dm\, dr \qquad (4.6)$$

For this particular exercise we have just one source (i = 1) and we also appreciate that ν_i simply scales the hazard curve linearly, and so using ν_1 = 1 enables us to convert the annual rates of exceedance λ_{Y>y*} directly into annual probabilities of exceedance.
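The full calculation for this hypothetical setup is easy to sketch. In the code below the magnitude pdf is the doubly-bounded exponential (b = 1, Mw 5–8), the distance pdf for a site at the centre of a circular source with uniform seismicity is f_R(r) = 2r/R_max², and the ground-motion model is a deliberately simple stand-in whose coefficients are invented, since the published NGA models are too long to reproduce here.

```python
# Sketch of the hypothetical hazard computation: circular source of radius
# 100 km centred on the site, doubly-bounded exponential magnitudes and
# nu = 1, so the result is directly an annual probability of exceedance.
# The gmm() coefficients are invented placeholders, not a published model.
import math
from statistics import NormalDist

B = math.log(10.0)                         # beta = b ln(10), with b = 1
M_MIN, M_MAX, R_MAX = 5.0, 8.0, 100.0

def f_m(m):                                # doubly-bounded exponential pdf
    return B * math.exp(-B * (m - M_MIN)) / (1.0 - math.exp(-B * (M_MAX - M_MIN)))

def f_r(r):                                # site at the centre of a circular source
    return 2.0 * r / R_MAX ** 2

def gmm(m, r):                             # placeholder ground-motion model
    return -1.0 + 0.9 * m - 1.3 * math.log(r + 10.0), 0.65  # (mu, sigma) in ln units

def annual_p_exceed(sa_star, dm=0.05, dr=1.0):
    phi, total = NormalDist().cdf, 0.0
    m = M_MIN + dm / 2.0
    while m < M_MAX:
        r = dr / 2.0
        while r < R_MAX:
            mu, sigma = gmm(m, r)
            p = 1.0 - phi((math.log(sa_star) - mu) / sigma)
            total += p * f_m(m) * f_r(r) * dm * dr
            r += dr
        m += dm
    return total
```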
Hazard curves computed according to this equation are shown in Fig. 4.4. The
curves show that for long return periods the hazard curves predicted by both models
of Campbell and Bozorgnia are very similar while at short return periods there are
significant differences between the two versions of their model. From consideration
of Figs. 4.2 and 4.3 we can see that the biggest differences between the two versions
of the Campbell and Bozorgnia model for the scenarios of relevance to this exercise
(T = 0.01 s and V_S,30 = 350 m/s) are at small magnitudes between roughly Mw 5.0 and Mw 5.5, where the new model predicts significantly smaller median
motions but also has a much larger standard deviation for these scenarios. As will
be shown shortly, both of these effects lead to a reduction in the hazard estimates for
these short return periods.
In contrast, the two versions of the Chiou and Youngs model compare
favourably for the short return periods but then exhibit significant differences as
one moves to longer return periods. Again making use of Figs. 4.2 and 4.3, we can see that the latest version of their model provides a relatively consistent, yet mild (γ_μ ≈ 1.0–1.1), increase in motions over the full magnitude-distance space considered here and that we have a 15–20 % increase in the standard deviation over this full magnitude-distance space. Again, from the developments that follow, we should expect to observe the differences between the hazard curves at these longer return periods.

Fig. 4.4 Hazard curves computed for the ground-motion models of Campbell and Bozorgnia (2008, 2014) and Chiou and Youngs (2008, 2014)
We have just seen how bias factors for the logarithmic mean γ_μ and logarithmic standard deviation γ_σ can influence the computation of estimates of the probability of exceedance for a given scenario. The hazard integral in Eq. (4.6) is simply a weighted sum over all relevant scenarios, as can be seen from the approximation (which ceases to be an approximation in the limit as Δm, Δr → 0):
$$\lambda_{Y>y^*} \approx \sum_{i=1} \nu_i \sum_j \sum_k P\!\left[ Y > y^* \mid m_j, r_k \right] f_{M,R}\!\left(m_j, r_k\right) \Delta m\, \Delta r \qquad (4.7)$$

If we now accept that when using a ground-motion model we will only obtain an
estimate of the annual rate of exceedance we can write:
$$\hat{\lambda}_{Y>y^*} \approx \sum_{i=1} \nu_i \sum_j \sum_k \hat{P}\!\left[ Y > y^* \mid m_j, r_k \right] f_{M,R}\!\left(m_j, r_k\right) \Delta m\, \Delta r \qquad (4.8)$$

where now this expression is a function of the bias factors γ_μ and γ_σ for the logarithmic motions for every scenario. One can consider the effects of systematic bias from the ground-motion model expressed through factors modifying the conditional mean and standard deviation for a scenario. The biases in this case hold equally for all scenarios (although this can be relaxed). At least for the standard deviation, this assumption is not bad given the distributions shown in Fig. 4.3.
Therefore, for each considered combination of m_j and r_k we can define our estimate of the probability of exceeding y* from Eq. (4.5). Note that the bias in the median ground motion is represented by a factor γ_μ multiplying the median motion, Ŝa = γ_μ Sa. This translates to an additive contribution to the logarithmic mean, with μ_lnSa + ln γ_μ representing the biased median motion.
To understand how such systematic biases could influence hazard estimates we
can compute the partial derivatives with respect to these bias factors, considering
one source of bias at a time.

$$\frac{\partial \hat{\lambda}}{\partial \gamma_\mu} \approx \sum_{i=1} \nu_i \sum_j \sum_k \frac{\partial}{\partial \gamma_\mu} \left\{ 1 - \Phi\!\left( \frac{\ln y^* - \ln \gamma_\mu - \mu}{\sigma} \right) \right\} f_{M,R}\!\left(m_j, r_k\right) \Delta m\, \Delta r \qquad (4.9)$$

and

$$\frac{\partial \hat{\lambda}}{\partial \gamma_\sigma} \approx \sum_{i=1} \nu_i \sum_j \sum_k \frac{\partial}{\partial \gamma_\sigma} \left\{ 1 - \Phi\!\left( \frac{\ln y^* - \mu}{\gamma_\sigma \sigma} \right) \right\} f_{M,R}\!\left(m_j, r_k\right) \Delta m\, \Delta r \qquad (4.10)$$

which can be shown to be equivalent to:

$$\frac{\partial \hat{\lambda}}{\partial \gamma_\mu} = \sum_{i=1} \nu_i \iint \frac{1}{\gamma_\mu \sigma \sqrt{2\pi}} \exp\!\left[ -\frac{\left( \ln \gamma_\mu + \mu - \ln y^* \right)^2}{2\sigma^2} \right] f_{M,R}(m, r)\, dm\, dr \qquad (4.11)$$

and

$$\frac{\partial \hat{\lambda}}{\partial \gamma_\sigma} = \sum_{i=1} \nu_i \iint \frac{\ln y^* - \mu}{\gamma_\sigma^2 \sigma \sqrt{2\pi}} \exp\!\left[ -\frac{\left( \mu - \ln y^* \right)^2}{2 \gamma_\sigma^2 \sigma^2} \right] f_{M,R}(m, r)\, dm\, dr \qquad (4.12)$$
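The sketch below evaluates the single-scenario integrands of Eqs. (4.11) and (4.12) at the unbiased point γ_μ = γ_σ = 1; the full derivatives follow by weighting with f_{M,R}(m, r) and summing over scenarios, exactly as in the hazard sum above.

```python
# Sketch of the integrands of Eqs. (4.11) and (4.12) for a single scenario.
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)

def d_p_d_gamma_mu(ln_y, mu, sigma, gamma_mu=1.0):
    z = math.log(gamma_mu) + mu - ln_y
    return math.exp(-z ** 2 / (2.0 * sigma ** 2)) / (gamma_mu * sigma * SQRT_2PI)

def d_p_d_gamma_sigma(ln_y, mu, sigma, gamma_sigma=1.0):
    z = mu - ln_y
    return ((ln_y - mu) / (gamma_sigma ** 2 * sigma * SQRT_2PI)
            * math.exp(-z ** 2 / (2.0 * gamma_sigma ** 2 * sigma ** 2)))

# Sign check: the sigma-derivative is positive when ln_y > mu (positive
# epsilon) and negative when ln_y < mu, consistent with Figs. 4.1 and 4.5.
```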

When these expressions are evaluated for the hypothetical scenario that we have
considered we obtain partial derivatives as shown in Fig. 4.5. The curves in this
figure show that the sensitivity of the hazard curve to changes in the mean pre-
dictions for the scenarios is most significant when there is relatively weak influence
from the standard deviation. That is, when the hazard curve is dominated by
contributions with epsilon values near zero then biases in the mean predictions
matter most strongly.
The scaling of the partial derivatives with respect to the bias in the standard
deviation is more interesting, and reflects the schematic result previously shown in
Fig. 4.1. We see that we have positive gradients for the larger spectral accelerations
while we have negative gradients for weak motions. These ranges effectively represent the positive and negative epsilon ranges that were shown explicitly in the previous section. However, in this case we must recognise that, when considering the derivative of the hazard curve, we have many different contributions for epsilon values corresponding to a given target level of the intensity measure y*, and that the curves shown in Fig. 4.5 reflect a weighted average of the individual curves that have the form shown in Fig. 4.1.

Fig. 4.5 Partial derivatives of the hazard curves with respect to the bias factors γ_μ and γ_σ
The utility of the partial derivative curves shown in Fig. 4.5 is that they enable one to appreciate over which range of intensity measures (and hence return periods) changes to either the median motion or the logarithmic standard deviation will have the greatest impact upon the shape of the hazard curves. Note that with respect to the typical hazard curves shown in Fig. 4.4, these derivatives should be considered as being in some sense orthogonal to the hazard curves. That is, they are not representing the slope of the hazard curve (which is closely related to the annual rate of occurrence of a given level of ground motion), but rather saying, for any given level of motion, how sensitive the annual rate of exceedance is to a change in the logarithmic mean and standard deviation. It is clear from Fig. 4.4 that a change in the standard deviation itself has a strong impact upon the actual nature of the hazard curve at long return periods, whereas the sensitivity indicated in Fig. 4.5 is low for the corresponding large motions. However, it should be borne in mind that these partial derivatives are ∂λ̂/∂γ_i rather than, say, ∂ln λ̂/∂γ_i, and that the apparently low sensitivity implied by Fig. 4.5 should be viewed in terms of the fact that small changes Δλ̂ are actually very significant when the value of λ̂ itself is very small over this range.
Fig. 4.6 Ratios of the partial derivatives with respect to the logarithmic standard deviation and mean. Vertical lines are shown to indicate the commonly encountered 475 and 2,475 year return periods
Another way of making use of these partial derivatives is to compare the relative
sensitivity of the hazard curve to changes in the logarithmic mean and standard
deviation. This relative sensitivity can be computed by taking the ratio of the partial
derivatives with respect to both the standard deviation and the mean and then seeing
the range of return periods (or target levels of the intensity measure) for which one
or the other partial derivative dominates. Ratios of this type are computed for this
hypothetical scenario and are shown in Fig. 4.6. When ratios greater than one are
encountered the implication is that the hazard curves are more sensitive to changes
in the standard deviation than they are to changes in the mean. As can be seen from
Fig. 4.6, this situation arises as the return period increases. However, for the
example shown here (which is fairly typical of active crustal regions in terms of
the magnitude-frequency distribution assumed) the influence of the standard devi-
ation tends to be at least as important as the median, if not dominant, at return
periods of typical engineering interest (on the order of 475 years or longer).
The example just presented has highlighted that ground-motion models must
provide estimates of both the logarithmic mean and standard deviation for any
given scenario, and that in many cases the ability to estimate the standard deviation
is at least as important as the estimate of the mean. Historically, however, the
development of ground-motion models has focussed overwhelmingly upon the
scaling of median predictions, with many people (including some ground-motion
model developers) still viewing the standard deviation as being some form of error
in the prediction of the median rather than being an important parameter of the
ground-motion distribution that is being predicted. The results presented for this
example here show that ground-motion model developers should shift the balance
of attention more towards the estimation of the standard deviation than what has
historically occurred.

4.3.2 Probabilistic Seismic Risk Analysis
When one moves to seismic risk analyses the treatment of the aleatory variability
can differ significantly. In the case that a risk analysis is performed for a single
structure the considerations of the previous section remain valid. However, for
portfolio risk assessment it becomes important to account for the various correlations that exist within ground-motion fields for a given earthquake scenario. These correlations are required for developing the conditional ground-motion fields that
correspond to a multivariate normal distribution.
The multivariate normal distribution represents the conditional random field of
relative ground-motion levels (quantified through normalised intra-event residuals)
conditioned upon the occurrence of an earthquake and the fact that this event will
generate seismic waves with a source strength that may vary from the expected
strength. The result of this source deviation is that the motions registered at all sites during the event share this common deviation in source strength. This
event-to-event variation that systematically influences all sites is represented in
ground-motion models by the inter-event variability, while the conditional variation
of motions at a given site is given by the intra-event variability.
For portfolio risk analysis it is therefore important to decompose the total
aleatory variability in ground-motions into a component that reflects the source
strength (the inter-event variability) and a component that reflects the site-specific
aleatory variability (the intra-event variability). It should also be noted in passing
that this is not strictly equivalent to the variance decomposition that is performed
using mixed effects models in regression analysis.
When one considers ground-motion models that have been developed over
recent years it is possible to appreciate that some significant changes have occurred
to the value of the total aleatory variability that is used in hazard analysis, but also
to the decomposition of this total into the inter-event and intra-event components.
For portfolio risk analysis, this decomposition matters. To demonstrate why this is
the case, Fig. 4.7 compares conditional ground-motion fields that have been sim-
ulated for the 2011 Christchurch Earthquake in New Zealand. In each case shown,
the inter-event variability is assumed to be a particular fraction of the total vari-
ability and this fraction is allowed to range from 0 to 100 %. As one moves from a
low to a high fraction it is clear that the within event spatial variation of the ground-
motions reduces.
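A minimal simulation of this effect is sketched below: the inter-event share of a fixed total variance is varied, a single event term is drawn per field, and the intra-event residuals are drawn from a spatially correlated multivariate normal distribution. The grid, the total standard deviation and the exponential correlation range are assumed for illustration and are not the values behind Fig. 4.7.

```python
# Sketch: residual fields with the inter-event variance set to a chosen
# fraction of the total. Grid, sigma_total and range b are illustrative.
import numpy as np

rng = np.random.default_rng(1)
xg, yg = np.meshgrid(np.arange(0.0, 50.0, 2.0), np.arange(0.0, 50.0, 2.0))
pts = np.column_stack([xg.ravel(), yg.ravel()])               # site grid (km)
h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

sigma_total, b = 0.7, 30.0     # total ln-std and correlation range (assumed)
for frac in (0.0, 0.33, 0.67, 1.0):
    tau2 = frac * sigma_total**2              # inter-event variance
    phi2 = (1.0 - frac) * sigma_total**2      # intra-event variance
    eta = rng.normal(0.0, np.sqrt(tau2))      # one event term shared by all sites
    C = phi2 * np.exp(-3.0 * h / b)           # spatially correlated intra-event part
    L = np.linalg.cholesky(C + 1e-9 * np.eye(len(pts)))
    field = eta + L @ rng.standard_normal(len(pts))
    print(f"inter-event share {frac:4.2f}: within-field std = {field.std():.3f}")
```

As the inter-event share grows, the within-field standard deviation collapses towards zero while the field as a whole still shifts up or down with the event term, mirroring the progression across the panels of Fig. 4.7.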
For portfolio risk assessment, these differences in the spatial variation are
important as the extreme levels of loss correspond to cases in which spatial regions
of high-intensity ground-motion couple with regions of high vulnerability and
[Fig. 4.7 comprises four map panels (Eastings vs Northings, in km) showing simulated normalised residual fields, on a colour scale from 0 to −4, when the inter-event variability is 0 %, 33 %, 67 % and 100 % of the total.]

Fig. 4.7 Impact upon the nature of ground-motion fields generated assuming that the inter-event
variability is a given fraction of the total aleatory variability. The ground-motion fields shown are
possible fields consistent with a repeat of the Christchurch earthquake

exposure. The upper left panel of Fig. 4.7 shows a clear example of this where a
patch of high intensity is located in a region of high exposure.
In addition to ensuring that the total aleatory variability is well-estimated, it is
therefore also very important (for portfolio risk analysis) to ensure that the
partitioning of the total variability between inter- and intra-event components is
done correctly.

4.4 Components of Uncertainty

The overall uncertainty in ground-motion prediction is often decomposed into components of Aleatory Variability and Epistemic Uncertainty. In the vast majority
of applications only these two components are considered and they are defined in

such a way that the aleatory variability is supposed to represent inherent random-
ness in nature while epistemic uncertainties represent contributions resulting from
our lack of knowledge. The distinction is made for more than semantic reasons and
the way that each of these components is treated within hazard and risk analysis
differs. Using probabilistic seismic hazard analysis as an example, the aleatory
variability is directly accounted for within the hazard integral while epistemic
uncertainty is accounted for or captured through the use of logic trees.
However, when one constructs a logic tree the approach is to consider alternative
hypotheses regarding a particular effect, or component, within the analysis. Each
alternative is then assigned a weight that has been interpreted differently by various
researchers and practitioners, but is ultimately treated as a probability. No alterna-
tive hypotheses are considered for effects that we do not know to be relevant. That
is, the representation of epistemic uncertainty in a logic tree only reflects our
uncertainty regarding the components of the model that we think are relevant. If
we happen to be missing an important physical effect then we will never think to
include it within our tree and this degree of ignorance is never reflected in our
estimate of epistemic uncertainty.
It is therefore clear that there is a component of the overall uncertainty in our
analyses that is not currently accounted for. This component is referred to as
Ontological Uncertainty (Elms 2004) and represents the unknown unknowns
from the famous quote of Donald Rumsfeld.
These generic components of uncertainty are shown schematically in Fig. 4.8.
The actual numbers that are shown in this figure are entirely fictitious and the
objective is not to define this partitioning. Rather, the purpose of this figure is to
illustrate the following:
• What we currently refer to as being aleatory variability is not all aleatory
variability and instead contains a significant component of epistemic uncertainty
(which is why it reduces from the present to the near future)
• The fact that ontological uncertainty exists means that we cannot assign a
numerical value to epistemic uncertainty
• The passage of time allows certain components to be reduced
In the fields of seismic hazard and risk it is common for criticism to be made of
projects due to the improper handling of aleatory variability and epistemic uncer-
tainty by the analysts. However, the distinction between these components is not
always clear and this is at least in part a result of loose definitions of the terms as
well as a lack of understanding about the underlying motivation for the
decomposition.
As discussed at length by Der Kiureghian and Ditlevsen (2009), what is aleatory
or epistemic can depend upon the type of analysis that is being conducted. The
important point that Der Kiureghian and Ditlevsen (2009) stress is that the
categorisation of an uncertainty as either aleatory or epistemic is largely at the
discretion of the analyst and depends upon what is being modelled. The uncer-
tainties themselves are generally not properties of the parameter in question.

Fig. 4.8 Components of the total uncertainty in ground motion prediction, and their evolution in
time. The percentage values shown are entirely fictitious

4.4.1 Nature of Uncertainty

Following the more complete discussion provided by Der Kiureghian and Ditlevsen
(2009), consider the physical process that results in the generation of a ground
motion y for a particular scenario. The underlying basic variables that parameterise
this physical process can be written as X.
Now consider a perfect deterministic ground-motion model (i.e., one that makes
predictions with no error) that provides a mathematical description of the physical
link between these basic variables and the observed motion. In the case that we
knew the exact values of all basic variables for a given scenario we would write
such a model as:
$y = g(x; \theta_g)$  (4.13)

where, here θg are the parameters or coefficients of the model. Note that the above
model must account for all relevant physical effects related to the generation of y. In
practice, we cannot come close to accounting for all relevant effects and so rather
than working with the full set X, we instead work with a reduced set Xk
(representing the known random variables) and accept that the effect of the
unknown basic variables Xu will manifest as differences between our now approx-
imate model ĝ and the observations. Furthermore, as we are working with an
observed value of y (which we assume to be known without error) we also need
to recognise that we will have an associated observed instance of Xk that is not
perfectly known, denoted $\hat{x}_k$. Our formulation is then written as:

$y = \hat{g}(\hat{x}_k; \hat{\theta}_g) + \varepsilon$  (4.14)

What is important to note here is that the residual error ε is the result of three
distinct components:

• The effect of unobserved, or not considered, variables Xu
• The imperfection of our mathematical model, both in terms of its functional form and the estimation of its parameters $\hat{\theta}_g$
• The uncertainties associated with estimated known variables $\hat{x}_k$
The imperfection referred to in the second point above means that the residual
error ε does not necessarily have a zero mean (as is the case for regression analysis).
The reason being that the application of imperfect physics does not mean that our
simplified model will be unbiased – both when applied to an entire ground-motion
database, but especially when applied to a particular scenario. Therefore, it could be
possible to break down the errors in prediction into components representing bias,
$\Delta(\hat{x}; \hat{\theta}_g)$, and variability, $\varepsilon'$:

$\varepsilon \rightarrow \Delta(\hat{x}; \hat{\theta}_g) + \varepsilon'$  (4.15)

In the context of seismic hazard and risk analysis, one would ordinarily regard the
variability represented by ε as being aleatory variability and interpret this as being
inherent randomness in ground motions arising from the physical process of
ground-motion generation. However, based upon the formulation just presented
one must ask whether any actual inherent randomness exists, or whether we are just
seeing the influence of the unexplained parameters xu. That is, should our starting
point have been:
$y = g(x; \theta_g) + \varepsilon_A$  (4.16)

where here the εA represents intrinsic randomness associated with ground motions.
When one considers this problem one must first think about what type of
randomness we are dealing with. Usually when people define aleatory variability
they make an analogy with the rolling of a die, but often they are unwittingly
referring to one particular type of randomness. There are broadly three classes of
randomness:
• Apparent Randomness: This is the result of viewing a complex deterministic
process from a simplified viewpoint.
• Chaotic Randomness: This randomness arises from nonlinear systems that
evolve from a particular state in a manner that depends very strongly upon that
state. Responses obtained from very slightly different starting conditions can be
markedly different from each other, and our inability to perfectly characterise a
particular state means that the system response is unpredictable.
• Inherent Randomness: This randomness is an intrinsic part of reality. Quantum
mechanics arguably provides the most pertinent example of inherent
randomness.
Note that there is also a subtle distinction that can be made between systems that
are deterministic, yet unpredictable, and systems that possess genuine randomness.

In addition, some (including historically Einstein) argue that systems that possess
‘genuine randomness’ are actually driven by deterministic processes and variables
that we simply are not aware of. In this case, these systems would be subsumed
within the one or more of the other categories of apparent or chaotic randomness.
However, at least within the context of quantum mechanics, Bell’s theorem dem-
onstrates that the randomness that is observed at such scales is in fact inherent
randomness and not the result of apparent randomness.
For ground-motion modelling, what is generally referred to as aleatory variabil-
ity is at least a combination of both apparent randomness and chaotic randomness
and could possibly also include an element of inherent randomness – but there is no
hard evidence for this at this point. The important implication of this point is that
the component associated with apparent randomness is actually an epistemic
uncertainty that can be reduced through the use of more sophisticated models.
The following two sections provide examples of apparent and chaotic randomness.

4.4.2 Apparent Randomness – Simplified Models

Imagine momentarily that it is reasonable to assume that ground-motions arise from deterministic processes but that we are unable to model all of these processes. We
are therefore required to work with simplified models when making predictions. To
demonstrate how this results in apparent variability consider a series of simplified
models for the prediction of peak ground acceleration (here denoted by y) as a
function of moment magnitude M and rupture distance R:
Model 0

$\ln y = \beta_0 + \beta_1 M$  (4.17)

Model 1

$\ln y = \beta_0 + \beta_1 M + \beta_2 \ln\sqrt{R^2 + \beta_3^2}$  (4.18)

Model 2

$\ln y = \beta_0 + \beta_1 M + \beta_2 \ln\sqrt{R^2 + \beta_3^2} + \beta_4 \ln V_{S,30}$  (4.19)

Model 3

$\ln y = \beta_0 + \beta_1 M + \beta_{1a}(M - 6.5)^2 + \left[\beta_2 + \beta_{2a}(M - 6.5)\right]\ln\sqrt{R^2 + \beta_3^2} + \beta_4 \ln V_{S,30}$  (4.20)

Model 4

$\ln y = \beta_0 + \beta_1 M + \beta_{1a}(M - 6.5)^2 + \left[\beta_2 + \beta_{2a}(M - 6.5)\right]\ln\sqrt{R^2 + \beta_3^2} + \beta_4 \ln V_{S,30} + \beta_5 F_{nm} + \beta_6 F_{rv}$  (4.21)

Models 5 and 6

$\ln y = \beta_0 + \beta_1 M + \beta_{1a}(M - 6.5)^2 + \left[\beta_2 + \beta_{2a}(M - 6.5)\right]\ln\sqrt{R^2 + \beta_3^2} + \beta_4 \ln V_{S,30} + \beta_5 F_{nm} + \beta_6 F_{rv} + \beta_7 F_{as}$  (4.22)

where we see that the first of these models is overly simplified, but that by the time
we reach Models 5 and 6, we are accounting for the main features of modern
models. The difference between Models 5 and 6 is not in the functional form, but in
how the coefficients are estimated. Models 1–5 use standard mixed effects regres-
sion with one random effect for event effects. However, Model 6 includes this
random effect, but also distinguishes between these random effects depending upon
whether we have mainshocks or aftershocks and also partitions the intra-event
variance into components for mainshocks and aftershocks. The dataset consists of
2,406 records from the NGA database.
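As an illustration of the estimation approach just described, the sketch below fits a Model 1 style form with a random effect for events, but on synthetic data rather than the NGA records; the pseudo-depth term $\beta_3$ is held fixed so that the model remains linear in its coefficients.

```python
# Sketch of a mixed-effects fit with a random effect for events, on synthetic
# data (not the NGA records). The 'true' coefficients and the inter-/intra-
# event standard deviations used to generate the data are assumed values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_events, n_per = 40, 25
M = np.repeat(rng.uniform(4.5, 7.5, n_events), n_per)
R = rng.uniform(1.0, 200.0, n_events * n_per)
event = np.repeat(np.arange(n_events), n_per)

eta = np.repeat(rng.normal(0.0, 0.3, n_events), n_per)   # inter-event terms
lny = (-1.0 + 1.2 * M - 1.5 * np.log(np.sqrt(R**2 + 36.0))
       + eta + rng.normal(0.0, 0.5, len(M)))             # intra-event residuals

df = pd.DataFrame({"lny": lny, "M": M,
                   "lnRh": np.log(np.sqrt(R**2 + 36.0)), "event": event})
fit = smf.mixedlm("lny ~ M + lnRh", df, groups=df["event"]).fit()
print(fit.summary())   # fixed effects, group (inter-event) and residual variance
```

The "Group Var" reported in the summary corresponds to the inter-event variance and the residual variance to the intra-event component, which is the decomposition discussed in Sect. 4.3.2.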
Figure 4.9 shows estimates of apparent randomness for each of these models,
assuming that Model 6 is ‘correct’. That is, the figure shows the difference between the total variance of Model i and that of Model 6 and, because we assume the latter model is correct, this difference in variance can be attributed to apparent
randomness. The figure shows that the inclusion of distance scaling and
distinguishing between mainshocks and aftershocks has a very large impact, but
that other additions in complexity provide a limited reduction in apparent random-
ness. The important point here is that this apparent randomness is actually epistemic
uncertainty – not aleatory as is commonly assumed.

4.4.3 Chaotic Randomness – Bouc-Wen Example

Chaotic randomness is likely to be a less-familiar concept than apparent randomness given that the latter is far more aligned with our normal definition of epistemic
uncertainty. To explain chaotic randomness in the limited space available here is a
genuine challenge, but I will attempt this through the use of an example based
heavily upon the work of Li and Meng (2007). The example concerns the response
of a nonlinear oscillator and is not specifically a ground-motion example. However,
this type of model has been used previously for characterising the effects of
nonlinear site response. I consider the nonlinear Bouc-Wen single-degree-of-free-
dom system characterised by the following equation:
[Fig. 4.9 plots the apparent variability, $\sigma_i^2 - \sigma_6^2$, for Models 0–6, annotated with the large reduction due to the inclusion of distance scaling and the further reduction due to distinguishing aftershocks.]

Fig. 4.9 Variation of apparent randomness associated with models of increasing complexity


$\ddot{u} + 2\zeta\omega_0\dot{u} + \alpha\omega_0^2 u + (1 - \alpha)\omega_0^2 z = B\sin(\Omega t)$  (4.23)

where the nonlinear hysteretic response is defined by:


$\dot{z} = A\dot{u} - \gamma|\dot{u}|z|z|^{n-1} - \beta\dot{u}|z|^{n}$  (4.24)

This model is extremely flexible and can be parameterised so that it can be applied in many cases of practical interest, but in the examples that follow we will
assume that we have a system that exhibits hardening when responding in a
nonlinear manner (see Fig. 4.10).
Now, if we subject this system to a harmonic excitation we can observe a
response at relatively low amplitudes that resembles that in Fig. 4.11. Here we
show the displacement response, the velocity response, the trajectory of the
response in the phase space (u  u_ space) and the nonlinear restoring force. In all
cases the line colour shifts from light blue, through light grey and towards a dark red
as time passes. In all panels we can see the influence of the initial transient response
before the system settles down to a steady-state. In particular, we can see that we
reach a limit-cycle in the phase space in the lower left panel.
For Fig. 4.11 the harmonic amplitude is B = 5 and we would find that if we were
to repeat the analysis for a loading with an amplitude slightly different to this value
that our response characteristics would also only be slightly different. For systems
in this low excitation regime we have predictable behaviour in that the effect of
small changes to the amplitude can be anticipated.
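The low-amplitude response in Fig. 4.11 can be reproduced by direct numerical integration of Eqs. (4.23) and (4.24), as sketched below. The hysteretic parameters and damping follow the values quoted with the figures and text (γ = 0.15, β = −0.75, n = 1, A = 1, ζ = 0.05, ω₀ = 1.0); the stiffness ratio α and the forcing frequency Ω are not stated and are assumed here.

```python
# Sketch: integration of the Bouc-Wen SDOF of Eqs. (4.23)-(4.24).
import numpy as np
from scipy.integrate import solve_ivp

zeta, w0, alpha = 0.05, 1.0, 0.5       # alpha is an assumption
gam, bet, n, A = 0.15, -0.75, 1, 1.0   # values quoted with Fig. 4.10
B, Omega = 5.0, 1.0                    # B per Fig. 4.11; Omega assumed

def rhs(t, s):
    u, v, z = s
    dv = (B * np.sin(Omega * t) - 2.0 * zeta * w0 * v
          - alpha * w0**2 * u - (1.0 - alpha) * w0**2 * z)
    dz = A * v - gam * abs(v) * z * abs(z)**(n - 1) - bet * v * abs(z)**n
    return [v, dv, dz]

sol = solve_ivp(rhs, (0.0, 250.0), [0.0, 0.0, 0.0], max_step=0.05,
                dense_output=True)
t = np.linspace(200.0, 250.0, 2000)    # steady-state window, as in the text
u, v, _ = sol.sol(t)
print(f"max |u| = {np.abs(u).max():.3f}, max |u_dot| = {np.abs(v).max():.3f}")
```

Sweeping B over a grid with this routine, and recording the steady-state maxima, is one way of reconstructing a response diagram of the kind shown in Fig. 4.12.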
However, consider now a plot of the maximum absolute displacement and
maximum absolute velocity against the harmonic amplitude shown in Fig. 4.12.
Note that the response values shown in this figure correspond to what are essentially
[Fig. 4.10 shows two panels for the system with γ = 0.15, β = −0.75, n = 1, A = 1: the hysteretic parameter z, and the normalised restoring force $f_S(u)/\omega_0^2$, each plotted against displacement u.]

Fig. 4.10 Dependence of the hysteretic parameter z (left), and the normalised restoring force $f_S(u, \dot{u}, z)$ (right), on the displacement for the example system considered

steady-state conditions. For this sort of system we expect that the transient terms will decay according to $\exp(-\zeta\omega_0 t)$ and for these examples we have set ζ = 0.05 and ω₀ = 1.0 and we only look at the system response after 200 s have passed in order to compute the maximum displacement and velocity shown in Fig. 4.12. We would expect that the transient terms would have decayed to less than 0.5 × 10⁻⁴ of their initial amplitudes at the times of interest.
Figure 4.12 shows some potentially surprising behaviour for those not familiar
with nonlinear dynamics and chaos. We can see that for low harmonic amplitudes
we have a relatively smoothly varying maximum response and that system response
is essentially predictable here. However, this is not to say that the response does not
become more complex. For example, consider the upper row of Fig. 4.13 that shows
the response for B = 15. Here we can see that the system tends towards some stable
state and that we have a stable limit-cycle in the phase space. However, it has a
degree of periodicity that corresponds to a loading/unloading phase for negative
restoring forces.
This complexity continues to increase as the harmonic amplitude increases as
can be seen in the middle row of Fig. 4.13 where we again have stable steady-state
response, but also have another periodic component of unloading/reloading for both
positive and negative restoring forces. While these figures show increased com-
plexity as we move along the harmonic amplitude axis of Fig. 4.12, the system
response remains stable and predictable in that we know that small changes in the
value of B continue to map into small qualitative and quantitative changes to the
response. However, Fig. 4.12 shows that once the harmonic amplitude reaches
values of roughly B = 53 we suddenly have a qualitatively different behaviour. The
[Fig. 4.11 comprises four panels: displacement and velocity time histories over 0–250 s, the phase-space trajectory (velocity against displacement), and the restoring force against displacement.]

Fig. 4.11 Response of the nonlinear system for a harmonic amplitude of B = 5. Upper left panel shows the displacement time-history; upper right panel shows the velocity time-history; lower left panel shows the response trajectory in phase space; and lower right panel shows the hysteretic response

system response now becomes extremely sensitive to the particular value of the
amplitude that we consider. The reason for this can be seen in the bottom row of
Fig. 4.13 in which it is clear that we never reach a stable steady state. What is
remarkable in this regime is that we can observe drastically different responses for
very small changes in amplitude of the forcing function. For example, when we
move from B = 65.0 to B = 65.1 we transition back into a situation in which
we have a stable limit cycle (even if it is a complex cycle).
The lesson here is that for highly nonlinear processes there exist response
regimes where the particular response trajectory and system state depends very
strongly upon a prior state of the system. There are almost certainly aspects of the
ground-motion generation process that can be described in this manner. Although
these can be deterministic processes, as it is impossible to accurately define the state
of the system the best we can do is to characterise the observed chaotic randomness.
Note that although this is technically epistemic uncertainty, we have no choice but
to treat this as aleatory variability as it is genuinely irreducible.
[Fig. 4.12 plots the maximum absolute steady-state displacement and velocity against harmonic forcing amplitudes of 0–100.]

Fig. 4.12 Maximum absolute steady-state displacement (left) and velocity (right) response against the harmonic forcing amplitude B

Fig. 4.13 Response of the nonlinear system for a harmonic amplitude of B = 15 (top), B = 35 (middle), and B = 65 (bottom). Panels on the left show the response trajectory in phase space; and panels on the right show the hysteretic response

4.4.4 Randomness Represented by Ground-Motion Models

The standard deviation that is obtained during the development of a ground-motion model definitely contains elements of epistemic uncertainty that can be regarded as
apparent randomness, epistemic uncertainty that is the result of imperfect metadata,

and variability that arises from the ergodic assumption. It is also almost certain that
the standard deviation reflects a degree of chaotic randomness and possibly also
includes some genuine randomness and it is only these components that are
actually, or practically, irreducible. Therefore, it is clear that the standard deviation
of a ground-motion model does not reflect aleatory variability as it is commonly
defined – as being ‘inherent variability’.
If the practical implications of making the distinction between aleatory and
epistemic are to dictate what goes into the hazard integral and what goes into the
logic tree then one might take the stance that of these contributors to the standard
deviation just listed we should look to remove the effects of the ergodic assumption
(which is attempted in practice), we should minimise the effects of metadata
uncertainty (which is not done in practice), and we should increase the sophistica-
tion of our models so that the apparent randomness is reduced (which some would
argue has been happening in recent years, vis-à-vis the NGA projects).
An example of the influence of metadata uncertainty can be seen in the upper left
panel of Fig. 4.14 in which the variation in model predictions is shown when
uncertainties in magnitude and shear-wave velocity are considered in the regression
analysis. The boxplots in this figure show the standard deviations of the predictions
for each record in the NGA dataset when used in a regression analysis with Models
1–6 that were previously presented. The uncertainty that is shown here should be
regarded as a lower bound to the actual uncertainty associated with meta-data for
real ground-motion models. The estimates of this variable uncertainty are obtained
by sampling values of magnitude and average shear-wave velocity for each event
and site assuming a (truncated) normal and lognormal distribution respectively.
This simulation process enables a hypothetical dataset to be constructed upon
which a regression analysis is performed. The points shown in the figure then
represent the standard deviation of median predictions from each developed regres-
sion model.
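The procedure just described can be sketched as follows: the metadata are perturbed within assumed measurement distributions, the model is refitted to each hypothetical dataset, and the standard deviation of the median predictions is collected record by record. Everything below (dataset, coefficient values, uncertainty levels) is synthetic and purely illustrative.

```python
# Sketch of the metadata-uncertainty exercise: perturb M and Vs30, refit a
# simple model each time, and record the spread of the median predictions.
import numpy as np

rng = np.random.default_rng(2)
n = 500
M = rng.uniform(4.5, 7.5, n)
R = rng.uniform(1.0, 200.0, n)
vs30 = rng.lognormal(np.log(400.0), 0.4, n)
lny = (-1.0 + 1.2 * M - 1.5 * np.log(np.sqrt(R**2 + 36.0))
       - 0.5 * np.log(vs30) + rng.normal(0.0, 0.6, n))

def design(M, R, vs30):
    return np.column_stack([np.ones_like(M), M,
                            np.log(np.sqrt(R**2 + 36.0)), np.log(vs30)])

preds = []
for _ in range(200):                           # hypothetical datasets
    M_s = M + rng.normal(0.0, 0.1, n)          # magnitude uncertainty (assumed)
    vs_s = vs30 * rng.lognormal(0.0, 0.2, n)   # Vs30 uncertainty (assumed)
    beta, *_ = np.linalg.lstsq(design(M_s, R, vs_s), lny, rcond=None)
    preds.append(design(M, R, vs30) @ beta)    # medians at nominal metadata

sd = np.std(preds, axis=0)                     # per-record std of the medians
print(f"mean per-record std of median predictions: {sd.mean():.4f} (ln units)")
```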
Figure 4.14 also shows how an increase in model complexity is accompanied by
an increase in parametric uncertainty for the models presented previously. It should
be noted that these estimates of parametric uncertainty are also likely to be near
lower bounds given that the functional forms used for this exercise are relatively
simple and that the dataset is relatively large (consisting of 2,406 records from the
NGA database). The upper right panel of Fig. 4.14 shows this increasing parametric
uncertainty for the dataset used to develop the models, but the lower panel shows
the magnitude dependence of this parametric uncertainty when predictions are
made for earthquake scenarios that are not necessarily covered by the empirical
data. In this particular case, the magnitude dependence is shown when motions are
computed for a distance of just 1 km and a shear-wave velocity of 316 m/s is used. It
can be appreciated from this lower panel that the parametric uncertainty is a
function of both the model complexity but also of the particular functional form
adopted. The parametric uncertainty here is estimated by computing the covariance
matrix of the regression coefficients and then sampling from the multivariate
normal distribution implied by this covariance matrix. The simulated coefficients

Fig. 4.14 Influence of meta-data uncertainty (upper left), increase in parametric uncertainty with
increasing complexity of models (upper right), and the dependence of parametric uncertainty upon
magnitude (bottom)

are then used to generate predictions for each recording and the points shown in this
panel represent the standard deviation of these predictions for every record.
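A sketch of that calculation is given below for an ordinary least-squares fit: the coefficient covariance matrix is formed, coefficient vectors are sampled from the implied multivariate normal distribution, and the spread of the resulting median predictions is evaluated for an extrapolated scenario. The data and scenario are again synthetic.

```python
# Sketch of the parametric-uncertainty calculation: sample coefficients from
# the multivariate normal implied by the OLS covariance matrix.
import numpy as np

rng = np.random.default_rng(3)
n = 500
M = rng.uniform(4.5, 7.5, n)
R = rng.uniform(1.0, 200.0, n)
X = np.column_stack([np.ones(n), M, np.log(np.sqrt(R**2 + 36.0))])
lny = X @ np.array([-1.0, 1.2, -1.5]) + rng.normal(0.0, 0.6, n)

beta, *_ = np.linalg.lstsq(X, lny, rcond=None)
resid = lny - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov_beta = sigma2 * np.linalg.inv(X.T @ X)     # coefficient covariance matrix

coefs = rng.multivariate_normal(beta, cov_beta, size=2000)
x_new = np.array([1.0, 8.0, np.log(np.sqrt(1.0 + 36.0))])  # M 8 at R = 1 km
print(f"parametric std of the median at M 8, R = 1 km: "
      f"{(coefs @ x_new).std():.3f} (ln units)")
```

Repeating the evaluation at scenarios far outside the data, as in the last two lines, is what produces the strong magnitude dependence of the parametric uncertainty shown in the lower panel of Fig. 4.14.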
Rather than finally looking to increase the complexity of the functional forms
that are used for ground-motion predictions, herein I propose that we look at this
problem in a different light and refer back to Eq. (4.2) in which we say explicitly
that what matters for hazard and risk is the overall estimate of ground-motion
exceedance and that this is the result of two components (not just the ground-
motion model). We should forget about trying to push the concept that only aleatory
variability should go into the hazard integral and rather take the viewpoint that our
optimal model (which is a model of the ground motion distribution – not median
predictions) should go into the hazard integral and that our uncertainties should then
be reflected in the logic tree. The reason why we should forget about only pushing

aleatory variability into the hazard integral is that from a quantitative ground-
motion perspective we are still not close to understanding what is actually aleatory
and irreducible.
The proposed alternative of defining an optimal model is stated in the light of
minimising the uncertainty in the estimate of the probability of exceedance of
ground-motions. This uncertainty comes from two components: (1) our ability to
accurately define the probability of occurrence of earthquake scenarios; and (2) our
ability to make robust predictions of the conditional ground-motion distribution.
Therefore, while a more complex model will act to reduce the apparent variability,
if this same model requires the specification of a number of independent variables
that are poorly constrained in practice then the overall uncertainty will be large. In
such cases, one can obtain a lower level of overall uncertainty in the prediction of
ground-motion exceedance by using a less complex ground-motion model. A
practical example of this trade-off is related to the requirement to define the
depth distribution of earthquake events. For most hazard analyses this depth
distribution is poorly constrained and the inclusion of depth-dependent terms in
ground-motion models only provides a very small decrease in the apparent
variability.
Figure 4.15 presents a schematic illustration of the trade-offs between apparent
randomness (the epistemic uncertainty that is often regarded as aleatory variability)
and parametric uncertainty (the epistemic uncertainty that is usually ignored) that
exist just on the ground-motion modelling side. The upper left panel of this figure
shows, as we have seen previously, that the apparent randomness decreases as we
increase the complexity of our model. However, the panel also shows that this
reduction saturates once we reach the point where we have chaotic randomness,
inherent randomness, or a combination of these irreducible components. The upper
right panel, on the other hand, shows that as this model complexity increases we
also observe an increase in parametric uncertainty. The optimal model must balance
these two contributors to the overall uncertainty as shown in the lower left panel.
On this basis, one can identify an optimal model when only ground-motion model-
ling is considered. When hazard or risk is considered then the parametric uncer-
tainty shown here should reflect both the uncertainty in the model parameters
(governed by functional form complexity, and data constraints) and the uncertainty
associated with the characterisation of the scenario (i.e., the independent variables)
and its likelihood.
The bottom right panel of Fig. 4.15 shows how one can justify an increased
complexity in the functional form when the parametric uncertainty is reduced, as in
this case the optimal complexity shifts to the right. To my knowledge, these sorts of
considerations have never been explicitly made during the development of more
complex ground-motion models. Although, in some ways, the quantitative inspec-
tion of residual trends and of parameter p-values is an indirect way of assessing if
increased complexity is justified by the data.
Recent years have seen the increased use of external constraint during ground-
motion model development. In particular, numerical simulations are now com-
monly undertaken in order to constrain nonlinear site response scaling, large
[Fig. 4.15 comprises four schematic panels plotted against model complexity: apparent randomness decreasing towards a floor of chaotic/inherent randomness; parametric uncertainty increasing; predictive uncertainty exhibiting an optimal model complexity at its minimum; and the shift of that optimum when parametric uncertainty is reduced, justifying an increase in model complexity.]

Fig. 4.15 Schematic illustration of the trade-off that exists between the reduction in apparent
randomness (upper left) and the increase in parametric uncertainty (upper right). The optimal
model in this context balances the two components (lower left) and an increase in complexity is
justified when parametric uncertainty is reduced (lower right)

magnitude scaling, and near field effects. Some of the most recent models that have
been presented have very elaborate functional forms and the model developers have
justified this additional complexity on the basis of the added functional complexity
being externally constrained. In the context of Fig. 4.15, the implication is that the
model developers are suggesting that the red curves do not behave in this manner,
but rather that they saturate at some point as all of the increasing complexity does
not contribute to parametric uncertainty. On one hand, the model developers are
correct in that the application of external constraints does not increase the estimate
of the parametric uncertainty from the regression analysis on the free parameters.
However, on the other hand, in order to properly characterise the parametric
uncertainty the uncertainty associated with the models used to provide the external
constraint must also be accounted for. In reality this additional parametric uncer-
tainty is actually larger than what would be obtained from a regression analysis
because the numerical models used for these constraints are normally very complex
and involve a large number of poorly constrained parameters. Therefore, it is not
clear that the added complexity provided through the use of external constraints is
actually justified.

4.5 Discrete Random Fields for Spatial Risk Analysis

The coverage thus far has been primarily focussed upon issues that arise most
commonly within hazard analysis, but that are also relevant to risk analysis.
However, in this final section the attention is turned squarely to a particular issue
associated with the generation of ground-motion fields for use in earthquake loss
estimation for spatially-distributed portfolios. This presentation is based upon the
work of Vanmarcke (1983) and has only previously been employed by
Stafford (2012).
The normal approach that is taken when performing risk analyses over large
spatial regions is to subdivide the region of interest into geographic cells (often
based upon geopolitical boundaries, such as districts, or postcodes). The generation
of ground-motion fields is then made by sampling from a multivariate normal
distribution that reflects the joint intra-event variability of epsilon values across a
finite number of sites equal to the number of geographic cells. The multivariate
normal distribution for epsilon values is correctly assumed to have a zero mean
vector, but the covariance matrix of the epsilon values is computed using a
combination of the point-to-point distances between the centroids of the cells
(weighted geographically, or by exposure) and a model for spatial correlation
between two points (such as that of Jayaram and Baker 2009). The problem with
this approach is that the spatial discretisation of the ground-motion field has been
ignored. The correct way to deal with this problem is to discretise the random field
to account for the nature of the field over each geographic cell and to define a
covariance matrix for the average ground-motions over the cells. This average level
of ground-motion over the cell is a far more meaningful value to pass into fragility
curves than a single point estimate.
Fortunately, the approach for discretisation of a two-dimensional random field is
well established (Vanmarcke 1983). The continuous field is denoted by ln y(x)
where y is the ground motion and x now denotes a spatial position. The logarithmic
motion at a point can be represented as a linear function of the random variable ε(x).
Hence, the expected value of the ground motion field at a given point is defined by
Eq. (4.25), where μln y is the median ground motion, and η is an event term.

$E[\ln y(x)] = \mu_{\ln y} + \eta + E[\varepsilon(x)]$  (4.25)

Therefore, in order to analyse the random field of ground motions, attention need
only be given to the random field of epsilon values. Once this field is defined it may
be linearly transformed into a representation of the random field of spectral
ordinates.
In order to generate ground-motion fields that account for the spatial
discretisation, under the assumption of joint normality, we require three
components:
• An expression for the average mean logarithmic motion over a geographic cell
• An expression for the variance of motions over a geographic cell
• An expression for the correlation of average motions from cell-to-cell

For the following demonstration, assume that the overall region for which we are
conducting the risk analysis is discretised into a regular grid aligned with the N-S
and E-W directions. This grid has a spacing (or dimension) in the E-W direction of
D1 and a spacing in the N-S direction of D2. Note that while the presentation that
follows concerns this regular grid, Vanmarcke (1983) shows how to extend this
treatment to irregularly shaped regions (useful for regions defined by postcodes or
suburbs, etc.).
Within each grid cell one may define the local average of the field by integrating
the field and dividing by the area of the cell ($A = D_1 D_2$).

$\ln y_A = \frac{1}{A}\int_A \ln y(x)\,dx$  (4.26)

Now, whereas the variance of the ground motions for a single point in the field,
given an event term, is equal to σ 2, the variance of the local average ln yA must be
reduced as a result of the averaging. Vanmarcke (1983) shows that this reduction
can be expressed as in Eq. (4.27).
$\sigma_A^2 = \gamma(D_1, D_2)\,\sigma^2, \qquad \gamma(D_1, D_2) = \frac{1}{D_1 D_2}\int_{-D_2}^{D_2}\!\int_{-D_1}^{D_1}\left(1 - \frac{|\delta_1|}{D_1}\right)\left(1 - \frac{|\delta_2|}{D_2}\right)\rho(\delta_1, \delta_2)\,d\delta_1\,d\delta_2$  (4.27)

In Eq. (4.27), the correlation between two points within the region is denoted by
ρ(δ1, δ2), in which δ1 and δ2 are orthogonal co-ordinates defining the relative
positions of two points within a cell. In practice, this function is normally defined
as in Eq. (4.28) in which b is a function of response period.
$\rho(\delta_1, \delta_2) = \exp\!\left(-\frac{3\sqrt{\delta_1^2 + \delta_2^2}}{b}\right)$  (4.28)

The reduction in variance associated with the averaging of the random field is
demonstrated in Fig. 4.16 in which values of γ(D1, D2) are shown for varying values
of the cell dimension and three different values of the range parameter b. For this
example the cells are assumed to be square.
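The curves in Fig. 4.16 can be approximated by evaluating Eq. (4.27) with the correlation model of Eq. (4.28) by numerical quadrature, as in the sketch below for square cells; the symmetric integration limits follow the form of Eq. (4.27) as reconstructed above, and the cell sizes and ranges are those quoted in the figure.

```python
# Sketch: numerical evaluation of the variance function gamma(D1, D2) of
# Eq. (4.27) using the point-correlation model of Eq. (4.28).
import numpy as np
from scipy.integrate import dblquad

def gamma(D1, D2, b):
    # integrand in the form func(delta2, delta1) required by dblquad
    f = lambda d2, d1: ((1.0 - abs(d1) / D1) * (1.0 - abs(d2) / D2)
                        * np.exp(-3.0 * np.hypot(d1, d2) / b))
    val, _ = dblquad(f, -D1, D1, lambda _: -D2, lambda _: D2)
    return val / (D1 * D2)

for D in (1.0, 5.0, 10.0, 20.0):       # square cells, dimensions in km
    row = ", ".join(f"b = {b:g} km: {gamma(D, D, b):.3f}" for b in (10, 20, 30))
    print(f"D1 = D2 = {D:4.1f} km -> {row}")
```

As a check, the function tends to unity as the cell dimension shrinks towards a point, and reduces more strongly for shorter correlation ranges, consistent with Fig. 4.16.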
With the expressions for the spatial average and the reduced variance now given,
the final ingredient that is required is the expression for the correlation between the
average motions over two cells (rather than between two points). This is provided in
Eq. (4.29), with the meaning of the distances D1k and D2l shown in Fig. 4.17.
[Fig. 4.16 plots the variance reduction γ(D₁, D₂) against the spatial dimension of a square cell, D₁ = D₂, from 0 to 20 km, for range parameters b of 10, 20 and 30 km.]

Fig. 4.16 Variance function for a regular square grid

Fig. 4.17 Definition of geometry used in Eq. (4.29) (Redrawn from Vanmarcke (1983))

$\rho\left(\ln y_{A_1}, \ln y_{A_2}\right) = \frac{\sigma^2}{4 A_1 A_2 \sigma_{A_1} \sigma_{A_2}} \sum_{k=0}^{3}\sum_{l=0}^{3} (-1)^k (-1)^l \left(D_{1k} D_{2l}\right)^2 \gamma\left(D_{1k}, D_{2l}\right)$  (4.29)

The correlations that are generated using this approach are shown in Fig. 4.18 both
in terms of the correlation against separation distance of the cell centroids and in
terms of the correlation against the separation measured in numbers of cells.
Figure 4.18 shows that the correlation values can be significantly higher than the
corresponding point-estimate values (which lie close to the case for the smallest
[Fig. 4.18 plots the correlation $\rho(\ln y_{A_1}, \ln y_{A_2})$ against centroid-to-centroid distance (0–40 km, left) and against separation in number of cells (right), for square cell dimensions of 1, 2, 4, 6, 8 and 10 km.]

Fig. 4.18 Example correlations computed using Eq. (4.29) for square cells of differing dimension

dimension shown). However, the actual covariances do not differ as significantly due to the fact that these higher correlations must be combined with the reduced variances.
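The behaviour in Fig. 4.18 can also be reproduced numerically without tracking the distances D1k and D2l explicitly: the correlation of two cell averages is the double average of the point correlation over the two cells divided by the variance function, which is what the closed-form expansion of Eq. (4.29) evaluates. The sketch below does this by simple grid quadrature for square cells; the cell sizes and range b are illustrative.

```python
# Sketch: correlation between the average motions over two square cells,
# computed by averaging the point-correlation model of Eq. (4.28) over the
# two cells -- numerically equivalent to the expansion of Eq. (4.29).
import numpy as np

def rho_point(dx, dy, b):
    return np.exp(-3.0 * np.hypot(dx, dy) / b)    # Eq. (4.28)

def cell_avg_corr(D, sep, b, m=20):
    g = (np.arange(m) + 0.5) / m * D              # m x m quadrature points
    X, Y = np.meshgrid(g, g)
    p1 = np.column_stack([X.ravel(), Y.ravel()])
    p2 = p1 + np.array([sep, 0.0])                # second cell offset in E-W
    d12 = p1[:, None, :] - p2[None, :, :]
    cov = rho_point(d12[..., 0], d12[..., 1], b).mean()   # cross-cell average
    d11 = p1[:, None, :] - p1[None, :, :]
    gam = rho_point(d11[..., 0], d11[..., 1], b).mean()   # variance function
    return cov / gam          # equal cells, so sigma_A1 = sigma_A2

for D in (1.0, 5.0, 10.0):
    vals = [round(cell_avg_corr(D, k * D, 20.0), 3) for k in (1, 2, 3, 4)]
    print(f"D = {D:4.1f} km, 1-4 cells apart: {vals}")
```

For small cells the results approach the point-correlation model, while for larger cells the averaging raises the correlations, as seen in the figure.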

4.6 Conclusions

Empirical ground-motion modelling is in a relatively mature state, but the historical emphasis has been biased towards median predictions with the result that the
characterisation of ground-motion variability has been somewhat neglected. This
paper emphasises the importance of the variance of the ground-motion distribution
and quantifies the sensitivity of hazard results to this variance. The partitioning of
total uncertainty in ground-motion modelling among the components of aleatory
and epistemic uncertainty is also revisited and a proposal is made to relax the
definitions that are often blindly advocated, but not properly understood. A new
approach for selecting an optimal model complexity is proposed. Finally, a new
framework for generating correlated discrete random fields is presented.

Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.

References

Campbell KW, Bozorgnia Y (2008) NGA ground motion model for the geometric mean horizontal
component of PGA, PGV, PGD and 5 %-damped linear-elastic response spectra for periods
ranging from 0.01 to 10.0 s. Earthq Spectra 24:139–171
Campbell KW, Bozorgnia Y (2014) NGA-West2 ground motion model for the average horizontal
components of PGA, PGV, and 5%-damped linear acceleration response spectra. Earthq
Spectra. http://dx.doi.org/10.1193/062913EQS175M
Chiou BSJ, Youngs RR (2008) An NGA model for the average horizontal component of peak
ground motion and response spectra. Earthq Spectra 24:173–215
Chiou BSJ, Youngs RR (2014) Update of the Chiou and Youngs NGA model for the average
horizontal component of peak ground motion and response spectra. Earthq Spectra. http://dx.doi.org/10.1193/072813EQS219M
Der Kiureghian A, Ditlevsen O (2009) Aleatory or epistemic? Does it matter? Struct Saf
31:105–112
Elms DG (2004) Structural safety – issues and progress. Prog Struct Eng Mat 6:116–126
Jayaram N, Baker JW (2009) Correlation model for spatially distributed ground-motion intensi-
ties. Earthq Eng Struct D 38:1687–1708
Li H, Meng G (2007) Nonlinear dynamics of a SDOF oscillator with Bouc-Wen hysteresis. Chaos
Soliton Fract 34:337–343
Stafford PJ (2012) Evaluation of the structural performance in the immediate aftermath of an
earthquake: a case study of the 2011 Christchurch earthquake. Int J Forensic Eng 1(1):58–77
Vanmarcke E (1983) Random fields, analysis and synthesis. The MIT Press, Cambridge, MA
Chapter 5
Seismic Code Developments for Steel
and Composite Structures

Ahmed Y. Elghazouli

Abstract As with other codified guidance, seismic design requirements undergo a process of continuous evolution and development. This process is usually guided by
improved understanding of structural behaviour based on new research findings,
coupled with the need to address issues identified from the practical application of
code procedures in real engineering projects. Developments in design guidance
however need to balance detailed technical advancements with the desire to main-
tain a level of practical stability and simplicity in codified rules. As a result, design
procedures inevitably incorporate various simplifications and idealisations which
can in some cases have adverse implications on the expected seismic performance
and hence on the rationale and reliability of the design approaches. With a view to
identifying the needs for future seismic code developments, this paper focuses on
assessing the underlying approaches and main procedures adopted in the seismic
design of steel and composite framed structures, with emphasis on the current
European seismic design code, Eurocode 8. Codified requirements in terms of
force reduction factors, ductility considerations, capacity design verifications, and
connection design procedures, are examined. Various requirements that differ
notably from other international seismic codes, particularly those incorporated in
North American provisions, are also pointed out. The paper highlights various
issues related to the seismic design of steel and composite frames that can result
in uneconomical or impractical solutions, and outlines several specific seismic code
development needs.

5.1 Introduction

Steel and composite steel/concrete structures may be designed based on EC8 (Eurocode 8 2005) according to either non-dissipative or dissipative behaviour.
The former is normally limited to areas of low seismicity or to structures of special

A.Y. Elghazouli (*)


Department of Civil and Environmental Engineering, Imperial College London, London, UK
e-mail: [email protected]


use and importance, although it could also be applied for higher seismicity areas if
vibration reduction or isolation devices are incorporated. Otherwise, the code aims
to achieve economical design by employing dissipative behaviour which, apart
from for special irregular or complex structures, is usually performed by assigning a
structural behaviour factor to reduce the code-specified forces resulting from
idealised elastic response spectra. This is carried out in conjunction with the
capacity design concept which requires an appropriate determination of the capac-
ity of the structure based on a pre-defined plastic mechanism, coupled with the
provision of sufficient ductility in plastic zones and adequate over-strength factors
for other regions.
This paper examines the dissipative seismic design provisions for steel and
composite framed structures, which are mainly covered in Part 1 (general rules,
seismic actions and rules for buildings) of Eurocode 8 (2005). General provisions in
other sections of EC8 Part 1 are also referred to where relevant. Additionally, where
pertinent, reference is made to US procedures for the seismic design of steel and
composite structures (ASCE7 2010; AISC341 2010). The assessment focuses on
the behaviour factors, ductility considerations, capacity design rules and connection
design requirements stipulated in EC8. Particular issues that warrant clarification or
further developments are highlighted and discussed.

5.2 Behaviour Factors

EC8 focuses essentially on three main structural steel frame systems, namely
moment resisting, concentrically braced and eccentrically braced frames. Other
systems such as hybrid and dual configurations are referred to in EC8, but limited
information is provided. It should also be noted that additional configurations such
as those incorporating buckling restrained braces, truss moment frames or special
plate shear walls, which are covered in recent US provisions, are not directly
addressed in the current version of EC8.
The behaviour factors are typically recommended by codes of practice based on
background research involving extensive analytical and experimental investiga-
tions. The reference behaviour factors (q) stipulated in EC8 for steel-framed
structures are summarised in Table 5.1. These are upper values of q allowed for
each system, provided that regularity criteria and capacity design requirements are
met. For each system, the dissipative zones are specified in the code (e.g. beam
ends, diagonals, link zones in moment, concentrically braced and eccentrically
braced frames, respectively). The multiplier αu/α1 depends on the failure/first
plasticity resistance ratio of the structure, and can be obtained from push-over
analysis (but should not exceed 1.6). Alternatively, default code values can be used
to determine q (as given in parenthesis in Table 5.1).
5 Seismic Code Developments for Steel and Composite Structures 131

Table 5.1 Behaviour factors in European and US provisions

European provisions:

  Frame system                    Ductility class   q                   qd
  Non-dissipative                 DCL               1.5                 1.5
  Moment frames                   DCM               4.0                 4.0
                                  DCH               5 αu/α1 (5.5–6.5)   5 αu/α1 (5.5–6.5)
  Concentric braced               DCM               4.0                 4.0
                                  DCH               4.0                 4.0
  V-braced                        DCM               2.0                 2.0
                                  DCH               2.5                 2.5
  Eccentrically braced            DCM               4.0                 4.0
                                  DCH               5 αu/α1 (6.0)       5 αu/α1 (6.0)
  Dual moment-concentric braced   DCM               4.0                 4.0
                                  DCH               4 αu/α1 (4.8)       4 αu/α1 (4.8)

US provisions:

  Frame system                       Frame type              R         Cd
  Non-dissipative                    Non-seismic detailing   3.0       3.0
  Moment frames (steel)              OMF                     3.5       3.0
                                     IMF                     4.5       4.0
                                     SMF                     8.0       5.5
  Moment frames (composite)         C-OMF                   3.0       2.5
                                     C-IMF                   5.0       4.5
                                     C-SMF                   8.0       5.5
                                     C-PRMF                  6.0       5.5
  Concentric braced (steel)          OSCBF                   5.0       4.5
                                     SSCBF                   6.0       5.0
  Concentric braced (composite)      C-OCBF                  3.0       3.0
                                     C-SCBF                  5.0       4.5
  Eccentrically braced               EBF (MC a)              8.0       4.0
                                     EBF (non-MC a)          7.0       4.0
  Eccentrically braced (composite)   C-EBF                   8.0       4.0
  Dual moment-braced systems         Various detailed        4.0–8.0   3.0–6.5

  a MC refers to moment beam-to-column connections away from the links

The same upper limits of the reference behaviour factors specified in EC8 for
steel framed structures are also employed for composite structures. This applies to
composite moment resisting frames, composite concentrically braced frames and
composite eccentrically braced frames. However, a number of additional composite
structural systems are also specified, namely: steel or composite frames with
connected infill concrete panels, reinforced concrete walls with embedded vertical
steel members acting as boundary/edge elements, steel or composite coupling beams
in conjunction with reinforced concrete or composite steel/concrete walls, and
composite steel plate shear walls. These additional systems are beyond the scope
of the discussions in this paper which focuses on typical frame configurations.

Although a direct comparison between codes can only be reliable if it involves the full design procedure, the reference q factors in EC8 appear generally lower
than R values in US provisions for similar frame configurations as depicted in
Table 5.1. It is also important to note that the same force-based behaviour factors
(q) are typically proposed as displacement amplification factors (qd) in EC8. This is
not the case in US provisions where specific seismic drift amplification factors (Cd)
are suggested; these values appear to be generally lower than the corresponding R
factors for most frame types. Recent research studies on inelastic seismic drift
demands in moment frames (Kumar et al. 2013; Elghazouli et al. 2014) suggest that
the EC8 approach is generally over-conservative compared to the US provisions in
most cases, and improved prediction methods which account for earthquake char-
acteristics are proposed.
It is also noteworthy that US provisions include the use of a ‘system over-
strength’ parameter (Ωo, typically 2.0–3.0) as opposed to determining the level of
over-strength within the capacity design procedures in the case of EC8. Other
notable differences include the relatively low q assigned to V-braced frames in
EC8, in contrast with the US provisions which adopt the same R values used for
conventional concentric bracing. To this end, there seems to be a need to improve
the guidance provided in EC8 on behaviour factors, particularly for braced and dual
frames, and to extend it to other forms such as ‘zipper’ and ‘buckling restrained’
configurations.

5.3 Local Ductility

EC8 explicitly stipulates three ductility classes, namely DCL, DCM and DCH
referring to low, medium and high dissipative structural behaviour, respectively.
For DCL, global elastic analysis can be adopted alongside non-seismic detailing.
The recommended reference ‘q’ factor for DCL is 1.5–2.0. In contrast, structures in
DCM and DCH need to satisfy specific requirements primarily related to ensuring
sufficient ductility in the main dissipative zones. The application of a behaviour
factor larger than 1.5–2.0 must be coupled with sufficient local ductility within the
critical dissipative zones. For buildings which are not seismically isolated or
incorporating effective dissipation devices, design to DCL is only recommended
for low seismicity areas. It should be noted however that this recommendation can
create difficulties in practice (ECCS 2013), particularly for special or complex
structures. Although suggesting the use of DCM or DCH for moderate and high
seismicity often offers an efficient approach to providing ductility reserve against
uncertainties in seismic action, achieving a similar level of reliability could be
envisaged through the provision of appropriate levels of over-strength, possibly
coupled with simple inherent ductility provisions where necessary.

5.3.1 Steel Sections

For steel elements in compression or bending, local ductility is ensured in EC8 by restricting the width-to-thickness (c/t or b/t) ratios within the section to avoid local
buckling and hence reduce the susceptibility to low cycle fatigue and fracture. The
classification used in EC3 (Eurocode 3 2005) is adopted but with restrictions related
to the value of the q factor (DCM: Class 1, 2, 3 for 1.5 < q ≤ 2.0, or Class 1, 2 for 2.0 < q ≤ 4; DCH: Class 1 for q > 4).
Comparison between width-to-thickness limits in EC8 and AISC reveals some
notable differences (Elghazouli 2010). Figure 5.1, compares the ‘seismically-com-
pact’ limits (λps) in AISC with Class 1 width-to-thickness requirements in
EC3/EC8. Whilst the limits for flange outstands in compression are virtually
identical, there are significant differences for circular (CHS) and rectangular
(RHS) hollow sections, which are commonly used for bracing and column mem-
bers. For both CHS and RHS, the limits of λps are significantly more stringent than
Class 1, with the limit being nearly double in the case of RHS. Although the
q factors for framed systems are generally lower than R factors in most cases, the
differences in cross-section limits in the two codes are significantly more severe.
This suggests that tubular members satisfying the requirements of EC8 are likely to
be more vulnerable to local buckling and ensuing fracture in comparison with those
designed to AISC. There seems to be a need for further assessment of the adequacy
of various EC3 section classes in satisfying the cyclic demands imposed under
realistic seismic conditions.

5.3.2 Composite Sections

EC8 refers to three general design concepts for composite steel/concrete structures:
(i) Concept a: low-dissipative structural behaviour – which refers to DCL in the
same manner as in steel structures; (ii) Concept b: dissipative structural behaviour
with composite dissipative zones for which DCM and DCH design can be adopted
with additional rules to satisfy ductility and capacity design requirements; Concept
c: dissipative structural behaviour with steel dissipative zones, and therefore spe-
cific measures are stipulated to prevent the contribution of concrete under seismic
conditions; in this case, critical zones are designed as steel, although other ‘non-
seismic’ design situations may consider composite action to Eurocode 4 (2004).
For dissipative composite zones (i.e. Concept b), the beneficial presence of the
concrete parts in delaying local buckling of the steel components is accounted for
by relaxing the width-to-thickness ratio as indicated in Table 5.2 which is adapted
from EC8. In the table, partially encased elements refer to sections in which
concrete is placed between the flanges of I or H sections, whilst fully encased
elements are those in which all the steel section is covered with concrete. The cross-
section limit c/tf refers to the slenderness of the flange outstand of length c and

Fig. 5.1 Comparison of width-to-thickness requirements for high ductility

Table 5.2 Cross-section limits for composite sections in EC8

  Ductility class            Partially or fully      Concrete filled         Concrete filled
                             encased sections        rectangular sections    circular sections
  DCM (q ≤ 1.5–2.0)          c/tf ≤ 20√(235/fy)      h/t ≤ 52√(235/fy)       d/t ≤ 90 (235/fy)
  DCM (1.5–2.0 < q ≤ 4.0)    c/tf ≤ 14√(235/fy)      h/t ≤ 38√(235/fy)       d/t ≤ 85 (235/fy)
  DCH (q > 4.0)              c/tf ≤ 9√(235/fy)       h/t ≤ 24√(235/fy)       d/t ≤ 80 (235/fy)

thickness tf. The limits in hollow rectangular steel sections filled with concrete are
represented in terms of h/t, which is the ratio between the maximum external
dimension h and the tube thickness t. Similarly, for filled circular sections, d/t is
the ratio between the external diameter d and the tube thickness t. As in the case of
steel sections, notable differences also exist between the limits in EC8 for compos-
ite sections when compared with equivalent US provisions. Also, it should be noted
that the limits in Table 5.2 for partially encased sections (Elghazouli and Treadway
2008) may be relaxed even further if special additional details are provided to delay
or inhibit local buckling as indicated in Fig. 5.2 (Elghazouli 2009).
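For routine checks, the limits of Table 5.2 are simple to encode; the sketch below does so for the flange outstand of encased sections, taking q = 2.0 and q = 4.0 as the class boundaries implied by the table. The section dimensions used in the example are arbitrary.

```python
# Sketch: checking an encased composite flange outstand against the limits
# of Table 5.2. The section dimensions below are illustrative only.
import math

def encased_flange_limit(q):
    # Table 5.2 multipliers, with q = 2.0 and q = 4.0 as class boundaries
    if q <= 2.0:
        return 20.0
    return 14.0 if q <= 4.0 else 9.0

def check_encased_flange(c, tf, fy, q):
    limit = encased_flange_limit(q) * math.sqrt(235.0 / fy)
    return (c / tf) <= limit, c / tf, limit

ok, ratio, limit = check_encased_flange(c=100.0, tf=12.0, fy=355.0, q=4.0)
print(f"c/tf = {ratio:.2f} vs limit {limit:.2f}: {'OK' if ok else 'not OK'}")
```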
For beams connected to slabs, a number of requirements are stipulated in EC8 in
order to ensure satisfactory performance as dissipative composite elements (i.e. for
Concept b). These requirements comprise several criteria including those related to
the degree of shear connection, ductility of the cross-section and effective width
assumed for the slab. As in other codes, EC8 aims to ensure ductile behaviour in
composite sections by limiting the maximum compressive strain that can be
imposed on concrete in the sagging moment regions of the dissipative zones.

Fig. 5.2 Partially encased composite sections: (a) conventional, (b) with welded bars

Fig. 5.3 Ductility and effective width of composite beam sections

This is achieved by limiting the maximum ratio of x/d, as shown in Fig. 5.3. Limiting
ratios are provided as a function of the ductility class (DCM or DCH) and yield
strength of steel (fy). Close observation suggests that these limits are derived based
on assumed values for εcu2 of 0.25 % and εa of q·εy, where εy is the yield strain of
steel.
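Given these assumed strain values, the limiting ratio follows from simple strain compatibility over the effective depth d (a linear strain profile with εcu2 at the extreme concrete fibre and εa at the steel level); this is a standard mechanics result rather than a quotation from the code:

x/d ≤ εcu2 / (εcu2 + εa),  with εa = q·εy = q·fy/Es

For instance, with q = 4, fy = 355 MPa and Es = 210,000 MPa (so εy ≈ 0.0017 and εa ≈ 0.0068), this gives x/d ≤ 0.0025/(0.0025 + 0.0068) ≈ 0.27, which is of the order of the tabulated EC8 limits; the calculation is offered as a back-calculated illustration of the code assumptions.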
For dissipative zones of composite beams within moment frames, EC8 requires
the inclusion of ‘seismic bars’ in the slab at the beam-to-column connection region.
The objective is to incorporate ductile reinforcement detailing to ensure favourable
dissipative behaviour in the composite beams. The detailed rules are given in
Annex C of Part 1 and include reference to possible mechanisms of force transfer
in the beam-to-column connection region of the slab. The provisions are largely
based on background European research involving detailed analytical and experi-
mental studies (Plumier et al. 1998). It should be noted that Annex C of the code
only applies to frames with rigid connections in which the plastic hinges form in the
beams; the provisions in the annex are not intended, and have not been validated,
for cases with partial strength beam-to-column connections.
Another important consideration related to composite beams is the extent of the
effective width beff assumed for the slab, as indicated also in Fig. 5.3. EC8 includes
two tables for determining the effective width. These values are based on the
condition that the slab reinforcement is detailed according to the provisions of
Annex C since the same background studies (Plumier et al. 1998; Doneux and
Plumier 1999) were used for this purpose. The first table gives values for negative
(hogging) and positive (sagging) moments for use in establishing the second
moment of area for elastic analysis. These values vary from zero to 10 % of the
beam span depending on the location (interior or exterior column), the direction of
moment (negative or positive) and existence of transverse beams (present or not
present). On the other hand, the second table in the code provides values for use in
the evaluation of the plastic moment resistance. The values in this case are as high
as twice those suggested for elastic analysis. They vary from zero to 20 % of the
beam span depending on the location (interior or exterior column), the sign of
moment (negative or positive), existence of transverse beams (present or not
present), condition of seismic reinforcement, and in some cases on the width and
depth of the column cross-section. Clearly, design cases other than the seismic
situation would require the adoption of the effective width values stipulated in EC4.
Therefore, the designer may be faced with a number of values to consider for
various scenarios. Nevertheless, since the sensitivity of the results to these varia-
tions may not be significant (depending on the design check at hand), some
pragmatism in using these provisions appears to be warranted. Detailed research
studies (Castro et al. 2007) indicate that the effective width is mostly related to the
full slab width, although it also depends on a number of other parameters such as the
slab thickness, beam span and boundary conditions.

5.4 Capacity Design Requirements

5.4.1 Moment Frames

As in other seismic codes, EC8 aims to satisfy the ‘weak beam/strong column’
concept in moment frames, with plastic hinges allowed at the base of the frame, at
the top floor of multi-storey frames and for single-storey frames. To obtain ductile
plastic hinges in the beams, checks are made that the full plastic moment resistance
and rotation are not reduced by coexisting compression and shear forces. To satisfy
capacity design, columns should be verified for the most unfavourable combination
of bending moments MEd and axial forces NEd (obtained from MEd = MEd,G
+ 1.1γovΩMEd,E, and similarly for axial loads), where Ω is the minimum over-
strength in the connected beams (Ωi = Mpl,Rd/MEd,i). The parameters MEd,G and
MEd,E are the bending moments in the seismic design situation due to the gravity
loads and lateral earthquake forces, respectively, as shown in Fig. 5.4 (Elghazouli
2009).

Fig. 5.4 Moment action under gravity and lateral components in the seismic situation

The beam over-strength parameter (Ω = Mpl,Rd/MEd) as adopted in EC8 involves
a major approximation, as it does not account accurately for the influence of gravity
loads on the behaviour (Elghazouli 2010). This issue becomes particularly pro-
nounced in gravity-dominated frames (i.e. with large beam spans) or in low-rise
configurations (since the initial column sizes are relatively small), in which the
beam over-strength may be significantly underestimated. The extent of the problem
depends on the interpretation of the code and whether Ω is used in isolation
or in combination with an additional capacity design criterion based on a limiting
ratio of 1.3 on the column-to-beam capacity. It is also important to note that whilst
codes aim to achieve a ‘weak-beam/strong-column’ behaviour, some column hing-
ing is often unavoidable. In the inelastic range, points of contra-flexure in members
change and consequently the distribution of moments vary considerably from
idealised conditions assumed in design. The benefit of meeting code requirements
is to obtain relatively strong columns such that beam rather than column yielding
dominates over several stories, hence achieving adequate overall performance.
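As a numerical illustration of the combination quoted above, the sketch below evaluates the minimum beam over-strength Ω and the resulting column design moment; all variable names and input values are assumptions for illustration.

```python
def column_design_moment(M_Ed_G, M_Ed_E, beam_Mpl_Rd, beam_M_Ed,
                         gamma_ov=1.25):
    """Capacity-design column moment MEd = MEd,G + 1.1*gamma_ov*Omega*MEd,E,
    with Omega the minimum over-strength Mpl,Rd/MEd of the connected beams."""
    omega = min(mpl / med for mpl, med in zip(beam_Mpl_Rd, beam_M_Ed))
    return M_Ed_G + 1.1 * gamma_ov * omega * M_Ed_E

# Example: two beams framing into the column (moments in kNm)
print(round(column_design_moment(120.0, 300.0,
                                 beam_Mpl_Rd=[520.0, 480.0],
                                 beam_M_Ed=[400.0, 410.0]), 1))
```

Note that, as discussed above, Ω computed in this way can significantly underestimate the realistic beam over-strength when gravity moments dominate.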
The above-noted issue becomes more significant in composite moment frames
where relatively large spans are typical. Detailed studies on composite frames
(Elghazouli et al. 2008) indicate that design to EC8 can result in significant column
hinging. Full beam hinging is also significantly hampered by the difference between
the sagging and hogging moment capacities in composite sections. Another uncer-
tainty in composite moment frames is related to the effective slab width as
discussed before. Whilst US provisions employ the same approaches used in
non-seismic design, EC8 suggests more involved procedures for seismic design in
which this width varies depending on the direction of moment, location of beam,
and whether the check is for resistance or capacity design. This adds to the
complexity of the design and can have a notable influence on capacity design
procedures. To this end, it is important to note that the dissipative zones at the
beam ends of composite moment frames can be considered as steel-only sections in
EC8 (i.e. following Concept c). To achieve this, the slab needs to be ‘totally
disconnected’ from the steel members in a circular zone with a diameter of at
least 2beff around the columns, with beff determined on the basis of the larger
effective width of the connected beams. This ‘total disconnection’ also implies
that there is no contact between the slab and the sides of any vertical element such
as the columns, shear connectors, connecting plates, corrugated flange, etc.
The above consideration, of disregarding the composite action and designing for
steel-only dissipative zones, can be convenient in practical design. Clearly, two EI
values for the beams need to be accounted for in the analysis: composite in the
middle and steel at the ends. The beams are composite in the middle, hence
providing enhanced stiffness and capacity under gravity loading conditions. On
the other hand, in the seismic situation, the use of steel dissipative zones avoids the
need for detailed considerations in the slab, including those related to seismic
rebars, effective width and ductility criteria associated with composite dissipative
sections. This consideration also implies that the connections would be designed for
the plastic capacity of the steel beams only. Additionally, the columns need to be
capacity designed for the plastic resistance of steel instead of composite beam
sections, which avoids over-sizing of the column members.

Fig. 5.5 Axial action under gravity and lateral components in the seismic situation

5.4.2 Braced Frames

Whilst for moment frames, the dissipative zones may be steel or composite, the
dissipative zones in braced frames are typically only allowed to be in steel
according to EC8. In other words, the diagonal braces in concentrically braced
frames, and the bending/shear links in eccentrically braced frames, should typically
be designed and detailed such that they behave as steel dissipative zones. This
limitation is adopted in the code as a consequence of the uncertainty associated with
determining the actual capacity and ductility properties of composite steel/concrete
elements in these configurations. As a result, the design of composite braced frames
follows very closely those specified for steel, an issue which merits further assess-
ment and development.
Capacity design of concentrically braced frames in EC8 is based on ensuring
yielding of the diagonals before yielding or buckling of the beams or columns and
before failure of the connections. Due to buckling of the compression braces,
tension braces are considered to be the main ductile members, except in V and
inverted-V configurations. According to EC8, columns and beams should be capac-
ity designed for the seismic combination actions. The design resistance of the beam
or column under consideration NRd(MEd) should satisfy NRd(MEd) ≥ NEd,G
+ 1.1γovΩNEd,E, with due account of the interaction with the bending moment
MEd, where NEd,G and NEd,E are the axial loads due to gravity and lateral actions,
respectively, in the seismic design situation, as illustrated in Fig. 5.5 (Elghazouli
2009); Ω is the minimum value of axial brace over-strength over all the diagonals of
the frame and γov is the material over-strength. However, Ω of each diagonal should
not differ from the minimum value by more than 25 % in order to ensure reasonable
distribution of ductility. It is worth noting that unlike in moment frames, gravity
loading does not normally have an influence on the accuracy of Ω. It should also be
noted that the 25 % limit can result in difficulties in practical design; it can be
shown (Elghazouli 2010) that this limit can be relaxed or even removed if measures
related to column continuity and stiffness are incorporated in design.
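The homogeneity condition on Ω and the ensuing column check can be illustrated with the minimal sketch below (all names and numbers are assumptions for illustration):

```python
def brace_overstrengths(N_pl_Rd, N_Ed, tol=0.25):
    """Omega_i = Npl,Rd,i / NEd,i for each diagonal, the minimum value, and
    a flag showing whether no Omega_i exceeds the minimum by more than tol."""
    omegas = [npl / ned for npl, ned in zip(N_pl_Rd, N_Ed)]
    omega_min = min(omegas)
    uniform = all(om <= (1.0 + tol) * omega_min for om in omegas)
    return omega_min, uniform

def column_design_axial(N_Ed_G, N_Ed_E, omega_min, gamma_ov=1.25):
    """Capacity-design axial action NEd,G + 1.1*gamma_ov*Omega*NEd,E."""
    return N_Ed_G + 1.1 * gamma_ov * omega_min * N_Ed_E

omega, uniform = brace_overstrengths([900.0, 950.0, 1100.0],
                                     [700.0, 720.0, 730.0])    # kN
print(round(omega, 2), uniform)                                # 1.29 True
print(round(column_design_axial(800.0, 1200.0, omega), 1))     # ~2921.4 kN
```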
As mentioned previously, US provisions (AISC341 2010) for braced frames
differ from those in EC8 in terms of the R factors recommended as well as cross-
section limits for some section types. However, the most significant difference is
related to the treatment of the brace buckling in compression which may lead to
notably dissimilar seismic behaviour depending mainly on the slenderness of the
braces. This has been examined in detail in recent studies (Elghazouli 2010), and
has significant implications on the frame over-strength as well as on the applied
forces and ductility demand imposed on various frame components.
Fig. 5.6 Forces developing in columns of concentrically braced frames

As expected, in the design of the diagonal members in concentrically braced
frames, the non-dimensional slenderness λ used in EC3 plays an important role in
the behaviour (Elghazouli 2003). In earlier versions of EC8, an upper limit of 1.5
was proposed to prevent elastic buckling. However, further modifications have
been made in subsequent versions of EC8 and the upper limit has been revised to
a value of 2.0 which results in a more efficient design. On the other hand, in frames
with X-diagonal braces, EC8 stipulates that λ should be between 1.3 and 2.0. The
lower limit is specified in order to avoid overloading columns in the pre-buckling
stage of diagonals. Satisfying this lower limit can however result in significant
difficulties in practical design (Elghazouli 2009). It would be more practical to
avoid placing such limits, yet ensure that forces applied on components other than
the braces are based on equilibrium at the joints, with due account of the relevant
actions in compression. Figure 5.6 illustrates, for example, the compression force
F (normalised by Npl sinϕ) developing in a column of X and decoupled brace
configurations (Elghazouli 2010), where Npl is the axial plastic capacity of the brace
cross-section and ϕ is the brace angle. These actions can be based on the initial
buckling resistance (Nb) or the post-buckling reserve capacity (Npb) depending on
the frame configuration and design situation. Based on available experimental
results (Goggins et al. 2005; Elghazouli et al. 2005), a realistic prediction of Npb
can be proposed (Elghazouli 2010) accounting for brace slenderness as well as
expected levels of ductility.
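For reference, the non-dimensional slenderness entering these limits follows the usual EC3 definition λ̄ = √(A·fy/Ncr); a minimal sketch for a pin-ended brace is given below, with placeholder section properties.

```python
import math

def brace_slenderness(A, I, L, fy=355.0, E=210000.0):
    """Non-dimensional slenderness lambda_bar = sqrt(A*fy/Ncr), with the
    elastic buckling load Ncr = pi^2*E*I/L^2 (pin-ended member).
    Units: A in mm^2, I in mm^4, L in mm, fy and E in MPa."""
    N_cr = math.pi ** 2 * E * I / L ** 2
    return math.sqrt(A * fy / N_cr)

lam = brace_slenderness(A=3000.0, I=8.0e6, L=6000.0)
# EC8 range quoted above for X-diagonal bracing: 1.3 <= lambda_bar <= 2.0
print(round(lam, 2), 1.3 <= lam <= 2.0)   # 1.52 True
```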

5.4.3 Material Considerations

In addition to conforming to the requirements of EC3 and EC4, EC8 stipulates
further criteria related to structural steel, connection components, and reinforce-
ment types as well as lower and upper bounds for concrete strength, amongst others.
A key consideration is determining a realistic value for the over-strength of steel
material (γ ov) for use in capacity design checks. A number of conditions are given in
EC8 (Elghazouli 2009), but the suggested default value of 1.25 is typically adopted
in practice. It is however recognised (ECCS 2013) that the level of over-strength
varies significantly depending on the type and grade of steel, with the over-strength
expected to be more pronounced in lower grades. As a consequence, US codes
(AISC341 2010) adopt factors varying between 1.1 and 1.6, depending on the type
and grade of steel. Some National Annexes to EC8 also already suggest a deviation
from the recommended value of 1.25 as a function of the steel grade. Another
solution would be to produce seismic steel grades with specified upper bound
strength, as adopted in Japan, although this may not be practical for European
manufacturers. Overall, there seems to be a need for more reliable guidance in EC8
on the levels and sources of over-strength that should be adopted in practice.
Another area that requires clarification and development in EC3 and EC8 is related
to the steel material toughness for application in seismic design (ECCS 2013),
although this has been addressed in the National Annexes of several European
countries. Specific guidance appears to be needed particularly in relation to refer-
ence temperatures and strain rates that would be appropriate to employ in seismic
design situations.

5.5 Lateral Over-Strength

An important factor influencing seismic response is the over-strength exhibited by
the structure. There are several sources that can introduce over-strength, such as
material effects caused by a higher yield stress compared to the characteristic value
as discussed in the previous section, or size effects due to the selection of members
from standard lists, as in those used for steel sections. Additional factors include
contribution of non-structural elements, or increase in member sizes due to other
load cases or architectural considerations. Most notably, over-strength is often a
direct consequence of the application of drift-related requirements or inherent
idealisations and simplifications within the design approaches and procedures.

Fig. 5.7 Expected levels of lateral over-strength in moment frames (actual frame over-strength Vy/Vd plotted against elastic spectral acceleration Se/g for q = 4, 6 and 8; for q ≤ 3, strength is normally determined by sizing for the gravity situation or governed by material and redistribution considerations, i.e. 1.1γov·αu/α1, implying an actual strength larger than Ve (i.e. at q = 1); for q > 3, strength is normally determined by the inter-storey drift limits, with relaxed drift limits giving lower over-strength than the stringent 0.5%h limit)

5.5.1 Stability and Drift Implications

It can be shown that, in comparison with North American and other international
provisions, drift-related requirements in EC8 are significantly more stringent
(Elghazouli 2010). This is particularly pronounced in case of the stability coeffi-
cient θ, which is a criterion that warrants further detailed consideration. As a
consequence of the stern drift and stability requirements and the relative sensitivity
of framed structures, particularly moment frames, to these effects, they can often
govern the design leading to considerable over-strength, especially if a large
behaviour factor is assumed. This over-strength (represented as the ratio of the
actual base shear Vy to the design value Vd) is also a function of the normalised
elastic spectral acceleration (Se/g) and gravity design, as illustrated in Fig. 5.7
(Elghazouli 2010).
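The stability coefficient mentioned above takes the form θ = Ptot·dr/(Vtot·h) in EC8 (clause 4.4.2.2), where Ptot and Vtot are the total gravity load and seismic storey shear at and above the storey considered, dr the design inter-storey drift and h the storey height; a minimal sketch of the check follows, with placeholder input values and the outcome bands as commonly summarised from the code.

```python
def stability_coefficient(P_tot, d_r, V_tot, h):
    """Inter-storey drift sensitivity coefficient theta = Ptot*dr/(Vtot*h)."""
    return P_tot * d_r / (V_tot * h)

theta = stability_coefficient(P_tot=12000.0, d_r=0.025, V_tot=900.0, h=3.5)
if theta <= 0.10:
    note = "second-order effects may be neglected"
elif theta <= 0.20:
    note = "amplify the seismic action effects by 1/(1 - theta)"
else:
    note = "theta > 0.2: stiffen the frame (EC8 caps theta at 0.3)"
print(round(theta, 3), note)   # 0.095 second-order effects may be neglected
```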
Whereas the presence of over-strength reduces the ductility demand in dissipa-
tive zones, it also affects forces imposed on frame and foundation elements. A
rational application of capacity design necessitates a realistic assessment of lateral
capacity after the satisfaction of all provisions, followed by a re-evaluation of
global over-strength and the required ‘q’. Although high ‘q’ factors are allowed
for moment frames, in recognition of their ductility and energy dissipation capa-
bilities, it should be noted that such a choice is often unnecessary and could lead to
undesirable effects.

Fig. 5.8 Lateral frame over-strength arising from tension and compression design (over-strength Vy/Vd plotted against brace slenderness λ for tension-based and compression-based design, with the EC8 lower limit of 1.3 for X-diagonals and the EC8 upper limit of 2.0 indicated)

5.5.2 Influence of Design Idealisations

As noted above, simplifications in the design procedure can result directly in
considerable levels of structural over-strength. A most significant source of over-
strength in concentrically braced frames arises from the simplification associated
with the treatment of brace buckling in compression. To enable the use of linear
elastic analysis tools, commonly employed in design practice, two different
approaches are normally adopted in design methods. Whereas several codes, such
as US provisions (AISC341 2010), base the design strength on the brace buckling
capacity in compression (with a few exceptions), European provisions are largely
based on the brace plastic capacity in tension (except for V and inverted-V
configurations).
Whilst both the tension and compression based approaches lead to frame over-
strength, they have directly opposite trends with respect to the brace slenderness
(Elghazouli 2003), as illustrated in Fig. 5.8. The over-strength arising from the
tension-based idealisation is insignificant for relatively slender braces but
approaches a factor of two for relatively stocky braces. In contrast, the over-
strength arising from the compression design is insignificant for stocky members
but increases steadily with the slenderness ratio. As noted previously, it is important
to quantify the level of over-strength in a frame and assess the actual forces
sustained by the braces in compression. Depending on the specific design situation
and frame configuration, it may be necessary to estimate either the maximum or
minimum forces attained in compression members in a more realistic manner as
opposed to the idealised approaches currently adopted in seismic codes.

5.6 Connection Design

5.6.1 Steel Moment Connections

Steel moment frames have traditionally been designed with rigid full-strength
connections, usually of fully-welded or hybrid welded/bolted configuration. Typi-
cal design provisions ensured that connections are provided with sufficient over-
strength such that dissipative zones occur mainly in the beams. However, the
reliability of commonly-used forms of full-strength beam-to-column connection
has come under question following poor performance in large seismic events,
particularly in Northridge and Kobe earthquakes (SAC 1995). The extent and
repetitive nature of damage observed in several types of welded and hybrid
connections have directed considerable research effort not only to repair methods
for existing structures but also to alternative connection configurations to be
incorporated in new designs.
Observed seismic damage to welded and hybrid connections was attributed to
several factors including defects associated with weld and steel materials, welding
procedures, stress concentration, high rotational demands, scale effects, as well as
the possible influence of strain levels and rates (FEMA 2000). In addition to the
concerted effort dedicated to improving seismic design regulations for new con-
struction, several proposals have been forwarded for the upgrading of existing
connections. As shown schematically in Fig. 5.9 (Elghazouli 2009), this may be
carried out by strengthening of the connection through haunches, cover or side
plates, or other means. Alternatively, it can be achieved by weakening of the beam
by trimming the flanges (i.e. reduced beam section ‘RBS’ or ‘dog-bone’ connec-
tions), perforating the flanges, or by reducing stress concentrations through slots in
beam webs, enlarged access holes, etc. In general, the design can be based on either
prequalified connections or on prototype tests. Prequalified connections have been
proposed in the US (AISC358 2010), and a similar European activity is currently
underway. It should be noted however that most prequalification activities have
been focusing on connections to open section columns, with comparatively less
attention given to connections to tubular columns (Elghazouli and Packer 2014).

Fig. 5.9 Examples of modified moment beam-to-column connection configurations: (a) with
haunches, (b) with cover plates; (c) reduced beam section

Another important aspect of connection behaviour is related to the influence of
the column panel zone. This has direct implications on the ductility of dissipative
zones as well as on the overall frame performance. Recent research studies (Castro
et al. 2008), involved the development of realistic modelling approaches for panel
zones within moment frames as well as assessment of current design procedures.
One important issue is related to the treatment of the two yield points corresponding
to the onset of plasticity in the column web and surrounding components, respec-
tively. Another key design consideration is concerned with balancing the extent of
plasticity between the panel zone and the connected beams, an issue which can be
significantly affected by the level of gravity applied on the beams. On the one hand,
allowing a degree of yielding in the panel reduces the plastic hinge rotations in the
beams yet, on the other hand, relatively weak panel zone designs can result in
excessive distortional demands which can cause unreliable behaviour of other
connection components particularly in the welds. The approaches used in
European guidance, through the combined provisions of EC3 or EC4 with EC8,
appear to lead to significantly different design in comparison with that adopted in
US provisions, an issue which requires further examination and development.
Bolted connections, which can be designed as rigid or semi-rigid, can alleviate
many of the drawbacks of welded forms (Elghazouli 2009). However, the guidance
for semi-rigid bolted connections varies in detail between US and EC8 procedures.
In AISC, partially-restrained (PR) connections are not permitted for intermediate or
special moment frames connections. They can only be used in ordinary moment
frames, provided the nominal connection strength is not less than 50 % of the plastic
moment capacity of the beam, and the stiffness, strength and deformation capacity
of the PR moment connections are considered in the design including the effect on
overall frame stability. On the other hand, EC8 permits in principle the use of
partial strength (i.e. dissipative) connections in primary lateral load-resisting sys-
tems provided that: (i) all connections have rotation capacity consistent with global
deformations, (ii) members framing into connections are stable at the ultimate limit
state, and (iii) connection deformation is accounted for through nonlinear analysis.
Unlike in AISC, there is no limit given in EC8 on the minimum moment ratio, nor
on the use with different ductility classes. Dissipative connections should satisfy the
rotational demand implied for plastic hinge zones, irrespective of whether the
connections are partial or full strength; these are specified as 25 and 35 mrad for
DCM and DCH, respectively, which are broadly similar to the demands in IMF and
SMF in AISC 341 (total drift of 0.02 and 0.04 rad, for IMF and SMF, respectively).

5.6.2 Composite Moment Connections

As discussed previously, EC8 permits three general design concepts for composite
structures (low dissipative behaviour, dissipative composite zones or dissipative
steel zones). On the other hand, AISC refers to specific composite systems as
indicated in Table 5.1 (e.g. C-OMF, C-IMF, C-SMF). In principle, this classifica-
tion applies to systems consisting of composite or reinforced concrete columns and
structural steel, concrete-encased composite or composite beams. The use of PR
connections (C-PRMF) is included, and is applicable to moment frames that consist
of structural steel columns and composite beams that are connected with partially
restrained (PR) moment connections. Similar to PR steel connections, they should
have strengths of at least 0.5Mp but additionally should exhibit a rotation capacity
of at least 0.02 rad. It should be noted that, as mentioned previously, Annex C in
EC8 for the detailing of slabs only applies to frames with rigid connections in which
the plastic hinges form in the beams. However, guidance on the detailing of
composite joints using partial strength connections is addressed in the commen-
tary of AISC 341 for C-PRMF systems.
The use of composite connections can often simplify some of the challenges
associated with traditional steel and concrete construction, such as minimizing field
welding and anchorage requirements. Given the many alternative configurations of
composite structures and connections, there are few standard details for connections
in composite construction. In most composite structures built to date, engineers
have designed connections using basic mechanics, equilibrium models
(e.g. classical beam-column, truss analogy, strut and tie, etc.), existing standards
for steel and concrete construction, test data, and good judgment. As noted above,
however, engineers do face inherent complexities and uncertainties when dealing
with composite dissipative connections, which can often counterbalance the merits
of this type of construction when choosing the structural form. In this context, the
‘total disconnection’ approach permitted in EC8 (i.e. Concept c) offers a practical
alternative in order to use standard or prequalified steel-only beam-to-column
connections. This status can also be achieved using North American codes provided
the potential plastic hinge regions are maintained as pure steel members. A similar
approach has also been recently used in hybrid flat slab-tubular column connections
(Eder et al. 2012), hence enabling the use of flat slabs in conjunction with steel-only
dissipative members.

Fig. 5.10 Gusset plate connections in concentrically braced frames (bracing member terminating at a clearance of 2t from the fold line of a gusset plate of thickness t)

5.6.3 Bracing Connections

Issues related to connection performance and design are clearly not only limited to
moment connections, but also extend to other configurations such as connections to
bracing members. Many of the failures reported in concentrically braced frames due
to strong ground motion have been in the connections. In principle, bracing
connections can be designed as rotationally restrained or unrestrained, provided
that they can transfer the axial cyclic tension and compression effectively. The in-
and out-of-plane behaviour of the connection, and their influence on the beam and
column performance, should be carefully considered in all cases. For example,
considering gusset plate connections, as shown in Fig. 5.10 (Elghazouli 2009),
satisfactory performance can be ensured by allowing the gusset plate to develop
plastic rotations. This requires that the free length between the end of the brace
and the assumed line of restraint for the gusset be sufficiently long to permit
plastic rotations, yet short enough to preclude the occurrence of plate buckling prior
to member buckling. Alternatively, connections with stiffness in two directions,
such as crossed gusset plates, can be detailed. The performance of bracing connec-
tions, such as those involving gusset plate components, has attracted significant
research interest in recent years (e.g. Lehman et al. 2008). Alternative tri-linear and
nonlinear fold-line representations have been proposed and validated. A recent
European research programme has also examined the performance of alternative
forms of gusset-plate bracing connections and provided recommendations on optimum configurations for use in design (Broderick et al. 2013).
Design examples for bracing-to-gusset plate connections in concentrically and
eccentrically braced frames are given in the AISC Seismic Design Manual (2012),
in accordance with AISC 341 and ASCE7, and typically require many consider-
ations and design checks. In contrast, as for moment connections, the design of
connections between bracing members and beams/columns is only dealt with in a
conceptual manner in EC8. Accordingly, designers can adopt details available from
the literature, or based on prototype testing.
Designing bracing connections in an efficient and practical manner can be
complex and time-consuming, and requires significant expertise (Elghazouli and
Packer 2014). This has led to the development of ‘pre-engineered’ proprietary
solutions using ‘off-the-shelf’ cast steel connections (Herion et al. 2010). A sub-
stantially more compact field-bolted connection is achieved than would otherwise
be possible with typical bolted connections using splice plates. Other proprietary
connections include yielding ‘fuses’ such as the Yielding Brace System (YBS)
(Gray et al. 2014). In this case, dissipation is provided by flexural yielding of parts
of the YBS while the bracing member and other frame elements remain essentially
elastic. Another ‘off-the-shelf’ solution is also provided through Buckling
Restrained Braces which, as noted before, are not currently directly addressed in
EC8. It should be noted that AISC358 is limited to prequalified solutions for steel
moment connections, and does not prequalify connections for braced frames. At
present, ‘pre-engineered’ bracing connections can perhaps be treated in a compa-
rable manner to qualification of custom seismic products which require proof
testing. Overall, compared to self-designed connections, proprietary seismic con-
nections could offer improved performance, additional quality assurance, and the
potential for savings in cost and construction time.

5.7 Concluding Remarks

This paper highlights various issues related to the seismic design of steel and
composite frames that would benefit from further assessment and code develop-
ment, with particular focus on the provisions of EC8. Since the European seismic
code is in general relatively clear in its implementation of the underlying capacity
design principles as well as the purpose of the parameters adopted within various
procedures, its rules can be readily adapted and modified based on new research
findings and improved understanding of seismic behaviour.
Comparison of EC8 provisions with those in AISC in terms of structural
configurations and associated behaviour factors highlights a number of issues that
are worthy of further development. Several lateral resisting systems that are cur-
rently dealt with in AISC are not incorporated in EC8 including steel-truss moment
frames, steel-plate walls and buckling-restrained braces. It is anticipated that these
will be considered in future revisions of the code. Another notable difference is the
relatively low q assigned to V-braced frames in EC8 compared to AISC, which
highlights the need for further assessment of behaviour factors particularly for
braced and dual frames in EC8, and to extend it to other forms such as ‘zipper’
and ‘buckling restrained’ configurations. It is also shown that whilst EC8 typically
adopts the equal-displacement approach for predicting inelastic drift, US provisions
employ specific seismic drift amplification factors. It is however noted that there is
a need for seismic codes to adopt improved prediction methods which account for
earthquake characteristics.
In terms of local ductility, comparison of the width-to-thickness limits in EC8
and AISC reveals considerable differences, particularly in the case of rectangular
and circular tubular members. Since the ductility capacity and susceptibility to
fracture are directly related to the occurrence of local buckling, it seems necessary
to conduct further assessment of the adequacy of Class 1 sections to satisfy the
cyclic demands imposed under prevalent seismic conditions. For composite dissi-
pative sections, the requirements in EC8 for determining the effective width and the
detailing in the slab are intricate, and some pragmatism and simplification in their
application may be necessary, unless the option of ‘disconnection’ is adopted. It is
also noted that allowing DCL or modified-DCL detailing in EC8 for moderate
seismicity, with an appropriate reserve capacity, may be desirable particularly for
special or complex structures.
It is observed that in EC8 the capacity-design application rules for columns
ignore the important influence of gravity loads on the over-strength of beams. This
issue becomes particularly pronounced in gravity-dominated frames or in low-rise
configurations. The extent of the problem depends on the interpretation of the code
and whether Ω is used in isolation or in combination with an additional capacity
design criterion based on a limiting ratio of 1.3 on the column-to-beam capacity.
The above-noted issue becomes more significant in composite moment frames
where relatively large spans are typical. This is also added to the problem of
achieving full beam hinging in dissipative composite frames due to the difference
between the sagging and hogging moment capacities in composite sections.
In order to mitigate the vulnerability of braced frames to the concentration of
inelastic demand within critical storeys, EC8 introduces a 25 % limit on the
maximum difference in brace over-strength (Ωi) within the frame. Detailed studies
show that this may not eliminate the problem and can impose additional design
effort and difficulties in practical design. Instead, this limit can be significantly
relaxed or even removed if measures related to column continuity and stiffness are
incorporated in design. Another issue related to concentrically braced frames is the
lower slenderness limit of 1.3 imposed in EC8 for X-bracing, in order to limit the
compression force in the brace. Satisfying this limit can result in significant
difficulties in practical design. It would be more practical to avoid placing such
limits, yet ensure that forces applied on components other than the braces are based
on equilibrium at the joints, with due account of the relevant actions in compres-
sion. Improved procedures that account for brace slenderness as well as expected
levels of ductility could be adopted.
For the purpose of capacity design checks, it is important to determine a realistic
value for the over-strength of steel material. Unlike AISC, EC8 suggests a default
value of 1.25. It is recognised however that the level of over-strength varies
significantly depending on the type and grade of steel, with the over-strength
expected to be more pronounced in lower grades. There seems to be a need for
more reliable guidance in EC8 on the levels and sources of material over-strength
that should be adopted in practice. Another area that requires clarification and
development in EC3 and EC8 is related to the steel material toughness for appli-
cation in seismic design. Specific guidance appears to be needed particularly in
relation to reference temperatures and strain rates that would be appropriate to
employ in seismic design situations.
Apart from over-strength arising from the material, lateral frame over-strength
can be a direct result of design idealisations or the application of drift-related
criteria. A significant design idealisation in concentrically braced frames is related
to the treatment of buckling of the compression braces. Whereas AISC largely
bases the design strength on the brace buckling capacity in compression, EC8
adopts the brace plastic capacity in tension with few exceptions. Whilst both
simplifications lead to frame over-strength, they have directly opposite trends
with respect to the brace slenderness. Depending on the specific design situation
and frame configuration, it may be necessary to estimate either the maximum or
minimum forces attained in compression members in a more realistic manner as
opposed to the idealised approaches currently adopted in seismic codes.
The other key consideration influencing lateral frame over-strength is related to
drift criteria. In comparison with other seismic codes, drift and stability require-
ments in EC8 are significantly more stringent. As a consequence, these checks can
often govern the design, leading to considerable over-strength, especially if a high
‘q’ is assumed. Whereas the presence of over-strength reduces the ductility demand
in dissipative zones, it also affects forces imposed on frame and foundation
elements. A rational application of capacity design necessitates a realistic assess-
ment of lateral capacity after the satisfaction of all provisions, followed by a
re-evaluation of global over-strength and the required ‘q’. Although high ‘q’ factors
are allowed for various frame types in EC8, such a choice is often unnecessary and
undesirable.
In terms of beam-to-column connections, there is clearly a need for a concerted
effort to develop European guidance, in conjunction with the principles of EC8, on
appropriate connection detailing using representative sections, materials and detail-
ing practices. There is also a need for reviewing the design of column panel zones in
moment frames, resulting from the combined application of the rules in EC3 and
EC8. In particular, the definition of the yield point as well as the balance of
plasticity between the panel and connected beams require further consideration.
In general, it seems logical for future activities to promote the development of
‘prequalified’ or ‘pre-engineered’ seismic connections that satisfy the requirements
of EC8, and to provide supporting design procedures and associated simplified
analytical tools. These should not be limited to welded moment connections, but
should extend to bolted rigid and semi-rigid configurations as well as joints of
bracing members and link zones in braced frames.

Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.

References

AISC (2012) Seismic design manual, 2nd edn. American Institute of Steel Construction Inc.,
AISC, Chicago
AISC 341 (2010) Seismic provisions for structural steel buildings. ANSI/AISC 341–10 American
Institute of Steel Construction Inc., AISC, Chicago
AISC 358 (2010) Prequalified connections for special and intermediate steel moment frames for
seismic applications. ANSI/AISC 358–10, American Institute of Steel Construction Inc.,
AISC, Chicago
ASCE7 (2010) ASCE/SEI – ASCE 7–10 – minimum design loads for buildings and other
structures. American Society of Civil Engineers/Structural Engineering Institute, Reston
Broderick BM, Hunt A, Mongabure P, LeMaoult A, Goggins JM, Salawdeh S, O’Reilly G, Beg D,
Moze P, Sinur F, Elghazouli AY, Plumier A (2013) Assessment of the seismic response of
concentrically-braced frames. SERIES Concluding Workshop, Earthquake Engineering
Research Infrastructures, European Commission, JRC-Ispra, Italy
Castro JM, Elghazouli AY, Izzuddin BA (2007) Assessment of effective slab widths in composite
beams. J Constr Steel Res 63(10):1317–1327
Castro JM, Davila-Arbona FJ, Elghazouli AY (2008) Seismic design approaches for panel zones in
steel moment frames. J Earthq Eng 12(S1):34–51
Doneux C, Plumier A (1999) Distribution of stresses in the slab of composite steel-concrete
moment resistant frames submitted to earthquake action. Stahlbau 68(6):438–447
ECCS (2013) Assessment of EC8 provisions for seismic design of steel structures. In: Landolfo R
(ed) European convention for constructional steelwork, Brussels
Eder MA, Vollum RL, Elghazouli AY (2012) Performance of ductile RC flat slab-to-steel column
connections under cyclic loading. Eng Struct 36(1):239–257
Elghazouli AY (2003) Seismic design procedures for concentrically braced frames. Struct Build
156:381–394
Elghazouli AY (ed) (2009) Seismic design of buildings to Eurocode 8. Taylor and Francis/Spon
Press, London
Elghazouli AY (2010) Assessment of European seismic design procedures for steel framed
structures. Bull Earthq Eng 8(1):65–89
Elghazouli AY, Packer JA (2014) Seismic design solutions for connections to tubular members. J
Steel Constr 7(2):73–83
Elghazouli AY, Treadway J (2008) Inelastic behaviour of composite members under combined
bending and axial loading. J Constr Steel Res 64(9):1008–1019
Elghazouli AY, Broderick BM, Goggins J, Mouzakis H, Carydis P, Bouwkamp J, Plumier A
(2005) Shake table testing of tubular steel bracing members. Struct Build 158:229–241
Elghazouli AY, Castro JM, Izzuddin BA (2008) Seismic performance of composite moment
frames. Eng Struct 30(7):1802–1819
Elghazouli AY, Kumar M, Stafford PJ (2014) Prediction and optimisation of seismic drift
demands incorporating strong motion frequency content. Bull Earthq Eng 12(1):255–276
Eurocode 3 (2005) Design of steel structures – Part 1.1: General rules and rules for buildings. EN
1993–1: 2005, European Committee for Standardization, CEN, Brussels
Eurocode 4 (2004) Design of composite steel and concrete structures – Part 1.1: General rules and
rules for buildings. EN 1994–1: 2004, European Committee for Standardization, CEN,
Brussels
Eurocode 8 (2005) Design of structures for earthquake resistance – Part 1: General rules, seismic
actions and rules for buildings. EN 1998–1: 2004, European Committee for Standardization,
Brussels
FEMA (2000) Federal Emergency Management Agency. Recommended seismic design criteria
for new steel moment-frame buildings. Program to reduce earthquake hazards of steel moment-
frame structures, FEMA-350, FEMA, Washington, DC
Goggins JM, Broderick BM, Elghazouli AY, Lucas AS (2005) Experimental cyclic response of
cold-formed hollow steel bracing members. Eng Struct 27(7):977–989
Gray MG, Christopoulos C, Packer JA (2014) Cast steel yielding brace system (YBS) for
concentrically braced frames: concept development and experimental validations. J Struct
Eng, ASCE 140(4):04013095
Herion S, de Oliveira JC, Packer JA, Christopoulos C, Gray MG (2010) Castings in tubular
structures – the state of the art. Struct Build (Proceedings of the Institution of Civil Engineers)
163(SB6):403–415
Kumar M, Stafford PJ, Elghazouli AY (2013) Influence of ground motion characteristics on drift
demands in steel moment frames designed to Eurocode 8. Eng Struct 52:502–517
Lehman DE, Roeder CW, Herman D, Johnson S, Kotulka B (2008) Improved seismic performance
of gusset plate connections. J Struct Eng, ASCE 134(6):890–901
Plumier A, Doneux C, Bouwkamp JG, Plumier C (1998) Slab design in connection zones of
composite frames. Proceedings of the 11th ECEE Conference, Paris
SAC (1995) Survey and assessment of damage to buildings affected by the Northridge Earthquake
of January 17, 1994, SAC95-06, SAC Joint Venture, Sacramento
Chapter 6
Seismic Analyses and Design of Foundation Soil Structure Interaction

Alain Pecker

Géodynamique et Structure, Bagneux, France
Ecole Nationale des Ponts ParisTech, Champs-sur-Marne, France
e-mail: [email protected]

Abstract This paper illustrates, on a real project, one aspect of soil
structure interaction for a piled foundation. Kinematic interaction is well recog-
nized as being the cause of the development of significant internal forces in the piles
under seismic loading. Another aspect of kinematic interaction which is often
overlooked is the modification of the effective foundation input motion. As
shown in the paper such an effect may however be of primary importance.

6.1 Introduction

Kinematic interaction is well recognized as being the cause of the development of


significant internal forces in the piles under seismic loading. These internal forces
are developed as the consequence of the ground displacement induced by the
passage of the seismic waves. These displacements are imposed to the piles
which may, or may not, follow the soil displacements depending on the bending
stiffness of the piles relative to the soil shear stiffness (e.g. Kavvadas and Gazetas
1993). For flexible piles, the internal forces, i.e. pile bending moments and shear
forces, can be computed by simply imposing the soil displacements to the pile; for
stiff piles a soil structure analysis shall be conducted with proper modelling of the
soil-pile interaction. Obviously, kinematic effects are more pronounced when the
piles are stiff relative to the surrounding soil and when they cross consecutive layers
of sharply different stiffnesses because the soil curvature is very large at such
interfaces. This aspect of kinematic interaction is well understood and correctly
accounted for in seismic design of piled foundations; for instance the European
Seismic code (CEN 2004) requires that kinematic bending moments be computed
whenever the two following conditions occur simultaneously:


• The ground profile has an average shear wave velocity smaller than 180 m/s
(ground type D) and contains consecutive layers of sharply differing stiffness;
consecutive layers of sharply differing stiffness are defined as layers with a ratio
for the shear moduli greater than 6.
• The zone is of moderate or high seismicity, i.e. presents a ground surface
acceleration larger than 0.1 g, and the category of importance of the structure
is higher than normal (importance category III or IV).
There is another aspect of kinematic interaction often overlooked, even in
seismic building codes, which is the modification of the effective foundation
input motion. For example the European Seismic code (CEN 2004) does not
mention it, nor does the ASCE 41-13 standard (2014) which however dedicates
several pages to the effect of kinematic interaction for shallow or embedded
foundations.
This issue might be critical when substructuring is used and the global soil-
structure-interaction problem is solved in several steps. However, when a global
model including both the soil and the superstructure is contemplated, kinematic
interaction is accounted for in the analysis, provided the global model correctly
reflects the physical character of the problem. These aspects are illustrated below on
a real bridge project.

6.2 Soil Structure Interaction Modelling

As opposed to spread footings, for which a single method of analysis to determine
the forces transmitted by the foundation emerges in practice (based on a
substructuring approach and the definition of the foundation stiffness matrix and
damping), several modeling techniques are used to model piled foundations for
seismic response studies; the most common methods are the simplified beam on
Winkler foundation model and the coupled foundation stiffness matrix
(substructuring). These two modeling techniques are illustrated in Fig. 6.1 for the
global model and in Fig. 6.2 for the substructure model (Lam and Law 2000).

6.2.1 Global SSI Model for Piled Foundations

In the global model, piles are represented by beam elements supported by linear or
nonlinear, depth-varying, Winkler springs. In the case of earthquake excitation,
ground motion would impart different loading at each soil spring and these motions
need to be calculated from a separate analysis (site response analysis). Kinematic
interaction is therefore correctly accounted for. However, the main drawback of this
modeling technique is the large number of degrees of freedom needed to formulate
the complete system.

Fig. 6.1 Global pile-structure model (superstructure and pile cap on piles represented by beam elements supported by depth-varying Winkler springs kh1…khn and kv1…kvn, each driven by the corresponding depth-varying free-field horizontal and vertical motions)

Fig. 6.2 Substructure model

The p-y relation, representing the nonlinear spring stiffness, is generally devel-
oped on the basis of a semi-empirical curve, which reflects the nonlinear resistance
of the local soil surrounding the pile at specified depths. A number of p-y models
have been proposed by different authors for different soil conditions. The two most
commonly used p-y models are those proposed by Matlock et al. (1970) for soft clay
and by Reese et al. (1974) for sand. These models are essentially semi-empirical
and have been developed on the basis of a limited number of full-scale lateral load
tests on piles of small diameters ranging from 0.30 to 0.40 m. To extrapolate the
p-y criteria to conditions that are different from the one from which the p-y models
were developed requires some judgment and consideration. For instance in Slove-
nia, values of the spring stiffnesses are derived from the static values, increased by
30 %. Based on some field test results, there are indications that stiffness and
ultimate lateral load carrying capacity of a large diameter drilled shaft are larger
than the values estimated using the conventional p-y criteria. Pender (1993) sug-
gests that the subgrade modulus used in p-y formulation would increase linearly
with pile diameter.
Studies have shown that Matlock and Reese p-y criteria give reasonable pile
design solutions. However, the p-y criteria were originally conceived for design
against storm wave loading conditions based on observation of monotonic static
and cyclic pile load test data. Therefore, Matlock and Reese’s static p-y curves can
serve to represent the initial monotonic loading path for typical small diameter
driven isolated piles. If the complete system of a bridge is modeled for a seismic
response study, individual piles and p-y curves can be included in the analytical
model.
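To give a feel for the shape of these semi-empirical curves, the sketch below implements the static backbone commonly attributed to Matlock's soft-clay criterion, p = 0.5·pu·(y/y50)^(1/3) capped at p = pu for y ≥ 8·y50; the numerical inputs are placeholders and the formulation should be verified against the original reference before use.

```python
import numpy as np

def matlock_soft_clay_py(y, p_u, y50):
    """Static p-y backbone for soft clay (after Matlock 1970):
    p = 0.5*pu*(y/y50)**(1/3), capped at pu for y >= 8*y50.
    y and y50 in consistent length units; p and pu per unit pile length."""
    y = np.asarray(y, dtype=float)
    p = 0.5 * p_u * np.cbrt(y / y50)
    return np.minimum(p, p_u)

y = np.linspace(0.0, 0.10, 6)                        # deflections (m)
print(matlock_soft_clay_py(y, p_u=150.0, y50=0.01))  # resistances (kN/m)
```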
However, for a large pile group, group effects become important. An example is
given in Fig. 6.3 which presents the results of horizontal impedance calculations of
the group of piles of half the foundation (22 piles) of one of the pylons of the Vasco
da Gama bridge in Lisbon (Pecker 2003); the group efficiency, computed from
elastodynamic theory, is of the order of 1/6 at low frequencies and decreases with
frequency due to the constructive interference of diffracted waves from adjacent
piles. Typically, for large pile groups it is not uncommon to calculate group
efficiency in the range 1/3 to 1/6.
Although group effect has been a popular research topic within the geotechnical
community, currently there is no common consensus on the design approach to
incorporate group effects. Full scale and model tests by a number of authors show
that in general, the lateral capacity of a pile in a pile group is less than that of a
single isolated pile due to so-called group efficiency. The reduction is more
pronounced as the pile spacing is reduced. Other important factors that affect the
efficiency and lateral stiffness of the pile are the type and strength of soil, number of
piles, type and level of loading. In the past, analyses of group effects were based
mostly on elastic halfspace theory due to the absence of costly full-scale pile
experiments. In addition to group effect, gapping and potential cyclic degradation
have been considered in the recent studies. It has been shown that a concept based
on p-multiplier applied on the standard static loading p-y curves works reasonably
well to account for pile group and cyclic degradation effects (Brown and Bollman
1996). The p-multiplier is a reduction factor that is applied to the p-term in the p-y
curve for a single pile to simulate the behavior of piles in the group.
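In code terms, the p-multiplier concept amounts to a straightforward scaling of the single-pile resistance, for instance (illustrative value only):

```python
def group_py(p_single, p_multiplier):
    """Scale the p-term of a single-pile p-y curve by a p-multiplier (< 1)
    to represent group and cyclic-degradation effects."""
    return p_multiplier * p_single

# e.g. a trailing-row pile with an assumed p-multiplier of 0.4
print(group_py(p_single=120.0, p_multiplier=0.4))   # 48.0 kN/m
```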

Fig. 6.3 Horizontal pile group impedance for the Vasco da Gama bridge (Pecker 2003): foundation of 44 piles, ϕ = 2.20 m, with a 41.95 m × 20.20 m pile cap; the real part of the impedance (MN/m) is plotted against frequency (Hz) for the pile group and for an isolated pile

6.2.2 Substructure Model for Piled Foundations

A direct (or global) interaction analysis in which both the soil and the structure are
modelled with finite elements is very time demanding and not well suited for
design, especially in 3D. The alternative approach employing a substructure system
in which the foundation element is modeled by a condensed foundation stiffness
matrix and mass matrix along with equivalent forcing function represented by the
kinematic motion, may be more attractive; in addition, it more clearly separates the
role of the geotechnical engineer and of the structural engineer. The substructuring
approach is based on a linear superposition principle and therefore linear soil
behavior is more appropriate. In that case, the condensed stiffness matrix may be
obtained either from the beam on Winkler springs model or from continuum
impedance solutions (Gazetas 1991). When nonlinear soil behavior is considered,
the condensed stiffness matrix is generally evaluated by a pushover analysis of the
pile group and linearization at the anticipated displacement amplitude of the
pile head.

Fig. 6.4 Substructuring approach for soil structure interaction

Substructuring reduces the problem to more amenable stages and does not
necessarily require that the whole solution be repeated if modifications
occur in the superstructure. It is of great mathematical convenience and rigor
which stem, in linear systems, from the superposition theorem (Kausel
et al. 1974). This theorem states that the seismic response of the complete system
can be computed in two stages (Fig. 6.4)
• Determination of the kinematic interaction motion, involving the response to
base acceleration of a system which differs from the actual system in that the
mass of the superstructure is equal to zero;
• Calculation of the inertial interaction effects, referring to the response of the
complete soil-structure system to forces associated with base accelerations equal
to the accelerations arising from the kinematic interaction.
The second step is further divided into two subtasks:
• Computation of the dynamic impedances at the foundation level; the dynamic
impedance of a foundation represents the reaction forces acting under the
foundation when it is directly loaded by harmonic forces;
• Analysis of the dynamic response of the superstructure supported on the dynamic
impedances and subjected to the kinematic motion, also called the effective
foundation input motion (a minimal numerical sketch of this subtask is given below).
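To make the second subtask concrete, the sketch below solves, in the frequency domain, a two-degree-of-freedom idealization: a superstructure mass on a foundation node supported by a horizontal impedance, subjected to a unit harmonic input acceleration. All masses, stiffnesses, and the constant-coefficient impedance are assumed values for illustration, not results from any of the projects discussed here.

```python
import numpy as np

# Two-DOF sketch of the inertial interaction step: a superstructure mass on a
# foundation node supported by a horizontal impedance Kf(w) = kf + i*w*cf.
# All parameter values are assumptions for illustration only.
m1, k1 = 500e3, 200e6     # superstructure mass (kg) and lateral stiffness (N/m)
m0 = 200e3                # foundation (pile cap) mass (kg)
kf, cf = 2.0e9, 4.0e7     # impedance: static stiffness (N/m) and dashpot (N s/m)
xi_s = 0.05               # structural damping ratio (hysteretic)

freqs = np.linspace(0.05, 10.0, 2000)   # Hz
H = np.zeros_like(freqs)
for i, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    ks = k1 * (1.0 + 2.0j * xi_s)       # complex structural stiffness
    Kf = kf + 1.0j * w * cf             # foundation impedance at this frequency
    # dynamic stiffness of the system (node 0 = foundation, node 1 = structure)
    S = np.array([[Kf + ks - w**2 * m0, -ks],
                  [-ks, ks - w**2 * m1]])
    u = np.linalg.solve(S, -np.array([m0, m1]))  # response to unit input acceleration
    H[i] = abs(1.0 - w**2 * u[1])       # |total acceleration| of the superstructure mass

f_fixed = np.sqrt(k1 / m1) / (2.0 * np.pi)
print(f"fixed-base frequency : {f_fixed:.2f} Hz")
print(f"flexible-base peak   : {freqs[np.argmax(H)]:.2f} Hz")
```

The printed peak frequency falls below the fixed-base frequency, the classic signature of inertial interaction: period lengthening and added damping supplied by the foundation.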
Although the substructure approach described above is rigorous for the treatment
of linear SSI, its practical implementation is subject to several simplifications:
• Full linear behavior of the system is assumed; it is well recognized that this
assumption is a strong one, since nonlinearities occur in the soil and at the soil-pile
interface. Soil nonlinearities can be partly accounted for, as recommended
in Eurocode 8 – Part 5 (CEN 2004), by choosing, for the calculation of the impedance matrix,
reduced soil properties calculated from 1D site response analyses (Idriss and
Sun 1992) that reflect the nonlinear soil behavior in the free field. This implicitly
assumes that the additional nonlinearities taking place at the soil-pile interface,
along the pile shaft, do not contribute significantly to the overall seismic
response.
• Kinematic interaction is usually not considered. Very often the piles are
flexible in bending with respect to the surrounding soil, and the soil displacement is not
altered by the presence of the pile group. In that case, provided the foundation
embedment can be neglected, step 1 is straightforward: the kinematic interaction
motion, or effective foundation input motion, is simply the freefield motion. No
additional burden is imposed on the analyst, since the freefield motion is a given
input.

6.3 Kinematic Interaction Motion

In the remainder of this paper we focus on the first step of the substructure
analysis described above, illustrated with the responses of two foundations of the same
bridge.
Foundation 1 is composed of 18 concrete piles, 1,800 mm in diameter and 20 m
long, penetrating a 2.50 m thick layer of residual soil with a shear wave velocity
of 300 m/s, overlying a 10 m thick weathered layer of the rock formation with a shear
wave velocity of 580 m/s; the rock formation is found at 12.50 m below the ground
surface. Site response analyses were carried out with the software SHAKE (equivalent
linear viscoelastic model) for seven time histories spectrally matched to the
design spectrum; these time histories were input at an outcrop of the rock formation.
The foundation response was modeled with the software SASSI-2010 (Ostadan and Nan
2012); the model includes the 18 piles, a massless pile cap and the soil layers; the
strain-compatible properties retrieved from the SHAKE analyses are used for each
soil layer, and the input motion is represented by the seven ground surface time
histories computed in the SHAKE analyses. Figure 6.5 compares the freefield ground
surface spectrum to the foundation response spectra calculated at the same elevation.
Note that, because of the asymmetric pile layout, the motion in the X-direction
differs from the motion in the Y-direction. As expected, since the piles are flexible
in bending with respect to the soil profile, the freefield and foundation motions
are very close to each other. For that configuration, using the freefield motion as the
effective foundation input motion would not be a source of error.
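As an aside for readers unfamiliar with the equivalent linear model underlying SHAKE, the toy loop below iterates to strain-compatible properties for a single uniform layer under one harmonic component. The layer data, modulus-reduction curve, and damping curve are all assumed for illustration and bear no relation to the actual analyses of this project.

```python
import numpy as np

# Toy equivalent-linear iteration: one uniform layer on rigid rock, one harmonic
# component. All properties and curves below are illustrative assumptions.
H, rho, Vs0 = 12.5, 1900.0, 300.0   # thickness (m), density (kg/m3), small-strain Vs (m/s)
g_ref = 1.0e-3                      # reference strain of an assumed hyperbolic G/G0 curve
a_in, f_in = 2.0, 2.0               # base acceleration amplitude (m/s2) and frequency (Hz)

G, xi = rho * Vs0**2, 0.02          # initial small-strain modulus and damping
for it in range(30):
    w = 2.0 * np.pi * f_in
    Vs_c = np.sqrt(G / rho) * np.sqrt(1.0 + 2.0j * xi)  # complex (damped) shear wave velocity
    amp = abs(1.0 / np.cos(w * H / Vs_c))               # surface amplification, rigid base
    u_s = amp * a_in / w**2                             # surface displacement amplitude
    g_eff = 0.65 * u_s * np.pi / (2.0 * H)              # effective strain, 0.65*max (SHAKE convention)
    G_new = rho * Vs0**2 / (1.0 + g_eff / g_ref)        # assumed hyperbolic modulus reduction
    xi_new = 0.02 + 0.18 * (g_eff / (g_eff + g_ref))    # assumed damping curve
    if abs(G_new - G) / G < 1e-4:
        break
    G, xi = G_new, xi_new

print(f"strain-compatible Vs = {np.sqrt(G/rho):.0f} m/s, damping = {xi:.3f}, {it+1} iterations")
```

The strain-compatible properties obtained from such an iteration are precisely what is transferred from SHAKE to the SASSI model in the workflow described above.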
Foundation 2 of the same bridge is composed of 35 large diameter concrete piles
(2.5 m), 49 m long, crossing a very soft mud layer, 11 m thick, with a shear wave
velocity of the order of 100 m/s; the piles go through a residual soil (VS = 250–400 m/s)
and reach the competent rock formation at 25 m depth (Fig. 6.6). Freefield and
foundation response spectra are compared in Fig. 6.7. The free-field ground
response spectrum, determined from a site-specific response analysis, has a smooth
shape; the kinematic interaction motion, i.e. the motion of the piled foundation,

[Figure: pseudo-acceleration response spectra (g, log scale) versus period (s, log scale): foundation X-direction, foundation Y-direction, and freefield H-direction]
Fig. 6.5 Kinematic interaction motion for “flexible” piled foundation 1

[Figure: shear wave velocity (0–1,000 m/s) versus depth below ground surface (0–30 m), showing the very soft clay, residual soil, weathered rock and rock layers]
Fig. 6.6 Soil profile at location of foundation 2



[Figure: pseudo-acceleration response spectra (g, log scale) versus period (s, log scale): foundation X-direction, foundation Y-direction, and freefield H-direction]
Fig. 6.7 Kinematic interaction motion for “stiff” piled foundation 2

exhibits a marked peak at 0.5 s and a significant deamplification, with respect to the
free-field motion, between 0.8 and 3.0 s. This phenomenon is due to the inability of
the piled foundation to follow the ground motion, because of the stiffness of the piles.
Obviously, in that case, using the freefield motion as the foundation input
motion would be strongly misleading and may produce an unconservative design.
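As a rough plausibility check, not part of the original analyses, the quarter-wavelength rule places the fundamental period of the soft mud layer close to the observed 0.5 s peak:

```python
# Rough plausibility check (not from the original study): fundamental period
# of the soft mud layer from the quarter-wavelength rule T = 4H/Vs.
H_mud, Vs_mud = 11.0, 100.0            # layer thickness (m) and shear wave velocity (m/s)
T_mud = 4.0 * H_mud / Vs_mud
print(f"T = 4H/Vs = {T_mud:.2f} s")    # about 0.44 s, near the 0.5 s peak of Fig. 6.7
```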
These two examples, drawn from a real project, clearly illustrate the need for a
careful examination of the relative foundation-soil profile stiffness before deciding
whether or not the freefield motion is likely to be modified by the
foundation. When the latter situation is encountered, it is mandatory to evaluate the
effective foundation input motion correctly in order to obtain meaningful results.

6.4 Conclusions

Experience gained from several projects involving piled foundations in a seismic
environment shows that the most amenable and versatile approach to soil-structure
interaction is the substructuring technique. It presents several advantages: a
correct treatment of the pile group effect, which is not the case with a global model
where the piles are modelled as beams on Winkler foundations; the need to
calculate the foundation input motions and foundation impedances only once, as
long as the foundation is not modified; the reduced size of the structural model,
especially for extended structures like bridges; and so on. The main drawback of this
approach lies in its restriction to linear, or moderately nonlinear, systems. Since it is

attractive, the method is often used with approximations in its implementation, and
the designer must be fully aware of those shortcuts. In this paper, one such
approximation, which consists in taking the freefield motion as the effective
foundation input motion, has been illustrated on a real project. It has been shown
that significant differences may arise between the two motions when the piled
foundation cannot be considered flexible with respect to the soil profile. When this
situation is encountered, a rigorous treatment of soil-structure interaction requires that the
effective foundation input motion be calculated, an additional step in the design.

Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.

References

ASCE/SEI 41–13 (2014) Chapter 8: Foundations and geologic site hazards. In: Seismic evaluation
and retrofit of existing buildings, vol 52. American Society of Civil Engineers, Reston, pp 1–8
Brown DA, Bollman HT (1996) Lateral load behavior of pile group in sand. J Geotech Eng ASCE
114(11):1261–1276
CEN (2004) European Standard EN 1998-5: 2004 Eurocode 8: Design of structures for earthquake
resistance. Part 5: Foundations, retaining structures, geotechnical aspects. Comité Europeen de
Normalisation, Brussels
Gazetas G (1991) Foundation vibrations. In: Fang HY (ed) Foundation engineering handbook, 2nd
edn. Van Nostrand Rheinhold, New York
Idriss IM, Sun JI (1992) SHAKE 91: a computer program for conducting equivalent linear seismic
response analyses of horizontally layered soil deposits. Program modified based on the original
SHAKE program published in December 1972 by Schnabel, Lysmer and Seed, Center of
Geotechnical Modeling, Department of Civil Engineering, University of California, Davis
Kausel E, Roesset JM (1974) Soil structure interaction for nuclear containment structures. Pro-
ceedings ASCE, power division specialty conference, Boulder
Kavvadas M, Gazetas G (1993) Kinematic seismic response and bending of free head piles in
layered soils. Geotechnique 43(2):207–222
Lam PI, Law H (2000) Soil structure interaction of bridges for seismic analysis. Technical report
MCEER-00-008
Matlock H (1970) Correlations for design of laterally loaded piles in soft clay. 2nd Annual
Offshore Technology Conference. Paper No 1204
Ostadan F, Nan D (2012) SASSI 2010 – a system for analysis of soil-structure interaction.
Geotechnical Engineering Division, Civil Engineering Department, University of California,
Berkeley
Pecker A (2003) Aseismic foundation design process – lessons learned from two major projects:
the Vasco da Gama and the Rion-Antirion bridges. Proceedings 5th ACI international confer-
ence on seismic bridge design and retrofit for earthquake resistance, La Jolla
Pender MJ (1993) Aseismic pile foundation design and analysis. Bull N Z Soc Earthq Eng 26
(1):49–160
Reese L, Cox W, Koop R (1974) Analysis of laterally loaded piles in sand. 6th Annual Offshore
Technology Conference. Paper No 2080
Chapter 7
Performance-Based Seismic Design
and Assessment of Bridges

Andreas J. Kappos

Abstract Current trends in the seismic design and assessment of bridges are
discussed, with emphasis on two procedures that merit particular attention:
displacement-based procedures and deformation-based procedures. The available
performance-based methods for bridges are critically reviewed, and a number of
critical issues that arise in all procedures are identified. Two recently proposed
methods are then presented in some detail: one based on the direct displacement-based
design approach, using equivalent elastic analysis and properly reduced displacement
spectra, and one based on the deformation-based approach, which involves
a type of partially inelastic response-history analysis for a set of ground motions,
wherein pier ductility is included as a design parameter along with displacement
criteria. The current trends in seismic assessment of bridges are then summarised, and
the more rigorous assessment procedure, i.e. nonlinear dynamic response-history
analysis, is used to assess the performance of bridges designed to the previously
described procedures. Finally, some comments are offered on the feasibility of
including such methods in the new generation of bridge codes.

7.1 Introduction

Performance-based seismic design (PBD) procedures, in particular displacement-based
ones (DBD), are now well established for buildings (Kappos 2010); however,
application of these concepts to bridges has been more limited, despite the fact that
studies on the so-called ‘direct’ displacement-based design (DDBD) of bridge piers
(Kowalsky et al. 1995) or even entire bridges (Calvi and Kingsley 1995) appeared
in the mid-1990s. Notwithstanding the now recognised advantages of the DDBD
procedure (Priestley et al. 2007), the fact remains that, in its current form, the
procedure suffers from two significant disadvantages:

A.J. Kappos (*)
Civil Engineering Department, City University London, London EC1V 0HB, UK
e-mail: [email protected]


• it is applicable to a class of bridges only, i.e. those that can be reasonably
approximated by an equivalent single-degree-of-freedom (SDOF) system for
calculating seismic demand
• even for this class the procedure is not deemed appropriate for the final design of
the bridge (whereas it is a powerful tool for its preliminary design)
A key source of these disadvantages is the important role that higher modes play
in the transverse response of bridges, even of some relatively short ones (Paraskeva
and Kappos 2010), which complicates the proper assessment of the displaced shape
of the bridge and of the target displacement. It is noted that, for systems such as multi-span
bridges, the DDBD approach requires the engineer to properly define a target
displacement profile (duly accounting for inelastic response), rather than just a
single target displacement (as in the case of single-column bridges); this usually
requires a number of iterations, which inevitably increases the complexity of the
procedure.
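To fix ideas, the sketch below runs the SDOF core of a DDBD step for a single-column pier, using a Takeda-type equivalent damping rule and a damping reduction factor of the form adopted in the DDBD literature (Priestley et al. 2007). The mass, displacements, and the linear displacement spectrum are assumed values for illustration, not data from the studies cited here.

```python
import numpy as np

# Minimal DDBD step for a single-column bridge pier idealized as an SDOF system.
# All numerical inputs are illustrative assumptions.
m_e = 800e3          # effective mass (kg)
d_y = 0.05           # yield displacement (m)
d_t = 0.15           # target design displacement (m)

mu = d_t / d_y                                   # displacement ductility demand
xi_eq = 0.05 + 0.444 * (mu - 1) / (mu * np.pi)   # equivalent viscous damping (Takeda-type rule)
eta = np.sqrt(0.07 / (0.02 + xi_eq))             # spectral damping reduction factor

# Assumed linear design displacement spectrum: Sd(T) = (T / T_D) * d_max for T <= T_D
T_D, d_max = 4.0, 0.60                           # corner period (s) and spectral displacement (m)
T_e = T_D * d_t / (eta * d_max)                  # effective period that delivers d_t
k_e = 4 * np.pi**2 * m_e / T_e**2                # effective (secant) stiffness (N/m)
V_b = k_e * d_t                                  # design base shear (N)

print(f"mu = {mu:.1f}, xi_eq = {xi_eq:.3f}, T_e = {T_e:.2f} s, V_b = {V_b/1e3:.0f} kN")
```

For a multi-span bridge, the scalar target displacement d_t becomes a full displacement profile, and the equivalent SDOF properties must be recomputed whenever the profile is updated, which is precisely where the iterations mentioned above enter.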
There is little doubt that the aforementioned disadvantages are the key reason
why, even today (about 20 years after they first appeared), DBD/DDBD procedures
are not formally adopted by current codes; interestingly, in Appendix I of the
SEAOC 1999 Blue Book (Ad Hoc Committee 1999), the first document to provide
guidance for DBD of buildings (there are still no guidelines for DBD of bridges),
it is explicitly required that the initial displacement-based design be verified
through nonlinear static (pushover) analysis.
In the light of the above, it can be claimed that the current trend in performance-based
seismic design of bridges is to make the attractive concept of DBD suitable for
the final design of a sufficiently broad class of bridges, so that it can be adopted
in practical applications. It is worth recalling here that, as correctly
pointed out in one of the first papers on DDBD (Calvi and Kingsley 1995), the
concept of the equivalent elastic structure (based on member secant stiffness at
target displacement) is feasible and preferable in the preliminary design of the
bridge, whereas more sophisticated tools (like nonlinear analysis) are
recommended at the final design stage. As will be discussed in more detail in
Sect. 7.3, the currently available DDBD procedures work well for the preliminary
design of first-mode-dominated bridges in high seismic hazard areas, but present
problems in several cases that are common in practice, like bridges with some
degree of irregularity, while they are simply not applicable in low and moderate
seismic hazard regions.
In Sect. 7.2 a brief overview of available PBD/DBD methods for bridges is
critically presented, focussing on the new contributions made by each study, rather
than summarising the entire procedures (which are similar in many methods). The
key issues involved in developing an appropriate PBD procedure are identified and
discussed in the light of the available procedures.
In Sect. 7.3, a PBD procedure based on elastic analysis, the secant stiffness
approach, and ‘over-damped’ elastic spectra is presented; the ‘direct displacement-based
design’ approach pioneered by Priestley and Kowalsky (Priestley et al. 2007;
Kowalsky et al. 1995) is extended with a view to making it
applicable to a broad spectrum of bridge systems, including those affected by
higher modes, and also introducing additional design criteria not previously used
in this method.
In Sect. 7.4 an alternative, more rigorous method is presented that involves
more advanced analysis tools, i.e. response-history analysis (for different levels of
ground motion intensity) of bridge models wherein any regions that are expected to
yield under the selected seismic actions are modelled as inelastic, whereas the rest
of the bridge is modelled as elastic; the initial analysis (relevant to service conditions)
is an elastic one. A critical aspect of this procedure (currently under development)
is the a priori definition of the inelastic behaviour of the dissipating zones,
exploiting the deformation limits for the specific performance level, which are
related to the damage level of the structural members.
Section 7.5 first summarises the current trends worldwide in the seismic assessment
of bridges and applies the more rigorous assessment procedure, i.e. nonlinear
dynamic response-history analysis, to assess the performance of bridges designed
to the procedures described in Sects. 7.3 and 7.4. Moreover, comparisons are made
between these performance-based designed bridges and similar ones designed to a
current international code, namely Eurocode 8.
Finally, in Sect. 7.6, some general conclusions are drawn, regarding the feasi-
bility of using new procedures that aim at a better control of the seismic perfor-
mance of bridges under different levels of seismic loading.

7.2 Overview of PBD Methods for Bridges

A DDBD procedure was proposed by Kowalsky and his co-workers (Kowalsky
2002; Dwairi and Kowalsky 2006), incorporating basic concepts of the DDBD
approach, such as the target displacement and a displacement profile that accounts
for inelastic effects, without carrying out an inelastic analysis; the procedure
is applicable to multi-degree-of-freedom (MDOF) continuous concrete bridges
with flexible or rigid superstructures (decks). A key feature of the method is the
EMS (effective mode shape) approach, wherein higher mode effects are taken into
account by determining the mode shapes of an equivalent elastic model of the bridge,
based on the column and abutment secant stiffness values at maximum response. A
similar version of the method was included in the book by Priestley et al. (2007) on
DDBD; this version is simpler than the previously mentioned one
(no use of EMS in the design of piers) but also addresses design in the