10.1007/978-3-319-16964-4

Perspectives on European Earthquake Engineering and Seismology
Volume 2

GEOTECHNICAL, GEOLOGICAL AND EARTHQUAKE ENGINEERING
Volume 39

Series Editor
Atilla Ansal, School of Engineering, Özyeğin University, Istanbul, Turkey

Editor
Atilla Ansal
School of Engineering
Özyeğin University
Istanbul, Turkey
Preface
by H.F. Karadoğan, I.E. Bal, E. Yüksel, S. Z. Yüce, Y. Durgun, and C. Soydan;
“Developments in Seismic Design of Tall Buildings: Preliminary Design of Coupled
Core Wall Systems” by M. Nuray Aydınoğlu and Eren Vuran; “Seismic Response of
Underground Lifeline Systems” by Selçuk Toprak, Engin Nacaroğlu, and A. Cem
Koç; “Seismic Performance of Historical Masonry Structures Through Pushover
and Nonlinear Dynamic Analyses” by Sergio Lagomarsino and Serena Cattari;
“Developments in Ground Motion Predictive Models and Accelerometric Data
Archiving in the Broader European Region” by Sinan Akkar and Özkan Kale;
and “Towards the ‘Ultimate Earthquake-Proof’ Building: Development of an Inte-
grated Low-Damage System” by Stefano Pampanin.
The remaining six chapters are the ESC Theme Lectures “Archive of Historical
Earthquake Data for the European-Mediterranean Area” by Andrea Rovida and
Mario Locati; “A Review and Some New Issues on the Theory of the H/V Technique
for Ambient Vibrations” by Enrico Lunedei and Peter Malischewsky;
“Macroseismic Intervention Group: the Necessary Field Observation” by Chris-
tophe Sira; “Bridging the Gap Between Nonlinear Seismology as Reality and
Earthquake Engineering” by Gheorghe Marmureanu, Carmen-Ortanza Cioflan,
Alexandru Marmureanu, Constantin Ionescu, and Elena-Florinela Manea; “The
Influence of Earthquake Magnitude on Hazard Related to Induced Seismicity” by
Benjamin Edwards; and “On the Origin of Mega-Thrust Earthquakes” by Kuvvet
Atakan.
The Editor and the Advisory Committee of the Second European Conference on
Earthquake Engineering and Seismology appreciate the support given by the
Istanbul Governorship, Istanbul Project Coordination Unit, for the publication of
the Perspectives on European Earthquake Engineering and Seismology volumes as
Open Access books.
Chapter 1
Supershear Earthquake Ruptures – Theory, Methods, Laboratory Experiments. . .
Shamita Das
Department of Earth Sciences, University of Oxford, Oxford OX1 3AN, UK
e-mail: [email protected]
1.1 Introduction
Seismologists now know that among the important parameters controlling earthquake damage are the fault rupture speed and changes in this rupture speed (Madariaga 1977, 1983). These changes in rupture speed generate high-frequency damaging waves. Thus, knowledge of how the rupture speed changes during earthquakes, and of its maximum possible value, is essential for reliable earthquake hazard assessment. But how high this rupture speed can be has been understood
only relatively recently. In the 1950–1960s, it was believed that earthquake ruptures
could only reach the Rayleigh wave speed. This was based partly on very idealized
models of fracture mechanics, originating from results on tensile crack propagation
velocities which cannot exceed the Rayleigh wave speed and which were simply
length of the section rupturing at supershear speeds being about 45 km. This study
was based on two components of near-fault accelerograms recorded at one station
(SKR). Then two larger supershear earthquakes occurred, namely, the 2001 Mw 7.8
Kunlun, Tibet earthquake (Bouchon and Vallée 2003; Antolik et al. 2004; Robinson
et al. 2006b; Vallée et al. 2008; Walker and Shearer 2009), and the 2002 Mw 7.9
Denali, Alaska earthquake (Dunham and Archuleta 2004; Ellsworth et al. 2004;
Frankel 2004; Ozacar and Beck 2004; Walker and Shearer 2009). Both were very
long, narrow intra-plate strike-slip earthquakes, with significantly long sections of
the faults propagating at supershear speeds. At last, clear evidence of supershear
rupture speeds was available. Moreover, by analysing body wave seismograms very
carefully, Robinson et al. (2006b) showed that not only did the rupture speed
exceed the shear wave speed of the medium; it reached the compressional wave
speed, which is about 70 % higher than the shear wave speed in crustal rocks.
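The ~70 % figure follows from the standard relation between the body-wave speeds and Poisson's ratio; for a Poisson solid (ν = 0.25), a common approximation for crustal rocks,

\[
\frac{v_P}{v_S}=\sqrt{\frac{2(1-\nu)}{1-2\nu}}=\sqrt{3}\approx 1.73 .
\]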
Once convincing examples of supershear rupture speeds started to be found,
theoretical calculations were carried out (Bernard and Baumont 2005; Dunham and
Bhat 2008) and these suggested that the resulting ground shaking can be much
higher for such rapid ruptures, due to the generation of Mach wave fronts. Such
wave fronts, analogous to the “sonic boom” from supersonic jets, are characteristic of such ruptures, and their amplitudes decrease much more slowly with distance than usual spherical
waves do. Of course, much work still remains to be done in this area. Figure 1.1
shows a schematic illustrating that formulae from acoustics cannot be directly
transferred to seismology. The reason is that many regions of the fault area are
simultaneously moving at these high speeds, each point generating a Mach cone,
and resulting in the Mach surface. Moreover, different parts of the fault could
move at different supershear speeds, again introducing complexity into the shape
and amplitudes of the Mach surface. Finally, accounting for the heterogeneity of the
medium surrounding the fault through which these Mach fronts propagate would
further modify the Mach surface. There could be special situations where the
individual Mach fronts comprising the Mach surface could interfere to even
lower, rather than raise, the resulting ground shaking. Such studies would be of
great interest to the earthquake engineering community.

Fig. 1.1 Schematic representation of the leading edges of the multiple S-wave Mach cones generated by a planar fault spreading out in many directions, along the black arrows, from the hypocenter (star). The pink shaded region is the region of supershear rupture. The thick black arrows show the direction of the applied tectonic stress across the x–y plane. Supershear speeds cannot be reached in the y-direction (that is, by the Mode III or anti-plane shear mode). The higher the rupture speed, the narrower each cone would be. Dunham and Bhat (2008) showed that additional Rayleigh wave Mach fronts would be generated along the Earth’s surface during supershear earthquake ruptures
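The narrowing of the individual cones with increasing rupture speed, noted in the caption of Fig. 1.1, follows from elementary Mach-cone geometry; in the simplest picture of a point moving steadily at speed v_r > v_S through a homogeneous medium, the S-wave Mach half-angle θ satisfies

\[
\sin\theta=\frac{v_S}{v_r},
\]

so θ shrinks from 90° when v_r just exceeds v_S to about 35° as v_r approaches v_P ≈ √3 v_S.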
1.2 Theory
Since damaging high-frequency waves are generated when faults change speed
(Madariaga 1977, 1983), the details of how faults start from rest and move at
increasing speeds are very important. Though in-plane shear faults (primarily strike–
slip earthquakes) can not only exceed the shear wave speed of the medium, but can
even reach the compressional wave speed, steady-state (constant speed) calcula-
tions on singular cracks (with infinite stress at the fault edges) had shown that
speeds between the Rayleigh and shear wave speeds were not possible, due to the
fact that in such a case there is negative energy flux into the fault edge from the
surrounding medium, that is, such a fault would not absorb elastic strain-energy but
generate it (Broberg 1989, 1994, 1999). Theoretical studies by Andrews (1976) and
Burridge et al. (1979) using the non-singular slip-weakening model (Fig. 1.2),
introduced by Ida (1972), suggested that even for such 2-D in-plane faults which
start from rest and accelerate to some terminal velocity, such a forbidden zone does
exist.
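For reference, the linear slip-weakening law of Ida (1972) used in these studies prescribes that the shear traction on the fault drops from an upper yield value to a residual frictional level over a characteristic slip distance; in a common generic notation (the symbols here are not necessarily those of Fig. 1.2),

\[
\tau(\delta)=
\begin{cases}
\tau_u-(\tau_u-\tau_f)\,\dfrac{\delta}{d_0}, & \delta\le d_0,\\
\tau_f, & \delta>d_0,
\end{cases}
\]

where δ is the slip, τ_u the yield strength, τ_f the residual friction level and d_0 the slip-weakening distance; the finite breakdown over d_0 is what removes the stress singularity at the rupture front.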
Recent work of Bizzari and Das (2012) showed that for the 3-D mixed in-plane
and anti-plane shear mode fault, propagating under this slip-weakening law, the
rupture front actually does pass smoothly through this forbidden zone, but very fast.
The width of the cohesive zone initially decreases, then increases as the rupture
exceeds the shear wave speed and finally again decreases as the rupture accelerates
to a speed of ~90 % of the compressional wave speed. The penetration of the
‘forbidden zone’ has very recently also been confirmed for the 2-D in-plane shear
fault for the same linear slip-weakening model by Liu et al. (2014). To reiterate, this
is important as this smooth transition from sub- to super- shear wave speeds would
reduce damage.
by some synthetic tests, as discussed, for example by Das and Suhadolc (1996), Das
et al. (1996), and Sarao et al. (1998) for inversions using strong ground motion data
and by Henry et al. (2000, 2002) for teleseismic data inversions. The fault area and
the total source duration are not assigned a priori but determined as part of the
inversion process. An initial fault area is assigned based on the aftershock area and
then refined. An initial value of the finite source duration is estimated, based on the
fault size and a range of average rupture speeds, and it cannot be longer than the
longest record used. The integral equation then takes the form of a system of linear
equations Ax = b, where A is the kernel matrix obtained by integrating it over each
cell, each column of A corresponding to different cells and time instants of the
source duration, ordered in the same way as the vector of observed seismograms b,
and x is the vector of unknown slip rates on the different cells on the fault at different
source time-steps. The no back-slip constraint then becomes x ≥ 0. In order to
reduce the number of unknowns, a very weak causality condition could be intro-
duced, for example, x’s beyond the first compressional wave from the hypocenter
could be set to 0. If desired, the seismic moment could be required to be equal to
that obtained, say, from the centroid-moment tensor (CMT) solution. With the high-quality broadband data now available, this constraint is not necessary and it is
found that when stations are well distributed in azimuth around the earthquake, the
seismic moment obtained by the solution is close to the CMT moment. In addition,
Das and Kostrov (1990, 1994) permitted the entire fault behind the rupture front to
slip, if the data required it, unlike studies where slipping is confined only to the
vicinity of the rupture front. If there is slippage well behind the main rupture front
in some earthquake, then this method would find it whereas others would not. Such
a case was found by Robinson et al. (2006a) for the 2001 Mw 8.4 Peru
earthquake.
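As an illustration of this formulation only, the following sketch sets up a small synthetic system Ax = b and enforces the no back-slip constraint x ≥ 0; the array sizes are hypothetical, and a real application would build A from Green's functions for each fault cell and source time-step. A non-negative least-squares solver is used here purely for simplicity, whereas Das and Kostrov minimized the ℓ1 norm by linear programming, as discussed below.

# Sketch only: constrained slip-rate inversion A x = b with the no back-slip
# constraint x >= 0. A and b are synthetic stand-ins; in a real study each column
# of A would contain the seismograms produced by unit slip rate on one fault cell
# during one source time-step, ordered like the data vector b.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_cells, n_timesteps = 20, 10                 # hypothetical fault discretization
n_unknowns = n_cells * n_timesteps
n_samples = 5 * n_unknowns                    # m > n: overdetermined system

A = rng.normal(size=(n_samples, n_unknowns))            # synthetic kernel matrix
x_true = np.maximum(rng.normal(size=n_unknowns), 0.0)   # non-negative slip rates
b = A @ x_true + 0.01 * rng.normal(size=n_samples)      # noisy "observations"

# Non-negative least squares enforces x >= 0 (here in the l2 sense)
x_est, misfit = nnls(A, b)
print(misfit, np.abs(x_est - x_true).max())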
Thus, the inverse problem is the solution of the linear system of equations under
one or more constraints, in which the number of equations m is equal to the total
number of samples taken from all the records involved and the number of unknowns
n is equal to the number of spatial cells on the fault times the number of time steps at the source. Taking m > n, the linear system is overdetermined and a
solution x which provides a best fit to the observations is obtained. It is well
known that the matrix A is often ill-conditioned which implies that the linear
system admits more than one solution, equally well fitting the observations. The
introduction of the constraints reduces the set of permissible (feasible) solutions.
Even when a unique solution does exist, there may be many other solutions that
almost satisfy the equations. Since the data used in geophysical applications often
contain experimental noise and the models used are themselves approximations to
reality, solutions almost satisfying the data are also of great interest.
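A small, self-contained illustration (synthetic numbers, not taken from any real inversion) of what ill-conditioning means in practice: the singular values of the kernel matrix show how many combinations of the unknowns the data actually constrain, and directions with near-zero singular values are those along which many almost equally well fitting solutions can differ.

# Synthetic illustration of ill-conditioning of a kernel-like matrix A:
# near-zero singular values correspond to combinations of unknowns that the data
# barely constrain, so many solutions fit the observations almost equally well.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 50))
A[:, -1] = A[:, 0] + 1e-6 * rng.normal(size=200)   # two nearly dependent columns

s = np.linalg.svd(A, compute_uv=False)
print("condition number:", s[0] / s[-1])
print("effective rank (singular values > 1e-8 * max):", int((s > 1e-8 * s[0]).sum()))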
Finally, for the system of equations together with the constraints to comprise a
complete mathematical problem, the exact form of what the “best fit” to observa-
tions means has to be stated. For this problem, we have to minimize the vector of
residuals, r = b − Ax, and some norm of the vector r must be adopted. One may choose to minimize the ℓ1, the ℓ2 or the ℓ∞ norm (see Tarantola 1987 for a
discussion of different norms), all three being equivalent in the sense that they tend
to zero simultaneously. Das and Kostrov (1990, 1994) used the linear programming
method to solve the linear system and minimized the ℓ1 norm subject to the
positivity constraint, using programs modified from Press et al. (1986). In various
studies, they have evaluated the other two norms of the solution to investigate how
they behave, and find that when the data is fitted well, the other two norms are also
small. A method with many similarities to that of Das and Kostrov (1990, 1994)
was developed by Hartzell and Heaton (1983). Hartzell et al. (1991) also carried out
a comprehensive study comparing the results of using different norms in the
inversion. Parker (1994) has discussed the positivity constraint in detail.
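The ℓ1-norm minimization under the positivity constraint can be posed as a linear program by introducing auxiliary variables t bounding the residuals; the sketch below does this with a generic solver on synthetic arrays, purely to illustrate the formulation (it is not the Das and Kostrov code, which was based on routines modified from Press et al. 1986).

# Sketch: minimize the l1 norm of r = b - A x subject to x >= 0, written as a
# linear program with auxiliary variables t such that |b - A x| <= t.
# A and b are small synthetic arrays for illustration only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 60, 15                                   # hypothetical sizes, m > n
A = rng.normal(size=(m, n))
b = A @ np.maximum(rng.normal(size=n), 0.0) + 0.01 * rng.normal(size=m)

# Variables z = [x (length n), t (length m)]; minimize sum(t).
c = np.concatenate([np.zeros(n), np.ones(m)])
Im = np.eye(m)
A_ub = np.block([[-A, -Im],                     #  b - A x <= t
                 [ A, -Im]])                    # -(b - A x) <= t
b_ub = np.concatenate([-b, b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
x_l1 = res.x[:n]                                # non-negative solution in the l1 sense
print(res.status, np.sum(np.abs(b - A @ x_l1)))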
In order to confirm that the solution obtained is reliable, Das and Kostrov (1994) introduced additional levels of optimization. For example, if a region with high or low slip was found, fitting the data with the slip in that region lowered or raised was attempted, to see whether the data were still well fitted. If they were not, then the features were
considered robust. If high rupture speed was found in some portion of the fault, its
robustness was treated similarly. All features interpreted geophysically can be
tested in this way. Some examples can be found in Das and Kostrov (1994),
Henry et al. (2000), Henry and Das (2002), Robinson et al. (2006a, b).
This >400 km long earthquake was, at the time of its occurrence, the longest known
strike-slip earthquake, on land or underwater, since the 1906 California earthquake.
The earthquake occurred on a left-lateral fault, propagating unilaterally from west
to east, on one of the great strike-slip faults of Tibet, along which some of the
northward motion of the Indian plate under Tibet is accommodated by lateral
extrusion of the Tibetan crust. It produced surface ruptures, reported from field
observations, with displacements as high as 7–8 m (Xu et al. 2002), [initially even
larger values were estimated by Lin et al. (2002) but these were later revised down],
this large value being supported by interferometric synthetic aperture radar
(InSAR) measurements (Lasserre et al. 2005), as well as the seismic body wave
studies referred to below. Bouchon and Vallée (2003) used mainly Love waves
from regional seismograms to show that the average rupture speed was ~3.9 km/s,
exceeding the shear wave speed of the crustal rocks, and P-wave body wave studies
confirmed this (Antolik et al. 2004; Ozacar and Beck 2004). More detailed analysis
of SH body wave seismograms, using the inversion method of Das and Kostrov (1990, 1994), showed that the rupture speed on the Kunlun fault during this
earthquake was highly variable and the rupture process consisted of three stages
(Robinson et al. 2006b). First, the rupture accelerated from rest to an average
speed of 3.3 km/s over a distance of 120 km. The rupture then propagated for
another 150 km at an apparent rupture speed exceeding the P wave speed, the
longest known segment propagating at such a high speed for any earthquake fault
(Fig. 1.3). Finally, the fault bifurcated and bent, the rupture front slowed down, and
came to a stop at another sharper bend, as shown in Robinson et al. (2006b). The
region of the highest rupture velocity coincided with the region of highest fault slip,
highest fault slip rate, highest stress drop (stress drop is what drives the earthquake
rupture), the longest fault slipping duration, and the greatest concentration of
aftershocks. The location of the region of the large displacement has been inde-
pendently confirmed from satellite measurements (Lasserre et al. 2005). The fault
width (in the depth direction) for this earthquake is variable, being no more than
10 km in most places and about 20 km in the region of highest slip.

Fig. 1.3 Schematic showing the final slip distribution for the 2001 Kunlun, Tibet earthquake, with the average rupture speeds in three segments marked. Relocated aftershocks for the 6-month period following the earthquake (Robinson et al. 2006a, b) are shown as red dots, with the symbol size scaling with earthquake magnitude. The maximum slip is ~6.95 m. The centroid-moment tensor solution for the main shock (star denotes the epicenter, its cmt is in red) and those available for the larger aftershocks (cmts in black) are shown. The longitude (E) and latitude (N) are marked. The impressive lack of aftershocks, both in number and in size, for such a large earthquake was shown by Robinson et al. (2006b)
Field observations, made several months later, showed a ~25 km wide region to
the south of the fault in the region of supershear rupture speed, with many off-fault
open (tensile) cracks. These open cracks are confined to the off-fault region adjacent to the high-speed portion of the fault, and were not seen off-fault of the lower rupture
speed portions of the fault, though those regions were also visited by the scientists
(Bhat et al. 2007). Theoretical results show that as the rupture moves from sub- to
super- shear speeds, large normal stresses develop in the off-fault regions close to
the fault, as the Mach front passes through. Das (2007) has suggested that obser-
vations of such off-fault open cracks could be used as an independent diagnostic
tool for identifying the occurrence of supershear rupture and it would be useful to
search for and document them in the field for large strike-slip earthquakes.
The special faulting characteristics (Bouchon et al. 2010) and the special pattern
of aftershocks for this and other supershear earthquakes (Bouchon and Karabulut
2008) have recently been noted.
A striking observation for the 2001 Kunlun earthquake is that the portion of the
fault where rupture propagated at supershear speeds is very long and very straight.
Bouchon et al. (2001) showed that for the 1999 Izmit, Turkey earthquake the supershear eastern segment of the fault was very straight and very simple, with no changes in fault strike such as jogs, bends, step-overs, branching, etc. Examination of
the 2002 Denali, Alaska earthquake fault shows the portion of the fault identified by
Walker and Shearer (2009) as having supershear rupture speeds is also long and
straight. The Kunlun earthquake showed that a change in fault strike direction slows
the fault down, and a large variation in strike stops the earthquake (Robinson
et al. 2006b). Based on these, we can say that necessary (though not sufficient)
conditions for supershear rupture to continue for significant distances are: (i) the strike-slip fault must be very straight; and (ii) the longer the straight section, the more likely is supershear speed, provided (a) fault friction is low and (b) no other impediments or barriers exist on the fault. Of course, very locally, short sections could
reach supershear speeds, but the resulting Mach fronts would be small and local,
and thus less damaging. It is the sustained supershear wave speed over long
distances that would create large Mach fronts.
Earthquakes start from rest and need to propagate for some distance to reach their
maximum speed (Kostrov 1966). Once the maximum speed is reached, the earth-
quake could continue at this speed, provided the fault is straight, and no other
barriers exist on it, as mentioned above. Faults with many large changes in strike, or
large step-overs, would thus be less likely to reach very high rupture speeds as this
would cause rupture on such faults to repeatedly slow down, before speeding up
again, if the next segment is long enough. The distance necessary for ruptures to
propagate in order to attain supershear speeds is called the transition distance and is
currently still a topic of vigorous research and depends on many physical param-
eters of the fault, such as the fault strength to stress-drop ratio, the critical fault
length required to reach supershear speeds, etc. (Andrews 1976; Dunham 2007;
Bizzari and Das 2012; Liu et al. 2014).
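One widely used parameter in this context is the "seismic ratio" S of Andrews (1976), the fault strength excess divided by the dynamic stress drop; commonly quoted approximate thresholds for a supershear transition to be possible are S below about 1.77 in 2-D (Andrews 1976) and about 1.19 in 3-D (Dunham 2007), with the transition distance growing rapidly as S approaches these values. The short sketch below simply evaluates S for hypothetical stress values.

# Hypothetical numbers only: the seismic ratio S = (tau_peak - tau_0) / (tau_0 - tau_res)
# compares the strength excess to the dynamic stress drop; low S favours an
# earlier transition to supershear rupture speeds.
def seismic_ratio(tau_peak, tau_0, tau_res):
    """Strength excess over dynamic stress drop (all stresses in the same units)."""
    return (tau_peak - tau_0) / (tau_0 - tau_res)

S = seismic_ratio(tau_peak=80e6, tau_0=70e6, tau_res=60e6)   # Pa, made-up values
print(f"S = {S:.2f}; below the approximate 3-D threshold of 1.19: {S < 1.19}")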
Motivated by the observation that the rare earthquakes which propagated for
significant distances at supershear speeds occurred on very long straight segments
of faults, we examined every known major active strike-slip fault system on land
worldwide and identified those with long (>100 km) straight portions capable not
only of sustained supershear rupture speeds but having the potential to reach
compressional wave speeds over significant distances, and call them “fault super-
highways”. Detailed criteria for each fault chosen to be considered a superhighway
are discussed in Robinson et al. (2010), including when a fault segment is consid-
ered to be straight. Every fault selected, except one portion of the Red River fault
and the Dead Sea Fault, has had earthquakes of magnitude >7 on it in the last
150 years. These superhighways, listed in Table 1.3, include portions of the
1,000 km long Red River fault in China and Vietnam passing through Hanoi, the
1,050 km long San Andreas fault in California passing close to Los Angeles, Santa
Barbara and San Francisco, the 1,100 km long Chaman fault system in Pakistan
north of Karachi, the 700 km long Sagaing fault connecting the first and second
cities of Burma (Rangoon and Mandalay), the 1,600 km Great Sumatra fault, and
the 1,000 km Dead Sea fault. Of the 11 faults classified as ‘superhighways’, 9 are in
Asia and 2 in North America, with 7 located near areas of very dense population.
Based on the population distribution within 50 km of each fault superhighway,
obtained from the United Nations database for the Year 2005 (Gridded Population
of the World 2007), we find that more than 60 million people today face increased seismic hazard due to such faults. The main aim of this work was to identify those
sections of faults where additional studies should be targeted for better understand-
ing of earthquake hazard for these regions. Figure 1.4 shows the world map, with
the locations of the superhighways marked, and the world population density.
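The population estimate is, in essence, a sum of gridded population counts over the cells lying within 50 km of each superhighway trace. The sketch below shows the idea with a synthetic grid and a hypothetical two-vertex fault, using only great-circle distances to the fault vertices; a real calculation would use the Gridded Population of the World data and distance to the full, densified fault polyline.

# Hedged sketch of the kind of calculation behind the >60 million estimate:
# summing gridded population falling within 50 km of a fault trace. The grid,
# cell values and fault coordinates below are made up for illustration.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2) - np.radians(lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

lats = np.arange(20.0, 23.0, 0.05)
lons = np.arange(102.0, 106.0, 0.05)
LON, LAT = np.meshgrid(lons, lats)
population = np.full(LAT.shape, 500.0)            # hypothetical inhabitants per cell
fault = [(21.0, 103.0), (21.5, 104.5)]            # hypothetical fault vertices

# Distance from every cell centre to the nearest fault vertex
dist = np.min(np.stack([haversine_km(LAT, LON, fy, fx) for fy, fx in fault]), axis=0)
print("population within 50 km:", population[dist <= 50.0].sum())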
Fig. 1.4 Location of earthquake superhighways worldwide, shown as green stars, numbered as in
Table 1.3. The world population (Gridded Population of the World 2007), in inhabitants per
30′ × 30′ cell, is coloured as per the key. The zigzag band has no superhighways in it
Since we consider this to be the most dangerous fault in the world (Robinson
et al. 2010), as well as one less well studied compared to some other faults,
particularly the San Andreas fault, it is discussed here in detail, in order to
encourage more detailed studies there. The Red River fault runs for about
1,000 km, through one of the most densely populated regions of the world, from
the south-eastern part of Tibet through Yunnan and North Vietnam to the South
China Sea. Controversy exists regarding total geological offsets, timing of initiation
and depth of the Red River fault. Many authors propose that it was a long-lasting
plate boundary (between Indochina and South China ‘blocks’) initiated ~35 Ma
ago, accommodating between 500 and 1,050 km of left-lateral offset, and extending
down into the mantle. Many others propose that it is only a crustal scale fault,
~29–22 My old. Although mylonites along the metamorphic complexes show ubiq-
uitous left-lateral shear fabrics, geodetic data confirm that recent motion has been
right-lateral. Seismic sections across the Red River delta in the Gulf of Tonkin
clearly suggest that at least offshore of Vietnam the fault is no longer active.
Although the Red River fault system is highly complex, Robinson et al. (2010)
were able to identify three sections of it as having potential for supershear rupture
(Fig. 1.5). In Vietnam, the Red River fault branches into numerous strands as it runs
through the thick sediments of the Red River delta near Hanoi. Although there is no
known record of recent major earthquakes on the main Red River fault in Vietnam
(Utsu 2002), two sub-parallel strands of this fault near Hanoi appear remarkably
straight, hence we identify two ~250 km sections here as being superhighways. The
consequences of a long supershear rupture in this area would be catastrophic. A
second, 280 km long, segment is identified in the Chuxiong Basin section of the
fault, where it appears to be straight and simple. This area has a long history of
documented significant earthquakes on nearby faults (Yeats et al. 1997; Fig. 8.12 of
Yeats 2012).

Fig. 1.5 Map of southeastern China, Vietnam and Myanmar showing the 700 km superhighway of the 1,000 km long Sagaing fault, Myanmar, and the 280 and 250 km superhighways of the 800 km Red River (Honghe) fault. Known faults (Yeats et al. 1997) are shown as white lines, with superhighways shown in black. The world population (Gridded Population of the World), in inhabitants per 30′ × 30′ cell, is shown according to the colour key of Fig. 1.4, with populations of less than 100 people per 30′ × 30′ cell shown as transparent, overlain on a digital elevation map of the region. Locations of known large earthquakes on these faults (Table 1.3) are marked
The second-most dangerous superhighway in Table 1.3 is the San Andreas fault in
California, but since it has been very heavily discussed in the literature, we do not
discuss it here. Instead, we discuss the third-most dangerous superhighway. This
1,100 km long right-lateral strike-slip fault in Myanmar (Burma) forms the present-
day eastern plate boundary of India (Fig. 1.5). Estimates of long-term geological
offsets along the fault range from 100 to 150 km to ~450 km, motion along the
Sagaing Fault probably initiating ~22 Ma. The Sagaing fault is very continuous
between Mandalay and Rangoon, with the central 700 km (from 17° to 23° N) being
“remarkably linear” (Vigny et al. 2003). It is the longest, continuous linear strike-
slip fault identified globally. North of 23° N, the fault begins to curve slightly, but it
is still possible that supershear rupture could proceed for a considerable distance.
We have identified about 700 km of this fault as having the potential for sustained
supershear rupture (Fig. 1.5). There were large earthquakes on the fault in 1931,
1946, 1839, 1929, and two in 1930 (Yeats et al. 1997). With the cities of Rangoon
(Yangon) (population exceeding five million) and Mandalay (population
approaching one million) at, respectively, the southern and northern ends of this
straight portion, supershear earthquakes propagating either northwards or south-
wards could focus energy on these cities. In addition, the highly populated off-fault
regions would have increased vulnerability due to the passing Mach fronts, thereby
exacerbating the hazard.
1.8 Discussion
Tables 1.1 and 1.2 show that it is only in the last 2 years that the first two examples of under-water earthquakes reaching supershear speeds have been found, showing that
this is even rarer for marine earthquakes than ones on continents. Very recently, a
deep earthquake at ~650 km depth has been inferred to have had supershear speed
(Zhan et al. 2014).
Sometimes earthquakes in very different parts of the world in very different
tectonic regimes have remarkable similarities. Das (2007) has compared the 2001
Tibet earthquake and the 1906 California earthquake, the repeat of which would be
a far greater disaster, certainly in financial terms, than the 2004 Sumatra-Andaman
earthquake and tsunami! They both occurred on vertical strike-slip faults, have similar Mw,
fault length and width, and hence similar average slip and average stress drop. The
right-lateral 1906 earthquake rupture started south of San Francisco, and propa-
gated bilaterally, both to the northwest and to the southeast. Geodetic measure-
ments showed that the largest displacements were on the segment to the north of
San Francisco, which is in agreement with results obtained by inversion of the very
few available seismograms. It has recently been suggested that this northern
segment may have reached supershear rupture speeds (Song et al. 2008). The fact
that the high fault displacement region is where the fault is very straight, would
provide additional support to this, if the 1906 and the 2001 earthquakes behaved
similarly. Unfortunately, due to heavy rains and rapid rebuilding following the
1906 earthquake, no information is available on whether or not off-fault cracks
appeared in this region. The cold desert climate of Tibet had preserved the off-fault
open cracks from the 2001 earthquake, un-eroded during the winter months, till the
scientists visited in the following spring. Similar considerations deserve to be made
for other great strike-slip faults around the world, for example, along the
Himalayan-Alpine seismic belt, New Zealand, Venezuela, and others, some of
which are discussed next.
Table 1.1 Recent large strike-slip earthquakes without supershear rupture speed

Date  Location         Mw   Fault length (km)  On land or underwater  References
1989  Macquarie Ridge  8.0  200                Underwater             Das (1992, 1993)
1998  Antarctic Ocean  8.1  140, 60a           Underwater             Henry et al. (2000)
2000  Wharton Basin    7.8  80                 Underwater             Robinson et al. (2001)
2004  Tasman Sea       8.1  160, 100a          Underwater             Robinson (2011)

a Two sub-events
Table 1.2 Strike-slip earthquakes known to have reached supershear rupture speeds

Year  Mw   Location                     Supershear segment length (km)  Type of data used to study the quake  Land or sea  Reference
1979  6.5  Imperial Valley, California  35                              Strong ground motion                  Land         Archuleta (1984), Spudich and Cranswick (1984)
1999  7.6  Izmit, Turkey                45                              Strong ground motion                  Land         Bouchon et al. (2002)
1999  7.2  Duzce, Turkey                40                              Strong ground motion                  Land         Bouchon et al. (2001)
2001  7.8  Kunlun, Tibet                >400                            Teleseismic                           Land         Robinson et al. (2006a, b)
2002  7.9  Denali, Alaska               340                             Teleseismic                           Land         Walker and Shearer (2009)
2012  8.6  N. Sumatra                   200, 400, 400                   Teleseismic                           Sea          Wang et al. (2012)
2013  7.5  Craig, Alaska                100                             Teleseismic                           Sea          Yue et al. (2013)
There are several other faults with shorter straight segments, which may or may not
be long enough to reach supershear speeds. Although we do not identify them as
fault superhighways, they merit mention. Of these, the 1,400 km long North
Anatolian fault in Turkey is particularly noteworthy, since supershear
(though not near-compressional wave speed) rupture has actually been inferred to
have occurred on it (Bouchon et al. 2001). The fault is characterized by periods of
quiescence (of about 75–150 years) followed by a rapid succession of earthquakes,
the most famous of which is the “unzipping” of the fault starting in 1939. For the
most part the surface expression of the fault is complex, with many segments and
en-echelon faults. It seems that large earthquakes (e.g., 1939, 1943, 1944) are able
to rupture multiple segments of these faults but it is unlikely that in jumping from
one segment to another, they will be able to sustain rupture velocities in excess of
the shear wave velocity. The longest “straight, continuous” portion of the North
Anatolian Fault lies in the rupture area of the 1939 Erzincan earthquake, to the west
of its epicenter, just prior to a sharp bend of the fault trace to the south (Yeats
et al. 1997). This portion of the fault is approximately 80 km long. Additionally, the branch that continues in the direction of Ankara (the Sungurlu fault zone) appears
to be very straight. However, the Sungurlu fault zone is characterized by very low
seismicity and is difficult to map due to its segmentation. Thus it is unlikely that
supershear rupture speeds could be maintained on this fault for a significant
distance. Since the North Anatolian fault runs close to Ankara and Istanbul, it is a
candidate for further very detailed in-depth studies.
Another noteworthy fault is the Wairarapa fault in New Zealand, which is
reported to have the largest measured coseismic strike-slip offset worldwide during
the 1855 earthquake, with an average offset of ~16 m (Rodgers and Little 2006), but
this high displacement is estimated over only 16 km of its length. Although a
~120 km long fault scarp was produced in the 1855 earthquake, the Wairarapa fault
is complex for much of its length as a series of splay faults branch off it. One
straight, continuous, portion of the fault is seen in the Southern Wairarapa valley,
but this is only ~40 km long. Thus it is less likely that this fault could sustain
supershear rupture over a considerable distance.
It is interesting to note that since the mid-1970s, when very accurate magnitudes
of earthquakes became available, no strike-slip earthquake on land appears to have
Mw >7.9 (two earthquakes in Mongolia in 1905 are supposed to have been >8, but
the magnitudes of such old earthquakes are not reliably known), even for those with rupture lengths >400 km. Yet such earthquakes can produce surprisingly large damage. Perhaps
this could be explained by the multiple shock waves, carrying large ground veloc-
ities and accelerations, generated by supershear ruptures. A good example is the
1812 Caracas, Venezuela earthquake, described by John Milne (see Milne and Lee
1939), which devastated the city with more than 10,000 killed in 1 min. The
earthquake is believed to be of magnitude about 7.5, and to have occurred on the
Bocono fault, which is ~125 km away (Perez et al. 1997), but there is no known
local geological feature, such as a sedimentary basin, to amplify the motion. So one
could suggest either that the fault propagated further towards Caracas than previ-
ously believed, or reached supershear rupture speeds, or both.
1.10 Conclusions
of people affected. Another interesting example is the 2002 Denali, Alaska fault,
which intersects the Trans-Alaska pipeline. Due to extreme care in the original
construction (Pers. Comm., Lloyd Cluff), it was not damaged, but the environmental catastrophe of an oil spill in the pristine national park would have had indirect financial consequences, the most important being that the pipeline might never have been allowed to re-open. In many places of low population density, Govern-
ments may consider placing power plants (nuclear or otherwise), and such instal-
lations need to be built keeping in mind the possibility of supershear rupture on
nearby faults. Clearly, many other major strike-slip faults worldwide, not classed as
a superhighway yet, deserve much closer inspection with very detailed studies to
fully assess their potential to reach supershear rupture speeds.
Acknowledgements I would like to thank two distinguished colleagues, Raul Madariaga and
Michel Bouchon, for reading the manuscript and providing many useful comments, which
improved and clarified it.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
Aki K, Richards P (1989) Quantitative seismology: theory and methods. WH Freeman and
Company, San Francisco
Aki K, Richards P (2002) Quantitative seismology: theory and methods. University Science,
Sausalito
Antolik M, Abercrombie RE, Ekström G (2004) The 14 November 2001 Kokoxili (Kunlunshan),
Tibet, earthquake: rupture transfer through a large extensional step-over. Bull Seismol Soc Am
94:1173–1194
Andrews DJ (1976) Rupture velocity of plane strain shear cracks. J Geophys Res 81:5679–5687
Archuleta R (1984) Faulting model for the 1979 Imperial Valley earthquake. J Geophys Res
89:4559–4585
Benioff H (1952) Mechanism and strain characteristics of the White Wolf fault as indicated by the
aftershock sequence, Earthquakes in Kern County, California during 1952. Bull Calif Div
Mines Geology 171:199–202
Ben-Menahem A, Toksöz MN (1962) Source mechanism from spectra of long-period seismic
surface waves 1. The Mongolian earthquake of December 4, 1957. J Geophys Res
67:1943–1955
Ben-Menahem A, Toksöz MN (1963a) Source mechanism from spectrums of long-period surface
waves: 2. The Kamchatka earthquake of November 4, 1952. J Geophys Res 68:5207–5222
Ben-Menahem A, Toksöz MN (1963b) Source mechanism from spectrums of long-period seismic
surface waves. Bull Seismol Soc Am 53:905–919
Bernard P, Baumont D (2005) Shear Mach wave characterization for kinematic fault rupture
models with constant supershear rupture velocity. Geophys J Int 162:431–447
Bhat HS, Dmowska R, King GCP, Klinger Y, Rice JR (2007) Off-fault damage patterns due to
supershear ruptures with application to the 2001 Mw 8.1 Kokoxili (Kunlun) Tibet earthquake. J
Geophys Res 112:B06301
Bizzari A, Das S (2012) Mechanics of 3-D shear cracks between Rayleigh and shear wave speeds.
Earth Planet Sci Lett 357–358:397–404
Bouchon M, Toksöz MN, Karabulut H, Bouin MP, Dietrich M, Aktar M, Edie M (2002) Space and
time evolution of rupture and faulting during the 1999 Izmit (Turkey) earthquake. Bull Seismol
Soc Am 92:256–266
Bouchon M, Vallée M (2003) Observation of long supershear rupture during the magnitude 8.1
Kunlunshan earthquake. Science 301:824–826
Bouchon M et al (2010) Faulting characteristics of supershear earthquakes. Tectonophysics
493:244–253
Bouchon M, Karabulut H (2008) The aftershock signature of supershear earthquakes. Science
320:1323–1325
Bouchon M, Bouin MP, Karabulut H, Toksöz MN, Dietrich M, Rosakis AJ (2001) How fast is
rupture during an earthquake? New insights from the 1999 Turkey earthquakes. Geophys Res
Lett 28:2723–2726
Bouchon M, Toksöz MN, Karabulut H, Bouin MP, Dietrich M, Aktar M, Edie M (2000) Seismic
imaging of the Izmit rupture inferred from the near-fault recordings. Geophys Res Lett
27:3013–3016
Broberg KB (1989) The near-tip field at high crack velocities. Int J Fract 39:1–13
Broberg KB (1994) Intersonic bilateral slip. Geophys J Int 119:706–714
Broberg KB (1999) Cracks and fracture. Academic, New York
Brune JN (1961) Radiation pattern of Rayleigh waves from the Southeast Alaska earthquake of
July 10, 1958. Publ Dom Observ 24:1
Brune JN (1962) Correction of initial phase measurements for the Southeast Alaska earthquake of
July 10, 1958, and for certain nuclear explosions. J Geophys Res 67:3463
Burridge R (1973) Admissible speeds for plane-strain self-similar shear crack with friction but
lacking cohesion. Geophys J Roy Astron Soc 35:439–455
Burridge R, Conn G, Freund LB (1979) The stability of a rapid Mode II shear crack with finite
cohesive traction. J Geophys Res 84:2210–2222
Cruz-Atienza VM, Olsen KB (2010) Supershear Mach-waves expose the fault breakdown slip.
Tectonophysics 493:285–296
Das S (2007) The need to study speed. Science 317:889–890
Das S (1992) Reactivation of an oceanic fracture by the Macquarie Ridge earthquake of 1989.
Nature 357:150–153
Das S (1993) The Macquarie Ridge earthquake of 1989. Geophys J Int 115:778–798
Das S (1976) A numerical study of rupture propagation and earthquake source mechanism DSc
thesis, Massachusetts Institute of Technology, Cambridge
Das S, Aki K (1977) A numerical study of two-dimensional rupture propagation. Geophys J Roy
Astron Soc 50:643–668
Das S, Kostrov BV (1994) Diversity of solutions of the problem of earthquake faulting inversion:
application to SH waves for the great 1989 Macquarie Ridge earthquake. Phys Earth Planet Int
85:293–318
Das S, Kostrov BV (1990) Inversion for slip rate history and distribution on fault with stabilizing
constraints – the 1986 Andreanof Islands earthquake. J Geophys Res 95:6899–6913
Das S, Suhadolc P (1996) On the inverse problem for earthquake rupture. The Haskell-type source
model. J Geophys Res 101:5725–5738
Das S, Suhadolc P, Kostrov BV (1996) Realistic inversions to obtain gross properties of the
earthquake faulting process. Tectonophysics 261:165–177. Special issue entitled Seismic
Source Parameters: from Microearthquakes to Large Events, ed. C. Trifu
Dunham EM (2007) Conditions governing the occurrence of supershear ruptures under slip-
weakening friction. J Geophys Res 112:B07302
Dunham EM, Archuleta RJ (2004) Evidence for a supershear transient during the 2002 Denali fault
earthquake. Bull Seismol Soc Am 94:S256–S268
Dunham EM, Bhat HS (2008) Attenuation of radiated ground motion and stresses from three-
dimensional supershear ruptures. J Geophys Res 113:B08319
Ellsworth WL, Celebi M, Evans JR, Jensen EG, Kayen R, Metz MC, Nyman DJ, Roddick JW,
Spudich P, Stephens CD (2004) Nearfield ground motion of the 2002 Denali Fault, Alaska,
earthquake recorded at Pump Station 10. Earthq Spectra 20:597–615
Frankel A (2004) Rupture process of the M7.9 Denali fault, Alaska, earthquake: subevents,
directivity, and scaling of high-frequency ground motion. Bull Seismol Soc Am 94:S234–S255
Gridded Population of the World, version 3 (GPWv3) (2007) Center for International Earth
Science Information Network (CIESIN), Columbia University; and Centro Internacional de
Agricultura Tropical (CIAT), 2005. Palisades. Available at http://sedac.ciesin.columbia.edu/gpw
Hamano Y (1974) Dependence of rupture time history on the heterogeneous distribution of stress
and strength on the fault, (abstract). Transact Am Geophys Union 55:352
Hartzell SH, Heaton TH (1983) Inversion of strong ground motion and teleseismic waveform data
for the fault rupture history of the 1979 Imperial Valley, California, earthquake. Bull Seismol
Soc Am 73:1553–1583
Hartzell SH, Stewart GS, Mendoza C (1991) Comparison of L1 and L2 norms in a teleseismic
waveform inversion for the slip history of the Loma Prieta, California, earthquake. Bull
Seismol Soc Am 81:1518–1539
Ida Y (1972) Cohesive force across the tip of a longitudinal-shear crack and Griffith’s specific
surface energy. J Geophys Res 77:3796–3805
Henry C, Das S (2002) The Mw 8.2 February 17, 1996 Biak, Indonesia earthquake: rupture history,
aftershocks and fault plane properties. J Geophys Res 107:2312
Henry C, Das S, Woodhouse JH (2000) The great March 25, 1998 Antarctic Plate earthquake:
moment tensor and rupture history. J Geophys Res 105:16097–16119
Kostrov BV (1975) Mechanics of the tectonic earthquake focus (in Russian). Nauka, Moscow
Kostrov BV (1966) Unsteady propagation of longitudinal shear cracks. J Appl Math Mech
30:1241–1248
Kostrov BV, Das S (1988) Principles of earthquake source mechanics. Cambridge University
Press, New York
Lin A, Fu B, Guo J, Zeng Q, Dang G, He W, Zhao Y (2002) Co-seismic strike-slip and rupture
length produced by the 2001 Ms 8.1 Central Kunlun earthquake. Science 296:2015–2016
Liu C, Bizzari A, Das S (2014) Progression of spontaneous in-plane shear faults from
sub-Rayleigh up to compressional wave rupture speeds. J Geophys Res Solid Earth 119
(11):8331–8345
Lasserre C, Peltzer G, Crampé F, Klinger Y, Van der Woerd J, Tapponnier P (2005) Coseismic
deformation of the 2001 Mw = 7.8 Kokoxili earthquake in Tibet, measured by synthetic
aperture radar interferometry. J Geophys Res 110:B12408
Madariaga R (1983) High-frequency radiation from dynamic earthquake fault models. Ann
Geophys 1:17–23
Madariaga R (1977) High-frequency radiation from crack (stress drop) models of earthquake
faulting. Geophys J Roy Astron Soc 51:625–651
Milne J, Lee AW (1939) Earthquakes and other earth movements. K Paul, Trench, Trubner and
Co., London
Olson AH, Apsel RJ (1982) Finite faults and inverse theory with applications to the 1979 Imperial
Valley earthquake. Bull Seismol Soc Am 72:1969–2001
Ozacar AA, Beck SL (2004) The 2002 Denali fault and 2001 Kunlun fault earthquakes: complex
rupture processes of two large strike-slip events. Bull Seismol Soc Am 94:S278–S292
Parker RL (1994) Geophysical inverse theory. Princeton University Press, Princeton
Passelègue FX, Schubnel A, Nielsen S, Bhat HS, Madariaga R (2013) From sub-Rayleigh to
supershear ruptures during stick-slip experiments on crustal rock. Science 340
(6137):1208–1211
Perez OJ, Sanz C, Lagos G (1997) Microseismicity, tectonics and seismic potential in southern
Caribbean and northern Venezuela. J Seismol 1:15–28
Press F, Ben-Menahem A, Toksöz MN (1961) Experimental determination of earthquake fault
length and rupture velocity. J Geophys Res 66:3471–3485
Press WH, Flannery BP, Teukolsky SA, Vetterling WT (1986) Numerical recipes: the art of
scientific computing. Cambridge University Press, New York
Robinson DP, Das S, Searle MP (2010) Earthquake fault superhighways. Tectonophysics
493:236–243
Robinson DP (2011) A rare great earthquake on an oceanic fossil fracture zone. Geophys J Int
186:1121–1134
Robinson DP, Das S, Watts AB (2006a) Earthquake rupture stalled by subducting fracture zone.
Science 312:1203–1205
Robinson DP, Brough C, Das S (2006b) The Mw 7.8 Kunlunshan earthquake: extreme rupture
speed variability and effect of fault geometry. J Geophys Res 111:B08303
Robinson DP, Henry C, Das S, Woodhouse JH (2001) Simultaneous rupture along two conjugate
planes of the Wharton Basin earthquake. Science 292:1145–1148
Rodgers DW, Little TA (2006) World’s largest coseismic strike-slip offset: the 1855 rupture of the
Wairarapa Fault, New Zealand, and implications for displacement/length scaling of continental
earth-quakes. J Geophys Res 111:B12408
Rosakis AJ, Samudrala O, Coker D (1999) Cracks faster than the shear wave speed. Science
284:1337–1340
Sarao A, Das S, Suhadolc P (1998) A comprehensive study of the effect of non-uniform station
distribution on the inversion for seismic moment release history and distribution for a Haskell-
type rupture model. J Seismol 2:1–25
Spudich P, Cranswick E (1984) Direct observation of rupture propagation during the 1979
Imperial Valley earthquake using a short baseline accelerometer array. Bull Seismol Soc Am
74:2083–2114
Song SG, Beroza GC, Segall P (2008) A unified source model for the 1906 San Francisco
earthquake. Bull Seismol Soc Am 98:823–831
Tarantola A (1987) Inverse problem theory. Methods for data fitting and model parameter
estimation. Elsevier, New York
Utsu T (2002) A list of deadly earthquakes in the world (1500–2000). In: Lee WHK, Kanamori H,
Jennings PC, Kisslinger C (eds) International handbook of earthquake and engineering seis-
mology part A. Academic, New York, p 691
Vallée M, Landès M, Shapiro NM, Klinger Y (2008) The 14 November 2001 Kokoxili (Tibet)
earthquake: High-frequency seismic radiation originating from the transitions between
sub-Rayleigh and supershear rupture velocity regimes. J Geophys Res 113:B07305
Vigny C, Socquet A, Rangin C, Chamot-Rooke N, Pubellier M, Bouin M-N, Bertrand G, Becker M
(2003) Present-day crustal deformation around Sagaing fault, Myanmar. J Geophys Res
108:2533
Walker KT, Shearer PM (2009) Illuminating the near-sonic rupture velocities of the intracon-
tinental Kokoxili Mw 7.8 and Denali fault Mw 7.9 strike-slip earthquakes with global P wave
back projection imaging. J Geophys Res 114:B02304
Wang D, Mori J, Uchide T (2012) Supershear rupture on multiple faults for the Mw 8.6 off
Northern Sumatra, Indonesia earthquake. Geophys Res Lett 39:L21307
Wu FT, Thomson KC, Kuenzler H (1972) Stick-slip propagation velocity and seismic source
mechanism. Bull Seismol Soc Am 62:1621–1628
Xia K, Rosakis AJ, Kanamori H (2004) Laboratory earthquakes: the sub-Rayleigh-to-supershear
transition. Science 303:1859–1861
Xia K, Rosakis AJ, Kanamori H, Rice JR (2005) Laboratory earthquakes along inhomogeneous
faults: directionality and supershear. Science 308:681–684
Xu X, Chen W, Ma W, Yu G, Chen G (2002) Surface rupture of the Kunlunshan earthquake (Ms
8.1), northern Tibet plateau, China. Seismol Res Lett 73:884–892
Yeats RS, Sieh K, Allen CR (1997) The geology of earthquakes. Oxford University Press,
New York
Yeats R (2012) Active faults of the world. Cambridge University Press, New York
Yue H, Lay T, Freymueller JT, Ding K, Rivera L, Ruppert NA, Koper KD (2013) Supershear
rupture of the 5 January 2013 Craig, Alaska (Mw 7.5) earthquake. J Geophys Res
118:5903–5919
Zhan Z, Helmberger DV, Kanamori H, Shearer PM (2014) Supershear rupture in a Mw 6.7
aftershock of the 2013 Sea of Okhotsk earthquake. Science 345:204–207
Chapter 2
Civil Protection Achievements and Critical
Issues in Seismology and Earthquake
Engineering Research
M. Dolce and D. Di Bucci

2.1 Introduction
In the last decade, within their activities at the Italian Department of Civil Protec-
tion (DPC), the authors had the opportunity to contribute to developing the relation-
ships between the “Civil Protection” and the “Scientific Community”, especially in
the field of seismic and seismo-induced risks.
During these years, the DPC has faced difficult circumstances, not only in
emergency situations, which have required strong and continuous interactions
with the scientific community. As can be easily understood in theory, but much less easily in practice, the civil protection approach to seismic risk problems is very different from the research approach, although important synergies can arise from cooperation and reciprocal understanding. From the DPC point of
view, there are many good reasons for a close connection between civil protection
and research, e.g.: the opportunity to reach a scientific consensus on evaluations
that imply wide uncertainties; a better management of the resource allocation for
risk mitigation; the possibility to make precise and rapid analyses for fast and
effective emergency actions; and the optimization of resources and actions for overcoming the emergency. There are, of course, positive implications for the scientific community as well, such as: a clear finalization of research activities; wider investigation perspectives for research that is too often narrowly focused on the achievement of specific academic advancements; and the ethical value of research
that has direct and positive social implications (Dolce 2008).
Creating a fruitful connection between the two parts implies a continuous and
dynamic adaptation to the different ways of thinking about how to solve problems.
This involves different fields: the language first of all, including the reciprocal and
outward communication, then the timing for the response, the budget available, the
right balance among the different stakeholders, the scientific consensus on the most
significant achievements and, ultimately, the responsibilities.
A great complexity generally characterizes the relationships between science
and civil protection. As will be shown in the following sections, science attains
advances that can allow civil protection organizations to make decisions and
undertake actions more and more effectively. Provided that these advances are
consolidated and shared by a large part of the scientific community, civil protection
has to take them into account in its operational procedures and in its decision-
making processes, and it has to do this while growing side by side with the scientific
knowledge, avoiding any late pursuit.
Such a complexity is summarized in the scheme of Fig. 2.1, which also repre-
sents the backbone of this paper. The aim of the work presented here, indeed, is
to outline the framework and the boundary conditions, to show the overall model
of such relationships and to describe the current state-of-the-art, focusing on the
major results achieved in Italy and on the many criticalities that still remain to be
solved.
Among the boundary conditions, the question of the different roles and respon-
sibilities in the decision-making process will be addressed, dealing in particular
Fig. 2.1 Chart of the relationships between civil protection and science
with the contribution of scientists and decision-makers, among others, to risk management. In this frame, and given the specific organization of the civil
protection system in Italy, which is the cradle of the experience here presented,
the different kinds of contributions that civil protection receives from the scien-
tific community will then be treated. The collection of these contributions follows
different paths. Some of them are directly planned, requested and funded by civil protection, although with different commitments for the scientific institutions or commissions involved, especially as regards their field of activity and its duration over time (points i to iv in Fig. 2.1). Some contributions come
instead from research that the scientific community develops in other frame-
works: European projects, Regional funds, etc. (points v to vi in Fig. 2.1). All
of them represent an added value that civil protection certainly wants to exploit, but only after the necessary endorsement by a large part of the scientific community and an indispensable adaptation to civil protection use. This is fundamental in order to avoid any decision, and any consequent action that could in principle affect the lives and property of many citizens, being undertaken on the basis of non-consolidated, minor, or not widely shared scientific achievements.
Table 2.2 Steps of an ideal decision-making process, and role virtually played by the different participants (role marks for Scientists, PDMs and TDMs: X primary role, x occasional support)

Step 1: Definition of the acceptable level of risk according to established policy (i.e., in a probabilistic framework, of the acceptable probability of occurrence of quantitatively estimated consequences for lives and property) – roles: x X
Step 2: Allocation of proper budget for risk mitigation – roles: X x
Step 3: Quantitative evaluation of the risk (considering hazard, vulnerability, and exposure) – roles: X x
Step 4: Identification of specific actions capable of reducing the risk to the acceptable level – roles: X
Step 5: Cost-benefit evaluation of the possible risk-mitigating actions – roles: X x
Step 6: Adoption of the most suitable technical solution, according to points 1, 4, and 5 – roles: x x X
Step 7: Implementation of risk-mitigating actions – roles: X

PDMs political decision-makers, TDMs technical decision-makers, X primary role, x occasional support
PDMs could:
– decide not to establish the acceptable risk levels for the community they
represent;
– prefer to state that a “zero” risk solution must be pursued, which is in fact a
non-decision;
– not allocate an adequate budget for risk mitigation.
TDMs could tend (or could be forced, in emergency conditions) to make and
implement decisions they are not in charge of, because of the lack of:
– scientific quantitative evaluations;
– acceptable risk statements (or impossibility to get them);
– budget.
A number of examples of individuals usurping or infringing on roles not assigned to them in the decisional process are reported by Dolce and Di Bucci (2014).
Other actors, besides scientists and decision-makers, play an important role in risk cycle management; among them, the mass media, the judiciary, and citizens deserve special mention, because their behaviours can strongly affect the decision-making process.
Table 2.3 Pros and cons for civil protection in the mass media behaviour

Pros: Spreading knowledge about risks and their reduction in order to increase people’s awareness on risks
Cons: Distortion of information due to incompetence or to commercial or political purposes

Pros: Disseminating best practices on behaviours to be adopted both in ordinary and in emergency conditions
Cons: Accreditation of non-scientific ideas and non-expert opinions

Pros: Spreading civil protection alerts
Cons: Spreading false alarms
Two main aspects of the relationships between civil protection and science are
relevant from the civil protection point of view:
– scientific advances can allow for more effective civil protection decisions and
actions concerning the entire risk cycle;
– civil protection has to suitably re-shape its activities and operational procedures
to include the scientific advances, as soon as they become available and robust.
In order to fully understand the problems and the possible solutions in the civil
protection – science relationships, it is essential to explain what “having proce-
dures” means for a civil protection system, and to provide an overview of the
possible scientific products for civil protection use and of the organization of the
Italian civil protection system.
Scientific products, i.e., any scientific result, tool or finding, by their intrinsic nature do not usually derive from an overall view of reality; they tend to emphasize some aspects while neglecting or oversimplifying others. Therefore, research findings can often turn out to be unreliable for practical applications, sometimes falsely precise or tackling only part of a problem while leaving other important parts unsolved. To minimize this contingency, research activities
finalized to civil protection aims should proceed in close cooperation with civil
protection stakeholders in defining objectives and products to achieve, as well as in
validating results and/or tools.
In Italy, civil protection is not just a single self-contained organization but a system,
called the National Service of Civil Protection (SNPC), which operates following the
idea that civil protection is not an administration or an authority, but rather a
function that involves the entire society. Several individuals and organizations
contribute with their own activities and competences to attain the general risk
mitigation objectives of SNPC.
The coordination of this complex system is entrusted to the National Department
of Civil Protection, which acts on behalf of the Prime Minister. The SNPC’s
mandate is the safeguarding of human life and health, property, national heritage,
human settlements and environment from all natural or manmade disasters.
All the ministries, with their national operational structures, including Fire
Brigades, Police, Army, Navy, Air Force, Carabinieri, State Forest Corps and
Financial Police, as well as Prefectures, Regional and local civil protection orga-
nizations, contribute to SNPC actions. Public and private companies of highways,
Science can provide different kinds of contributions to civil protection. They can be
distinguished and classified according to the type of relationship between the
scientific contributors and the civil protection organizations. The main kinds of
contributions can be categorized as follows:
(i) well-structured scientific activities, permanently performed by scientific institutions
on behalf of civil protection organizations, which usually fund them;
(ii) finalized research activities carried out by scientific institutions, funded by
civil protection organizations to provide results and products for general or
specific purposes of civil protection;
(iii) advice regularly provided by permanent commissions or permanent consultants
of civil protection organizations;
(iv) advice on specific topics, provided by ad hoc temporary commissions
established by civil protection organizations;
(v) research activities developed in other frameworks and funded by other bodies
(European projects, Regional funds, etc.), which achieve results of interest
for civil protection organizations, especially when the latter are involved as
end-users;
(vi) free-standing research works, producing results of potential interest for civil
protection without any involvement of civil protection organizations.
Hereinafter, these different kinds of scientific contributions are described
and discussed in the light of the experience gained by the DPC, with special
attention to the critical issues observed.
Fig. 2.2 Scheme of the management of the relationships between the Italian Department of Civil
Protection and a Competence Centre
2.4.1.1 INGV
“A-Type” Activities
According to a national law (D. Lgs. 381/99), INGV is in charge of the seismic (and
volcanic) monitoring and surveillance of the Italian territory. It manages and
maintains the velocimetric National Seismic Network (more than 300 stations),
whose data are collected and processed at the INGV-CNT, providing DPC with
quasi-real-time information on the location and magnitude of Italian earthquakes, with
the capability to detect M > 2 earthquakes over the whole Italian territory (Sardinia
excluded, owing to the negligible seismicity of this region) and M > 1 in many
of the most hazardous regions (see Fig. 2.3).
Among the INGV A-type activities, the implementation and maintenance of databases
that are important for civil protection applications deserve to be mentioned.
For instance:
• DISS – The Database of Individual Seismogenic Sources (http://diss.rm.ingv.it/
diss/; Basili et al. 2008; DISS Working Group 2010; Fig. 2.4) is, according to
http://diss.rm.ingv.it/diss/UserManual-Intro.html, a “georeferenced repository
of tectonic, fault and paleoseismological information; it includes individual,
composite and debated seismogenic sources. Individual and composite
seismogenic sources are two alternative seismic source models to choose from.
They are tested against independent geophysical data to ensure the users about
their level of reliability”. Each record in the Database is backed by a Commen-
tary, a selection of Pictures and a list of References, as well as fault scarp or fold
axis data when available (usually structural features with documented Late
Pleistocene – Holocene activity). The Database can be accessed through a web
browser or displayed on Google Earth. DISS was adopted as the reference
catalogue of Italian seismogenic sources by the EU SHARE Project (see below).
Fig. 2.3 (a) Distribution of the Italian seismic network operated by INGV; and (b) example of
magnitude detection threshold on March 16, 2015 (Data provided by INGV to DPC)
Fig. 2.4 DISS website (http://diss.rm.ingv.it/diss/; Basili et al. 2008; DISS Working Group 2010)
Fig. 2.5 Websites of the data bases (a) ISIDE, and (b) ITACA
Fig. 2.6 (a) Waveforms extracted from the ITACA database, and (b) geographical distribution of the
National Strong-Motion Network (RAN-DPC)
time-series, are available from the download pages, where the parameters of
interest can be set and specific events, stations, waveforms and related metadata
can be retrieved (Fig. 2.6).
“B-Type” Activities
Apart from the actions aimed at improving and developing the operational service
activities (A-type), some recently introduced pre-operational and operational
implementations of research achievements for civil protection deserve to be
mentioned.
occurrence. In this case, the activities are aimed at producing and comparing time-
dependent hazard models and maps, and defining a consensus-model or an
ensemble-model that can be useful to set up risk mitigation strategies for the near
future.
For short-term seismic hazard (also known in the international literature as
Operational Earthquake Forecasting, OEF), which is modelled using time-dependent
processes, the time window is typically days to months. Regarding its possible out-
comes, Jordan et al. (2014) explain: "We cannot yet predict large earthquakes in the
short term with much reliability and skill, but the strong clustering exhibited in
seismic sequences tells us that earthquake probabilities are not constant in time; . . .
OEF must provide a complete description of the seismic hazard—ground-motion
exceedance probabilities as well as short-term rupture probabilities—in concert
with the long-term forecasts of probabilistic seismic-hazard analysis (PSHA)”.
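As an illustration of how such short-term probabilities can feed a hazard-type statement, the following minimal sketch, with invented numbers rather than an actual OEF model output, combines weekly rupture probabilities of a few candidate sources with the conditional probability that each rupture exceeds a ground-motion threshold at a site:

```python
import numpy as np

# Illustrative sketch only: all probabilities below are invented.
# Weekly occurrence probabilities of a few candidate sources, as an OEF model might provide.
p_rupture = np.array([2e-4, 5e-5, 1e-3])

# Conditional probability that shaking at the site exceeds a chosen threshold,
# given a rupture on each source (e.g. from a ground-motion model and the source-site distance).
p_exceed_given_rupture = np.array([0.30, 0.10, 0.05])

# Assuming independent sources, probability of at least one exceedance in the time window.
p_exceedance = 1.0 - np.prod(1.0 - p_rupture * p_exceed_given_rupture)
print(f"short-term ground-motion exceedance probability: {p_exceedance:.2e}")
```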
The CPS activities are carried out by a dedicated working group, which uses a
new technological infrastructure for (i) the computation of the seismic hazard, by
integrating the most recent data and different models, (ii) the management of the
available databases, and (iii) the representation of the hazard estimates, including
through web applications. Moreover, IT tools are developed to facilitate the preparation,
implementation and comparison of hazard models, according to standard formats
and common procedures, in order to allow fast checks of the sensitivity of the
estimates. Synergies with some international activities, like the Collaboratory for
the Study of Earthquake Predictability, CSEP (http://www.cseptesting.org/), and the
Global Earthquake Model, GEM (http://www.globalquakemodel.org/), as well as
with the Italian seismic hazard community, are pursued.
Fig. 2.7 The Italian Tsunami Warning System (Michelini A, personal communication 2014)
“C-Type” Activities
2.4.1.2 ReLUIS
experts (i.e., professionals and local administration officials) on the building
characteristics. This approach takes advantage of the network organization of ReLUIS,
which involves more than 40 universities all over Italy. It is based on the identification
of the common structural and non-structural features of the buildings pertaining to each
district of a given municipality, characterized by a good homogeneity in terms of
age and main characteristics of the building stock (Zuccaro et al. 2014).
2.4.1.3 EUCENTRE
Design in low hazard zones and relevant software implementation DBDsoft, and
the Fragility curves of precast building structures.
The National Commission for forecasting and prevention of Major Risks is the
highest-level, connecting structure between the Italian civil protection system and
the scientific community. It is an independent scientific consultation body of DPC,
but it is not part of the Department itself. The Commission was established by Law
n. 225/1992. Its organization and functions were re-defined in 2011 (DPCM
7 October 2011).
The Major Risks Commission provides advice on technical-scientific matters,
both autonomously and on request of the Head of the Department of Civil Protec-
tion, and may provide recommendations on how to improve capabilities for eval-
uation, forecasting and prevention of the various risks.
The Commission is structured in a Presidency Office and five sectors relevant to:
– seismic risk,
– volcanic risk,
– weather-hydrogeological, hydraulic and landslide risk,
In the recent past, DPC turned to the advice of high-level international panels of
scientists to deal with specific and delicate questions of civil protection interest.
Two cases related to seismic risk are summarized in this section.
While the answer to the first question was trivial, once it was verified that there had
been no field research activities at the Rivara site, the answer to the second question
was articulated as follows:
• the study does not indicate that there is evidence associating the Emilia
2012 seismic activity with the operation activities in the Spilamberto, Recovato,
Minerbio and Casaglia fields;
• it cannot be ruled out that the activities carried out in the Mirandola License area
have had a triggering effect;
• in any case, the whole Apennine orogen under the Po Plain is seismically active
and therefore it is essential that production activities are accompanied by
appropriate actions, which will help to manage the seismic risk associated with
these activities.
Apart from the specific findings, the importance of the Commission lies in
having addressed the induced/triggered seismicity issue in Italy, a research field still
to be thoroughly explored in this country. As can be easily understood, however,
this topic is not only of scientific interest, but it also has an impact on
hydrocarbon E&P and gas storage activities, due to the increased awareness
of national policy makers, local authorities and the population (see, for a review of the
current activities on induced/triggered seismicity in Italy, D'Ambrogi et al. 2014).
In the past, international research projects were rarely oriented towards products for civil
protection use, and the stakeholders' role, although somehow considered, was not
sufficiently emphasized. Looking at the research funding policy currently pursued by
the European Union, a more active role is expected from the stakeholders (e.g.,
Horizon 2020, Work Programme 2014–15, 14. Secure societies; http://ec.europa.eu/
research/participants/data/ref/h2020/wp/2014_2015/main/h2020-wp1415-security_
en.pdf) and, among them, from civil protection organizations, as partners or end-user
advisors. Some good examples of EU-funded research projects aimed at results
potentially useful for civil protection can nevertheless be mentioned, also from the
previous EU Seventh Framework Programme. Three examples are discussed here,
to show how important the continuous interaction between the scientific community
and civil protection stakeholders is in order to achieve results that can be exploited
immediately or prospectively in practical situations, and how long the road is
towards a good assimilation of scientific products or results within civil protection procedures.
A different case, not dealt with in detail here, is the GEM Programme, promoted
by the Global Science Forum (OECD). This is a global collaborative
effort in which science is applied to develop high-quality resources for the transparent
assessment of earthquake risk and to facilitate their application for risk management
around the globe (http://www.globalquakemodel.org/). DPC supported the
establishment of GEM in Pavia and currently funds the programme, representing
Italy in the Governing Board.
Fig. 2.9 General graphic layout of the concept and goals of SYNER-G (http://www.vce.at/
SYNER-G/files/project/proj-overview.html)
2.4.4.1 SYNER–G
• to validate the methodology and the proposed fragility functions in selected sites
(at urban scale) and systems, and to implement them in an appropriate open
source and unrestricted access software tool.
DPC acted as an end-user of this project, providing data and expertise; more-
over, one of the authors of the present paper was part of the advisory board. The
comments made in the end-user final report, summarized below, provide an overview
of the possible interactions and criticalities of this kind of project with civil
protection organizations. Among the positive aspects:
• the analysis of the systemic vulnerability and risk is a very complex task;
• considerable steps forward have been made in SYNER-G, both in questions not
dealt with before and in topics that have been better finalized during the project;
• brilliant solutions have been proposed for the problems dealt with and sophis-
ticated models have been utilized;
• of great value is the coordination with other projects, especially with GEM.
It was however emphasized that:
• large gaps still exist between many scientific approaches and practical decision-
makers’ actions;
• the use of very sophisticated approaches and models has often required
neglecting some important factors affecting the real behaviour of some systems;
• when dealing with a specific civil protection issue, all important affecting factors
should be listed, not disregarding any of them, and their influence evaluated,
even though roughly;
• a thorough and clear representation of results is critical for a correct understand-
ing by end-users;
• the calibration of models and results should refer to events at different scales, due
to the considerable differences in the system response and in the actions to be
undertaken;
• cases of induced technological risks should be considered as well, since nowa-
days the presence of dangerous technological situations is widespread in the
partner countries.
2.4.4.2 REAKT
REAKT – Strategies and tools for Real time Earthquake risK reducTion (http://
www.reaktproject.eu/) is also an EU project developed within the Seventh
Framework Programme, Theme 6: Environment. It started in September 2011,
with a duration of 3 years. Twenty-three partners from nine European countries and
six from the rest of the world (namely Jamaica, Japan, Taiwan, Trinidad and
Tobago, Turkey, USA) participated in the project, which was coordinated by
AMRA (Italy; http://www.amracenter.com/en/). Many different types of stake-
holders acted as end-users of the project, among which the Italian DPC, represented
by the authors of this paper. DPC has actively cooperated, by making data
available and working on application examples.
Among the main objectives of REAKT, one deserves specific attention
for the purposes of the present paper, namely: "the definition of a detailed method-
ology to support optimal decision making associated with earthquake early warning
systems (EEWS), with operational earthquake forecasting (OEF) and with real-time
vulnerability and loss assessment, in order to facilitate the end-users’ selection of
risk reduction countermeasures”.
More specifically, attention is here focused on the EEWS and, in particular, on
the content of the first version of the "Final Report for Feasibility Study on the
Implementation of Hybrid EEW Approaches on Stations of RAN" (Picozzi
et al. 2014). During the project, an in-depth study was carried out on the possibility of
exploiting the National Strong-Motion Network RAN for EEW purposes. It is worth
noting that within the project, consistently with the purpose of the related task,
attention was exclusively focused on the most challenging scientific aspects, on which
an excellent and exhaustive research work has been carried out. In summary, the main
outcomes of this work concern the reliability of the real-time magnitude computation
and the evaluation of the lead time, i.e., the time needed for the assessment of the
magnitude of the impending earthquake and for the arrival of this information at the
site where some mitigating action has to be undertaken before the strong shear waves
arrive. This evaluation refers to the performance and geographical distribution of the
RAN network (see Fig. 2.6b), and to the performance of the PRESTo algorithm
(Satriano et al. 2010) for the fast evaluation of the earthquake parameters. Knowledge
of the lead time allows the so-called blind and safe zones to be evaluated, where the
"blind zone" is the area around the epicentre where the information arrives after the
strong shaking starts, while the "safe zone" is the surrounding area where the
information arrives beforehand and where the shaking is still strong enough for the
real-time mitigating action to be really useful.
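A minimal back-of-the-envelope sketch of these quantities is given below; all parameters (S-wave speed, alert delay, depth, target distance) are invented for illustration and do not reproduce the RAN/PRESTo analysis of Picozzi et al. (2014):

```python
import numpy as np

# Hedged sketch: blind-zone radius and lead time for an EEW alert.
# All parameter values are assumptions for illustration only.
v_s = 3.5       # km/s, average S-wave speed
t_alert = 8.0   # s, detection + processing + telemetry delay
depth = 10.0    # km, hypocentral depth

# Blind zone: epicentral distances where the S waves arrive before the alert,
# i.e. sqrt(r^2 + depth^2) / v_s <= t_alert.
r_blind = np.sqrt(max((v_s * t_alert) ** 2 - depth ** 2, 0.0))
print(f"blind-zone radius ~ {r_blind:.0f} km")

# Lead time available at a target site at epicentral distance r_target.
r_target = 60.0  # km
lead_time = np.sqrt(r_target ** 2 + depth ** 2) / v_s - t_alert
print(f"lead time at {r_target:.0f} km ~ {lead_time:.1f} s")
```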
However, neither the other technological and scientific requirements that must be
fulfilled nor the other components necessary to make a complete EEW system useful
for risk mitigation, many of which concern civil protection actions, have been
analysed. This case therefore appears useful to show the different points of view of
science and civil protection, and to emphasize again how important it is to consider
all the main factors affecting a given problem (in this case the feasibility and
effectiveness of an EEWS) and to evaluate, even roughly, their influence. To this aim,
some of the comments made by DPC on the first draft of the final report (Picozzi
et al. 2014) are summarized below. The main aspects dealt with concern the
effectiveness of EEW systems for real-time risk mitigation. The latter requires at least that:
• efficiency of all the scientific components is guaranteed,
• efficiency of all the technological components is guaranteed,
• targets and mitigation actions to be carried out are defined,
• time needed for the actions is added to the (scientific) lead time,
Fig. 2.10 Different definitions of blind and safe zone from the scientific and the operational (civil
protection) points of view
the zones of its potential utilization actually correspond to areas where the felt
intensity implies no or negligible structural damage.
From a communication perspective, it has to be noted that spreading purely
scientific information that, though correct, neglects a comprehensive analysis
including civil protection issues could create undue expectations among stakeholders
and the general public, beyond the actual potential capabilities of EEW in
Italy, if it is based on a regional approach.
2.4.4.3 SHARE
Fig. 2.11 Poster of the SHARE project, which reproduces the 475-year return period PGA map of
Europe (http://www.share-eu.org/sites/default/files/SHARE_Brochure_public.web_.pdf)
Fig. 2.12 Official (seismic code) PGA hazard map of Italy (a) vs. SHARE PGA hazard map
(b) for the same area, referred to 10 % probability in 50 years (Maps are taken, respectively, from:
http://zonesismiche.mi.ingv.it/mappa_ps_apr04/italia.html, and http://www.share-eu.org/sites/
default/files/SHARE_Brochure_public.web_.pdf)
Italian official hazard model is not under-conservative, differently from what the
PGA maps would lead one to believe, and is instead acceptable from an engineering
point of view.
action to mitigate risk based on them, this will cause damage to the entire system.
This problem can be overcome only by increasing the awareness that scientists,
media, PDMs and TDMs all compose the same puzzle, and that cooperation,
interchange and correct communication are the only way to attain the shared goal of a
more effective civil protection when working for risk mitigation.
2.5 Conclusion
The relationships between science and civil protection, as shown in this paper, are
very complex, but they can imply important synergies if correctly addressed. On the
one hand, scientific advances can allow for more effective civil protection decisions
and actions, although critical issues can arise for the civil protection system, that
has to suitably shape its activities and operational procedures according to these
advances. On the other hand, the scientific community can benefit from the
enlargement of the investigation perspectives, the clear finalisation of the applied
research activities and their positive social implications.
In the past decades the main benefits of the civil protection-science interaction in
Italy were a general growth of interest in Seismology and Earthquake Engineering
and a general increase in the amount and scientific quality of research in these
fields. However, the finalisation of the products was still inadequate, and some
inconsistencies in the results were not resolved within and among the research groups
(i.e., lack of consensus).
The progress recently achieved, following a re-organization effort that started
in 2004, encompasses:
• better structured scientific activities, finalised to civil protection purposes;
• an improved coordination among research units for the achievement of civil
protection objectives;
• the realization of ready-to-use products (e.g., tools for hazard analysis, databases
in a GIS environment, guidelines);
• a substantial increase in experimental investigations, data exchange and com-
parisons within large groups, as well as the achievement of a consensus on
results, strictly intended for decisional purposes;
• a renewed cooperation in the dissemination activities aimed at increasing risk
awareness in the population;
• better structured advisory activities of permanent and special commissions.
While important progress has been made, further improvement in the coop-
eration can still be pursued, and many problems also remain in the case of
non-structured interactions between civil protection and the scientific community.
For all the above reasons, a smart interface between civil protection and scien-
tific community continues to be necessary (Di Bucci and Dolce 2011), in order to
identify suitable objectives for the research funded by DPC, able to respond to civil
protection needs and consistent with the state-of-the-art at international level.
After the 2009 L'Aquila and 2012 Emilia earthquakes, the scientific partners
provided a considerable contribution to the National Service of Civil Protection
in Italy, not only with regard to the technical management of the emergency but
also to the information campaigns for the population under the DPC coordination.
However, an even more structured involvement of the Competence Centres is
envisaged, even in the emergency phase.
The authors strongly believe in the need and the opportunity for the two worlds,
the scientific community and civil protection, to keep cooperating and developing their
interaction capability, focusing on those needs that are a priority for society and
implementing highly synergic relationships, which favour an optimized use of the
limited resources available. Some positive examples come from the Italian experi-
ence and have been described here along with some of the difficulties tackled. They deal
with many different themes and are intended to show the multiplicity and diversity
of issues that have to be considered in the day-by-day work of interconnection
between civil protection and the scientific community. These examples can help to
reach a more in-depth mutual understanding between these two worlds and provide
some suggestions and ideas for the national and international audience that
forms the seismic risk world.
Acknowledgments The Authors are responsible for the contents of this work, which do not
necessarily reflect the position and official policy of the Italian Department of Civil Protection.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
AGU Fall Meeting (2012) Lessons learned from the L’Aquila earthquake verdicts press confer-
ence. http://www.youtube.com/watch?v=xNK5nmDFgy8
Alexander DE (2014a) Communicating earthquake risk to the public: the trial of the “L’Aquila
Seven”. Nat Hazards. doi:10.1007/s11069-014-1062-2
Alexander DE (2014b) Reply to a comment by Franco Gabrielli and Daniela Di Bucci: “Commu-
nicating earthquake risk to the public: the trial of the ‘L’Aquila Seven”. Nat Hazards. doi:10.
1007/s11069-014-1323-0
Allen CR (1982) Earthquake prediction—1982 overview. Bull Seismol Soc Am 72(6B):S331–S335
Basili R, Valensise G, Vannoli P, Burrato P, Fracassi U, Mariano S, Tiberti MM, Boschi E (2008)
The Database of Individual Seismogenic Sources (DISS), version 3: summarizing 20 years of
research on Italy’s earthquake geology. Tectonophysics. http://dx.doi.org/10.1016/j.tecto.
2007.04.014
Berelson B (1948) Communication and public opinion. In: Schramm W (ed) Communication in
modern society. University of Illinois Press, Urbana
Bretton R (2014) The role of science within the rule of law. “Science, uncertainty and decision
making in the mitigation of natural risks”, Workshop of Cost Action IS1304 “Expert Judgment
Network: Bridging the Gap Between Scientific Uncertainty and Evidence-Based Decision
Making”. Rome, 8-9-10 Oct 2014. Oral presentation
Manfredi G, Dolce M (eds) (2009) The state of the art of earthquake engineering research in Italy:
the ReLUIS-DPC 2005–2008 Project, Doppiavoce, Napoli. http://www.reluis.it/CD/ReLUIS-
DPC/ReLUIS-DPC.htm
Mele F, Riposati D (2007) ISIDe, Italian Seismological Instrumental and parametric Data-basE.
GNGTS 2007
Meletti C, Rovida A, D’Amico V, Stucchi M (2013) Seismic hazard models for the Italian area:
“MPS04-S1” and “SHARE”, Progettazione Sismica – Vol. 5, N. 1, Anno 2014. doi:10.7414/
PS.5.1.15-25 – http://dx.medra.org/10.7414/PS.5.1.15-25
Mucciarelli M (2014) Some comments on the first degree sentence of the “L’Aquila trial”. In:
Peppoloni S, Wyss M (eds) Geoethics: ethical challenges and case studies in earth science.
Elsevier. ISBN-10: 0127999353; ISBN-13: 978-0127999357
Pacor F, Paolucci R, Luzi L, Sabetta F, Spinelli A, Gorini A, Marcucci S, Nicoletti M, Filippi L,
Dolce M (2011) Overview of the Italian strong motion database ITACA 1.0. Bull Earthq Eng 9
(6):1723–1739. doi:10.1007/s10518-011-9327-6, Springer Ltd, Dordrecht, The Netherlands
Picozzi M, Zollo A, Brondi P, Colombelli S, Elia L, Martino C (2014) Exploring the feasibility of a
nation-wide earthquake early warning system in Italy, First draft of the final report for the
REAKT Project
Pitilakis K, Crowley E, Kaynia A (eds) (2014a) SYNER-G: typology definition and fragility
functions for physical elements at seismic risk, vol 27, Geotechnical, geological and earth-
quake engineering. Springer Science + Business Media, Dordrecht. ISBN 978-94-007-7872-6
Pitilakis K, Franchin P, Khazai B, Wenzel H (eds) (2014b) SYNER-G: systemic seismic vulner-
ability and risk assessment of complex urban, utility, lifeline systems and critical facilities, vol
31, Geotechnical, geological and earthquake engineering. Springer Science + Business Media,
Dordrecht. ISBN 978-94-017-8835-9
Satriano C, Elia L, Martino C, Lancieri M, Zollo A, Iannaccone G (2010) PRESTo, the earthquake
early warning system for southern Italy: concepts, capabilities and future perspectives. Soil
Dyn Earthq Eng. doi:10.1016/j.soildyn.2010.06.008
Schramm W (1954) How communication works. In: Schramm W (ed) The process and effects of
mass communication. University of Illinois Press, Urbana
Zuccaro G, De Gregorio D, Dolce M, Speranza E, Moroni C (2014) Manuale per la compilazione
della scheda di 1 livello per la caratterizzazione tipologico-strutturale dei comparti urbani
costituiti da edifici ordinari (Manual for the compilation of the 1st level form to characterize
urban districts with respect to the structural types of ordinary building), preliminary draft.
ReLUIS
Chapter 3
Earthquake Risk Assessment: Certitudes,
Fallacies, Uncertainties and the Quest
for Soundness
Kyriazis Pitilakis
Abstract This paper addresses, from an engineering point of view, issues in seismic
risk assessment. It is more a discussion of current practice, emphasizing the
multiple uncertainties and weaknesses of the existing methods and approaches, which
make the final loss assessment a highly ambiguous problem. The paper is a modest
effort to demonstrate that, despite the important progress made in the last two decades or
so, the common formulation of hazard/risk based on the sequential analyses of source
(M, hypocenter), propagation (for one or a few IMs) and consequences (losses) has
probably reached its limits. It contains so many uncertainties seriously affecting the
final result, and the way that the different communities involved, modellers and end users,
approach the problem is so scattered, that the seismological and engineering
community should probably re-think a new or an alternative paradigm.
3.1 Introduction
Seismic hazard and risk assessment are nowadays rather well-established sciences, in
particular in the probabilistic formulation of hazard. Long-term hazard/risk assess-
ments are the basis for the definition of long-term actions for risk mitigation.
However, several recent events have raised questions about the reliability of such
methods. The occurrence of relatively "unexpected" levels of hazard and loss
(e.g., Emilia, Christchurch, Tohoku), the continuous increase of hazard with
time, basically due to the increase of seismic data, and the increase of exposure
make loss assessment a highly ambiguous problem.
Existing models present important discrepancies. Sometimes such discrepancies
are only apparent, since we do not always compare two “compatible” values. There
K. Pitilakis (*)
Department of Civil Engineering, Aristotle University of Thessaloniki,
Thessaloniki 54124, Greece
e-mail: [email protected]
are several reasons for this. In general, it is usually statistically impossible to falsify
a model with only one (or too few) data points. Whatever the value of the probability for
such an event is, a probability (interpreted as "expected annual frequency") greater
than zero means that the occurrence of the event is possible, and we cannot
know how unlucky we have been. If the probability is instead interpreted as a "degree
of belief", it is in principle not testable. In addition, the assessments are often
based on “average” values, knowing that the standard deviations are high. This is
common practice, but this also means that such assessments should be compared to
the average over multiple events, instead of one single specific event. However, we
almost never have enough data to test long-term assessments. This is probably the
main reason why different alternative models exist.
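A small numerical sketch of this point, with invented numbers, is the Poisson check below: even observing the nominal "475-year" ground motion within a few decades of monitoring is too likely under the model itself for that single observation to reject it.

```python
import numpy as np

# Hedged illustration: one observation rarely falsifies a long-term hazard model.
lam = 1.0 / 475.0   # assumed annual exceedance frequency of some shaking level
T = 30.0            # years of observation

# Poisson probability of at least one exceedance in T years
p_at_least_one = 1.0 - np.exp(-lam * T)
print(f"P(>=1 exceedance in {T:.0f} yr) = {p_at_least_one:.3f}")  # ~0.06
# Observing the event is "unlucky" but far from impossible, so the model
# cannot be rejected at any usual significance level from this datum alone.
```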
Another important reason why significant discrepancies are expected is the fact
that we do know that many sources of uncertainties do exist in the whole chain from
hazard to risk assessment. However, are we propagating accurately all the known
uncertainties? Are we modelling the whole variability? The answer is that often it is
difficult to define “credible” limits and constraints to the natural variability (alea-
tory uncertainty). One of the consequences is that the “reasonable” assessments are
often based on “conservative” assumptions. However, conservative choices usually
imply subjectivity and statistical biases, and such biases are, at best, only partially
controlled. In engineering practice this is often the rule, but can this be generalized?
And if yes, how can it be achieved? Epistemic uncertainty usually offers a solution
to this point in order to constrain the limits of “subjective” and “reasonable” choices
in the absence of rigorous rules. In this case, epistemic uncertainties are intended as
the variability of results among different (but acceptable) models. But, are we really
capable of effectively accounting for and propagating epistemic uncertainties? In
modelling epistemic uncertainties, different alternative models are combined
together, often arbitrarily, assuming that one true model exists and, judging this
possibility, assigning a weight to each model based on the consensus on its
assumptions. Here, two questions are raised. First, is the consensus a good metric?
Are there any alternatives? How many? Second, does a “true” model exist? Can a
model be only “partially” true, as different models are covering different “ranges”
of applicability? To judge the "reliability" of a model, we should analyze its
coherence with a "target behaviour" that we want to analyze, which is a priori
unknown and, more importantly, evolves with time. The model itself is a
simplification of the reality, based on the definition of the main degrees of freedom
that control such “target behaviour”.
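As a minimal numerical sketch of the weighted combination of alternative models mentioned above (logic-tree style, with all values invented), the mean and the between-model spread can be computed as follows:

```python
import numpy as np

# Hedged sketch: four alternative, equally "acceptable" hazard models give
# different 475-year PGA estimates at the same site (values invented).
pga_475 = np.array([0.18, 0.24, 0.21, 0.30])   # g
weights = np.array([0.3, 0.3, 0.2, 0.2])       # consensus-based weights (sum to 1)

mean_pga = np.sum(weights * pga_475)
epistemic_spread = np.sqrt(np.sum(weights * (pga_475 - mean_pga) ** 2))
print(f"weighted mean = {mean_pga:.2f} g, between-model spread = {epistemic_spread:.2f} g")
```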
In the definition of “target behaviour” and, consequently, in the selection of the
appropriate “degrees of freedom”, several key questions remain open. First, are we
capable of completely defining what the target of the hazard/risk assessments is?
What is “reasonable”? For example, we tend to use the same approach at different
spatiotemporal levels, which is probably wrong. Is the consideration of a “changing
or moving target” acceptable by the community? Furthermore, do we really explore
all the possible degrees of freedom to be accounted for? And if yes, are we able to
do it accurately considering the eternal lack of good and well-focused data? Are we
missing something? For example, in modelling fragility, several degrees of freedom
are missing or over-simplified (e.g., aging effects, poor modelling including the
absence of soil-structure interaction), while recent results show that this “degree of
freedom” may play a relevant role to assess the actual vulnerability of a structure.
More in general, the common formulation of hazard/risk is based on the sequential
analyses of source (M, hypocenter), propagation (for one or few intensity measures)
and consequences (impact/losses). Is this approach effective, or is it just an easy
way to tackle the complexity of nature, since it keeps the different disciplines
(like geology, geophysics and structural engineering) separated? Regarding
“existing models”, several attempts are ongoing to better constrain the analyses
of epistemic uncertainties like critical re-analysis of the assessment of all the
principal factors of hazard/risk analysis or proposal of alternative modelling
approaches (e.g., Bayesian procedures instead of logic trees). All of these follow the
conventional path. Is this enough? Wouldn't it be better to start criticizing the
whole model? Do we need a change of the paradigm? Or maybe better, can we think
of alternative paradigms? The general tendency is to complicate existent models, in
order to obtain new results, which we should admit are sometimes better correlated
with specific observations or example cases. Is this enough? Have we really thought
deeply about the fact that in this way we may build "new" science on unconsolidated roots?
Maybe it is time to re-think these roots, in order to evaluate their stability in space,
time and reliability.
The paper that follows is a modest effort to argue on these issues, unfortunately
without offering any idea of the new paradigm.
Seismic hazard and risk assessments are made with models. The biggest problem of
models is the fact that they are made by humans who have a limited knowledge of
the problem and tend to shape or use their models in ways that mirror their own
notion of what a desirable outcome would be. On the other hand, models are
generally addressed to end users with different levels of knowledge and perception
of the uncertainties involved. Figure 3.1 gives a good picture of the way that
different communities perceive “certainty”. It is called the “certainty trough”.
In the certainty trough diagram, users are presented as either under-critical or
over-critical, in contrast to producers, who have detailed understanding of the
technology’s strengths and weaknesses. Model producers or modellers are
a-priori aware of the uncertainties involved in their model. At least they should
be. For end-user communities the situation is different. Experienced over-critical
users are generally in a better position to evaluate the accuracy of the model and its
uncertainties, while the alienated under-critical users have the tendency to follow
the “believe the brochures” concept. When this second category of end-users uses a
model, the uncertainties are generally increased.
The present discussion focuses on the models and modellers and less on the
end-users; however, the criticism will be more from the side of the end users.
All models are imperfect. Identifying model errors is difficult in the case of
simulations of complex and poorly understood systems, particularly when the
simulations extend to hundreds or thousands of years. Model uncertainties are a
function of a multiplicity of factors (degrees of freedom). Among the most impor-
tant are limited availability and quality of empirical-recorded data, the imperfect
understanding of the processes being modelled and, finally, the poor modelling
capacities. In the absence of well-constrained data, modellers often gauge any given
model’s accuracy by comparing it with other models. However, the different
models are generally based on the same set of data, equations and assumptions,
so that agreement among them may indicate very little about their realism.
A good model is based on a wise balance of observation and measurement of
accessible phenomena with informed judgment ("theory"), and not on convenience.
Modellers should be honestly aware of the uncertainties involved in their models
and of how the end users could make use of them. They should take the models
“seriously but not literally”, avoiding mixing up “qualitative realism” with “quan-
titative realism”. However, modellers typically identify the problem as users’
misuse of their model output, suggesting that the latter interpret the results too
uncritically.
Une accumulation de faits n'est pas plus une science qu'un tas de pierres n'est une maison.
(An accumulation of facts is no more a science than a heap of stones is a house.)
Jules Henri Poincaré
far. Moreover these curves, normally produced for simplified structures, are used to
estimate physical damages and implicitly the associated losses for a whole city with
a very heterogeneous fabric and typology of buildings. Then aleatory and epistemic
uncertainties are merged.
At the end of the game there is always a pending question: How can we really
differentiate the two sources of uncertainty?
Realizing the importance of all different sources of uncertainties characterizing
each step of the long process from seismic hazard to risk assessment, including all
possible consequences and impact, beyond physical damages, it is understood how
difficult it is to derive a reliable global model covering the whole chain from hazard
to risk. For the moment, scientists, engineers and policy makers are fighting with
rather simple weapons, using simple paradigms. It is time to re-think the whole
process merging their capacities and talents.
A main issue related to the construction and use of fragility curves is the selection of
appropriate earthquake Intensity Measures (IM) that characterize the strong ground
motion and best correlate with the response of each element at risk, for example,
a building, a bridge pier or a pipeline. Several intensity measures of ground motion have
been proposed, each one describing different characteristics of the motion, some of
which may be more adverse for the structure or system under consideration. The use
of a particular IM in seismic risk analysis should be guided by the extent to which
the measure corresponds to damage to the components of a system or the system of
systems. Optimum intensity measures are defined in terms of practicality, effec-
tiveness, efficiency, sufficiency, robustness and computability (Cornell et al. 2002;
Mackie and Stojadinovic 2003, 2005).
Practicality refers to the recognition that the IM has some direct correlation to
known engineering quantities and that it “makes engineering sense” (Mackie and
Stojadinovic 2005; Mehanny 2009). The practicality of an IM may be verified
analytically via quantification of the dependence of the structural response on the
physical properties of the IM such as energy, response of fundamental and higher
modes, etc. It may also be verified numerically by the interpretation of the struc-
ture’s response under non-linear analysis using existing time histories.
Sufficiency describes the extent to which the IM is statistically independent of
ground motion characteristics such as magnitude and distance (Padgett et al. 2008).
A sufficient IM is the one that renders the structural demand measure conditionally
independent of the earthquake scenario. This term is more complex and is often at
odds with the need for computability of the IM. Sufficiency may be quantified via
statistical analysis of the response of a structure for a given set of records.
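A hedged sketch of such a statistical check is shown below (synthetic placeholder data, not results from any cited study): the demand is first regressed on the IM, and the residuals are then regressed on magnitude and distance; near-zero, insignificant slopes suggest an approximately sufficient IM.

```python
import numpy as np

# Synthetic placeholder data standing in for real record/response pairs.
rng = np.random.default_rng(0)
n = 40
im = rng.lognormal(mean=-1.0, sigma=0.5, size=n)      # e.g. PGA (g) of each record
mag = rng.uniform(5.5, 7.5, size=n)                   # magnitudes of the records
dist = rng.uniform(5.0, 80.0, size=n)                 # source-to-site distances (km)
edp = 0.02 * im ** 0.8 * rng.lognormal(0.0, 0.3, n)   # e.g. peak inter-story drift

# Step 1: demand model ln(EDP) = a + b ln(IM) + residual
A = np.column_stack([np.ones(n), np.log(im)])
coef, *_ = np.linalg.lstsq(A, np.log(edp), rcond=None)
resid = np.log(edp) - A @ coef

# Step 2: regress the residuals on M and R; small slopes indicate sufficiency.
for name, x in (("magnitude", mag), ("distance", dist)):
    slope = np.polyfit(x, resid, 1)[0]
    print(f"residual trend vs {name}: slope = {slope:+.4f}")
```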
Fig. 3.2 Examples of (a) vulnerability function and (b) fragility function
elicitation, analytical and hybrid. All these approaches have their strengths and
weaknesses. However, analytical methods, when properly validated with large-
scale experimental data and observations from recent strong earthquakes, have
become more popular in recent years. The main reason is the considerable improve-
ment of computational tools, methods and skills, which allows comprehensive
parametric studies covering many possible typologies to be undertaken. Another
equally important reason is the better control of several of the associated
uncertainties.
The two most popular methods to derive fragility (or vulnerability) curves for
buildings and bridges are the capacity spectrum method (CSM) (ATC-40 and
FEMA 273/356) with its alternatives (e.g., Fajfar 1999), and incremental
dynamic analysis (IDA) (Vamvatsikos and Cornell 2002). Both have contributed
significantly to, and marked, the substantial progress observed over the last two decades;
however they are still simplifications of the physical problem and present several
limitations and weaknesses. The former (CSM) is approximate in nature and is
based on static loading, which ignores the higher modes of vibration and the
frequency content of the ground motion. A thorough discussion on the pushover
approach may be found in Krawinkler and Miranda (2004).
The latter (IDA) is now gaining in popularity because, among other advantages, it
offers the possibility to select the Engineering Demand Parameters (EDP) most
relevant to the structural response (inter-story drifts, component inelastic defor-
mations, floor accelerations, hysteretic energy dissipation, etc.). IDA is commonly
used in probabilistic seismic assessment frameworks to produce estimates of the
dynamic collapse capacity of global structural systems. With the IDA procedure the
coupled soil-foundation-structure system is subjected to a suite of multiply scaled
real ground motion records whose intensities are “ideally?” selected to cover the
whole range from elasticity to global dynamic instability. The result is a set of
curves (IDA curves) that show the EDP plotted against the IM used to control the
increment of the ground motion amplitudes. Fragility curves for different damage
states can be estimated through statistical analysis of the IDA results (pairs of EDP
and IM) derived for a sufficiently large number of ground motions (normally
15–30). Among the weaknesses of the approach is the fact that scaling of the real
records changes the amplitude of the IMs but keeps the frequency content the same
throughout the inelastic IDA procedure. In summary both approaches introduce
several important uncertainties, both aleatory and epistemic.
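As an illustration of the final statistical step, the snippet below fits a lognormal fragility curve to synthetic IDA outcomes; the IM values at which each record first exceeds the damage-state threshold are invented, and the lognormal form is only the commonly assumed model, not necessarily that of any specific study cited here.

```python
import numpy as np
from scipy import stats

# Invented IM levels (e.g. PGA in g) at which each of 15 scaled records first
# exceeded the EDP threshold of a given damage state in the IDA.
im_at_ds = np.array([0.21, 0.35, 0.28, 0.44, 0.19, 0.52, 0.31, 0.26,
                     0.38, 0.47, 0.24, 0.33, 0.41, 0.29, 0.36])

ln_im = np.log(im_at_ds)
theta = np.exp(ln_im.mean())   # median "capacity" in IM terms
beta = ln_im.std(ddof=1)       # lognormal dispersion

# Fragility: P(DS >= ds | IM) = Phi((ln IM - ln theta) / beta)
im_grid = np.array([0.1, 0.2, 0.3, 0.4, 0.6])
prob = stats.norm.cdf((np.log(im_grid) - np.log(theta)) / beta)
print(f"median = {theta:.2f} g, beta = {beta:.2f}")
print(np.round(prob, 2))
```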
Among the most important latest developments in the field of fragility curves is
the recent publication “SYNER-G: Typology Definition and Fragility Functions for
Physical Elements at Seismic Risk”, Pitilakis K, Crowley H, Kaynia A (Eds)
(2014a).
Several uncertainties are introduced in the process of constructing a set of
fragility curves of a specific element at risk. They are associated with the parameters
describing the fragility curves and with the methodology applied, as well as with the selected
damage states and the performance indicators (PI) of the element at risk. The
uncertainties may again be categorized as aleatory and epistemic. However, in
this case epistemic uncertainties are probably more pronounced, especially when
analytical methods are used to derive the fragility curves.
In general, the uncertainty of the fragility parameters is estimated through the
standard deviation βtot, which describes the total variability associated with each
fragility curve. Three primary sources of uncertainty are usually considered,
namely the definition of damage states, βDS, the response and resistance (capacity)
of the element, βC, and the earthquake input motion (demand), βD. Damage state
definition uncertainties are due to the fact that the thresholds of the damage indexes
or parameters used to define damage states are not known. Capacity uncertainty
reflects the variability of the properties of the structure as well as the fact that the
modelling procedures are not perfect. Demand uncertainty reflects the fact that IM
is not exactly sufficient, so different records of ground motion with equal IM may
have different effects on the same structure (Selva et al. 2013). The total variability
is modelled by the combination of the three contributors assuming that they are
stochastically independent and log-normally distributed random variables, which is
not always true.
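Under these independence and lognormality assumptions, the combination is commonly written in square-root-of-sum-of-squares form (a standard HAZUS-type expression, quoted here as a reminder rather than as the specific formula adopted in the cited works):

βtot = sqrt(βDS² + βC² + βD²)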
Paolo Emilio Pinto (2014) in Pitilakis et al. (2014a) provides the general
framework of the treatment of uncertainties in the derivation of the fragility
functions. Further discussion on this issue is made in the last section of this paper.
In principle, the problem of seismic risk assessment and safety is probabilistic and
several sources of uncertainties are involved. However, a full probabilistic
approach is not applied throughout the whole process. For the seismic hazard the
approach is usually probabilistic, at least partially. A deterministic approach, which is
more appreciated by engineers, is also used. Structures are traditionally analyzed in
a deterministic way with input motions estimated probabilistically. PSHA ground
motion characteristics, determined for a selected return period (e.g., 500 or 1,000
years), are traditionally used as input for the deterministic analysis of a structure
(e.g., seismic codes). On the other hand, fragility curves by definition represent the
conditional probability of the failure of a structure or equipment at a given level of
ground motion intensity measure, while seismic capacity of structures and compo-
nents is usually estimated deterministically. Finally, damages and losses are esti-
mated in a probabilistic way, mainly, if not exclusively, because of the PSHA and the
fragility curves used. So in the whole process of risk assessment, probabilistic and
deterministic approaches are used indiscriminately, without knowing exactly what the
impact of this is and how the uncertainties involved are treated and propagated.
modeler’s “authority” and the loneliness and sometime desolation of the end-user in
the decision making procedure.
The important role of site effects in seismic hazard and risk assessment is now well
accepted. Their modelling has also improved over the last two decades.
In Eurocode 8 (CEN 2004) the influence of local site conditions is reflected in
the shape of the PGA-normalized response spectra and in the so-called "soil factor" S,
which represents ground motion amplification with respect to outcrop conditions.
As far as soil categorization is concerned, the main parameter used is Vs,30, i.e., the
Table 3.1 Improved soil factors for EC8 soil classes (Pitilakis et al. 2012)

Soil class | Type 2 (Ms ≤ 5.5): Improved | Type 2: EC8 | Type 1 (Ms > 5.5): Improved | Type 1: EC8
B | 1.40 | 1.35 | 1.30 | 1.20
C | 2.10 | 1.50 | 1.70 | 1.15
D | 1.80a | 1.80 | 1.35a | 1.35
E | 1.60a | 1.60 | 1.40a | 1.40
a Site-specific ground response analysis required
time-based average value of shear wave velocity in the upper 30 m of the soil
profile, first proposed by Borcherdt and Glassmoyer (1992). Vs,30 has the advantage
that it can be obtained easily and at relatively low cost, since the depth of 30 m is a
typical depth of geotechnical investigations and sampling borings, and has defi-
nitely provided engineers with a quantitative parameter for site classification. The
main and important weakness is that knowledge of the Vs profile in the upper 30 m
alone cannot properly quantify the effects of the real impedance contrast,
which is one of the main sources of soil amplification, as for example in the case of
shallow (i.e., 15–20 m) loose soils on rock, or of deep soil profiles with variable
stiffness and contrast. Quantifying site effects with the simple use of Vs,30 therefore
introduces important uncertainties in the estimated IM.
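For reference, Vs,30 is the harmonic (travel-time) average over the upper 30 m; a minimal sketch with an invented three-layer profile is:

```python
# Hedged sketch: time-averaged Vs,30 = 30 m divided by the total S-wave travel
# time through the layers in the upper 30 m. Layer values are invented.
thickness = [4.0, 10.0, 16.0]    # m, sums to 30 m
vs = [180.0, 300.0, 500.0]       # m/s

travel_time = sum(h / v for h, v in zip(thickness, vs))
vs30 = 30.0 / travel_time
print(f"Vs,30 = {vs30:.0f} m/s")  # ~343 m/s; the EC8 ground type follows from the Vs,30 ranges
```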
Pitilakis et al. (2012) used an extended strong motion database compiled in the
framework of SHARE project (Giardini et al. 2013) to validate the spectral shapes
proposed in EC8 and to estimate improved soil amplification factors for the existent
soil classes of Eurocode 8 for a potential use in an EC8 update (Table 3.1). The soil
factors were estimated using a logic tree approach to account for the epistemic
uncertainties. The major differences in S factor values were found for soil category
C. For soil classes D and E, due to the insufficient datasets, the S factors of EC8
remain unchanged with a prompt for site-specific ground response analyses.
In order to further improve design spectra and soil factors Pitilakis et al. (2013)
proposed a new soil classification system that includes soil type, stratigraphy,
thickness, stiffness and fundamental period of soil deposit (T0) and average shear
wave velocity of the entire soil deposit (Vs,av). They compiled an important subset
of the SHARE database, containing records from sites that have a well-documented
soil profile, in terms of dynamic properties, down to the "seismic" bedrock
(Vs > 800 m/s). The soil classes of the new classification scheme are
illustrated in comparison to EC8 soil classes in Fig. 3.3.
The proposed normalized acceleration response spectra were evaluated by
fitting the general spectral equations of EC8 closer to the 84th percentile, in
order to account as much as possible for the uncertainties associated with the
nature of the problem. Figure 3.4 is a representative plot, illustrating the median,
16th and 84th percentiles, and the proposed design normalized acceleration
spectra for soil sub-class C1. It is obvious that the selection of a different
percentile would dramatically affect the proposed spectra and consequently the
Fig. 3.3 Simplified illustration of ground types according to (a) EC8 and (b) the new classifica-
tion scheme of Pitilakis et al. (2013)
Fig. 3.4 Normalized elastic acceleration response spectra for soil class C1 of the classification
system of Pitilakis et al. (2013) for Type 2 seismicity (left) and Type 1 seismicity (right). Red lines
represent the proposed spectra. The range of the 16th to 84th percentile is illustrated as a gray area
demand spectra, the performance points and the damages. While there is no
rigorous argument for choosing the median, the 84th percentile, or a value close
to it, sounds more reasonable.
The proposed new elastic acceleration response spectra, normalized to the
design ground acceleration at rock-site conditions PGArock, are illustrated in
Fig. 3.5. Dividing the elastic response spectrum of each soil class with the
corresponding response spectrum for rock, period-dependent amplification factors
can be estimated.
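A minimal sketch of this ratio operation is given below; the spectral ordinates are placeholders, not the values of Pitilakis et al. (2013):

```python
import numpy as np

# Hedged sketch: period-dependent amplification as the ratio of the soil-class
# spectrum to the rock spectrum. Ordinates below are invented placeholders.
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])    # s
sa_soil = np.array([2.2, 2.9, 2.4, 1.30, 0.55])  # normalized Sa, soil class
sa_rock = np.array([1.9, 2.5, 1.5, 0.75, 0.30])  # normalized Sa, rock

amplification = sa_soil / sa_rock
for T, amp in zip(periods, amplification):
    print(f"T = {T:.1f} s: amplification = {amp:.2f}")
```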
Fig. 3.5 Type 2 (left) and Type 1 (right) elastic acceleration response spectra for the classification
system of Pitilakis et al. (2013)
Nature and earthquakes are unpredictable both in the short and the long term, especially
in the case of extreme or "rare" events. Traditionally, seismic hazard is estimated as
time-independent, which is probably not true. We all know that after a strong earthquake
it is rather unlikely that another strong earthquake will happen within a short time on the
same fault. Cases like the sequence of Christchurch earthquakes in
New Zealand or, more recently, in Cephalonia Island in Greece are rather exceptions
that prove the general rule, if there is one.
Exposure certainly varies with time, normally increasing. Vulnerability
also varies with time, increasing or decreasing (for example after mitigation
countermeasures or post-earthquake retrofitting have been undertaken). On the
other hand, aging effects and material degradation with time increase the vulnera-
bility (Pitilakis et al. 2014b). Consequently, the risk cannot be time-independent.
Figure 3.6 sketches the whole process.
For the time being, time-dependent seismic hazard and risk assessment are at a
very early stage. However, even if rigorous models are developed in the near future,
the question still remains: is it realistic to imagine that time-dependent
hazard could ever be introduced in engineering practice and seismic codes? If it
ever happens, it will have a profound political, societal and economic impact.
Fig. 3.6 Schematic illustration of time dependent seismic hazard, exposure, vulnerability and risk
(After J. Douglas et al. in REAKT)
Fig. 3.7 Conceptual relationship between seismic hazard intensity and structural performance
(From Krawinkler and Miranda (2004), courtesy W. Holmes, G. Deierlein)
structure type. These thresholds are qualitative and are given as a general outline (Fig. 3.7). The user could modify them accordingly, considering the particular conditions of the structure, the network or the component under study. The selection of any value for these thresholds inevitably introduces uncertainties, which affect the target performance and, finally, the estimation of damages and losses.
Methods for deriving fragility curves generally model the damage on a discrete
damage scale. In empirical procedures, the scale is used in reconnaissance efforts to
produce post-earthquake damage statistics and is rather subjective. In analytical
procedures the scale is related to limit state mechanical properties that are described
by appropriate indices, such as, for example, displacement capacity (e.g., inter-story drift) in the case of buildings or bridge piers. For other elements at risk the
definition of the performance levels or limit states may be more vague and follow
other criteria related, for example in the case of pipelines, to the limit strength
characteristics of the material used in each typology.
The definition and consequently the selection of the damage thresholds, i.e.,
limit states, are among the main sources of uncertainties because they rely on rather
subjective criteria. A considerable effort has been made in SYNER-G (Pitilakis
et al. 2014a) to homogenize the criteria as much as possible.
Measuring seismic performance (risk) through economic losses and downtime (and business interruption) introduces the idea of measuring risk through a new, more general concept: resilience.
Resilience, when referring to a single element at risk or a system subjected to natural and/or man-made hazards, usually denotes its capability to recover its functionality after the occurrence of a disruptive event. It is affected by attributes of the system, namely robustness (for example, the residual functionality right after the disruptive event), rapidity (the recovery rate), resourcefulness and redundancy (Fig. 3.8).
Fig. 3.8 Schematic representation of seismic resilience concept (Bruneau et al. 2003)
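The recovery-based definition sketched in Fig. 3.8 can be made operational in a few lines. The following sketch is a simplified reading of the Bruneau et al. (2003) concept, with entirely hypothetical numbers: it integrates the loss of functionality over the recovery horizon to obtain a resilience-loss measure.

```python
import numpy as np

# Hypothetical functionality curve Q(t), in percent, after a disruptive event.
t = np.array([0.0, 0.0, 5.0, 30.0, 90.0, 180.0])      # days since the event
q = np.array([100.0, 55.0, 60.0, 75.0, 95.0, 100.0])  # functionality (%)
# The drop to 55 % right after the event reflects the robustness of the system;
# the slope of the recovery branch reflects its rapidity (recovery rate).

# Resilience loss: area between full functionality (100 %) and Q(t)
# over the recovery horizon (trapezoidal integration).
deficit = 100.0 - q
resilience_loss = float(np.sum(0.5 * (deficit[1:] + deficit[:-1]) * np.diff(t)))

print(f"Resilience loss = {resilience_loss:.0f} %-days over {t[-1]:.0f} days")
```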
It is also obvious that resilience has very strong societal, economic and political
components, which amplify the uncertainties.
Accepting resilience as the way to measure and quantify performance indicators, and implicitly fragility and vulnerability, means that we introduce a new, complicated world of uncertainties, in particular when we move from the resilience of a single asset, e.g., a building, to the integrated risk of a whole city, with all its infrastructures, utility systems and economic activities.
Fig. 3.9 Schematic example of estimated damages when using the median for UHS for rock, soil
amplification factors, capacity curve and fragility curves
Fig. 3.10 Schematic example of estimated damages when using the median ± 1 standard deviation (depending on which one is the more "conservative" or reasonable) for UHS for rock, soil amplification factors, capacity curve and fragility curves
To further highlight the inevitable scatter in the current risk assessment of physical assets, we use as an example the seismic risk assessment and the estimated damages of the building stock in an urban area, in particular the city of Thessaloniki, Greece. Thessaloniki is the second largest city in Greece, with about one million inhabitants. It has a long seismic history of devastating earthquakes, the most recent one occurring in 1978 (Mw = 6.5, R = 25 km). Since then, many studies have been performed in the city to estimate the seismic hazard and to assess the seismic risk. Due to the very good knowledge of the different parameters, the city has been selected as a pilot case study in several major research projects of the European Union (SYNER-G, SHARE, RISK-UE, LessLoss, etc.).
The study area considered in the present application (Fig. 3.11) covers the central
municipality of Thessaloniki. With a total population of 380,000 inhabitants and
about 28,000 buildings of different typologies (mainly reinforced concrete), it is divided into 20 sub-city districts (SCD) (http://www.urbanaudit.org). Soil conditions
are very well known (e.g., Anastasiadis et al. 2001). Figures 3.12 and 3.13 illustrate
the classification of the study area based on the classification schemes of EC8 and
Pitilakis et al. (2013) respectively. The probabilistic seismic hazard (PSHA) is
estimated applying SHARE methodology (Giardini et al. 2013), with its rigorous
treatment of aleatory and epistemic uncertainties. The PSHA with a 10 % proba-
bility of exceedance in 50 years and the associated UHS have been estimated for
outcrop conditions. The estimated rock UHS has been then properly modified to
account for soil conditions applying adequate period-dependent amplification fac-
tors. Three different amplification factors have been used: the current EC8 factors
(Hazard 1), the improved ones (Pitilakis et al. 2012) (Hazard 2) and the new ones
based on a more detailed soil classification scheme (Pitilakis et al. 2013) (Hazard 3)
(see Sect. 3.7.3). Figure 3.14 presents the computed UHS for soil type C (or C1
according to the new classification scheme). Vulnerability is expressed through
appropriate fragility curves for each building typology (Pitilakis et al. 2014a).
Damages and the associated probability that a building of a specific typology exceeds a specific damage state have been calculated with the Capacity Spectrum Method
(Freeman 1998; Fajfar and Gaspersic 1996).
The detailed building inventory for the city of Thessaloniki, which includes
information about material, code level, number of storeys, structural type and
volume for each building, allows a rigorous classification into different typologies according to the SYNER-G classification, based on a Building Typologies Matrix
representing practically all common RC building types in Greece (Kappos
Fig. 3.11 Municipality of Thessaloniki. Study area; red lines illustrate Urban Audit Sub-City
Districts (SCDs) boundaries
et al. 2006). The building inventory comprises 2,893 building blocks with 27,738
buildings, the majority of which (25,639) are reinforced concrete (RC) buildings.
The buildings are classified based on their structural system, height and level of
seismic design (Fig. 3.15). Regarding the structural system, both frames and frame-
with-shear walls (dual) systems are included, with a further distinction based on the
configuration of the infill walls. Regarding the height, three subclasses are consid-
ered (low-, medium- and high-rise). Finally, as far as the level of seismic design is
concerned, four different levels are considered:
• No code (or pre-code): R/C buildings with very low level of seismic design and
poor quality of detailing of critical elements.
• Low code: R/C buildings with low level of seismic design.
• Medium code: R/C buildings with medium level of seismic design (roughly
corresponding to post-1980 seismic code and reasonable seismic detailing of
R/C members).
Fig. 3.15 Classification of the RC buildings of the study area (Kappos et al. 2006). The first letter
of each building type refers to the height of the building (L low, M medium, H high), while the
second letter refers to the seismic code level of the building (N no, L low, M medium, H high)
• High code: R/C buildings with enhanced level of seismic design and ductile
seismic detailing of R/C members according to the new Greek Seismic Code
(similar to Eurocode 8).
The fragility functions used (in terms of spectral displacement Sd) were derived through classical inelastic pushover analysis. Bilinear pushover curves were
constructed for each building type, so that each curve is defined by its yield and
ultimate capacity. Then they were transformed into capacity curves (expressing
spectral acceleration versus spectral displacement). Fragility curves were finally
derived from the corresponding capacity curves, by expressing the damage states in
terms of displacements along the capacity curves (see Sect. 3.6 and D'Ayala et al. 2012).
Each fragility curve is defined by a median value of spectral displacement and a
standard deviation. Although the standard deviation of the curves is not constant,
for the present application a standard deviation equal to 0.4 was assigned to all
fragility curves, due to a limitation of the model used to perform the risk analyses.
This hypothesis will be further discussed later in this section.
Five damage states were used in terms of Sd: DS1 (slight), DS2 (moderate), DS3
(substantial to heavy), DS4 (very heavy) and DS5 (collapse) (Table 3.2). According to this classification, a spectral displacement of 2 cm or even lower can bring ordinary RC structures into the moderate (DS2) damage state, which is certainly a conservative assumption and one that, among other things, penalizes the seismic risk assessment.
The physical damages of the buildings have been estimated using the open-
source software EarthQuake Risk Model (EQRM http://sourceforge.net/projects/
eqrm, Robinson et al. 2005), developed by Geoscience Australia. The software is
based on the HAZUS methodology (FEMA and NIBS 1999; FEMA 2003) and has
Table 3.2 Damage states and spectral displacement thresholds (D'Ayala et al. 2012)

Damage state | Bare frames, bare dual | Infilled frames with Sdu,bare < 1.1 Sdu | Infilled frames with Sdu,bare ≥ 1.1 Sdu; infilled dual – shear wall drop strength | Infilled dual – infill walls failure
DS1 | 0.7 Sdy | 0.7 Sdy | |
DS2 | Sdy + 0.05 (Sdu − Sdy) | Sdy + 0.05 (Sdu − Sdy) | |
DS3 | Sdy + (1/3) (Sdu − Sdy) | Sdy + (1/2) (Sdu − Sdy) | Sdy + (1/2) (Sdu − Sdy) | 0.9 Sdu
DS4 | Sdy + (2/3) (Sdu − Sdy) | Sdu | Sdu | Sdu,bare
DS5 | Sdu | Sdu,bare | 1.3 Sdu | 1.3 Sdu,bare

Sdy spectral displacement at yield capacity
Sdu spectral displacement at ultimate capacity
been properly modified so that it can be used for any region of the world (Crowley et al. 2010). The method is based on the Capacity Spectrum Method. The so-called "performance points", after being properly adjusted to account for the elastic and hysteretic damping of each structure, have been overlaid with the relevant fragility curves in order to compute the damage probability for each of the different damage states and for each building type.
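A minimal sketch of this last step is given below: lognormal fragility curves, defined by median spectral displacements for each damage state and the common β of 0.4 adopted in this application, are evaluated at a performance-point displacement and converted into discrete damage-state probabilities. The median values are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical median spectral displacements (cm) defining the fragility curves
# for damage states DS1-DS5 of one building type; beta = 0.4 for all curves,
# as assumed in the application described in the text.
sd_medians = np.array([0.7, 1.8, 4.0, 7.5, 11.0])  # DS1..DS5 thresholds (cm)
beta = 0.4

def damage_state_probabilities(sd_pp):
    """Discrete damage-state probabilities at performance-point displacement sd_pp (cm)."""
    # P(DS >= ds_i | Sd) from the lognormal fragility curves
    p_exceed = norm.cdf(np.log(sd_pp / sd_medians) / beta)
    # Discrete probabilities: P(None), P(DS1), ..., P(DS5)
    bounded = np.concatenate(([1.0], p_exceed, [0.0]))
    return bounded[:-1] - bounded[1:]

probs = damage_state_probabilities(sd_pp=3.2)
labels = ["None", "DS1", "DS2", "DS3", "DS4", "DS5"]
for lab, p in zip(labels, probs):
    print(f"{lab:4s}: {p:5.1%}")
```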
The method relies on two main parameters: the demand spectra (properly modified to account for the inelastic behaviour of the structure), which are derived from the hazard analysis, and the capacity curve. The latter is not user-defined; it is automatically estimated by the code using the building parameters supplied by the user. The capacity curve is defined by two points, the yield point (Sdy, Say) and the ultimate point (Sdu, Sau), and is composed of three parts: a straight line to the
yield point (representing elastic response of the building), a curved part from the
yield point to the ultimate point expressed by an exponential function and a
horizontal line starting from the ultimate point (Fig. 3.16). The yield point and
ultimate point are defined in terms of the building parameters (Robinson et al. 2005)
introducing inevitably several extra uncertainties, especially in case of existing
buildings, designed and constructed several decades ago. Overall, the following
data are necessary to implement the Capacity Spectrum Method in EQRM: height
of the building, natural elastic period, design strength coefficient, fraction of
building weight participating in the first mode, fraction of the effective building
height to building displacement, over-strength factors, ductility factor and damping
degradation factors for each building or building class. All these introduce several uncertainties, which are difficult to quantify in a rigorous way, mainly because they are mostly related to the difference between any real RC structure belonging to a certain typology and the idealized model.
Fig. 3.16 Typical capacity curve in EQRM software, defined by the yield point (Sdy, Say) and the ultimate point (Sdu, Sau) (Modified after Robinson et al. (2005))
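The three-branch shape described above can be sketched as follows. The exact expression used by EQRM for the curved portion is given in Robinson et al. (2005); here an illustrative exponential transition between the yield and ultimate points is used instead, so both the numbers and the functional form of the middle branch are assumptions.

```python
import numpy as np

def capacity_curve(sd, sdy, say, sdu, sau):
    """Illustrative EQRM-style capacity curve: linear to the yield point,
    an exponential transition to the ultimate point, then a plateau."""
    sd = np.asarray(sd, dtype=float)
    sa = np.empty_like(sd)

    elastic = sd <= sdy                      # straight line up to (Sdy, Say)
    plateau = sd >= sdu                      # horizontal branch beyond (Sdu, Sau)
    curved = ~elastic & ~plateau             # transition between the two points

    sa[elastic] = say * sd[elastic] / sdy
    sa[plateau] = sau
    # Exponential transition normalised so that it passes exactly through
    # (Sdy, Say) and (Sdu, Sau); the shape constant k = 3 is arbitrary.
    k = 3.0
    x = (sd[curved] - sdy) / (sdu - sdy)
    sa[curved] = say + (sau - say) * (1.0 - np.exp(-k * x)) / (1.0 - np.exp(-k))
    return sa

# Hypothetical yield and ultimate points (cm, g) for a mid-rise RC frame.
sd_grid = np.linspace(0.0, 15.0, 7)
print(capacity_curve(sd_grid, sdy=1.5, say=0.20, sdu=9.0, sau=0.30))
```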
For each building type in each building block, the probabilities of slight, moderate, extensive and complete damage were calculated. These probabilities were then multiplied by the total floor area of the buildings of the specific building block that are classified into the specific building type, in order to estimate the floor area of this building type that will suffer each damage state. Repeating this for all building blocks belonging to the same sub-city district (SCD) and for all building types, the total floor area of each building type that will suffer each damage state in the specific SCD can be calculated (Fig. 3.17). The total percentages of damaged floor area per damage state for all SCDs and for the three hazard analyses illustrated in the previous figures are given in Table 3.3.
The economic losses were estimated through the mean damage ratio (MDR) (Table 3.4), then multiplying this value by an estimated replacement cost of 1,000 €/m2 (Table 3.5).
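The aggregation just described, and the loss estimate built upon it, can be summarised in a few lines. In the sketch below the damage-state probabilities, floor areas and central damage ratios are hypothetical; only the replacement cost of 1,000 €/m2 is taken from the text.

```python
import numpy as np

# Hypothetical damage-state probabilities (None, Slight, Moderate, Extensive, Complete)
# for two building types within one building block, and their total floor areas (m2).
probs = {
    "RC4.2MM": np.array([0.05, 0.12, 0.43, 0.25, 0.15]),
    "RC3.1LL": np.array([0.02, 0.10, 0.40, 0.28, 0.20]),
}
floor_area = {"RC4.2MM": 4200.0, "RC3.1LL": 1800.0}   # m2

# Central damage ratios per damage state (hypothetical, HAZUS-like values).
damage_ratio = np.array([0.0, 0.05, 0.20, 0.55, 1.00])
replacement_cost = 1000.0   # EUR/m2, as assumed in the text

# Floor area in each damage state, summed over building types (one block here;
# repeating over all blocks of an SCD gives the district totals).
area_per_ds = sum(p * a for p, a in zip(probs.values(), floor_area.values()))
total_area = sum(floor_area.values())

mdr = float(np.dot(area_per_ds / total_area, damage_ratio))   # mean damage ratio
loss = mdr * total_area * replacement_cost                    # EUR

print("Damaged floor area per DS (m2):", np.round(area_per_ds, 1))
print(f"MDR = {mdr:.1%}, estimated loss = {loss / 1e6:.2f} million EUR")
```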
The observed differences in the damage assessment and losses are primarily attributed to the numerous uncertainties associated with the hazard models, to the way the uncertainties are treated and to the number of standard deviations accepted in each step of the analysis. Higher site amplification factors, associated for example with the median value plus one standard deviation, result in increased building damages and consequently higher economic losses. The way inelastic demand spectra are estimated and the difference between computed UHS and real earthquake records may also affect the final result (Fig. 3.18).
Despite the important influence of the hazard parameters, there are several other
sources of uncertainties related mainly to the methods used. The effect of some of
Table 3.3 Percentages of damaged floor area per damage state for hazard cases 1–3, for a mean
return period of 475 years
Hazard 1 (%) Hazard 2 (%) Hazard 3 (%)
No 7.4 6.4 4.3
Slight [D1] 17.6 12.9 11.1
Moderate [D2] 54.4 43.9 42.2
Extensive [D3] 18.9 22.4 20.3
Complete [D5] 1.7 14.4 22.1
Table 3.4 Mean damage ratios for hazard cases 1–3, for a mean return period of 475 years
Hazard 1 (%) Hazard 2 (%) Hazard 3 (%)
MDR 7.94 18.28 23.87
Table 3.5 Economic losses for hazard cases 1–3, for a mean return period of 475 years, assuming
an average replacement cost equal to 1,000 €/m2 (in billions €)
Hazard 1 Hazard 2 Hazard 3
Economic losses 2.7 6.2 8.1
Table 3.6 Inelastic displacement demand computed with different methods and total physical
damages for SCD16 and Hazard 3, for a mean return period of 475 years in terms of the percentage
of damage per damage state using various methodologies for the reduction of the elastic spectrum
Method | dPP (cm) | DS1 (%) | DS2 (%) | DS3 (%) | DS4 (%) | DS5 (%)
ATC-40_Hazus, k = 0.33 (Hazus_k = 0.333) | 8.0 | 0.00 | 0.00 | 0.94 | 35.99 | 63.08
ATC-40_Hazus, k = 1 (Hazus_k = 1) | 4.2 | 0.00 | 0.04 | 22.85 | 66.98 | 10.13
Newmark and Hall (1982) (NH) | 2.5 | 0.02 | 1.90 | 68.95 | 28.60 | 0.53
Krawinkler and Nassar (1992) (KN) | 2.2 | 0.10 | 5.01 | 78.54 | 16.21 | 0.14
Vidic et al. (1994) (VD) | 2.2 | 0.06 | 3.83 | 76.86 | 19.06 | 0.20
Miranda and Bertero (1994) (MB) | 1.8 | 0.31 | 9.99 | 81.14 | 8.53 | 0.04
Duration of shaking
The effect of the duration of shaking is introduced through the k factor. It is assumed that the shorter the duration, the higher the damping value should be. Applying this approach to the study case, it is found that the effective damping for short earthquake duration is equal to 45 %, while the effective damping for moderate earthquake duration is equal to 25 %. The differences are too large to be ignored, underlining the importance of a rigorous selection of this single parameter. Figure 3.20 presents the damages for SCD16 in terms of the percentage of damage per damage state considering short, moderate or long duration of the ground shaking.
EQRM versus N2 method (Fajfar 1999)
There are various methodologies that can be used for the vulnerability assessment and thus for building damage estimation (e.g., Capacity Spectrum Method, N2 Method). The CSM (ATC-40 1996), which is also utilized in EQRM, evaluates the seismic performance of structures by comparing structural capacity with seismic demand curves. The key to this method is the reduction of the 5 %-damped elastic response spectra of the ground motion, using appropriate damping-based reduction factors, to take into account the inelastic behaviour of the structure under consideration. This is the main difference of the EQRM methodology compared to the "N2" method (Fajfar 1999, 2000), in which the inelastic demand spectrum is obtained from code-based elastic design spectra using ductility-based reduction factors. The computed damages in SCD16 for Hazard 3 using the EQRM and N2 methodologies are depicted in Fig. 3.21. The differences speak for themselves.
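The distinction between the two approaches can be illustrated by applying the two types of reduction factor to the same elastic ordinate. The sketch below uses the Eurocode 8 damping correction factor η = √(10/(5 + ξ)) as a stand-in for the damping-based reduction of the CSM (the actual ATC-40 spectral reduction factors differ) together with the bilinear R–μ–T relation of the N2 method; it is a conceptual comparison, not a reproduction of either implementation.

```python
import numpy as np

def damping_reduction(xi_percent):
    """Damping-based reduction (EC8-type correction factor eta), used here as a
    stand-in for the CSM reduction of the 5 %-damped elastic spectrum."""
    return np.sqrt(10.0 / (5.0 + xi_percent))

def ductility_reduction(mu, T, Tc=0.5):
    """Ductility-based reduction factor R_mu of the N2 method (Fajfar 1999, 2000)."""
    return (mu - 1.0) * T / Tc + 1.0 if T < Tc else mu

sa_elastic = 0.60        # 5 %-damped elastic spectral acceleration (g), hypothetical
T, mu, xi = 0.8, 3.0, 20.0

sa_csm_like = sa_elastic * damping_reduction(xi)       # damping-based (CSM-like)
sa_n2 = sa_elastic / ductility_reduction(mu, T)        # ductility-based (N2)

print(f"Damping-based reduction : Sa = {sa_csm_like:.3f} g (eta = {damping_reduction(xi):.2f})")
print(f"Ductility-based (N2)    : Sa = {sa_n2:.3f} g (R_mu = {ductility_reduction(mu, T):.2f})")
```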
Uncertainties in the Fragility Curves
Figure 3.22 shows the influence of the beta (β) factor of the fragility curves. EQRM assumes a beta factor equal to 0.4; however, the selection of a different, equally plausible value results in a very different damage level.
Fig. 3.19 Seismic risk (physical damages) in SCD16 for Hazard 3 and mean return period of 475 years in terms of the percentage of damage per damage state using (a) ATC-40 methodology combined with Hazus for k = 0.333, (b) ATC-40 methodology combined with Hazus for k = 1, (c) Newmark and Hall (1982), (d) Krawinkler and Nassar (1992), (e) Vidic et al. (1994) and (f) Miranda and Bertero (1994)
The main conclusion that one could draw from this short and fragmented discussion is that we need a re-thinking of the whole analysis chain, from hazard assessment to consequences and loss assessment. The uncertainties involved in every step of the process are significant and strongly affect the final result. It is probably time to change the paradigm, because so far we just use the same ideas and models, trying to
Fig. 3.20 Computed damages for SCD16 for Hazard 3 and mean return period of 475 years in
terms of the percentage of damage per damage state considering (a) short (b) moderate and (c)
long duration of the ground shaking
Fig. 3.21 Computed damages in SCD16 for Hazard 3 and mean return period of 475 years in terms of the percentage of damage per damage state using the EQRM and N2 methodologies
Fig. 3.22 Seismic risk for SCD16 for Hazard 3 and mean return period of 475 years in terms of the
percentage of damage per damage state using EQRM with different β factor
improve them (often making them very complex), not always satisfactorily. Considering the starting point of the various models and approaches and the huge efforts made so far, the progress globally is rather modest. More importantly, in many cases the uncertainties have increased, not decreased, a fact that has serious implications for the reliability and efficiency of the models regarding the assessment of physical damages, in particular at large scale, e.g., city scale. Alienated end-users are more prone to serious mistakes and wrong decisions; wrong in the sense of extreme conservatism, high cost or unacceptable safety margins. It should be admitted, however, that our know-how has increased considerably, and hence there is the necessary scientific maturity for a qualitative rebound towards a new global paradigm reducing partial and global uncertainties.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
Abrahamson NA (2006) Seismic hazard assessment: problems with current practice and future
developments. Proceedings of First European Conference on Earthquake Engineering and
Seismology, Geneva, September 2006, p 17
Alexander D (2000) Confronting catastrophe: new perspectives on natural disasters. Oxford
University Press, New York, p 282
Anastasiadis A, Raptakis D, Pitilakis K (2001) Thessaloniki’s detailed microzoning: subsurface
structure as basis for site response analysis. Pure Appl Geophys 158:2597–2633
ATC-40 (1996) Seismic evaluation and retrofit of concrete buildings. Applied Technology Coun-
cil, Redwood City
Bommer JJ, Abrahamson N (2006) Review article “Why do modern probabilistic seismic hazard
analyses often lead to increased hazard estimates?”. Bull Seismol Soc Am 96:1967–1977.
doi:10.1785/0120070018
Borcherdt RD, Glassmoyer G (1992) On the characteristics of local geology and their influence on
ground motions generated by the Loma Prieta earthquake in the San Francisco Bay region,
California. Bull Seismol Soc Am 82:603–641
Bruneau M, Chang S, Eguchi R, Lee G, O’Rourke T, Reinhorn A, Shinozuka M, Tierney K,
Wallace W, Von Winterfelt D (2003) A framework to quantitatively assess and enhance the
seismic resilience of communities. EERI Spectra J 19(4):733–752
CEN (European Committee for Standardization) (2004) Eurocode 8: Design of structures for
earthquake resistance, Part 1: General rules, seismic actions and rules for buildings. EN
1998–1:2004. European Committee for Standardization, Brussels
Cornell CA, Jalayer F, Hamburger RO, Foutch DA (2002) Probabilistic basis for 2000
SAC/FEMA steel moment frame guidelines. J Struct Eng 128(4):526–533
Crowley H, Colombi M, Crempien J, Erduran E, Lopez M, Liu H, Mayfield M, Milanesi (2010)
GEM1 Seismic Risk Report Part 1, GEM Technical Report, Pavia, Italy 2010–5
D’Ayala D, Kappos A, Crowley H, Antoniadis P, Colombi M, Kishali E, Panagopoulos G, Silva V
(2012) Providing building vulnerability data and analytical fragility functions for PAGER,
Final Technical Report, Oakland, California
Fajfar P (1999) Capacity spectrum method based on inelastic demand spectra. Earthq Eng Struct
Dyn 28(9):979–993
Fajfar P (2000) A nonlinear analysis method for performance-based seismic design. Earthq
Spectra 16(3):573–592
Fajfar P, Gaspersic P (1996) The N2 method for the seismic damage analysis for RC buildings.
Earthq Eng Struct Dyn 25:23–67
FEMA, NIBS (1999) HAZUS99 User and technical manuals. Federal Emergency Management
Agency Report: HAZUS 1999, Washington DC
FEMA (2003) HAZUS-MH Technical Manual. Federal Emergency Management Agency,
Washington, DC
FEMA 273 (1996) NEHRP guidelines for the seismic rehabilitation of buildings — ballot version.
U.S. Federal Emergency Management Agency, Washington, DC
FEMA 356 (2000) Prestandard and commentary for the seismic rehabilitation of buildings.
U.S. Federal Emergency Management Agency, Washington, DC
Freeman SA (1998) The capacity spectrum method as a tool for seismic design. In: Proceedings of
the 11th European Conference on Earthquake Engineering, Paris
Giardini D, Woessner J, Danciu L, Crowley H, Cotton F, Gruenthal G, Pinho R, Valensise G,
Akkar S, Arvidsson R, Basili R, Cameelbeck T, Campos-Costa A, Douglas J, Demircioglu MB,
Erdik M, Fonseca J, Glavatovic B, Lindholm C, Makropoulos K, Meletti F, Musson R,
Pitilakis K, Sesetyan K, Stromeyer D, Stucchi M, Rovida A (2013) Seismic Hazard Harmo-
nization in Europe (SHARE): Online Data Resource. doi:10.12686/SED-00000001-SHARE
Kappos AJ, Panagopoulos G, Penelis G (2006) A hybrid method for the vulnerability assessment
of R/C and URM buildings. Bull Earthq Eng 4(4):391–413
Krawinkler H, Miranda E (2004) Performance-based earthquake engineering. In: Bozorgnia Y,
Bertero VV (eds) Earthquake engineering: from engineering seismology to performance-based
engineering, chapter 9. CRC Press, Boca Raton, pp 9.1–9.59
Krawinkler H, Nassar AA (1992) Seismic design based on ductility and cumulative damage
demands and capacities. In: Fajfar P, Krawinkler H (eds) Nonlinear seismic analysis and
design of 170 reinforced concrete buildings. Elsevier Applied Science, New York, pp 23–40
LessLoss (2007) Risk mitigation for earthquakes and landslides, Research Project, European
Commission, GOCE-CT-2003-505448
MacKenzie D (1990) Inventing accuracy: a historical sociology of nuclear missile guidance. MIT
Press, Cambridge
Mackie K, Stojadinovic B (2003) Seismic demands for performance-based design of bridges,
PEER Report 2003/16. Pacific Earthquake Engineering Research Center, University of Cali-
fornia, Berkeley
Mackie K, Stojadinovic B (2005) Fragility basis for California highway overpass bridge seismic
decision making. Pacific Earthquake Engineering Research Center, University of California,
Berkeley
Mehanny SSF (2009) A broad-range power-law form scalar-based seismic intensity measure. Eng
Struct 31:1354–1368
Miranda E, Bertero V (1994) Evaluation of strength reduction factors for earthquake-resistant
design. Earthq Spectra 10(2):357–379
Newmark NM, Hall WJ (1982) Earthquake spectra and design. Earthquake Engineering Research
Institute, EERI, Berkeley
Padgett JE, Nielson BG, DesRoches R (2008) Selection of optimal intensity measures in proba-
bilistic seismic demand models of highway bridge portfolios. Earthq Eng Struct Dyn
37:711–725
Pinto PE (2014) Modeling and propagation of uncertainties. In: Pitilakis K, Crowley H, Kaynia A
(eds) SYNER-G: typology definition and fragility functions for physical elements at seismic
risk, vol 27, Geotechnical, geological and earthquake engineering. Springer, Dordrecht. ISBN
978-94-007-7872-6
Pitilakis K, Riga E, Anastasiadis A (2012) Design spectra and amplification factors for Eurocode
8. Bull Earthq Eng 10:1377–1400. doi:10.1007/s10518-012-9367-6
Pitilakis K, Riga E, Anastasiadis A (2013) New code site classification, amplification factors and
normalized response spectra based on a worldwide ground-motion database. Bull Earthq Eng
11(4):925–966. doi:10.1007/s10518-013-9429-4
Pitilakis K, Crowley H, Kaynia A (eds) (2014a) SYNER-G: typology definition and fragility
functions for physical elements at seismic risk, vol 27, Geotechnical, geological and earth-
quake engineering. Springer, Dordrecht. ISBN 978-94-007-7872-6
Pitilakis K, Karapetrou ST, Fotopoulou SD (2014b) Consideration of aging and SSI effects on
seismic vulnerability assessment of RC buildings. Bull Earthq Eng. doi:10.1007/s10518-013-
9575-8
REAKT (2014) Strategies and tools for real time earthquake and risk reduction. Research Project,
European Commission, Theme: ENV.2011.1.3.1-1, Grant agreement: 282862. http://www.
reaktproject.eu
RISK-UE (2004) An advanced approach to earthquake risk scenarios with applications to different
European towns. Research Project, European Commission, DG ΧII2001-2004, CEC: EVK4-
CT-2000-00014
Robinson D, Fulford G, Dhu T (2005) EQRM: Geoscience Australia’s earthquake risk model
technical manual Version 3.0. Geoscience Australia Record 2005/01
Selva J, Argyroudis S, Pitilakis K (2013) Impact on loss/risk assessments of inter-model variability
in vulnerability analysis. Nat Hazards 67(2):723–746. doi:10.1007/s11069-013-0616-z
SHARE (2013) Seismic Hazard Harmonization in Europe. Research Project, European Commis-
sion, ENV.2008.1.3.1.1, Grant agreement: 226769. www.share-eu.org
SYNER-G (2013) Systemic seismic vulnerability and risk analysis for buildings, lifeline networks
and infrastructures safety gain. Research Project, European Commission, ENV-2009-1-244061
Vamvatsikos D, Cornell CA (2002) Incremental dynamic analysis. Earthq Eng Struct Dyn
31:491–514
Vidic T, Fajfar P, Fischinger M (1994) Consistent inelastic design spectra: strength and displace-
ment. Earthq Eng Struct Dyn 23:507–521
Chapter 4
Variability and Uncertainty in Empirical
Ground-Motion Prediction for Probabilistic
Hazard and Risk Analyses
Peter J. Stafford
Abstract The terms aleatory variability and epistemic uncertainty mean different
things to people who routinely use them within the fields of seismic hazard and risk
analysis. This state is not helped by the repetition of loosely framed generic
definitions that are actually inaccurate. The present paper takes a closer look at the
components of total uncertainty that contribute to ground-motion modelling in
hazard and risk applications. The sources and nature of uncertainty are discussed
and it is shown that the common approach to deciding what should be included
within hazard and risk integrals and what should be pushed into logic tree formu-
lations warrants reconsideration. In addition, it is shown that current approaches to
the generation of random fields of ground motions for spatial risk analyses are
incorrect and a more appropriate framework is presented.
4.1 Introduction
Over the past few decades a very large number of empirical ground-motion models
have been developed for use in seismic hazard and risk applications throughout the
world, and these contributions to engineering seismology collectively represent a
significant body of literature. However, if one were to peruse this literature it would,
perhaps, not be obvious what the actual purpose of a ground-motion model is. A
typical journal article presenting a new ground-motion model starts with a brief
introduction, proceeds to outlining the dataset that was used, presents the functional
form that is used for the regression analysis along with the results of this analysis,
shows some residual plots and comparisons with existing models and then wraps up
with some conclusions. In a small number of cases this pattern is broken by the
authors giving some attention to the representation of the standard deviation of the
model. Generally speaking, the emphasis is very much upon the development and
behaviour of the median predictions of these models and the treatment of the
standard deviation (and its various components) is very minimal in comparison.
If it is reasonable to suspect that this partitioning of effort in presenting the model
reflects the degree of effort that went into developing the model then there are two
important problems with this approach: (1) the parameters of the model for the
median predictions are intrinsically linked to the parameters that represent the
standard deviation – they cannot be decoupled; and (2) it is well known from
applications of ground-motion models in hazard and risk applications that the
standard deviation exerts at least as much influence as the median predictions for
return periods of greatest interest.
The objective of the present article is to work against this trend by focussing
almost entirely upon the uncertainty associated with ground-motion predictions.
Note that what is actually meant by ‘uncertainty’ will be discussed in detail in
subsequent sections, but the scope includes the commonly referred to components
of aleatory variability and epistemic uncertainty. Furthermore, the important con-
siderations that exist when one moves from seismic hazard analysis into seismic
risk analysis will also be discussed.
As noted in the title of the article, the focus herein is upon empirical ground-
motion models and discussion of the uncertainties associated with stochastic
simulation-based models, or seismological models is not within the present scope.
That said, some of the concepts that are dealt with herein are equally applicable to
ground-motion models in a more general sense.
While at places in the article reference will be made to peak ground acceleration
or spectral acceleration, the issues discussed here are not limited to these intensity
measures. For the particular examples that are presented, although the extent of
various effects will be tied to the choice of intensity measure, the emphasis is upon
the underlying concept rather than the numerical results.
In both hazard and risk applications the objective is usually to determine how
frequently a particular state is exceeded. For hazard, this state is commonly a level
of an intensity measure at a site, while for risk applications the state could be related
to a level of demand on a structure, a level of damage induced by this demand, or the
cost of this damage and its repair, among others. In order to arrive at estimates of
these rates (or frequencies) of exceedance it is not currently possible to work with
empirical data related to the state of interest as a result of insufficient empirical
constraint. For example, if one wished to compute an estimate of the annual rate at
which a level of peak ground acceleration is exceeded at a site then an option in an
ideal world would be to assume that the seismogenic process is stationary and that
what has happened in the past is representative of what might happen in the future.
On this basis, counting the number of times the state was exceeded and dividing this
by the temporal length of the observation period would provide an estimate of the
exceedance rate. Unfortunately, there is not a location on the planet for which this
approach would yield reliable estimates for return periods of common interest.
To circumvent the above problem hazard and risk analyses break down the
process of estimating rates of ground-motions into two steps: (1) estimate the
rates of occurrence of particular earthquake events; and (2) estimate the rate of
exceedance of a particular state of ground motion given this particular earthquake
event. The important point to make here is that within hazard and risk applications
the role of an empirical ground-motion model is to enable this second step in which
the rate of exceedance of a particular ground-motion level is computed for a given
earthquake scenario. The manner in which these earthquake scenarios are (or can
be) characterised has a strong impact upon how the ground-motion models can be
developed. For example, if the scenario can only be characterised by the magnitude
of the event and its distance from the site then it is only meaningful to develop the
ground-motion model as a function of these variables.
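In its simplest form such a model is a function that returns the logarithmic mean and standard deviation of the intensity measure for a given (magnitude, distance) pair. The sketch below shows such a function with entirely illustrative coefficients; it does not correspond to any published model.

```python
import numpy as np

def gmm_lnSa(M, R_km):
    """Illustrative empirical ground-motion model for ln Sa (units of g):
    returns the logarithmic mean and standard deviation for a scenario (M, R).
    The coefficients are hypothetical and chosen only for demonstration."""
    c0, c1, c2, h = 1.6, 0.9, -1.3, 6.0
    mu_lnSa = c0 + c1 * (M - 6.0) + c2 * np.log(np.sqrt(R_km**2 + h**2))
    sigma_lnSa = 0.65
    return mu_lnSa, sigma_lnSa

mu, sigma = gmm_lnSa(M=6.5, R_km=20.0)
print(f"median Sa = {np.exp(mu):.3f} g, sigma_lnSa = {sigma:.2f}")
```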
To make this point more clear, consider the discrete representation of the
standard hazard integral for a site influenced by a single seismic source:
\lambda_{Y>y^*} = \nu \sum_{k=1}^{K} \sum_{j=1}^{J} P\left[Y > y^* \mid m_j, r_k\right] P\left[M = m_j, R = r_k\right]    (4.1)

P\left[Y > y^*\right] = \frac{\lambda_{Y>y^*}}{\nu} = \sum_{k=1}^{K} \sum_{j=1}^{J} P\left[Y > y^* \mid m_j, r_k\right] P\left[M = m_j, R = r_k\right] = \int_{y^*}^{\infty} f_Y(y)\,dy = \iint \left[\int_{y^*}^{\infty} f_{Y \mid m,r}(y \mid m,r)\,dy\right] f_{M,R}(m,r)\,dm\,dr    (4.2)
where the moments of the distribution are specific to the scenario in question, i.e., μ_lnSa ≡ μ_lnSa(m, r, ...) and σ_lnSa ≡ σ_lnSa(m, r, ...). The probability of exceeding a given level of motion for a scenario is therefore defined using the cumulative standard normal distribution Φ(z):

P\left[Sa > Sa^* \mid m, r, \ldots\right] = 1 - \Phi\left(\frac{\ln Sa^* - \mu_{\ln Sa}}{\sigma_{\ln Sa}}\right)    (4.4)
The logarithmic mean μ_lnSa and standard deviation σ_lnSa for a scenario would differ for hazard and risk analyses, as in the former case one deals with the marginal distribution of the motions conditioned upon the given scenario, while in the
latter case one works with the conditional distribution of the motions, conditioned
upon both the given scenario and the presence of a particular event term for the
scenario. That is, in portfolio risk analysis one works at the level of inter-event
variability and intra-event variability while for hazard analysis one uses the total
variability.
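To make the use of Eqs. (4.1) and (4.4) concrete, the following sketch evaluates the double sum for a small, hypothetical magnitude-distance grid; the scenario probabilities, the ground-motion coefficients and the activity rate are all invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical scenario grid and joint probabilities P[M = m_j, R = r_k]
mags = np.array([5.5, 6.0, 6.5, 7.0])
dists = np.array([10.0, 20.0, 40.0])                   # km
p_mr = np.full((len(mags), len(dists)), 1.0 / 12.0)    # uniform, for illustration

nu = 0.05          # annual rate of earthquakes above the minimum magnitude (assumed)
sa_star = 0.2      # target spectral acceleration (g)

def mu_sigma_lnSa(m, r):
    """Illustrative logarithmic mean and standard deviation (not a published GMM)."""
    mu = 1.6 + 0.9 * (m - 6.0) - 1.3 * np.log(np.sqrt(r**2 + 36.0))
    return mu, 0.65

# Eq. (4.1): lambda(Y > y*) = nu * sum_j sum_k P[Y > y* | m_j, r_k] P[M = m_j, R = r_k]
lam = 0.0
for j, m in enumerate(mags):
    for k, r in enumerate(dists):
        mu, sigma = mu_sigma_lnSa(m, r)
        # Eq. (4.4): exceedance probability for the scenario
        p_exceed = 1.0 - norm.cdf((np.log(sa_star) - mu) / sigma)
        lam += nu * p_exceed * p_mr[j, k]

print(f"Annual rate of exceedance of Sa* = {sa_star} g: {lam:.2e}")
```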
An empirical ground-motion model must provide values of both the logarithmic
mean μln Sa and the standard deviation σ ln Sa in order to enable the probability
calculations to be made and these values must be defined in terms of the predictor
variables M and R, among potentially others. Both components of the distribution
directly influence the computed probabilities, but can exert greater or lesser influ-
ence upon the probability depending upon the particular value of ln Sa *.
Equation (4.4) is useful to enable one to understand how the effects of bias in
ground-motion models would influence the contributions to hazard and risk esti-
mates. The computation of probabilities of exceedance is central to both cases.
Imagine that we assume that any given ground-motion model is biased for a
particular scenario in that the predicted median spectral accelerations differ from
an unknown true value by a factor γ μ and that the estimate of the aleatory variability
also differs from the true value by a factor of γ σ . To understand the impact of these
biases upon the probability computations we can express Eq. (4.4) with explicit
Fig. 4.1 Illustration of the effect that a bias in the logarithmic standard deviation has upon the computation of probabilities of exceedance. The left panel corresponds to γ_σ = 2 while the right panel shows γ_σ = 1/2
inclusion of these bias factors, as in Eq. (4.5). Now we recognise that the probability that we compute is an estimate and denote this as P̂.

\hat{P}\left[Sa > Sa^* \mid m, r, \ldots\right] = 1 - \Phi\left(\frac{\ln Sa^* - \ln\gamma_\mu - \mu_{\ln Sa}}{\gamma_\sigma \sigma_{\ln Sa}}\right)    (4.5)
This situation is actually much closer to reality than Eq. (4.4). For many scenarios the predictions of motions will be biased to some unknown degree and it is important to understand how sensitive our results are to these potential biases. The influence of the potential bias in the logarithmic standard deviation is shown in Fig. 4.1. The case shown here corresponds to an exaggerated example in which the bias factor is either γ_σ = 2 or γ_σ = 1/2.
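The effect illustrated in Fig. 4.1 can be reproduced directly from Eq. (4.5). The sketch below computes the biased exceedance probability for a range of critical epsilon values ε* = (ln Sa* − μ_lnSa)/σ_lnSa of the unbiased model, for γ_σ = 2 and γ_σ = 1/2; all numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def biased_exceedance(eps_star, gamma_mu=1.0, gamma_sigma=1.0, sigma=0.65):
    """Eq. (4.5) expressed in terms of the critical epsilon
    eps_star = (ln Sa* - mu_lnSa) / sigma_lnSa of the unbiased model."""
    return 1.0 - norm.cdf((eps_star * sigma - np.log(gamma_mu)) / (gamma_sigma * sigma))

for eps_star in (-2.0, -1.0, 0.0, 1.0, 2.0):
    p_true = biased_exceedance(eps_star)                     # unbiased, Eq. (4.4)
    p_wide = biased_exceedance(eps_star, gamma_sigma=2.0)    # sigma overestimated
    p_narrow = biased_exceedance(eps_star, gamma_sigma=0.5)  # sigma underestimated
    print(f"eps* = {eps_star:+.1f}: P = {p_true:.3f}, "
          f"P(gamma_sigma=2) = {p_wide:.3f}, P(gamma_sigma=1/2) = {p_narrow:.3f}")
```

For positive critical epsilon the inflated standard deviation increases the computed probability, while for negative epsilon it decreases it, which is the behaviour sketched in Fig. 4.1.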
What sort of bias could one expect to be reasonable for a given ground-motion
model? This is a very difficult question to answer in any definitive way, but one way
to get a feel for this is to compare the predictions of both median logarithmic
motions and logarithmic standard deviations for two generations of modern ground-
motion models. In particular, the very recent release of the models from the second
phase of the PEER NGA project (NGA West 2) provides one with the ability to
compare the predictions from the NGA West 1 and NGA West 2 studies.
Figures 4.2 and 4.3 show these estimates of the possible extent of bias for the
ground-motion models of Campbell and Bozorgnia (2008, 2014) and Chiou and
Youngs (2008, 2014). It should be noted that the point here is not that these models
are necessarily biased, but that it is reasonable to assume that the 2014 versions are
Fig. 4.2 Example bias factors computed as the ratios between predictions of two generations of models from the same developers. The left panel shows ratios between the medians, Sa(T = 0.01 s), of Campbell and Bozorgnia (2014, 2008) – 2014:2008, while the right panel is for Chiou and Youngs (2014, 2008) – 2014:2008
less biased than their 2008 counterparts. Therefore, the typical extent of bias that
has existed through the use of the 2008 NGA models over the past few years can be
characterised through plots like those shown in Figs. 4.2 and 4.3. However, in order
to see how these differences in predicted moments translate into differences in
hazard estimates the following section develops hazard results for a simple aca-
demic example.
Fig. 4.3 Example bias factors for the logarithmic standard deviations. The left panel shows ratios
between the σ ln Sa predictions of Campbell and Bozorgnia (2014, 2008) – 2014:2008, while the
right panel shows the ratios for Chiou and Youngs (2014, 2008) – 2014:2008. The standard
deviations are for a period of 0.01 s
purposes of this exercise, this departure from a more realistic representation does
not influence the point that is being made.
Hazard curves for spectral acceleration at a response period of 0.01 s are
computed through the use of the standard hazard integral in Eq. (4.6).
\lambda_{Y>y^*} = \sum_{i=1} \nu_i \iint P\left[Y > y^* \mid m, r\right] f_{M,R}(m, r)\,dm\,dr    (4.6)
For this particular exercise we have just one source (i = 1) and will also appreciate that ν_i simply scales the hazard curve linearly, and so using ν_1 = 1 enables us to convert the annual rates of exceedance λ_{Y>y*} directly into annual probabilities of exceedance.
Hazard curves computed according to this equation are shown in Fig. 4.4. The
curves show that for long return periods the hazard curves predicted by both models
of Campbell and Bozorgnia are very similar while at short return periods there are
significant differences between the two versions of their model. From consideration
of Figs. 4.2 and 4.3 we can see that the biggest differences between the two versions
of the Campbell and Bozorgnia model for the scenarios of relevance to this exercise
(T = 0.01 s and V_S,30 = 350 m/s) are at small magnitudes between roughly
Mw5.0 and Mw5.5 where the new model predicts significantly smaller median
motions but also has a much larger standard deviation for these scenarios. As will
be shown shortly, both of these effects lead to a reduction in the hazard estimates for
these short return periods.
In contrast, the two versions of the Chiou and Youngs model compare
favourably for the short return periods but then exhibit significant differences as
Fig. 4.4 Hazard curves computed for the ground-motion models of Campbell and Bozorgnia
(2008, 2014) and Chiou and Youngs (2008, 2014)
one moves to longer return periods. Again making use of Figs. 4.2 and 4.3 we
can see that the latest version of their model provides a relatively consistent, yet
mild (γ_μ ≈ 1.0–1.1) increase in motions over the full magnitude-distance space
considered here and that we have a 15–20 % increase in the standard deviation over
this full magnitude-distance space. Again, from the developments that follow, we
should expect to observe the differences between the hazard curves at these longer
return periods.
We have just seen how bias factors for the logarithmic mean γ μ and logarithmic
standard deviation γ σ can influence the computation of estimates of the probability
of exceedance for a given scenario. The hazard integral in Eq. (4.6) is simply a
weighted sum over all relevant scenarios as can be seen from the approximation
(noting that this ceases to be an approximation in the limit as Δm, Δr → 0):

\lambda_{Y>y^*} \approx \sum_{i=1} \nu_i \sum_j \sum_k P\left[Y > y^* \mid m_j, r_k\right] f_{M,R}(m_j, r_k)\,\Delta m\,\Delta r    (4.7)
If we now accept that when using a ground-motion model we will only obtain an
estimate of the annual rate of exceedance we can write:
\hat{\lambda}_{Y>y^*} \approx \sum_{i=1} \nu_i \sum_j \sum_k \hat{P}\left[Y > y^* \mid m_j, r_k\right] f_{M,R}(m_j, r_k)\,\Delta m\,\Delta r    (4.8)
where now this expression is a function of the bias factors for both the logarithmic mean and the logarithmic standard deviation of the motions for every scenario. One can consider the effects of systematic bias from the
ground motion model expressed through factors modifying the conditional mean
and standard deviation for a scenario. The biases in this case hold equally for all
scenarios (although this can be relaxed). At least for the standard deviation, this
assumption is not bad given the distributions shown in Fig. 4.3.
Therefore, for each considered combination of m_j and r_k we can define our estimate of the probability of exceeding y* from Eq. (4.5). Note that the bias in the median ground motion is represented by a factor γ_μ multiplying the median motion, Ŝa = γ_μ Sa. This translates into an additive contribution to the logarithmic mean, with μ_lnSa + ln γ_μ representing the biased median motion.
To understand how such systematic biases could influence hazard estimates we
can compute the partial derivatives with respect to these bias factors, considering
one source of bias at a time.
\frac{\partial\hat{\lambda}}{\partial\gamma_\mu} \approx \sum_{i=1} \nu_i \sum_j \sum_k \frac{\partial}{\partial\gamma_\mu}\left[1 - \Phi\left(\frac{\ln y^* - \ln\gamma_\mu - \mu}{\sigma}\right)\right] f_{M,R}(m_j, r_k)\,\Delta m\,\Delta r    (4.9)

and

\frac{\partial\hat{\lambda}}{\partial\gamma_\sigma} \approx \sum_{i=1} \nu_i \sum_j \sum_k \frac{\partial}{\partial\gamma_\sigma}\left[1 - \Phi\left(\frac{\ln y^* - \mu}{\gamma_\sigma \sigma}\right)\right] f_{M,R}(m_j, r_k)\,\Delta m\,\Delta r    (4.10)

and

\frac{\partial\hat{\lambda}}{\partial\gamma_\sigma} = \sum_{i=1} \nu_i \iint \frac{\ln y^* - \mu}{\gamma_\sigma^2 \sigma \sqrt{2\pi}} \exp\left[-\frac{(\mu - \ln y^*)^2}{2\gamma_\sigma^2 \sigma^2}\right] f_{M,R}(m, r)\,dm\,dr    (4.12)
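These sensitivities can also be checked numerically. The sketch below perturbs the bias factors around unity and estimates ∂λ̂/∂γ_μ and ∂λ̂/∂γ_σ by central finite differences for the same kind of invented scenario grid used above, rather than evaluating the analytical expressions.

```python
import numpy as np
from scipy.stats import norm

mags = np.array([5.5, 6.0, 6.5, 7.0])
dists = np.array([10.0, 20.0, 40.0])
p_mr = np.full((len(mags), len(dists)), 1.0 / 12.0)   # hypothetical P[M, R]
nu, sigma = 0.05, 0.65

def mu_lnSa(m, r):
    return 1.6 + 0.9 * (m - 6.0) - 1.3 * np.log(np.sqrt(r**2 + 36.0))  # illustrative

def biased_rate(y_star, gamma_mu=1.0, gamma_sigma=1.0):
    """Biased annual exceedance rate, combining Eqs. (4.5) and (4.8)."""
    lam = 0.0
    for j, m in enumerate(mags):
        for k, r in enumerate(dists):
            z = (np.log(y_star) - np.log(gamma_mu) - mu_lnSa(m, r)) / (gamma_sigma * sigma)
            lam += nu * (1.0 - norm.cdf(z)) * p_mr[j, k]
    return lam

y_star, d = 0.2, 1e-4
dlam_dgmu = (biased_rate(y_star, gamma_mu=1 + d) - biased_rate(y_star, gamma_mu=1 - d)) / (2 * d)
dlam_dgsig = (biased_rate(y_star, gamma_sigma=1 + d) - biased_rate(y_star, gamma_sigma=1 - d)) / (2 * d)
print(f"d(lambda)/d(gamma_mu)    ~ {dlam_dgmu:+.3e}")
print(f"d(lambda)/d(gamma_sigma) ~ {dlam_dgsig:+.3e}")
```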
When these expressions are evaluated for the hypothetical scenario that we have
considered we obtain partial derivatives as shown in Fig. 4.5. The curves in this
figure show that the sensitivity of the hazard curve to changes in the mean pre-
dictions for the scenarios is most significant when there is relatively weak influence
from the standard deviation. That is, when the hazard curve is dominated by
contributions with epsilon values near zero then biases in the mean predictions
matter most strongly.
The scaling of the partial derivatives with respect to the bias in the standard
deviation is more interesting, and reflects the schematic result previously shown in
Fig. 4.1. We see that we have positive gradients for the larger spectral accelerations
Fig. 4.5 Partial derivatives of the hazard curves with respect to the bias factors γ μ and γ σ
while we have negative gradients for weak motions. These ranges effectively
represent the positive and negative epsilon ranges that were shown explicitly in
the previous section. However, in this case we must recognise that, when considering the derivative of the hazard curve, we have many different contributions for epsilon values corresponding to a given target level of the intensity measure y* and
that the curves shown in Fig. 4.5 reflect a weighted average of the individual curves
that have the form shown in Fig. 4.1.
The utility of the partial derivative curves shown in Fig. 4.5 is that they enable
one to appreciate over which range of intensity measures (and hence return periods)
changes to either the median motion or logarithmic standard deviation will have the
greatest impact upon the shape of the hazard curves. Note that with respect to the
typical hazard curves shown in Fig. 4.4, these derivatives should be considered as
being in some sense orthogonal to the hazard curves. That is, they are not
representing the slope of the hazard curve (which is closely related to the annual
rate of occurrence of a given level of ground-motion), but rather saying that for any
given level of motion, how sensitive is the annual rate of exceedance to a change in
the logarithmic mean and standard deviation. It is clear from Fig. 4.4 that a change
in the standard deviation itself has a strong impact upon the actual nature of the
hazard curve at long return periods, whereas the sensitivity indicated in Fig. 4.5 is
low for the corresponding large motions. However, it should be borne in mind that these partial derivatives are ∂λ̂/∂γ_i rather than, say, ∂ln λ̂/∂γ_i, and that the apparently low sensitivity implied by Fig. 4.5 should be viewed in terms of the fact that small changes Δλ̂ are actually very significant when the value of λ̂ itself is very small over this range.
Fig. 4.6 Ratios of the partial derivatives with respect to the logarithmic standard deviation and
mean. Vertical lines are shown to indicate the commonly encountered 475 and 2,475 year return
periods
Another way of making use of these partial derivatives is to compare the relative
sensitivity of the hazard curve to changes in the logarithmic mean and standard
deviation. This relative sensitivity can be computed by taking the ratio of the partial
derivatives with respect to both the standard deviation and the mean and then seeing
the range of return periods (or target levels of the intensity measure) for which one
or the other partial derivative dominates. Ratios of this type are computed for this
hypothetical scenario and are shown in Fig. 4.6. When ratios greater than one are
encountered the implication is that the hazard curves are more sensitive to changes
in the standard deviation than they are to changes in the mean. As can be seen from
Fig. 4.6, this situation arises as the return period increases. However, for the
example shown here (which is fairly typical of active crustal regions in terms of
the magnitude-frequency distribution assumed) the influence of the standard devi-
ation tends to be at least as important as the median, if not dominant, at return
periods of typical engineering interest (on the order of 475 years or longer).
The example just presented has highlighted that ground-motion models must
provide estimates of both the logarithmic mean and standard deviation for any
given scenario, and that in many cases the ability to estimate the standard deviation
is at least as important as the estimate of the mean. Historically, however, the
development of ground-motion models has focussed overwhelmingly upon the
scaling of median predictions, with many people (including some ground-motion
model developers) still viewing the standard deviation as being some form of error
in the prediction of the median rather than being an important parameter of the
ground-motion distribution that is being predicted. The results presented for this
example here show that ground-motion model developers should shift the balance
of attention more towards the estimation of the standard deviation than what has
historically occurred.
When one moves to seismic risk analyses the treatment of the aleatory variability
can differ significantly. In the case that a risk analysis is performed for a single
structure the considerations of the previous section remain valid. However, for
portfolio risk assessment it becomes important to account for the various correlations that exist within ground-motion fields for a given earthquake scenario. These
correlations are required for developing the conditional ground-motion fields that
correspond to a multivariate normal distribution.
The multivariate normal distribution represents the conditional random field of
relative ground-motion levels (quantified through normalised intra-event residuals)
conditioned upon the occurrence of an earthquake and the fact that this event will
generate seismic waves with a source strength that may vary from the expected
strength. The result of this source deviation is that all locations that register this
ground-motion will have originally had this particular level of source strength. This
event-to-event variation that systematically influences all sites is represented in
ground-motion models by the inter-event variability, while the conditional variation
of motions at a given site is given by the intra-event variability.
For portfolio risk analysis it is therefore important to decompose the total
aleatory variability in ground-motions into a component that reflects the source
strength (the inter-event variability) and a component that reflects the site-specific
aleatory variability (the intra-event variability). It should also be noted in passing
that this is not strictly equivalent to the variance decomposition that is performed
using mixed effects models in regression analysis.
When one considers ground-motion models that have been developed over
recent years it is possible to appreciate that some significant changes have occurred
to the value of the total aleatory variability that is used in hazard analysis, but also
to the decomposition of this total into the inter-event and intra-event components.
For portfolio risk analysis, this decomposition matters. To demonstrate why this is
the case, Fig. 4.7 compares conditional ground-motion fields that have been sim-
ulated for the 2011 Christchurch Earthquake in New Zealand. In each case shown,
the inter-event variability is assumed to be a particular fraction of the total vari-
ability and this fraction is allowed to range from 0 to 100 %. As one moves from a
low to a high fraction it is clear that the within-event spatial variation of the ground motions reduces.
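The role of this partitioning can be mimicked with a short simulation. In the sketch below a fraction p of the total variance is assigned to a single event term shared by all sites, and the remainder to within-event residuals with a simple exponential spatial correlation; the correlation range and all other numbers are assumptions and do not reproduce the model behind Fig. 4.7.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical site coordinates (km) and total aleatory variability (ln units)
xy = rng.uniform(0.0, 30.0, size=(200, 2))
sigma_total = 0.65
p = 0.4                                   # fraction of total VARIANCE that is inter-event
tau = np.sqrt(p) * sigma_total            # inter-event standard deviation
phi = np.sqrt(1.0 - p) * sigma_total      # intra-event standard deviation

# Exponential spatial correlation of the intra-event residuals (assumed 10 km range)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
corr = np.exp(-3.0 * dist / 10.0)

# One shared event term plus spatially correlated within-event residuals
eta = rng.normal(0.0, tau)                                   # same for all sites
L = np.linalg.cholesky(corr + 1e-10 * np.eye(len(xy)))
eps = phi * (L @ rng.standard_normal(len(xy)))               # site-specific terms

total_residual = eta + eps    # ln-residual field to add to the median ln Sa predictions
print(f"event term = {eta:+.3f}, field std of within-event part = {eps.std():.3f}")
```

Increasing p shifts variability from site to site into the shared event term, so the simulated field becomes smoother in space, which is the behaviour described for Fig. 4.7.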
For portfolio risk assessment, these differences in the spatial variation are
important as the extreme levels of loss correspond to cases in which spatial regions
of high-intensity ground-motion couple with regions of high vulnerability and
Fig. 4.7 Impact upon the nature of ground-motion fields generated assuming that the inter-event
variability is a given fraction of the total aleatory variability. The ground-motion fields shown are
possible fields consistent with a repeat of the Christchurch earthquake
exposure. The upper left panel of Fig. 4.7 shows a clear example of this where a
patch of high intensity is located in a region of high exposure.
In addition to ensuring that the total aleatory variability is well-estimated, it is
therefore also very important (for portfolio risk analysis) to ensure that the
partitioning of the total variability between inter- and intra-event components is
done correctly.
such a way that the aleatory variability is supposed to represent inherent randomness in nature while epistemic uncertainties represent contributions resulting from our lack of knowledge. The distinction is made for more than semantic reasons, and the way that each of these components is treated within hazard and risk analysis differs. Using probabilistic seismic hazard analysis as an example, the aleatory
variability is directly accounted for within the hazard integral while epistemic
uncertainty is accounted for or captured through the use of logic trees.
However, when one constructs a logic tree the approach is to consider alternative
hypotheses regarding a particular effect, or component, within the analysis. Each
alternative is then assigned a weight that has been interpreted differently by various
researchers and practitioners, but is ultimately treated as a probability. No alterna-
tive hypotheses are considered for effects that we do not know to be relevant. That
is, the representation of epistemic uncertainty in a logic tree only reflects our
uncertainty regarding the components of the model that we think are relevant. If
we happen to be missing an important physical effect then we will never think to
include it within our tree and this degree of ignorance is never reflected in our
estimate of epistemic uncertainty.
It is therefore clear that there is a component of the overall uncertainty in our
analyses that is not currently accounted for. This component is referred to as
Ontological Uncertainty (Elms 2004) and represents the unknown unknowns
from the famous quote of Donald Rumsfeld.
These generic components of uncertainty are shown schematically in Fig. 4.8.
The actual numbers that are shown in this figure are entirely fictitious and the
objective is not to define this partitioning. Rather, the purpose of this figure is to
illustrate the following:
• What we currently refer to as being aleatory variability is not all aleatory
variability and instead contains a significant component of epistemic uncertainty
(which is why it reduces from the present to the near future)
• The fact that ontological uncertainty exists means that we cannot assign a
numerical value to epistemic uncertainty
• The passage of time allows certain components to be reduced
In the fields of seismic hazard and risk it is common for criticism to be made of
projects due to the improper handling of aleatory variability and epistemic uncer-
tainty by the analysts. However, the distinction between these components is not
always clear and this is at least in part a result of loose definitions of the terms as
well as a lack of understanding about the underlying motivation for the
decomposition.
As discussed at length by Der Kiureghian and Ditlevsen (2009), what is aleatory
or epistemic can depend upon the type of analysis that is being conducted. The
important point that Der Kiureghian and Ditlevsen (2009) stress is that the
categorisation of an uncertainty as either aleatory or epistemic is largely at the
discretion of the analyst and depends upon what is being modelled. The uncer-
tainties themselves are generally not properties of the parameter in question.
Fig. 4.8 Components of the total uncertainty in ground motion prediction, and their evolution in
time. The percentage values shown are entirely fictitious
Following the more complete discussion provided by Der Kiureghian and Ditlevsen
(2009), consider the physical process that results in the generation of a ground
motion y for a particular scenario. The underlying basic variables that parameterise
this physical process can be written as X.
Now consider a perfect deterministic ground-motion model (i.e., one that makes
predictions with no error) that provides a mathematical description of the physical
link between these basic variables and the observed motion. In the case that we
knew the exact values of all basic variables for a given scenario we would write
such a model as:
y = g(x; θg)  (4.13)
where θg are the parameters or coefficients of the model. Note that the above
model must account for all relevant physical effects related to the generation of y. In
practice, we cannot come close to accounting for all relevant effects and so rather
than working with the full set X, we instead work with a reduced set Xk
(representing the known random variables) and accept that the effect of the
unknown basic variables Xu will manifest as differences between our now approx-
imate model ĝ and the observations. Furthermore, as we are working with an
observed value of y (which we assume to be known without error) we also need
to recognise that we will have an associated observed instance of Xk, x̂k, that is not perfectly known. Our formulation is then written as:
y = ĝ(x̂k; θ̂g) + ε  (4.14)
What is important to note here is that the residual error ε is the result of three
distinct components:
In the context of seismic hazard and risk analysis, one would ordinarily regard the
variability represented by ε as being aleatory variability and interpret this as being
inherent randomness in ground motions arising from the physical process of
ground-motion generation. However, based upon the formulation just presented
one must ask whether any actual inherent randomness exists, or whether we are just
seeing the influence of the unexplained parameters xu. That is, should our starting
point have been:
y = g(x; θg) + εA  (4.16)
where εA represents the intrinsic randomness associated with ground motions.
When one considers this problem one must first think about what type of
randomness we are dealing with. Usually when people define aleatory variability
they make an analogy with the rolling of a die, but often they are unwittingly
referring to one particular type of randomness. There are broadly three classes of
randomness:
• Apparent Randomness: This is the result of viewing a complex deterministic
process from a simplified viewpoint.
• Chaotic Randomness: This randomness arises from nonlinear systems that
evolve from a particular state in a manner that depends very strongly upon that
state. Responses obtained from very slightly different starting conditions can be
markedly different from each other, and our inability to perfectly characterise a
particular state means that the system response is unpredictable.
• Inherent Randomness: This randomness is an intrinsic part of reality. Quantum
mechanics arguably provides the most pertinent example of inherent
randomness.
Note that there is also a subtle distinction that can be made between systems that
are deterministic, yet unpredictable, and systems that possess genuine randomness.
In addition, some (including historically Einstein) argue that systems that possess
‘genuine randomness’ are actually driven by deterministic processes and variables
that we simply are not aware of. In this case, these systems would be subsumed
within one or more of the other categories of apparent or chaotic randomness.
However, at least within the context of quantum mechanics, Bell’s theorem dem-
onstrates that the randomness that is observed at such scales is in fact inherent
randomness and not the result of apparent randomness.
For ground-motion modelling, what is generally referred to as aleatory variabil-
ity is at least a combination of both apparent randomness and chaotic randomness
and could possibly also include an element of inherent randomness – but there is no
hard evidence for this at this point. The important implication of this point is that
the component associated with apparent randomness is actually an epistemic
uncertainty that can be reduced through the use of more sophisticated models.
The following two sections provide examples of apparent and chaotic randomness.
ln y = β0 + β1M  (4.17)
Model 1
ln y = β0 + β1M + β2 ln√(R² + β3²)  (4.18)
Model 2
ln y = β0 + β1M + β2 ln√(R² + β3²) + β4 ln VS,30  (4.19)
Model 3
ln y = β0 + β1M + β1a(M − 6.5)² + [β2 + β2a(M − 6.5)] ln√(R² + β3²) + β4 ln VS,30  (4.20)
Model 4
ln y = β0 + β1M + β1a(M − 6.5)² + [β2 + β2a(M − 6.5)] ln√(R² + β3²) + β4 ln VS,30 + β5Fnm + β6Frv  (4.21)
Models 5 and 6
ln y = β0 + β1M + β1a(M − 6.5)² + [β2 + β2a(M − 6.5)] ln√(R² + β3²) + β4 ln VS,30 + β5Fnm + β6Frv + β7Fas  (4.22)
where we see that the first of these models is overly simplified, but that by the time
we reach Models 5 and 6, we are accounting for the main features of modern
models. The difference between Models 5 and 6 is not in the functional form, but in
how the coefficients are estimated. Models 1–5 use standard mixed effects regres-
sion with one random effect for event effects. However, Model 6 includes this
random effect, but also distinguishes between these random effects depending upon
whether we have mainshocks or aftershocks and also partitions the intra-event
variance into components for mainshocks and aftershocks. The dataset consists of
2,406 records from the NGA database.
Figure 4.9 shows estimates of apparent randomness for each of these models,
assuming that Model 6 is ‘correct’. That is, the figure shows the difference between the total variance of Model i and that of Model 6, and because we assume the latter model is correct, this difference in variance can be attributed to apparent
randomness. The figure shows that the inclusion of distance scaling and
distinguishing between mainshocks and aftershocks has a very large impact, but
that other additions in complexity provide a limited reduction in apparent random-
ness. The important point here is that this apparent randomness is actually epistemic
uncertainty – not aleatory as is commonly assumed.
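The variance comparison underlying Fig. 4.9 can be illustrated with a toy calculation along the following lines. Everything here is synthetic and invented for illustration (the ‘true’ coefficients, the scenario distributions and the residual scatter are not those of the NGA data, and ordinary least squares is used in place of the mixed-effects regressions described above); the point is simply how a difference in residual variance between a simple and a more complete model is obtained and interpreted as apparent randomness.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: the 'true' process depends on magnitude and distance,
# but the simple model deliberately ignores distance
n = 2000
M = rng.uniform(4.5, 7.5, n)
R = rng.uniform(1.0, 200.0, n)
ln_y = 1.0 + 0.9 * M - 1.2 * np.log(np.sqrt(R ** 2 + 6.0 ** 2)) + rng.normal(0.0, 0.5, n)

def residual_sd(design, ln_y):
    """Ordinary least-squares fit returning the residual standard deviation."""
    coef, *_ = np.linalg.lstsq(design, ln_y, rcond=None)
    return (ln_y - design @ coef).std()

X_simple = np.column_stack([np.ones(n), M])                                     # magnitude only
X_full = np.column_stack([np.ones(n), M, np.log(np.sqrt(R ** 2 + 6.0 ** 2))])   # magnitude and distance
sd_simple, sd_full = residual_sd(X_simple, ln_y), residual_sd(X_full, ln_y)

# treating the fuller model as 'correct', the variance difference is apparent randomness
print(f"apparent randomness (variance difference) = {sd_simple ** 2 - sd_full ** 2:.3f}")
```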
Fig. 4.9 Variation of apparent randomness associated with models of increasing complexity
ü + 2ζω0u̇ + αω0²u + (1 − α)ω0²z = B sin(Ωt)  (4.23)
Fig. 4.10 Dependence of the hysteretic parameter z (left), and the normalised restoring force fS(u, u̇, z) (right) on the displacement for the example system considered
steady-state conditions. For this sort of system we expect that the transient terms will decay according to exp(−ζω0t) and for these examples we have set ζ = 0.05 and ω0 = 1.0 and we only look at the system response after 200 s have passed in order to compute the maximum displacement and velocity shown in Fig. 4.12. We would expect that the transient terms would have decayed to less than 0.5 × 10⁻⁴ of their initial amplitudes at the times of interest.
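Readers wishing to reproduce this type of response can do so along the lines of the sketch below, which integrates Eq. (4.23) together with a standard Bouc-Wen evolution law for the hysteretic parameter z (in the spirit of Li and Meng 2007). The specific form and parameters of the z-equation used for Figs. 4.10, 4.11, 4.12 and 4.13 are not reproduced in this chapter, so the values of α, β, γ and n below are assumptions made purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bouc_wen_rhs(t, state, zeta=0.05, w0=1.0, alpha=0.1, B=15.0, Omega=1.0,
                 beta=0.5, gamma=0.5, n=1.0):
    """Forced SDOF oscillator with Bouc-Wen hysteresis (Eq. 4.23 plus an assumed z-evolution law)."""
    u, v, z = state
    dz = v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
    dv = B * np.sin(Omega * t) - 2 * zeta * w0 * v - alpha * w0 ** 2 * u - (1 - alpha) * w0 ** 2 * z
    return [v, dv, dz]

# integrate well beyond the exp(-zeta*w0*t) transient and keep only the response after 200 s
sol = solve_ivp(bouc_wen_rhs, (0.0, 400.0), [0.0, 0.0, 0.0], max_step=0.01, rtol=1e-8, atol=1e-10)
mask = sol.t >= 200.0
print(f"max |u| = {np.abs(sol.y[0, mask]).max():.3f}, max |du/dt| = {np.abs(sol.y[1, mask]).max():.3f}")
```

Repeating the integration over a grid of forcing amplitudes B is then all that is needed to trace out curves of the kind shown in Fig. 4.12, including the abrupt loss of predictability at large amplitudes.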
Figure 4.12 shows some potentially surprising behaviour for those not familiar
with nonlinear dynamics and chaos. We can see that for low harmonic amplitudes
we have a relatively smoothly varying maximum response and that system response
is essentially predictable here. However, this is not to say that the response does not
become more complex. For example, consider the upper row of Fig. 4.13 that shows
the response for B = 15. Here we can see that the system tends towards some stable
state and that we have a stable limit-cycle in the phase space. However, it has a
degree of periodicity that corresponds to a loading/unloading phase for negative
restoring forces.
This complexity continues to increase as the harmonic amplitude increases as
can be seen in the middle row of Fig. 4.13 where we again have stable steady-state
response, but also have another periodic component of unloading/reloading for both
positive and negative restoring forces. While these figures show increased com-
plexity as we move along the harmonic amplitude axis of Fig. 4.12, the system
response remains stable and predictable in that we know that small changes in the
value of B continue to map into small qualitative and quantitative changes to the
response. However, Fig. 4.12 shows that once the harmonic amplitude reaches
values of roughly B = 53 we suddenly have a qualitatively different behaviour. The
Fig. 4.11 Response of the nonlinear system for a harmonic amplitude of B = 5. Upper left panel shows the displacement time-history; upper right panel shows the velocity time-history; lower left panel shows the response trajectory in phase space; and lower right panel shows the hysteretic response
system response now becomes extremely sensitive to the particular value of the
amplitude that we consider. The reason for this can be seen in the bottom row of
Fig. 4.13 in which it is clear that we never reach a stable steady state. What is
remarkable in this regime is that we can observe drastically different responses for
very small changes in amplitude of the forcing function. For example, when we
move from B = 65.0 to B = 65.1 we have a transition back into a situation in which we have a stable limit cycle (even if it is a complex cycle).
The lesson here is that for highly nonlinear processes there exist response
regimes where the particular response trajectory and system state depends very
strongly upon a prior state of the system. There are almost certainly aspects of the
ground-motion generation process that can be described in this manner. Although
these can be deterministic processes, as it is impossible to accurately define the state
of the system, the best we can do is to characterise the observed chaotic randomness.
Note that although this is technically epistemic uncertainty, we have no choice but
to treat this as aleatory variability as it is genuinely irreducible.
Fig. 4.12 Maximum absolute steady-state displacement (left) and velocity (right) response against the harmonic forcing amplitude B
Fig. 4.13 Response of the nonlinear system for a harmonic amplitude of B = 15 (top), B = 35 (middle), and B = 65 (bottom). Panels on the left show the response trajectory in phase space; and panels on the right show the hysteretic response
The standard deviation of an empirical ground-motion model therefore reflects, at the least, contributions from apparent randomness, from metadata uncertainty, and from variability that arises from the ergodic assumption. It is also almost certain that
the standard deviation reflects a degree of chaotic randomness and possibly also
includes some genuine randomness and it is only these components that are
actually, or practically, irreducible. Therefore, it is clear that the standard deviation
of a ground-motion model does not reflect aleatory variability as it is commonly
defined – as being ‘inherent variability’.
If the practical implications of making the distinction between aleatory and
epistemic are to dictate what goes into the hazard integral and what goes into the
logic tree then one might take the stance that of these contributors to the standard
deviation just listed we should look to remove the effects of the ergodic assumption
(which is attempted in practice), we should minimise the effects of metadata
uncertainty (which is not done in practice), and we should increase the sophistica-
tion of our models so that the apparent randomness is reduced (which some would
argue has been happening in recent years, vis-à-vis the NGA projects).
An example of the influence of metadata uncertainty can be seen in the upper left
panel of Fig. 4.14 in which the variation in model predictions is shown when
uncertainties in magnitude and shear-wave velocity are considered in the regression
analysis. The boxplots in this figure show the standard deviations of the predictions
for each record in the NGA dataset when used in a regression analysis with Models
1–6 that were previously presented. The uncertainty that is shown here should be
regarded as a lower bound to the actual uncertainty associated with meta-data for
real ground-motion models. The estimates of this variable uncertainty are obtained
by sampling values of magnitude and average shear-wave velocity for each event
and site assuming a (truncated) normal and lognormal distribution respectively.
This simulation process enables a hypothetical dataset to be constructed upon
which a regression analysis is performed. The points shown in the figure then
represent the standard deviation of median predictions from each developed regres-
sion model.
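A stripped-down version of this simulation exercise might look as follows. The perturbation standard deviations, the crude two-sigma truncation and the simple linear-in-parameters functional form are all assumptions for illustration, rather than the distributions and models actually used to produce Fig. 4.14.

```python
import numpy as np

rng = np.random.default_rng(7)

def metadata_uncertainty(M, ln_r_term, ln_vs30, ln_y, n_sims=200, sd_m=0.1, sd_lnvs=0.15):
    """Per-record std of median predictions when magnitude and Vs30 metadata are perturbed."""
    X0 = np.column_stack([np.ones_like(M), M, ln_r_term, ln_vs30])     # nominal design matrix
    preds = []
    for _ in range(n_sims):
        d_m = np.clip(rng.normal(0.0, sd_m, M.size), -2 * sd_m, 2 * sd_m)   # truncated normal on M
        d_lnvs = rng.normal(0.0, sd_lnvs, M.size)                           # lognormal on Vs30 (normal on ln Vs30)
        Xp = np.column_stack([np.ones_like(M), M + d_m, ln_r_term, ln_vs30 + d_lnvs])
        coef, *_ = np.linalg.lstsq(Xp, ln_y, rcond=None)    # regression on the hypothetical dataset
        preds.append(X0 @ coef)                             # median predictions at the nominal metadata
    return np.std(preds, axis=0)
```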
Figure 4.14 also shows how an increase in model complexity is accompanied by
an increase in parametric uncertainty for the models presented previously. It should
be noted that these estimates of parametric uncertainty are also likely to be near
lower bounds given that the functional forms used for this exercise are relatively
simple and that the dataset is relatively large (consisting of 2,406 records from the
NGA database). The upper right panel of Fig. 4.14 shows this increasing parametric
uncertainty for the dataset used to develop the models, but the lower panel shows
the magnitude dependence of this parametric uncertainty when predictions are
made for earthquake scenarios that are not necessarily covered by the empirical
data. In this particular case, the magnitude dependence is shown when motions are
computed for a distance of just 1 km and a shear-wave velocity of 316 m/s is used. It
can be appreciated from this lower panel that the parametric uncertainty is a
function of both the model complexity but also of the particular functional form
adopted. The parametric uncertainty here is estimated by computing the covariance
matrix of the regression coefficients and then sampling from the multivariate
normal distribution implied by this covariance matrix. The simulated coefficients
Fig. 4.14 Influence of meta-data uncertainty (upper left), increase in parametric uncertainty with
increasing complexity of models (upper right), and the dependence of parametric uncertainty upon
magnitude (bottom)
are then used to generate predictions for each recording and the points shown in this
panel represent the standard deviation of these predictions for every record.
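In code, the coefficient-sampling step just described amounts to little more than the following. For simplicity the sketch assumes a model that is linear in its coefficients (the distance-saturation coefficient inside the square root of Models 2–6 is not, so in practice one would fix it or linearise about the fitted values); the rows of the design matrix may either be the records used in the regression or new scenarios such as the R = 1 km, VS,30 = 316 m/s case discussed above.

```python
import numpy as np

rng = np.random.default_rng(11)

def parametric_prediction_sd(X, beta_hat, cov_beta, n_samples=1000):
    """Std of median ln-predictions implied by the covariance matrix of the regression coefficients."""
    betas = rng.multivariate_normal(beta_hat, cov_beta, size=n_samples)   # sampled coefficient vectors
    preds = betas @ X.T                                                   # (n_samples, n_scenarios)
    return preds.std(axis=0)                                              # parametric uncertainty per scenario
```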
Rather than finally looking to increase the complexity of the functional forms
that are used for ground-motion predictions, herein I propose that we look at this
problem in a different light and refer back to Eq. (4.2) in which we say explicitly
that what matters for hazard and risk is the overall estimate of ground-motion
exceedance and that this is the result of two components (not just the ground-
motion model). We should forget about trying to push the concept that only aleatory
variability should go into the hazard integral and rather take the viewpoint that our
optimal model (which is a model of the ground motion distribution – not median
predictions) should go into the hazard integral and that our uncertainties should then
be reflected in the logic tree. The reason why we should forget about only pushing
aleatory variability into the hazard integral is that from a quantitative ground-
motion perspective we are still not close to understanding what is actually aleatory
and irreducible.
The proposed alternative of defining an optimal model is stated in the light of
minimising the uncertainty in the estimate of the probability of exceedance of
ground-motions. This uncertainty comes from two components: (1) our ability to
accurately define the probability of occurrence of earthquake scenarios; and (2) our
ability to make robust predictions of the conditional ground-motion distribution.
Therefore, while a more complex model will act to reduce the apparent variability,
if this same model requires the specification of a number of independent variables
that are poorly constrained in practice then the overall uncertainty will be large. In
such cases, one can obtain a lower level of overall uncertainty in the prediction of
ground-motion exceedance by using a less complex ground-motion model. A
practical example of this trade-off is related to the requirement to define the
depth distribution of earthquake events. For most hazard analyses this depth
distribution is poorly constrained and the inclusion of depth-dependent terms in
ground-motion models only provides a very small decrease in the apparent
variability.
Figure 4.15 presents a schematic illustration of the trade-offs between apparent
randomness (the epistemic uncertainty that is often regarded as aleatory variability)
and parametric uncertainty (the epistemic uncertainty that is usually ignored) that
exist just on the ground-motion modelling side. The upper left panel of this figure
shows, as we have seen previously, that the apparent randomness decreases as we
increase the complexity of our model. However, the panel also shows that this
reduction saturates once we reach the point where we have chaotic randomness,
inherent randomness, or a combination of these irreducible components. The upper
right panel, on the other hand, shows that as this model complexity increases we
also observe an increase in parametric uncertainty. The optimal model must balance
these two contributors to the overall uncertainty as shown in the lower left panel.
On this basis, one can identify an optimal model when only ground-motion model-
ling is considered. When hazard or risk is considered then the parametric uncer-
tainty shown here should reflect both the uncertainty in the model parameters
(governed by functional form complexity, and data constraints) and the uncertainty
associated with the characterisation of the scenario (i.e., the independent variables)
and its likelihood.
The bottom right panel of Fig. 4.15 shows how one can justify an increased
complexity in the functional form when the parametric uncertainty is reduced, as in
this case the optimal complexity shifts to the right. To my knowledge, these sorts of
considerations have never been explicitly made during the development of more
complex ground-motion models, although, in some ways, the quantitative inspection of residual trends and of parameter p-values is an indirect way of assessing whether increased complexity is justified by the data.
Recent years have seen the increased use of external constraint during ground-
motion model development. In particular, numerical simulations are now com-
monly undertaken in order to constrain nonlinear site response scaling, large
Fig. 4.15 Schematic illustration of the trade-off that exists between the reduction in apparent
randomness (upper left) and the increase in parametric uncertainty (upper right). The optimal
model in this context balances the two components (lower left) and an increase in complexity is
justified when parametric uncertainty is reduced (lower right)
magnitude scaling, and near field effects. Some of the most recent models that have
been presented have very elaborate functional forms and the model developers have
justified this additional complexity on the basis of the added functional complexity
being externally constrained. In the context of Fig. 4.15, the implication is that the
model developers are suggesting that the red curves do not behave in this manner,
but rather that they saturate at some point as all of the increasing complexity does
not contribute to parametric uncertainty. On one hand, the model developers are
correct in that the application of external constraints does not increase the estimate
of the parametric uncertainty from the regression analysis on the free parameters.
However, on the other hand, in order to properly characterise the parametric
uncertainty the uncertainty associated with the models used to provide the external
constraint must also be accounted for. In reality this additional parametric uncer-
tainty is actually larger than what would be obtained from a regression analysis
because the numerical models used for these constraints are normally very complex
and involve a large number of poorly constrained parameters. Therefore, it is not
clear that the added complexity provided through the use of external constraints is
actually justified.
The coverage thus far has been primarily focussed upon issues that arise most
commonly within hazard analysis, but that are also relevant to risk analysis.
However, in this final section the attention is turned squarely to a particular issue
associated with the generation of ground-motion fields for use in earthquake loss
estimation for spatially-distributed portfolios. This presentation is based upon the
work of Vanmarcke (1983) and has only previously been employed by
Stafford (2012).
The normal approach that is taken when performing risk analyses over large
spatial regions is to subdivide the region of interest into geographic cells (often
based upon geopolitical boundaries, such as districts, or postcodes). The generation
of ground-motion fields is then made by sampling from a multivariate normal
distribution that reflects the joint intra-event variability of epsilon values across a
finite number of sites equal to the number of geographic cells. The multivariate
normal distribution for epsilon values is correctly assumed to have a zero mean
vector, but the covariance matrix of the epsilon values is computed using a
combination of the point-to-point distances between the centroids of the cells
(weighted geographically, or by exposure) and a model for spatial correlation
between two points (such as that of Jayaram and Baker 2009). The problem with
this approach is that the spatial discretisation of the ground-motion field has been
ignored. The correct way to deal with this problem is to discretise the random field
to account for the nature of the field over each geographic cell and to define a
covariance matrix for the average ground-motions over the cells. This average level
of ground-motion over the cell is a far more meaningful value to pass into fragility
curves than a single point estimate.
Fortunately, the approach for discretisation of a two-dimensional random field is
well established (Vanmarcke 1983). The continuous field is denoted by ln y(x)
where y is the ground motion and x now denotes a spatial position. The logarithmic
motion at a point can be represented as a linear function of the random variable ε(x).
Hence, the expected value of the ground motion field at a given point is defined by
Eq. (4.25), where μln y is the median ground motion, and η is an event term.
Therefore, in order to analyse the random field of ground motions, attention need
only be given to the random field of epsilon values. Once this field is defined it may
be linearly transformed into a representation of the random field of spectral
ordinates.
In order to generate ground-motion fields that account for the spatial
discretisation, under the assumption of joint normality, we require three
components:
• An expression for the average mean logarithmic motion over a geographic cell
• An expression for the variance of motions over a geographic cell
• An expression for the correlation of average motions from cell-to-cell
For the following demonstration, assume that the overall region for which we are
conducting the risk analysis is discretised into a regular grid aligned with the N-S
and E-W directions. This grid has a spacing (or dimension) in the E-W direction of
D1 and a spacing in the N-S direction of D2. Note that while the presentation that
follows concerns this regular grid, Vanmarcke (1983) shows how to extend this
treatment to irregularly shaped regions (useful for regions defined by postcodes or
suburbs, etc.).
Within each grid cell one may define the local average of the field by integrating
the field and dividing by the area of the cell (A = D1D2).
ln yA = (1/A) ∫A ln y(x) dx  (4.26)
Now, whereas the variance of the ground motions for a single point in the field,
given an event term, is equal to σ², the variance of the local average ln yA must be
reduced as a result of the averaging. Vanmarcke (1983) shows that this reduction
can be expressed as in Eq. (4.27).
σA² = γ(D1, D2)σ², with γ(D1, D2) = (1/(D1D2)) ∫ from −D2 to D2 ∫ from −D1 to D1 (1 − |δ1|/D1)(1 − |δ2|/D2) ρ(δ1, δ2) dδ1 dδ2  (4.27)
In Eq. (4.27), the correlation between two points within the region is denoted by
ρ(δ1, δ2), in which δ1 and δ2 are orthogonal co-ordinates defining the relative
positions of two points within a cell. In practice, this function is normally defined
as in Eq. (4.28) in which b is a function of response period.
ρ(δ1, δ2) = exp(−3√(δ1² + δ2²)/b)  (4.28)
The reduction in variance associated with the averaging of the random field is
demonstrated in Fig. 4.16 in which values of γ(D1, D2) are shown for varying values
of the cell dimension and three different values of the range parameter b. For this
example the cells are assumed to be square.
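The variance function is straightforward to evaluate numerically. The short routine below integrates Eq. (4.27) directly, using the correlation model of Eq. (4.28); evaluating it for square cells with b = 10, 20 and 30 km reproduces curves of the kind shown in Fig. 4.16.

```python
import numpy as np
from scipy import integrate

def rho(d1, d2, b):
    """Point-to-point correlation model, Eq. (4.28)."""
    return np.exp(-3.0 * np.hypot(d1, d2) / b)

def gamma(D1, D2, b):
    """Variance-reduction function for averaging over a D1 x D2 cell, Eq. (4.27)."""
    integrand = lambda d2, d1: (1 - abs(d1) / D1) * (1 - abs(d2) / D2) * rho(d1, d2, b)
    val, _ = integrate.dblquad(integrand, -D1, D1, lambda d1: -D2, lambda d1: D2)
    return val / (D1 * D2)

for b in (10.0, 20.0, 30.0):
    print(b, [round(gamma(D, D, b), 3) for D in (2.0, 5.0, 10.0, 20.0)])
```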
With the expressions for the spatial average and the reduced variance now given,
the final ingredient that is required is the expression for the correlation between the
average motions over two cells (rather than between two points). This is provided in
Eq. (4.29), with the meaning of the distances D1k and D2l shown in Fig. 4.17.
Fig. 4.16 Variance reduction γ(D1, D2) as a function of the cell dimension for range parameters b of 10, 20 and 30 km
Fig. 4.17 Definition of geometry used in Eq. (4.29) (Redrawn from Vanmarcke (1983))
ρ(ln yA1, ln yA2) = [σ²/(4A1A2σA1σA2)] Σ from k=0 to 3 Σ from l=0 to 3 (−1)^k (−1)^l (D1k D2l)² γ(D1k, D2l)  (4.29)
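A direct transcription of Eq. (4.29), reusing the gamma routine from the previous sketch, is given below. The characteristic distances D1k and D2l implied by the layout of the two cells (Fig. 4.17) are taken as inputs rather than computed, since their construction follows Vanmarcke (1983) and is not repeated in this chapter; terms involving a zero distance are dropped because D²γ(D) tends to zero as D tends to zero.

```python
def cell_average_correlation(D1k, D2l, sigma2, sig_a1, sig_a2, area1, area2, b):
    """Correlation between the local averages over two cells, Eq. (4.29).

    D1k, D2l : length-4 sequences of the characteristic distances of Fig. 4.17
    sigma2   : point variance; sig_a1, sig_a2 : standard deviations of the two cell averages
    area1, area2 : areas of the two cells; b : range parameter of Eq. (4.28)
    """
    total = 0.0
    for k in range(4):
        for l in range(4):
            if D1k[k] == 0.0 or D2l[l] == 0.0:
                continue  # D^2 * gamma(D) -> 0 as D -> 0
            total += ((-1) ** k * (-1) ** l * (D1k[k] * D2l[l]) ** 2
                      * gamma(D1k[k], D2l[l], b))   # gamma() as defined in the previous sketch
    return sigma2 * total / (4.0 * area1 * area2 * sig_a1 * sig_a2)
```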
The correlations that are generated using this approach are shown in Fig. 4.18 both
in terms of the correlation against separation distance of the cell centroids and in
terms of the correlation against the separation measured in numbers of cells.
Figure 4.18 shows that the correlation values can be significantly higher than the corresponding point-estimate values (which lie close to the case for the smallest cell dimension shown).
Fig. 4.18 Example correlations computed using Eq. (4.29) for square cells of differing dimension
4.6 Conclusions
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
Campbell KW, Bozorgnia Y (2008) NGA ground motion model for the geometric mean horizontal
component of PGA, PGV, PGD and 5 %-damped linear-elastic response spectra for periods
ranging from 0.01 to 10.0 s. Earthq Spectra 24:139–171
Campbell KW, Bozorgnia Y (2014) NGA-West2 ground motion model for the average horizontal
components of PGA, PGV, and 5%-damped linear acceleration response spectra. Earthq
Spectra. http://dx.doi.org/10.1193/062913EQS175M
Chiou BSJ, Youngs RR (2008) An NGA model for the average horizontal component of peak
ground motion and response spectra. Earthq Spectra 24:173–215
Chiou BSJ, Youngs RR (2014) Update of the Chiou and Youngs NGA model for the average
horizontal component of peak ground motion and response spectra. Earthq Spectra. http://dx.
doi.org/10.1193/072813EQS219M
Der Kiureghian A, Ditlevsen O (2009) Aleatory or epistemic? Does it matter? Struct Saf
31:105–112
Elms DG (2004) Structural safety – issues and progress. Prog Struct Eng Mat 6:116–126
Jayaram N, Baker JW (2009) Correlation model for spatially distributed ground-motion intensi-
ties. Earthq Eng Struct D 38:1687–1708
Li H, Meng G (2007) Nonlinear dynamics of a SDOF oscillator with Bouc-Wen hysteresis. Chaos
Soliton Fract 34:337–343
Stafford PJ (2012) Evaluation of the structural performance in the immediate aftermath of an
earthquake: a case study of the 2011 Christchurch earthquake. Int J Forensic Eng 1(1):58–77
Vanmarcke E (1983) Random fields, analysis and synthesis. The MIT Press, Cambridge, MA
Chapter 5
Seismic Code Developments for Steel
and Composite Structures
Ahmed Y. Elghazouli
5.1 Introduction
use and importance, although it could also be applied for higher seismicity areas if
vibration reduction or isolation devices are incorporated. Otherwise, the code aims
to achieve economical design by employing dissipative behaviour which, apart
from for special irregular or complex structures, is usually performed by assigning a
structural behaviour factor to reduce the code-specified forces resulting from
idealised elastic response spectra. This is carried out in conjunction with the
capacity design concept which requires an appropriate determination of the capac-
ity of the structure based on a pre-defined plastic mechanism, coupled with the
provision of sufficient ductility in plastic zones and adequate over-strength factors
for other regions.
This paper examines the dissipative seismic design provisions for steel and
composite framed structures, which are mainly covered in Part 1 (general rules,
seismic actions and rules for buildings) of Eurocode 8 (2005). General provisions in
other sections of EC8 Part 1 are also referred to where relevant. Additionally, where
pertinent, reference is made to US procedures for the seismic design of steel and
composite structures (ASCE7 2010; AISC341 2010). The assessment focuses on
the behaviour factors, ductility considerations, capacity design rules and connection
design requirements stipulated in EC8. Particular issues that warrant clarification or
further developments are highlighted and discussed.
EC8 focuses essentially on three main structural steel frame systems, namely
moment resisting, concentrically braced and eccentrically braced frames. Other
systems such as hybrid and dual configurations are referred to in EC8, but limited
information is provided. It should also be noted that additional configurations such
as those incorporating buckling restrained braces, truss moment frames or special
plate shear walls, which are covered in recent US provisions, are not directly
addressed in the current version of EC8.
The behaviour factors are typically recommended by codes of practice based on
background research involving extensive analytical and experimental investiga-
tions. The reference behaviour factors (q) stipulated in EC8 for steel-framed
structures are summarised in Table 5.1. These are upper values of q allowed for
each system, provided that regularity criteria and capacity design requirements are
met. For each system, the dissipative zones are specified in the code (e.g. beam
ends, diagonals, link zones in moment, concentrically braced and eccentrically
braced frames, respectively). The multiplier αu/α1 depends on the failure/first
plasticity resistance ratio of the structure, and can be obtained from push-over
analysis (but should not exceed 1.6). Alternatively, default code values can be used
to determine q (as given in parentheses in Table 5.1).
The same upper limits of the reference behaviour factors specified in EC8 for
steel framed structures are also employed for composite structures. This applies to
composite moment resisting frames, composite concentrically braced frames and
composite eccentrically braced frames. However, a number of additional composite
structural systems are also specified, namely: steel or composite frames with
connected infill concrete panels, reinforced concrete walls with embedded vertical
steel members acting as boundary/edge elements, steel or composite coupling beams
in conjunction with reinforced concrete or composite steel/concrete walls, and
composite steel plate shear walls. These additional systems are beyond the scope
of the discussions in this paper which focuses on typical frame configurations.
EC8 explicitly stipulates three ductility classes, namely DCL, DCM and DCH
referring to low, medium and high dissipative structural behaviour, respectively.
For DCL, global elastic analysis can be adopted alongside non-seismic detailing.
The recommended reference ‘q’ factor for DCL is 1.5–2.0. In contrast, structures in
DCM and DCH need to satisfy specific requirements primarily related to ensuring
sufficient ductility in the main dissipative zones. The application of a behaviour
factor larger than 1.5–2.0 must be coupled with sufficient local ductility within the
critical dissipative zones. For buildings which are not seismically isolated or
incorporating effective dissipation devices, design to DCL is only recommended
for low seismicity areas. It should be noted however that this recommendation can
create difficulties in practice (ECCS 2013), particularly for special or complex
structures. Although suggesting the use of DCM or DCH for moderate and high
seismicity often offers an efficient approach to providing ductility reserve against
uncertainties in seismic action, achieving a similar level of reliability could be
envisaged through the provision of appropriate levels of over-strength, possibly
coupled with simple inherent ductility provisions where necessary.
EC8 refers to three general design concepts for composite steel/concrete structures:
(i) Concept a: low-dissipative structural behaviour – which refers to DCL in the
same manner as in steel structures; (ii) Concept b: dissipative structural behaviour
with composite dissipative zones for which DCM and DCH design can be adopted
with additional rules to satisfy ductility and capacity design requirements; and (iii) Concept c: dissipative structural behaviour with steel dissipative zones, and therefore specific measures are stipulated to prevent the contribution of concrete under seismic
conditions; in this case, critical zones are designed as steel, although other ‘non-
seismic’ design situations may consider composite action to Eurocode 4 (2004).
For dissipative composite zones (i.e. Concept b), the beneficial presence of the
concrete parts in delaying local buckling of the steel components is accounted for
by relaxing the width-to-thickness ratio as indicated in Table 5.2 which is adapted
from EC8. In the table, partially encased elements refer to sections in which
concrete is placed between the flanges of I or H sections, whilst fully encased
elements are those in which all the steel section is covered with concrete. The cross-
section limit c/tf refers to the slenderness of the flange outstand of length c and
thickness tf. The limits in hollow rectangular steel sections filled with concrete are
represented in terms of h/t, which is the ratio between the maximum external
dimension h and the tube thickness t. Similarly, for filled circular sections, d/t is
the ratio between the external diameter d and the tube thickness t. As in the case of
steel sections, notable differences also exist between the limits in EC8 for compos-
ite sections when compared with equivalent US provisions. Also, it should be noted
that the limits in Table 5.2 for partially encased sections (Elghazouli and Treadway
2008) may be relaxed even further if special additional details are provided to delay
or inhibit local buckling as indicated in Fig. 5.2 (Elghazouli 2009).
For beams connected to slabs, a number of requirements are stipulated in EC8 in
order to ensure satisfactory performance as dissipative composite elements (i.e. for
Concept b). These requirements comprise several criteria including those related to
the degree of shear connection, ductility of the cross-section and effective width
assumed for the slab. As in other codes, EC8 aims to ensure ductile behaviour in
composite sections by limiting the maximum compressive strain that can be
imposed on concrete in the sagging moment regions of the dissipative zones. This
Fig. 5.2 Partially encased composite sections: (a) conventional, (b) with welded bars
is achieved by limiting the maximum ratio of x/d, as shown in Fig. 5.3. Limiting
ratios are provided as a function of the ductility class (DCM or DCH) and yield
strength of steel ( fy). Close observation suggests that these limits are derived based
on assumed values for εcu2 of 0.25 % and εa of q εy, where εy is the yield strain of
steel.
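If one accepts those assumed strains, the limiting ratio follows directly from a linear strain distribution over the section depth, x/d = εcu2/(εcu2 + εa) with εa = q·fy/Es. The short check below, with an assumed steel modulus of 210 GPa, recovers values of the order of those tabulated in EC8 (the code tables should, of course, be used in design).

```python
def xd_limit(q, fy_mpa, eps_cu2=0.0025, e_steel_mpa=210000.0):
    """Limiting x/d from strain compatibility, assuming eps_a = q * eps_y and eps_cu2 = 0.25 %."""
    eps_a = q * fy_mpa / e_steel_mpa     # steel strain demand, eps_a = q * eps_y
    return eps_cu2 / (eps_cu2 + eps_a)

for q, fy in [(1.5, 355), (4.0, 235), (4.0, 355)]:
    print(f"q = {q}, fy = {fy} MPa -> x/d <= {xd_limit(q, fy):.2f}")
```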
For dissipative zones of composite beams within moment frames, EC8 requires
the inclusion of ‘seismic bars’ in the slab at the beam-to-column connection region.
The objective is to incorporate ductile reinforcement detailing to ensure favourable
dissipative behaviour in the composite beams. The detailed rules are given in
Annex C of Part 1 and include reference to possible mechanisms of force transfer
in the beam-to-column connection region of the slab. The provisions are largely
based on background European research involving detailed analytical and experi-
mental studies (Plumier et al. 1998). It should be noted that Annex C of the code
only applies to frames with rigid connections in which the plastic hinges form in the
beams; the provisions in the annex are not intended, and have not been validated,
for cases with partial strength beam-to-column connections.
Another important consideration related to composite beams is the extent of the
effective width beff assumed for the slab, as indicated also in Fig. 5.3. EC8 includes
two tables for determining the effective width. These values are based on the
condition that the slab reinforcement is detailed according to the provisions of
Annex C since the same background studies (Plumier et al. 1998; Doneux and
Plumier 1999) were used for this purpose. The first table gives values for negative
(hogging) and positive (sagging) moments for use in establishing the second
moment of area for elastic analysis. These values vary from zero to 10 % of the
beam span depending on the location (interior or exterior column), the direction of
moment (negative or positive) and existence of transverse beams (present or not
present). On the other hand, the second table in the code provides values for use in
the evaluation of the plastic moment resistance. The values in this case are as high
as twice those suggested for elastic analysis. They vary from zero to 20 % of the
beam span depending on the location (interior or exterior column), the sign of
moment (negative or positive), existence of transverse beams (present or not
present), condition of seismic reinforcement, and in some cases on the width and
depth of the column cross-section. Clearly, design cases other than the seismic
situation would require the adoption of the effective width values stipulated in EC4.
Therefore, the designer may be faced with a number of values to consider for
various scenarios. Nevertheless, since the sensitivity of the results to these varia-
tions may not be significant (depending on the design check at hand), some
pragmatism in using these provisions appears to be warranted. Detailed research
studies (Castro et al. 2007) indicate that the effective width is mostly related to the
full slab width, although it also depends on a number of other parameters such as the
slab thickness, beam span and boundary conditions.
As in other seismic codes, EC8 aims to satisfy the ‘weak beam/strong column’
concept in moment frames, with plastic hinges allowed at the base of the frame, at
the top floor of multi-storey frames and for single-storey frames. To obtain ductile
plastic hinges in the beams, checks are made that the full plastic moment resistance
and rotation are not reduced by coexisting compression and shear forces. To satisfy
capacity design, columns should be verified for the most unfavourable combination
of bending moments MEd and axial forces NEd (obtained from MEd = MEd,G + 1.1γovΩMEd,E, and similarly for axial loads), where Ω is the minimum over-strength in the connected beams (Ωi = Mpl,Rd/MEd,i). The parameters MEd,G and
MEd,E are the bending moments in the seismic design situation due to the gravity
loads and lateral earthquake forces, respectively, as shown in Fig. 5.4 (Elghazouli
2009).
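As a simple illustration of how these quantities combine, the sketch below evaluates the column design moment for a set of beams framing into a joint. The numerical values of γov, of the beam plastic resistances and of the analysis moments are placeholders rather than values taken from EC8 or from a particular design.

```python
def column_design_moment(m_ed_g, m_ed_e, m_pl_rd_beams, m_ed_beams, gamma_ov=1.25):
    """Column design moment M_Ed = M_Ed,G + 1.1 * gamma_ov * Omega * M_Ed,E for moment frames."""
    # Omega is the minimum beam over-strength Omega_i = M_pl,Rd,i / M_Ed,i over the connected beams
    omega = min(m_pl / m_ed for m_pl, m_ed in zip(m_pl_rd_beams, m_ed_beams))
    return m_ed_g + 1.1 * gamma_ov * omega * m_ed_e

# hypothetical gravity and seismic analysis moments (kNm) and two connected beams
print(column_design_moment(m_ed_g=80.0, m_ed_e=220.0,
                           m_pl_rd_beams=[410.0, 395.0], m_ed_beams=[300.0, 310.0]))
```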
The beam over-strength parameter (Ω = Mpl,Rd/MEd) as adopted in EC8 involves
a major approximation as it does not account accurately for the influence of gravity
loads on the behaviour (Elghazouli 2010). This issue becomes particularly pro-
nounced in gravity-dominated frames (i.e. with large beam spans) or in low-rise
configurations (since the initial column sizes are relatively small), in which the
Fig. 5.4 Moment action under gravity and lateral components in the seismic situation
Fig. 5.5 Axial action under gravity and lateral components in the seismic situation
need for detailed considerations in the slab, including those related to seismic
rebars, effective width and ductility criteria associated with composite dissipative
sections. This consideration also implies that the connections would be designed on
the plastic capacity of the steel beams only. Additionally, the columns need to be
capacity designed for the plastic resistance of steel instead of composite beam
sections, which avoids over-sizing of the column members.
Whilst for moment frames, the dissipative zones may be steel or composite, the
dissipative zones in braced frames are typically only allowed to be in steel
according to EC8. In other words, the diagonal braces in concentrically braced
frames, and the bending/shear links in eccentrically braced frames, should typically
be designed and detailed such that they behave as steel dissipative zones. This
limitation is adopted in the code as a consequence of the uncertainty associated with
determining the actual capacity and ductility properties of composite steel/concrete
elements in these configurations. As a result, the design of composite braced frames
follows very closely those specified for steel, an issue which merits further assess-
ment and development.
Capacity design of concentrically braced frames in EC8 is based on ensuring
yielding of the diagonals before yielding or buckling of the beams or columns and
before failure of the connections. Due to buckling of the compression braces,
tension braces are considered to be the main ductile members, except in V and
inverted-V configurations. According to EC8, columns and beams should be capac-
ity designed for the seismic combination actions. The design resistance of the beam
or column under consideration, NRd(MEd), should satisfy NRd(MEd) ≥ NEd,G + 1.1γovΩNEd,E, with due account of the interaction with the bending moment MEd, where NEd,G and NEd,E are the axial loads due to gravity and lateral actions,
respectively, in the seismic design situation, as illustrated in Fig. 5.5 (Elghazouli
2009); Ω is the minimum value of axial brace over-strength over all the diagonals of
the frame and γov is the material over-strength. However, Ω of each diagonal should
not differ from the minimum value by more than 25 % in order to ensure reasonable
distribution of ductility. It is worth noting that unlike in moment frames, gravity
loading does not normally have an influence on the accuracy of Ω. It should also be
noted that the 25 % limit can result in difficulties in practical design; it can be
shown (Elghazouli 2010) that this limit can be relaxed or even removed if measures
related to column continuity and stiffness are incorporated in design.
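The corresponding brace-frame check can be sketched in the same manner, with the 25 % homogeneity requirement applied to the individual Ωi values; the resistances and design forces used here are again placeholder numbers.

```python
def brace_overstrength(n_pl_rd, n_ed, tolerance=0.25):
    """Minimum brace over-strength Omega and a check that each Omega_i is within 25 % of the minimum."""
    omegas = [n_pl / n for n_pl, n in zip(n_pl_rd, n_ed)]
    omega_min = min(omegas)
    homogeneous = all(om <= (1.0 + tolerance) * omega_min for om in omegas)
    return omega_min, homogeneous

# hypothetical brace tension resistances and corresponding design forces (kN)
omega, ok = brace_overstrength([620.0, 650.0, 700.0, 840.0], [450.0, 455.0, 430.0, 520.0])
print(f"Omega = {omega:.2f}, all diagonals within 25 % of the minimum: {ok}")
```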
As mentioned previously, US provisions (AISC341 2010) for braced frames
differ from those in EC8 in terms of the R factors recommended as well as cross-
section limits for some section types. However, the most significant difference is
related to the treatment of the brace buckling in compression which may lead to
notably dissimilar seismic behaviour depending mainly on the slenderness of the
braces. This has been examined in detail in recent studies (Elghazouli 2010), and
has significant implications on the frame over-strength as well as on the applied
forces and ductility demand imposed on various frame components.
As expected, in the design of the diagonal members in concentrically braced
frames, the non-dimensional slenderness λ used in EC3 plays an important role in
the behaviour (Elghazouli 2003). In earlier versions of EC8, an upper limit of 1.5
was proposed to prevent elastic buckling. However, further modifications have
been made in subsequent versions of EC8 and the upper limit has been revised to
a value of 2.0 which results in a more efficient design. On the other hand, in frames
with X-diagonal braces, EC8 stipulates that λ should be between 1.3 and 2.0. The
lower limit is specified in order to avoid overloading columns in the pre-buckling
stage of diagonals. Satisfying this lower limit can however result in significant
difficulties in practical design (Elghazouli 2009). It would be more practical to
avoid placing such limits, yet ensure that forces applied on components other than
the braces are based on equilibrium at the joints, with due account of the relevant
actions in compression. Figure 5.6 illustrates, for example, the compression force
F (normalised by Npl sinϕ) developing in a column of X and decoupled brace
configurations (Elghazouli 2010), where Npl is the axial plastic capacity of the brace
cross-section and ϕ is the brace angle. These actions can be based on the initial
buckling resistance (Nb) or the post-buckling reserve capacity (Npb) depending on
the frame configuration and design situation. Based on available experimental
results (Goggins et al. 2005; Elghazouli et al. 2005), a realistic prediction of Npb
can be proposed (Elghazouli 2010) accounting for brace slenderness as well as
expected levels of ductility.
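For reference, the non-dimensional slenderness used in these checks follows the EC3 definition λ̄ = √(A·fy/Ncr); the sketch below evaluates it for a pin-ended brace with placeholder section properties and tests the 1.3–2.0 window that EC8 imposes on X-braced configurations.

```python
import math

def brace_slenderness(area_mm2, fy_mpa, length_mm, radius_gyration_mm, e_mpa=210000.0, k=1.0):
    """Non-dimensional slenderness lambda-bar = sqrt(A * fy / N_cr) for a pin-ended brace."""
    n_cr = math.pi ** 2 * e_mpa * area_mm2 * radius_gyration_mm ** 2 / (k * length_mm) ** 2  # Euler load (N)
    return math.sqrt(area_mm2 * fy_mpa / n_cr)

lam = brace_slenderness(area_mm2=3000.0, fy_mpa=355.0, length_mm=6000.0, radius_gyration_mm=60.0)
print(f"lambda-bar = {lam:.2f}; within the EC8 window for X-bracing: {1.3 <= lam <= 2.0}")
```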
Fig. 5.7 Frame over-strength as a function of the elastic spectral acceleration (Se/g) for behaviour factors q = 4, 6 and 8 and for relaxed and stringent drift limits; for q ≤ 3 the actual strength is larger than Ve (i.e. at q = 1) and the over-strength is mainly governed by material and redistribution considerations (i.e. 1.1γovαu/α1)
It can be shown that, in comparison with North American and other international
provisions, drift-related requirements in EC8 are significantly more stringent
(Elghazouli 2010). This is particularly pronounced in case of the stability coeffi-
cient θ, which is a criterion that warrants further detailed consideration. As a
consequence of the stringent drift and stability requirements and the relative sensitivity
of framed structures, particularly moment frames, to these effects, they can often
govern the design leading to considerable over-strength, especially if a large
behaviour factor is assumed. This over-strength (represented as the ratio of the
actual base shear Vy to the design value Vd) is also a function of the normalised
elastic spectral acceleration (Sa/g) and gravity design, as illustrated in Fig. 5.7
(Elghazouli 2010).
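The stability coefficient referred to here is the interstorey check of EC8, θ = Ptot·dr/(Vtot·h), where Ptot and Vtot are the total gravity load and storey shear at and above the storey considered, dr is the design interstorey drift and h the storey height; second-order effects may be neglected for θ ≤ 0.1 and approximately amplified by 1/(1 − θ) for 0.1 < θ ≤ 0.2. The storey values used below are placeholders.

```python
def stability_coefficient(p_tot_kn, d_r_mm, v_tot_kn, h_mm):
    """Interstorey stability coefficient theta = P_tot * d_r / (V_tot * h)."""
    return p_tot_kn * d_r_mm / (v_tot_kn * h_mm)

theta = stability_coefficient(p_tot_kn=7000.0, d_r_mm=60.0, v_tot_kn=900.0, h_mm=3500.0)
amplifier = 1.0 if theta <= 0.1 else 1.0 / (1.0 - theta)   # applicable for 0.1 < theta <= 0.2
print(f"theta = {theta:.3f}, second-order amplification = {amplifier:.2f}")
```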
Whereas the presence of over-strength reduces the ductility demand in dissipa-
tive zones, it also affects forces imposed on frame and foundation elements. A
rational application of capacity design necessitates a realistic assessment of lateral over-strength.
Fig. 5.8 Lateral frame over-strength arising from tension and compression design
As illustrated in Fig. 5.8, the over-strength arising from the compression design is insignificant for stocky members
but increases steadily with the slenderness ratio. As noted previously, it is important
to quantify the level of over-strength in a frame and assess the actual forces
sustained by the braces in compression. Depending on the specific design situation
and frame configuration, it may be necessary to estimate either the maximum or
minimum forces attained in compression members in a more realistic manner as
opposed to the idealised approaches currently adopted in seismic codes.
Steel moment frames have traditionally been designed with rigid full-strength
connections, usually of fully-welded or hybrid welded/bolted configuration. Typi-
cal design provisions ensured that connections are provided with sufficient over-
strength such that dissipative zones occur mainly in the beams. However, the
reliability of commonly-used forms of full-strength beam-to-column connection
has come under question following poor performance in large seismic events,
particularly in Northridge and Kobe earthquakes (SAC 1995). The extent and
repetitive nature of damage observed in several types of welded and hybrid
connections have directed considerable research effort not only to repair methods
for existing structures but also to alternative connection configurations to be
incorporated in new designs.
Observed seismic damage to welded and hybrid connections was attributed to
several factors including defects associated with weld and steel materials, welding
procedures, stress concentration, high rotational demands, scale effects, as well as
the possible influence of strain levels and rates (FEMA 2000). In addition to the
concerted effort dedicated to improving seismic design regulations for new con-
struction, several proposals have been forwarded for the upgrading of existing
connections. As shown schematically in Fig. 5.9 (Elghazouli 2009), this may be
carried out by strengthening of the connection through haunches, cover or side
plates, or other means. Alternatively, it can be achieved by weakening of the beam
by trimming the flanges (i.e. reduced beam section ‘RBS’ or ‘dog-bone’ connec-
tions), perforating the flanges, or by reducing stress concentrations through slots in
beam webs, enlarged access holes, etc. In general, the design can be based on either
prequalified connections or on prototype tests. Prequalified connections have been
proposed in the US (AISC358 2010), and a similar European activity is currently
underway. It should be noted however that most prequalification activities have
been focusing on connections to open section columns, with comparatively less
attention given to connections to tubular columns (Elghazouli and Packer 2014).
Fig. 5.9 Examples of modified moment beam-to-column connection configurations: (a) with
haunches, (b) with cover plates; (c) reduced beam section
state, and (iii) connection deformation is accounted for through nonlinear analysis.
Unlike in AISC, there is no limit given in EC8 on the minimum moment ratio, nor
on the use with different ductility classes. Dissipative connections should satisfy the
rotational demand implied for plastic hinge zones, irrespective of whether the
connections are partial or full strength; these are specified as 25 and 35 mrad for
DCM and DCH, respectively, which are broadly similar to the demands in IMF and
SMF in AISC 341 (total drift of 0.02 and 0.04 rad, for IMF and SMF, respectively).
As discussed previously, EC8 permits three general design concepts for composite
structures (low dissipative behaviour, dissipative composite zones or dissipative
steel zones). On the other hand, AISC refers to specific composite systems as
indicated in Table 5.1 (e.g. C-OMF, C-IMF, C-SMF). In principle, this classifica-
tion applies to systems consisting of composite or reinforced concrete columns and
structural steel, concrete-encased composite or composite beams. The use of PR
connections (C-PRMF) is included, and is applicable to moment frames that consist
of structural steel columns and composite beams that are connected with partially
restrained (PR) moment connections. Similar to PR steel connections, they should
have strengths of at least 0.5Mp but additionally should exhibit a rotation capacity
of at least 0.02 rad. It should be noted that, as mentioned previously, Annex C in
EC8 for the detailing of slabs only applies to frames with rigid connections in which
the plastic hinges form in the beams. However, guidance on the detailing of
composite joints using partial strength connections are addressed in the commen-
tary of AISC 341 for C-PRMF systems.
The use of composite connections can often simplify some of the challenges
associated with traditional steel and concrete construction, such as minimizing field
welding and anchorage requirements. Given the many alternative configurations of
composite structures and connections, there are few standard details for connections
in composite construction. In most composite structures built to date, engineers
have designed connections using basic mechanics, equilibrium models
(e.g. classical beam-column, truss analogy, strut and tie, etc.), existing standards
for steel and concrete construction, test data, and good judgment. As noted above,
however, engineers do face inherent complexities and uncertainties when dealing
with composite dissipative connections, which can often counterbalance the merits
of this type of construction when choosing the structural form. In this context, the
‘total disconnection’ approach permitted in EC8 (i.e. Concept c) offers a practical
alternative in order to use standard or prequalified steel-only beam-to-column
connections. This status can also be achieved using North American codes provided
the potential plastic hinge regions are maintained as pure steel members. A similar
approach has also been recently used in hybrid flat slab-tubular column connections
(Eder et al. 2012), hence enabling the use of flat slabs in conjunction with steel-only
dissipative members.
Fig. 5.10 Gusset plate connection to a bracing member, indicating the gusset plate (thickness t), the fold line and the 2t free length between the end of the brace and the fold line
Issues related to connection performance and design are clearly not only limited to
moment connections, but also extend to other configurations such as connections to
bracing members. Many of the failures reported in concentrically braced frames due
to strong ground motion have been in the connections. In principle, bracing
connections can be designed as rotationally restrained or unrestrained, provided
that they can transfer the axial cyclic tension and compression effectively. The in-
and out-of-plane behaviour of the connection, and their influence on the beam and
column performance, should be carefully considered in all cases. For example,
considering gusset plate connections, as shown in Fig. 5.10 (Elghazouli 2009),
satisfactory performance can be ensured by allowing the gusset plate to develop
plastic rotations. This requires that the free length between the end of the brace and the assumed line of restraint for the gusset be sufficiently long to permit plastic rotations, yet short enough to preclude the occurrence of plate buckling prior
to member buckling. Alternatively, connections with stiffness in two directions,
such as crossed gusset plates, can be detailed. The performance of bracing connec-
tions, such as those involving gusset plate components, has attracted significant
research interest in recent years (e.g. Lehman et al. 2008). Alternative tri-linear and
nonlinear fold-line representations have been proposed and validated. A recent
European research programme has also examined the performance of alternative
This paper highlights various issues related to the seismic design of steel and
composite frames that would benefit from further assessment and code develop-
ment, with particular focus on the provisions of EC8. Since the European seismic
code is in general relatively clear in its implementation of the underlying capacity
design principles as well as the purpose of the parameters adopted within various
procedures, its rules can be readily adapted and modified based on new research
findings and improved understanding of seismic behaviour.
Comparison of EC8 provisions with those in AISC in terms of structural
configurations and associated behaviour factors highlights a number of issues that
are worthy of further development. Several lateral resisting systems that are cur-
rently dealt with in AISC are not incorporated in EC8 including steel-truss moment
frames, steel-plate walls and buckling-restrained braces. It is anticipated that these
will be considered in future revisions of the code. Another notable difference is the
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
AISC (2012) Seismic design manual, 2nd edn. American Institute of Steel Construction Inc.,
AISC, Chicago
AISC 341 (2010) Seismic provisions for structural steel buildings. ANSI/AISC 341–10, American Institute of Steel Construction Inc., AISC, Chicago
AISC 358 (2010) Prequalified connections for special and intermediate steel moment frames for
seismic applications. ANSI/AISC 358–10, American Institute of Steel Construction Inc.,
AISC, Chicago
ASCE7 (2010) ASCE/SEI – ASCE 7–10 – minimum design loads for buildings and other
structures. American Society of Civil Engineers/Structural Engineering Institute, Reston
Broderick BM, Hunt A, Mongabure P, LeMaoult A, Goggins JM, Salawdeh S, O'Reilly G, Beg D, Moze P, Sinur F, Elghazouli AY, Plumier A (2013) Assessment of the seismic response of
concentrically-braced frames. SERIES Concluding Workshop, Earthquake Engineering
Research Infrastructures, European Commissions, JRC-Ispra, Italy
Castro JM, Elghazouli AY, Izzuddin BA (2007) Assessment of effective slab widths in composite
beams. J Constr Steel Res 63(10):1317–1327
Castro JM, Davila-Arbona FJ, Elghazouli AY (2008) Seismic design approaches for panel zones in
steel moment frames. J Earthq Eng 12(S1):34–51
Doneux C, Plumier A (1999) Distribution of stresses in the slab of composite steel-concrete
moment resistant frames submitted to earthquake action. Stahlbau 68(6):438–447
ECCS (2013) Assessment of EC8 provisions for seismic design of steel structures. In: Landolfo R
(ed) European convention for constructional steelwork, Brussels
Eder MA, Vollum RL, Elghazouli AY (2012) Performance of ductile RC flat slab-to-steel column
connections under cyclic loading. Eng Struct 36(1):239–257
Elghazouli AY (2003) Seismic design procedures for concentrically braced frames. Struct Build
156:381–394
Elghazouli AY (ed) (2009) Seismic design of buildings to Eurocode 8. Taylor and Francis/Spon
Press, London
Elghazouli AY (2010) Assessment of European seismic design procedures for steel framed
structures. Bull Earthq Eng 8(1):65–89
Elghazouli AY, Packer JA (2014) Seismic design solutions for connections to tubular members. J
Steel Constr 7(2):73–83
Elghazouli AY, Treadway J (2008) Inelastic behaviour of composite members under combined
bending and axial loading. J Constr Steel Res 64(9):1008–1019
Elghazouli AY, Broderick BM, Goggins J, Mouzakis H, Carydis P, Bouwkamp J, Plumier A
(2005) Shake table testing of tubular steel bracing members. Struct Build 158:229–241
Elghazouli AY, Castro JM, Izzuddin BA (2008) Seismic performance of composite moment
frames. Eng Struct 30(7):1802–1819
Elghazouli AY, Kumar M, Stafford PJ (2014) Prediction and optimisation of seismic drift
demands incorporating strong motion frequency content. Bull Earthq Eng 12(1):255–276
Eurocode 3 (2005) Design of steel structures – Part 1.1: General rules and rules for buildings. EN
1993–1: 2005, European Committee for Standardization, CEN, Brussels
Eurocode 4 (2004) Design of composite steel and concrete structures – Part 1.1: General rules and
rules for buildings. EN 1994–1: 2004, European Committee for Standardization, CEN,
Brussels
Eurocode 8 (2005) Design of structures for earthquake resistance – Part 1: General rules, seismic
actions and rules for buildings. EN 1998–1: 2004, European Committee for Standardization,
Brussels
FEMA (2000) Federal Emergency Management Agency. Recommended seismic design criteria
for new steel moment-frame buildings. Program to reduce earthquake hazards of steel moment-
frame structures, FEMA-350, FEMA, Washington, DC
Goggins JM, Broderick BM, Elghazouli AY, Lucas AS (2005) Experimental cyclic response of
cold-formed hollow steel bracing members. Eng Struct 27(7):977–989
Gray MG, Christopoulos C, Packer JA (2014) Cast steel yielding brace system (YBS) for
concentrically braced frames: concept development and experimental validations. J Struct
Eng (American Society of Civil Engineers) 140(4):pp.04013095
Herion S, de Oliveira JC, Packer JA, Christopoulos C, Gray MG (2010) Castings in tubular
structures – the state of the art. Struct Build (Proceedings of the Institution of Civil Engineers)
163(SB6):403–415
Kumar M, Stafford PJ, Elghazouli AY (2013) Influence of ground motion characteristics on drift
demands in steel moment frames designed to Eurocode 8. Eng Struct 52:502–517
Lehman DE, Roeder CW, Herman D, Johnson S, Kotulka B (2008) Improved seismic performance
of gusset plate connections. J Struct Eng, ASCE 134(6):890–901
Plumier A, Doneux C, Bouwkamp JG, Plumier C (1998) Slab design in connection zones of
composite frames. Proceedings of the 11th ECEE Conference, Paris
SAC (1995) Survey and assessment of damage to buildings affected by the Northridge Earthquake
of January 17, 1994, SAC95-06, SAC Joint Venture, Sacramento
Chapter 6
Seismic Analyses and Design of Foundation
Soil Structure Interaction
Alain Pecker
Abstract This paper illustrates, with reference to a real project, one aspect of soil-structure interaction for a piled foundation. Kinematic interaction is well recognized as a cause of significant internal forces in the piles under seismic loading. Another aspect of kinematic interaction, which is often overlooked, is the modification of the effective foundation input motion. As shown in the paper, such an effect may however be of primary importance.
6.1 Introduction
A. Pecker (*)
Géodynamique et Structure, Bagneux, France
Ecole Nationale des Ponts ParisTech, Champs-sur-Marne, France
e-mail: [email protected]
• The ground profile has an average shear wave velocity smaller than 180 m/s (ground type D) and contains consecutive layers of sharply differing stiffness, defined as layers with a ratio of shear moduli greater than 6.
• The zone is of moderate or high seismicity, i.e. presents a ground surface acceleration larger than 0.1 g, and the category of importance of the structure is higher than normal (importance category III or IV); a simple screening check based on these two criteria is sketched below.
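Purely as an illustration, the following snippet translates the two screening criteria listed above into a yes/no check; the function name and its arguments are invented for this sketch and do not come from EC8 Part 5.

```python
# Hypothetical screening helper reflecting the two criteria listed above for when
# pile bending due to kinematic interaction should be evaluated. Names and
# interface are invented for this sketch, not taken from the standard.

def kinematic_interaction_check_required(vs_avg_mps, max_shear_modulus_ratio,
                                         ag_surface_g, importance_category):
    soft_layered_profile = vs_avg_mps < 180.0 and max_shear_modulus_ratio > 6.0
    important_in_seismic_zone = ag_surface_g > 0.10 and importance_category in ("III", "IV")
    return soft_layered_profile and important_in_seismic_zone

# Example: ground type D with a shear-modulus ratio of 8, 0.25 g surface
# acceleration, importance category III structure -> check is required
print(kinematic_interaction_check_required(150.0, 8.0, 0.25, "III"))  # True
```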
There is another aspect of kinematic interaction often overlooked, even in seismic building codes, which is the modification of the effective foundation input motion. For example, the European seismic code (CEN 2004) does not mention it, nor does the ASCE 41-13 standard (2014), which nevertheless dedicates several pages to the effect of kinematic interaction for shallow or embedded foundations.
This issue might be critical when substructuring is used and the global soil-
structure-interaction problem is solved in several steps. However, when a global
model including both the soil and the superstructure is contemplated, kinematic
interaction is accounted for in the analysis, provided the global model correctly
reflects the physical character of the problem. These aspects are illustrated below on
a real bridge project.
In the global model, piles are represented by beam elements supported by linear or
nonlinear, depth-varying, Winkler springs. In the case of earthquake excitation,
ground motion would impart different loading at each soil spring and these motions
need to be calculated from a separate analysis (site response analysis). Kinematic
interaction is therefore correctly accounted for. However, the main drawback of this
modeling technique is the large number of degrees of freedom needed to formulate
the complete system.
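To illustrate the modelling technique just described, the sketch below assembles a static beam-on-Winkler-springs model of a single pile and loads it by imposing an assumed free-field displacement profile through the springs; the pile properties, spring stiffnesses and displacement profile are invented for illustration and are not data from the project.

```python
"""Minimal static sketch of the beam-on-Winkler-springs pile model described above:
an Euler-Bernoulli pile restrained by depth-varying lateral springs is loaded by
imposing an assumed free-field displacement profile through the springs (a crude
stand-in for the depth-varying motions of a site response analysis)."""
import numpy as np

E, D, L_pile, n_el = 30e9, 1.8, 20.0, 20            # Pa, m, m, number of elements
I = np.pi * D**4 / 64.0                             # second moment of area (m^4)
Le = L_pile / n_el
n_nodes = n_el + 1
ndof = 2 * n_nodes                                  # [deflection, rotation] per node

z = np.linspace(0.0, L_pile, n_nodes)               # depth of each node (m)
k_soil = 5e6 + 2e6 * z                              # assumed spring stiffness (N/m)
u_ff = 0.05 * np.exp(-z / 10.0)                     # assumed free-field displacement (m)

# Euler-Bernoulli beam element stiffness (DOFs: v1, theta1, v2, theta2)
ke = (E * I / Le**3) * np.array([[ 12,    6*Le,    -12,    6*Le   ],
                                 [ 6*Le,  4*Le**2, -6*Le,  2*Le**2],
                                 [-12,   -6*Le,     12,   -6*Le   ],
                                 [ 6*Le,  2*Le**2, -6*Le,  4*Le**2]])

K = np.zeros((ndof, ndof))
for e in range(n_el):                               # assemble beam elements
    dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(dofs, dofs)] += ke
for i in range(n_nodes):                            # add lateral soil springs
    K[2*i, 2*i] += k_soil[i]

# The springs load the pile when the surrounding soil moves: F = k * u_free_field
F = np.zeros(ndof)
F[0::2] = k_soil * u_ff

u = np.linalg.solve(K, F)
print("pile head deflection : %.4f m" % u[0])
print("free-field at surface: %.4f m" % u_ff[0])
```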
[Figure: beam-on-Winkler-springs model of a pile foundation, with the superstructure and pile cap supported on a pile represented by beam elements and depth-varying horizontal (kh1 ... khn) and vertical (kv1 ... kvn) springs driven by the free-field horizontal and vertical motions 1 ... n]
The p-y relation, representing the nonlinear spring stiffness, is generally devel-
oped on the basis of a semi-empirical curve, which reflects the nonlinear resistance
of the local soil surrounding the pile at specified depths. A number of p-y models
have been proposed by different authors for different soil conditions. The two most
commonly used p-y models are those proposed by Matlock (1970) for soft clay
and by Reese et al. (1974) for sand. These models are essentially semi-empirical
and have been developed on the basis of a limited number of full-scale lateral load
tests on piles of small diameters ranging from 0.30 to 0.40 m. To extrapolate the
p-y criteria to conditions different from those from which the p-y models
were developed requires some judgment and consideration. For instance in Slove-
nia, values of the spring stiffnesses are derived from the static values, increased by
30 %. Based on some field test results, there are indications that stiffness and
ultimate lateral load carrying capacity of a large diameter drilled shaft are larger
than the values estimated using the conventional p-y criteria. Pender (1993) sug-
gests that the subgrade modulus used in p-y formulation would increase linearly
with pile diameter.
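As an illustration of how such a semi-empirical p-y relation is evaluated, the sketch below implements the commonly quoted form of the Matlock (1970) static criterion for soft clay; it is intended as a teaching aid under the stated assumptions, not as a design tool.

```python
"""Sketch of a static p-y curve of the Matlock (1970) soft-clay type referenced above,
in its commonly quoted form; treat it as an illustration, not a design tool."""
import numpy as np

def matlock_soft_clay_py(y, z, D, cu, gamma_eff, eps50=0.02, J=0.5):
    """Lateral soil resistance p (force per unit length) at depth z for deflection y.
    y, z, D in m; cu (undrained strength, kPa) and gamma_eff (effective unit
    weight, kN/m^3) in consistent units, giving p in kN/m."""
    Np = min(3.0 + gamma_eff * z / cu + J * z / D, 9.0)        # bearing factor, capped at 9
    pu = Np * cu * D                                           # ultimate resistance
    y50 = 2.5 * eps50 * D                                      # deflection at half of pu
    p = 0.5 * pu * np.cbrt(np.asarray(y, dtype=float) / y50)   # p/pu = 0.5 (y/y50)^(1/3)
    return np.minimum(p, pu)                                   # plateau at pu for y >= 8 y50

# Example: 1.8 m diameter pile in soft clay (cu = 40 kPa) at 5 m depth
y = np.linspace(0.0, 0.2, 5)
print(matlock_soft_clay_py(y, z=5.0, D=1.8, cu=40.0, gamma_eff=8.0))
```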
Studies have shown that Matlock and Reese p-y criteria give reasonable pile
design solutions. However, the p-y criteria were originally conceived for design
against storm wave loading conditions based on observation of monotonic static
and cyclic pile load test data. Therefore, Matlock and Reese’s static p-y curves can
serve to represent the initial monotonic loading path for typical small diameter
driven isolated piles. If a complete total system of a bridge is modeled for seismic
response study, individual piles and p-y curves can be included in the analytical
model.
However, for a large pile group, group effects become important. An example is
given in Fig. 6.3, which presents the results of horizontal impedance calculations for the group of piles of half the foundation (22 piles) of one of the pylons of the Vasco da Gama bridge in Lisbon (Pecker 2003); the group efficiency, computed from
elastodynamic theory, is of the order of 1/6 at low frequencies and decreases with
frequency due to the constructive interference of diffracted waves from adjacent
piles. Typically, for large pile groups it is not uncommon to calculate group
efficiency in the range 1/3 to 1/6.
Although group effect has been a popular research topic within the geotechnical
community, currently there is no common consensus on the design approach to
incorporate group effects. Full scale and model tests by a number of authors show
that, in general, the lateral capacity of a pile in a pile group is less than that of a single isolated pile due to the so-called group effect. The reduction is more
pronounced as the pile spacing is reduced. Other important factors that affect the
efficiency and lateral stiffness of the pile are the type and strength of soil, number of
piles, type and level of loading. In the past, analyses of group effects were based mostly on elastic halfspace theory, given the cost of full-scale pile
experiments. In addition to group effect, gapping and potential cyclic degradation
have been considered in the recent studies. It has been shown that a concept based
on p-multiplier applied on the standard static loading p-y curves works reasonably
well to account for pile group and cyclic degradation effects (Brown and Bollman
1996). The p-multiplier is a reduction factor that is applied to the p-term in the p-y
curve for a single pile to simulate the behavior of piles in the group.
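The p-multiplier concept can be stated in a few lines of code; the sketch below scales a single-pile resistance by assumed row-by-row multipliers, which are placeholders rather than recommended values.

```python
# Illustration of the p-multiplier concept described above: the single-pile p-y
# resistance is scaled by a reduction factor to represent group (and cyclic
# degradation) effects. The multiplier values below are placeholders only.

def group_py(p_single, p_multiplier):
    """Soil resistance of a pile in a group, obtained by scaling the single-pile value."""
    return p_multiplier * p_single

# Hypothetical row-by-row multipliers for a closely spaced group (lead to trail rows)
row_multipliers = [0.8, 0.4, 0.3, 0.3]
p_single = 250.0  # kN/m from a single-pile p-y curve at the deflection of interest
print([group_py(p_single, m) for m in row_multipliers])  # [200.0, 100.0, 75.0, 75.0]
```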
Fig. 6.3 Horizontal pile group impedance for the Vasco da Gama bridge (Pecker 2003): real part of the impedance (MN/m) versus frequency (0–1.0 Hz) for the pile group (44 piles, 2.20 m in diameter) and for an isolated pile
A direct (or global) interaction analysis in which both the soil and the structure are
modelled with finite elements is very time demanding and not well suited for
design, especially in 3D. The alternative approach employing a substructure system
in which the foundation element is modeled by a condensed foundation stiffness
matrix and mass matrix along with equivalent forcing function represented by the
kinematic motion, may be more attractive; in addition, it more clearly separates the
role of the geotechnical engineer and of the structural engineer. The substructuring approach is based on a linear superposition principle and is therefore, strictly speaking, applicable to linear soil behavior. In that case, the condensed stiffness matrix may be
obtained either from the beam on Winkler springs model or from continuum
impedance solutions (Gazetas 1991). When nonlinear soil behavior is considered,
the condensed stiffness matrix is generally evaluated by a pushover analysis of the
pile group and linearization at the anticipated displacement amplitude of the
pile head.
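The linearisation step mentioned at the end of the preceding paragraph amounts to a secant-stiffness evaluation of the pile-group pushover curve at the anticipated pile-head displacement, as sketched below with invented data points.

```python
"""Sketch of the linearization mentioned above: the condensed foundation stiffness for
nonlinear soil behaviour is taken as the secant stiffness of the pile-group pushover
curve at the anticipated pile-head displacement. The data points are invented."""
import numpy as np

def secant_stiffness(pushover_disp, pushover_force, target_disp):
    """Secant stiffness (force/displacement) of a pushover curve at target_disp."""
    force_at_target = np.interp(target_disp, pushover_disp, pushover_force)
    return force_at_target / target_disp

# Hypothetical pile-group pushover curve (head displacement in m, base shear in MN)
disp = np.array([0.00, 0.01, 0.02, 0.05, 0.10])
force = np.array([0.00, 8.00, 13.0, 20.0, 24.0])
print("K_secant at 4 cm: %.0f MN/m" % secant_stiffness(disp, force, 0.04))  # ~442 MN/m
```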
Substructuring reduces the problem to more amenable stages and does not necessarily require that the whole solution be repeated if modifications occur in the superstructure. Its mathematical convenience and rigor stem, in linear systems, from the superposition theorem (Kausel and Roesset 1974). This theorem states that the seismic response of the complete system can be computed in two stages (Fig. 6.4):
• Determination of the kinematic interaction motion, involving the response to
base acceleration of a system which differs from the actual system in that the
mass of the superstructure is equal to zero;
• Calculation of the inertial interaction effects, referring to the response of the
complete soil-structure system to forces associated with base accelerations equal
to the accelerations arising from the kinematic interaction.
The second step is further divided into two subtasks:
• computation of the dynamic impedances at the foundation level; the dynamic
impedance of a foundation represents the reaction forces acting under the
foundation when it is directly loaded by harmonic forces;
• analysis of the dynamic response of the superstructure supported on the dynamic impedances and subjected to the kinematic motion, also called effective foundation input motion; a minimal frequency-domain sketch of this subtask is given below.
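As a minimal illustration of this second subtask, the frequency-domain sketch below solves a single-mode structure supported on an assumed, frequency-independent complex impedance and excited by a unit foundation input motion; all numerical values are placeholders.

```python
"""Frequency-domain sketch of the second stage described above: a single-mode
structure supported on a complex foundation impedance and excited by the effective
foundation input motion. The impedance is taken as frequency independent here for
brevity, and all numbers are placeholders chosen only to make the example run."""
import numpy as np

m = 2.0e6                        # structural mass (kg), assumed
k = m * (2*np.pi*1.0)**2         # stiffness for a 1 Hz fixed-base frequency (N/m)
c = 2*0.05*np.sqrt(k*m)          # 5 % viscous damping (N s/m)
Kf = 4.0e9 * (1 + 0.10j)         # foundation impedance (N/m) with hysteretic-type damping

def structure_transfer(omega):
    """|u_s / u_g|: absolute structural displacement per unit foundation input motion."""
    ks = 1j*omega*c + k
    # DOFs: [u_s, u_f]; a massless foundation node is tied to the input motion through Kf
    A = np.array([[-omega**2*m + ks, -ks],
                  [-ks,              ks + Kf]])
    b = np.array([0.0, Kf])      # forcing produced by a unit foundation input motion
    us, uf = np.linalg.solve(A, b)
    return abs(us)

for f_hz in (0.5, 0.9, 1.0, 2.0):
    print(f_hz, "Hz ->", round(structure_transfer(2*np.pi*f_hz), 2))
```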
Although the substructure approach described above is rigorous for the treatment
of linear SSI, its practical implementation is subject to several simplifications:
• full linear behavior of the system is assumed; it is well recognized that this
assumption is a strong one since nonlinearities occur in the soil and at the soil
pile interface. Soil nonlinearities can be partly accounted for, as recommended
In the remainder of the paper we focus on the first step of the substructure analysis described above, illustrated by the responses of two foundations of the same bridge.
Foundation 1 is composed of 18 concrete piles, 1,800 mm in diameter, 20 m
long, penetrating a 2.50 m thick layer of residual soil with a shear wave velocity of 300 m/s, overlying a 10 m thick weathered layer of the rock formation with a shear
wave velocity of 580 m/s; the rock formation is found at 12.50 m below the ground
surface. Site response analyses were carried out with the software SHAKE (equivalent linear viscoelastic model) for seven time histories spectrally matched to the
design spectrum; these time histories were input at an outcrop of the rock formation.
The foundation response was modeled with the software SASSI2010 (Ostadan and Nan 2012); the model includes the 18 piles, a massless pile cap and the soil layers; the
strain compatible properties retrieved from the SHAKE analyses are used for each
soil layer and the input motion is represented by the seven ground surface time
histories computed in the SHAKE analyses. Figure 6.5 compares the freefield ground
surface spectrum to the foundation response spectra calculated at the same elevation.
Note that because of the asymmetric pile layout the motion in the X-direction is
different from the motion in the Y-direction. As expected, since the soil profile is stiff relative to the piles in flexure, the freefield motion and the foundation motions are very close to each other. For that configuration, using the freefield motion for the
effective foundation input motion would not be a source of error.
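The spectral comparison of Fig. 6.5 can, in principle, be reproduced with a standard single-degree-of-freedom response-spectrum routine; the sketch below uses the Newmark average-acceleration method on a synthetic record, since the project time histories are not reproduced here.

```python
"""Sketch of the spectral comparison shown in Fig. 6.5: pseudo-acceleration response
spectra computed from acceleration histories (e.g. freefield versus effective foundation
input motion). A synthetic record is generated here because the project data are not
reproduced; the spectrum routine is a standard Newmark average-acceleration SDOF solver."""
import numpy as np

def pseudo_acceleration_spectrum(acc, dt, periods, damping=0.05):
    """Return Sa(T) (same units as acc) for a ground acceleration history acc."""
    sa = np.zeros(len(periods))
    for i, T in enumerate(periods):
        wn = 2*np.pi / T
        c, k = 2*damping*wn, wn**2            # unit-mass SDOF
        u = v = 0.0
        a = -acc[0]                           # initial relative acceleration
        umax = 0.0
        keff = k + 4.0/dt**2 + 2*c/dt
        for ag in acc[1:]:
            p = -ag + (4*u/dt**2 + 4*v/dt + a) + c*(2*u/dt + v)
            unew = p / keff
            vnew = 2*(unew - u)/dt - v
            anew = 4*(unew - u)/dt**2 - 4*v/dt - a
            u, v, a = unew, vnew, anew
            umax = max(umax, abs(u))
        sa[i] = wn**2 * umax                  # pseudo-acceleration
    return sa

# Synthetic placeholder record (20 s of decaying noise) and a coarse period grid
dt = 0.01
t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(0)
acc = 0.2 * 9.81 * rng.standard_normal(t.size) * np.exp(-t / 8.0)
periods = np.linspace(0.05, 3.0, 30)
sa = pseudo_acceleration_spectrum(acc, dt, periods)
print("peak Sa of the synthetic record: %.2f g" % (sa.max() / 9.81))
```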
Foundation 2 of the same bridge is composed of 35 large diameter concrete piles
(2.5 m), 49 m long, crossing a very soft mud layer, 11 m thick, with a shear wave
velocity of the order of 100 m/s; the piles go through a residual soil (VS = 250–400 m/s) and reach the competent rock formation at 25 m depth (Fig. 6.6). Freefield and foundation response spectra are compared in Fig. 6.7. The free-field ground
response spectrum determined from a site specific response analysis has a smooth
shape; the kinematic interaction motion, i.e. the motion of the piled foundation,
[Fig. 6.5: pseudo-acceleration response spectra (g) versus period (s) at Foundation 1, comparing the freefield horizontal motion with the foundation motions in the X- and Y-directions]
[Fig. 6.6: shear wave velocity profile (0–1,000 m/s) versus depth below ground surface (0–30 m) at Foundation 2, showing very soft clay, residual soil, weathered rock and rock layers]
[Fig. 6.7: pseudo-acceleration response spectra (g) versus period (s) at Foundation 2, comparing the freefield horizontal motion with the foundation motions in the X- and Y-directions]
exhibits a marked peak at 0.5 s and a significant deamplification with respect to the
free-field motion between 0.8 and 3.0 s. This phenomenon is due to the inability of the piled foundation to follow the ground motion because of the stiffness of the piles.
Obviously, in that case, using the freefield motion for the foundation input
motion would be strongly misleading and may produce an unconservative design.
These two examples, drawn from a real project, clearly illustrate the need for a careful examination of the stiffness of the foundation relative to the soil profile before deciding whether the freefield motion is likely to be modified by the foundation. When faced with the latter situation, it is mandatory to correctly evaluate
the effective foundation input motion to obtain meaningful results.
6.4 Conclusions
attractive, the method is often used with approximations in its implementation and
the designer must be fully aware of those shortcuts. In this paper, one such
approximation, which consists in taking the freefield motion for the effective
foundation input motion, has been illustrated on a real project. It has been shown
that significant differences may take place between both motions when the piled
foundation cannot be considered flexible with respect to the soil profile. If this
situation is faced, rigorous treatment of soil-structure interaction requires that the
effective foundation input motion be calculated, an additional step in the design.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution
Noncommercial License, which permits any noncommercial use, distribution, and reproduction in
any medium, provided the original author(s) and source are credited.
References
ASCE/SEI 41–13 (2014) Chapter 8: Foundations and geologic site hazards. In: Seismic evaluation
and retrofit of existing buildings, vol 52. American Society of Civil Engineers, Reston, pp 1–8
Brown DA, Bollman HT (1996) Lateral load behavior of pile group in sand. J Geotech Eng ASCE
114(11):1261–1276
CEN (2004) European Standard EN 1998-5: 2004 Eurocode 8: Design of structures for earthquake
resistance. Part 5: Foundations, retaining structures, geotechnical aspects. Comité Europeen de
Normalisation, Brussels
Gazetas G (1991) Foundation vibrations. In: Fang HY (ed) Foundation engineering handbook, 2nd
edn. Van Nostrand Rheinhold, New York
Idriss IM, Sun JI (1992) SHAKE 91: a computer program for conducting equivalent linear seismic
response analyses of horizontally layered soil deposits. Program modified based on the original
SHAKE program published in December 1972 by Schnabel, Lysmer and Seed, Center of
Geotechnical Modeling, Department of Civil Engineering, University of California, Davis
Kausel E, Roesset JM (1974) Soil structure interaction for nuclear containment structures. Pro-
ceedings ASCE, power division specialty conference, Boulder
Kavvadas M, Gazetas G (1993) Kinematic seismic response and bending of free head piles in
layered soils. Geotechnique 43(2):207–222
Lam PI, Law H (2000) Soil structure interaction of bridges for seismic analysis. Technical report
MCEER-00-008
Matlock H (1970) Correlation for design of laterally loaded piles in soft clay. 2nd Annual Offshore
Technology Conference. Paper No 1204
Ostadan F, Nan D (2012) SASSI 2010 – a system for analysis of soil-structure interaction.
Geotechnical Engineering Division, Civil Engineering Department, University of California,
Berkeley
Pecker A (2003) Aseismic foundation design process – lessons learned from two major projects:
the Vasco da Gama and the Rion-Antirion bridges. Proceedings 5th ACI international confer-
ence on seismic bridge design and retrofit for earthquake resistance, La Jolla
Pender MJ (1993) Aseismic pile foundation design and analysis. Bull N Z Soc Earthq Eng 26
(1):49–160
Reese L, Cox W, Koop R (1974) Analysis of laterally loaded piles in sand. 6th Annual Offshore
Technology Conference. Paper No. 2080
Chapter 7
Performance-Based Seismic Design
and Assessment of Bridges
Andreas J. Kappos
Abstract Current trends in the seismic design and assessment of bridges are
discussed, with emphasis on two procedures that merit particular attention: displacement-based procedures and deformation-based procedures. The available
performance-based methods for bridges are critically reviewed and a number of
critical issues are identified, which arise in all procedures. Then two recently pro-
posed methods are presented in some detail, one based on the direct displacement-
based design approach, using equivalent elastic analysis and properly reduced dis-
placement spectra, and one based on the deformation-based approach, which involves
a type of partially inelastic response-history analysis for a set of ground motions and
wherein pier ductility is included as a design parameter, along with displacement
criteria. The current trends in seismic assessment of bridges are then summarised and
the more rigorous assessment procedure, i.e. nonlinear dynamic response-history
analysis, is used to assess the performance of bridges designed to the previously
described procedures. Finally some comments are offered on the feasibility of
including such methods in the new generation of bridge codes.
7.1 Introduction