Probabilistic Model Code (2001)
This document is a first attempt to put together in a consistent way some - certainly not all - of the
rules, regulations, and explanations that are necessary for the design of new structures, or the
assessment of existing ones, from a probabilistic point of view. The document is, of course, also useful
for background calculations of non-probabilistic codes.
From a probabilistic point of view designing new structures, or accepting existing ones as sufficiently
safe, is the result of a decision-making process guided by some optimality criteria. This process links,
in a logical and consistent way, the requirements and expectations of the client or owner of a structure,
the loads and actions to be expected, the characteristics of materials to be used or found in the
proposed or existing structure, the calculation models, the grades of workmanship expected or
observed on the site, the behaviour of the users, and, finally, in an ideal case, the perceptions of
society with respect to environmental impact and sustainable development.
The aim of this document is threefold: First, it is the attempt of a number of people interested in such
an approach to see whether, at this point in time, the main problems in the development of such a
document can be mastered. Second, it is intended to put a text into the hands of structural engineers
who are willing now to apply new approaches in their work. Third, the Joint Committee on Structural
Safety (JCSS) is convinced that such a document will spur the development of a Probabilistic Code
covering all aspects of Structural Engineering.
There are people who advocate staying with traditional non-probabilistic codes, claiming that the data
are not sufficient for full probabilistic methods. There is much truth in the statement that data are often
scarce. But this holds for both approaches: the scarcity of data affects a non-probabilistic code just as
much, and what remains is in essence probabilistic. Important in this respect is the meaning of the word
“probability”. In this document a “probability” is not necessarily considered as a “relative frequency
that can be observed in reality”. Such a straightforward interpretation is possible for dice and card
games, but not for structural design, where uncertainties must be modelled by complicated probabilistic
models and interact in a complex way. Here, probabilities are understood in the Bayesian way,
expressing degrees of belief in relation to the various uncertainties and suited to decision-making
processes. At best, probabilities can be interpreted as “best estimates” of the relative frequencies,
sometimes erring on the one side, sometimes on the other, the degree of deviation from reality
being a direct function of the state of knowledge. More discussion on this topic can be found in
Annex D of Part 1, Basis of Design.
The present version of this JCSS Probabilistic Model Code is available on the Internet at
[Link]. It is intended that the document will be adapted and extended a number of times in
the years to come. To obtain the best possible improvements as efficiently as possible, all users are
invited to send their questions, comments and suggestions to [Link]. The JCSS hopes that this
document - the most recent of its pre-codification work since its creation in 1972 - will find its way
into the practical work of structural engineers.
March 2001
Joint Committee on Structural Safety
12th draft
JCSS-OSTL/DIA/VROU -10-11-2000
Contents
1. Introduction
2. Requirements
6. Reliability
7.2.1. Ultimate Limit States
7.2.2. Serviceability Limit State
8.4. Recommendation
10.6. Figures
11. Annex D: Bayesian Interpretation of Probabilities
11.2. Discussion
1. Introduction
This part treats the general principles for a probabilistic design of load bearing structures. The
more detailed aspects dealing with the probabilistic description of loads are treated in part 2.
In the same way the probabilistic description of structural resistance parameters is treated in
part 3.
This part does not give detailed information about methods for the calculation of probabilities.
It is assumed that the user of a probabilistic code is familiar with such methods. A clause on
the interpretation of probabilities treated in this document is provided in Annex D.
2. Requirements
In particular, structures shall, with appropriate levels of reliability, fulfil the following
requirements:
- They shall remain fit for the use for which they are required (serviceability limit state
requirement)
- They shall withstand extreme and/or frequently repeated actions occurring during their
construction and anticipated use (ultimate limit state requirement)
- They shall not be damaged by accidental events like fire, explosions, impact or
consequences of human errors, to an extent disproportionate to the triggering event
(robustness requirement, see Annex A).
The choice of the various levels of reliability should take into account the possible
consequences of failure in terms of risk to life or injury, the potential economic losses and the
degree of social inconvenience, as described in chapter 8. It should also take into account the
amount of expense and effort required to reduce the risk of failure. It is further noted that the
term "failure" as used in this document refers to either inadequate strength or inadequate
serviceability of the structure.
The consequences of a failure generally depend on the mode of failure, especially in those
cases when the risk to human life or injury exists.
In order to provide a structure corresponding to the requirements and to the assumptions made
in the design, appropriate quality measures shall be adopted. These measures comprise
definition of reliability requirements, organisational measures and controls at the stages of
design, execution and use, and the maintenance of the structure.
The durability requirement may be met in one of the following ways, or a combination thereof:
a) By using materials that, if well maintained, will not degenerate during the design
working life.
b) By giving such dimensions that deterioration during the design working life is
compensated.
c) By choosing a shorter lifetime for structural elements, which may be replaced one or
more times during the design working life.
d) By inspection at fixed or condition dependent intervals and appropriate maintenance
activities.
In all cases the reliability requirements for long and short term periods should be met.
Analysis aspects on durability are described in Annex B.
The limit states are divided into the following two basic categories:
- the ultimate limit states, which concern the maximum load carrying capacity as well as
the maximum deformability
- the serviceability limit states, which concern the normal use.
The exceedance of a limit state may be irreversible or reversible. In the irreversible case the
damage or malfunction associated with the limit state being exceeded will remain until the
structure has been repaired. In the reversible case the damage or malfunction will remain only
as long as the cause of the limit state being exceeded is present. As soon as this cause ceases
to act, a transition from the adverse state back to the desired state occurs.
It is further noted here that in some cases a limit state between the aforementioned limit state types
may be defined. This can be done by an artificial discretization of the continuous situation
between the serviceability and the ultimate limit state. By applying such a procedure a so-called
“partial damage limit state” can be defined. For example, in the case of earthquake damage
of plant structures such a limit state is associated with the safe shutdown of the plant.
The exceedance of an ultimate limit state is almost always irreversible and the first time that
this occurs causes failure.
In the cases of permanent local damage or permanent unacceptable deformations the
exceedance of a serviceability limit state is irreversible and the first time that this occurs
causes failure.
In other cases the exceedance of a serviceability limit state may be reversible and then failure
occurs:
a) the first time the serviceability limit state is exceeded, if no exceedance is considered
as acceptable
b) if exceedance is acceptable but the time when the structure is in the undesired state is
longer than specified
c) if exceedance is acceptable but the number of times that the serviceability limit state is
exceeded is larger than specified
d) if a combination of the above criteria occurs.
These cases may involve temporary local damage (e.g. temporarily wide cracks), temporary
large deformations and vibrations. Limit values for the serviceability limit state should be
defined on the basis of utility considerations.
Basic variables may be time dependent. Models, which describe the behaviour of a structure,
should be established for each limit state. These models include mechanical models, which
describe the structural behaviour, as well as other physical or chemical models, which
describe the effects of environmental influences on the material properties. The parameters of
such models should in principle be treated in the same way as basic variables.
Where calculation models are available, the limit state can be described with the aid of a function,
g, of the basic variables X(t) = X1(t), X2(t), ..., so that
g(X(t)) = 0 (1)
In a component analysis where there is one dominating failure mode the limit state condition
can normally be described by one equation according to eq. (1). In a system analysis, where
more than one failure mode may be determining, there are several such equations.
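By way of illustration, the following minimal sketch evaluates a component limit state function of the form of eq. (1) with two basic variables; the choice g = R − S and all numbers are hypothetical, not values prescribed by this code.

```python
# Minimal sketch of a component limit state function g(X) = R - S (cf. eq. (1)).
# The function and the values below are illustrative only.

def g(r: float, s: float) -> float:
    """Limit state function: g > 0 is the desired state, g <= 0 the adverse state."""
    return r - s

# One realisation of the basic variables (e.g. resistance and load effect in kN):
r, s = 350.0, 295.0
print("safe" if g(r, s) > 0 else "failed")
```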
The following design situations are distinguished:
Persistent situations, which refer to conditions of normal use of the structure and are
generally related to the working life of the structure.
Transient situations, which refer to temporary conditions of the structure, in terms of its use
or its exposure.
Accidental situations, which refer to exceptional conditions of the structure or its exposure.
The basic variables (in the wide sense given above) are assumed to carry the entire input
information to the calculation model.
The basic variables may be random variables (including the special case of deterministic
variables) or stochastic processes or random fields. Each basic variable is defined by a
number of parameters such as mean, standard deviation, parameters determining the
correlation structure etc.
Within given classes of structural design problems the types of probability distributions of the
basic variables should be standardized. These standardizations are defined in the parts 2 and 3
of the probabilistic model code.
The basis for the definition of a population is in most cases the physical background of the
variable. Several factors related to this physical background may define the population.
The choice of a population is to some extent a free choice of the designer. It may depend on
the objective of the analysis, the amount and nature of the available data and the amount of
work that can be afforded.
In connection with theoretical treatment of data and with the evaluation of observations it is
often convenient to divide the largest population into sub-populations which in turn are
further divided in smaller sub-populations etc. Then it is possible to study and distinguish
variability within a population and variability between different populations.
The hierarchical model assumes that a random quantity X can be written as a function of
several variables, each one representing a specific type of variability:
X = f(Y1, Y2, Y3, ...)
The Y represent various origins, time scales of fluctuation or spatial scales of fluctuation.
For instance Yi may represent the building to building variation, Yij the floor to floor variation
in building i and Yijk the point to point variation on floor j in building i.
In a similar way, Yi may represent the constant in time variability, Yij a slowly fluctuating
time process and Yijk a fast fluctuating time process.
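A sketch of how such a hierarchical model might be sampled is given below; the additive combination of the Y terms, the normal distributions and all standard deviations are illustrative assumptions only.

```python
# Sketch of the hierarchical model: a quantity X observed at point k on floor j
# of building i is composed of nested variability terms. The additive form and
# all numbers below are illustrative assumptions, not values from this code.
import numpy as np

rng = np.random.default_rng(1)
n_build, n_floor, n_point = 5, 10, 20

Yi = rng.normal(0.0, 0.30, size=n_build)                        # building to building
Yij = rng.normal(0.0, 0.20, size=(n_build, n_floor))            # floor to floor
Yijk = rng.normal(0.0, 0.10, size=(n_build, n_floor, n_point))  # point to point

mu = 2.0  # overall mean of X (illustrative)
X = mu + Yi[:, None, None] + Yij[:, :, None] + Yijk

# Variability within one building vs. between different buildings:
print("within-building std :", X.std(axis=(1, 2)).mean().round(3))
print("between-building std:", X.mean(axis=(1, 2)).std().round(3))
```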
5.1. General
Calculation models shall describe the structure and its behaviour up to the limit state under
consideration, accounting for relevant actions and environmental influences. Models should
generally be regarded as simplifications which take account of decisive factors and neglect
the less important ones.
However, in some cases it is not possible or convenient to make this distinction, for example,
if the instability or loss of equilibrium of an entire structural system is studied or if
interactions between loads and structural response are of interest.
The magnitude F of an action may often be described by two different types of variables so
that
F = ϕ (Fo, W) (4)
Eq. (4) should be regarded as a symbolic expression where Fo and W may represent several
variables.
One example may be snow load where Fo is the time dependent snow load on ground and W
is the conversion factor for snow load on ground to snow load on roof, which normally is
assumed to be time independent.
Further information on action models is provided in part 2. It is noted that action models may
include material properties (earthquake action depends for example on material damping).
The geometrical quantities which are included in the model generally refer to nominal values,
i.e. the values given in drawings, descriptions etc. Normally, the geometrical quantities of a
real structure differ from their nominal values, i.e. the structure has geometrical
imperfections. If the structural behaviour is sensitive to such imperfections, these shall be
included in the model.
In many cases the deformation of a structure causes significant deviations from nominal
values of geometrical quantities. If such deformations are of importance for the structural
behaviour, they have to be considered in the design in principally the same way as
imperfections. The effects of such deformations are generally denoted geometrically
nonlinear or second order effects and should be accounted for.
Other material properties, e.g. resistance against material deterioration may often be treated in
a similar way. However the principles are strongly dependent on type of material and the
property considered.
5.5. Mechanical models
In almost all design calculations some assumptions concerning the relation between forces or
moments and deformations (or deformation rates) are necessary. These assumptions can vary
and depend on the purpose and type of calculation. The most general relationship regarding
structural response is considered to be elastic behaviour developing into plastic behaviour in
certain parts of the structure at high action effects, with intermediate stages occurring in other
parts of the structure. Such relationships may be used generally. However, any theory taking
into account inelastic or post-critical behaviour may have to allow for repetitions of
variable actions that are free. Such actions may cause great variations of the action effects,
repeated yielding and exhaustion of the deformation capacity.
The theory of elasticity may be regarded as a simplification of a more general theory and may
generally be used provided that forces and moments are limited to those values, for which the
behaviour of the structure is still considered as elastic. However, the theory of elasticity may
also be used in other cases if it is applied as a conservative approximation.
Theories in which fully developed plasticity is assumed to occur in certain zones of the
structure (plastic hinges in beams, yield lines in slabs, etc) may also be used, provided that the
deformations which are needed to ensure plastic behaviour, occur before the ultimate limit
state is reached. Thus theory of plasticity should be used with care to determine the load
carrying capacity of a structure, if this capacity is limited by:
- brittle failure
- failure due to instability
In most cases dynamic response of a structure is caused by a rapid variation of the magnitude,
position or direction of an action. However, a sudden change of the stiffness or resistance of a
structural element may also cause dynamic behaviour.
The models for dynamic response consist in general of:
• a stiffness model
• a damping model
• an inertia model
Fatigue models are used for the description of fatigue failures caused by fluctuating actions.
Two types of models are distinguished: models based on experimental S-N curves combined
with a damage accumulation rule (e.g. the Palmgren-Miner rule), and fracture mechanics
models describing the growth of a crack (see Annex B).
It is further noted here, that other types of degradation such as chemical attack or fire can
modify the parameters entering the aforementioned models or the models themselves.
In general, a calculation model provides a relation between the basic variables and a model
output of the form:
Y = f(X1, X2, ..., Xn)
where
Y = model output
f(.) = model function
Xi = basic variables
The model f (...) may be complete and exact, so that, if the values of Xi are known in a
particular experiment (from measurements), the outcome Y can be predicted without error.
This, however, is not normally the situation. In most cases the model will be incomplete and
inexact. This may be the result of lack of knowledge, or a deliberate simplification of the
model, for the convenience of the designer. The difference between the model prediction and
the real outcome of the experiment can be accounted for by an extended model:
Y = f′(X1, X2, ..., Xn, θ1, θ2, ...)
The θi are referred to as parameters which contain the model uncertainties and are treated as
random variables. Their statistical properties can in most cases be derived from experiments
or observations. The mean of these parameters should be determined in such a way that, on
average, the calculation model correctly predicts the test results.
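A minimal sketch of such a calibration is given below, assuming a purely multiplicative model uncertainty; the test data and the ratio-based estimator are invented for illustration.

```python
# Sketch of estimating a model uncertainty parameter theta from tests: theta is
# taken as the ratio of observed to predicted outcomes, and its mean is used so
# that, on average, the corrected model predicts the test results. The data
# below are invented for illustration only.
import numpy as np

predicted = np.array([102., 215., 330., 140., 270.])  # model predictions
observed  = np.array([110., 204., 351., 150., 259.])  # test results

theta = observed / predicted        # realisations of the model uncertainty
print("mean(theta) =", theta.mean().round(3))  # multiplicative bias correction
print("cov(theta)  =", (theta.std(ddof=1) / theta.mean()).round(3))
```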
6. Reliability
Another equivalent reliability measure is the probability of the complement of the adverse
event
Ps = 1 - Pf (8)
The probability Pf should be calculated on the basis of the standardized joint distribution type
of the basic variables and the standardized distributional formalism of dealing with both
model uncertainty and statistical uncertainty.
In special situations, distribution types other than the standardized ones can be relevant for the
reliability evaluation. In such cases the distributional assumptions must be tested on a suitable
representative set of observation data.
System reliability is the reliability of a structural system composed of a number of
components or the reliability of a single component which has several failure modes of nearly
equal importance. The following types of systems can be distinguished:
• redundant systems, where the components are “fail safe”, i.e. local behaviour of one
component does not directly result in failure of the structure;
• non-redundant systems, where local failure of one component leads rapidly to failure of
the structure.
Due to the computational complexity a method giving an approximation to the exact result is
generally applied. Two fundamental accuracy requirements then apply. In particular, the
accuracy of the reliability calculation method is linked to the sensitivity with respect to
structural dimensions and material properties in the resulting design.
7. Target Reliability
7.2. Recommendations
Target reliability values are provided in the next paragraphs. They are based on optimization
procedures and on the assumption that for almost all engineering facilities the only
reasonable reconstruction policy is systematic rebuilding or repair.
Target reliability values for ultimate limit states are proposed in Table 1. The values in Table
1 are obtained based on cost-benefit analysis for the public at characteristic and representative
but simple example structures and are compatible with calibration studies and statistical
observations.
Table 1: Tentative target reliability indices β (and associated target failure rates) related
to one year reference period and ultimate limit states
Relative cost of       Minor consequences       Moderate consequences     Large consequences
safety measure         of failure               of failure                of failure
Large (A)              β = 3.1 (pF ≈ 10^-3)     β = 3.3 (pF ≈ 5·10^-4)    β = 3.7 (pF ≈ 10^-4)
Normal (B)             β = 3.7 (pF ≈ 10^-4)     β = 4.2 (pF ≈ 10^-5)      β = 4.4 (pF ≈ 5·10^-6)
Small (C)              β = 4.2 (pF ≈ 10^-5)     β = 4.4 (pF ≈ 5·10^-6)    β = 4.7 (pF ≈ 10^-6)
The shaded value in Table 1 (β = 4.2, for a normal relative cost of safety measure and moderate
consequences of failure) should be considered as the most common design situation.
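The correspondence between the β values and the failure rates in Table 1 can be checked with the standard relation β = −Φ^-1(pF) (cf. eq. (C.5) in Annex C); a short sketch using SciPy:

```python
# Sketch verifying the beta <-> pF correspondence used in Table 1 via the
# standard relation pF = Phi(-beta).
from scipy.stats import norm

for beta in (3.1, 3.3, 3.7, 4.2, 4.4, 4.7):
    pF = norm.sf(beta)   # survival function of the standard normal: Phi(-beta)
    print(f"beta = {beta:.1f}  ->  pF = {pF:.1e}")
# e.g. beta = 4.2 -> pF = 1.3e-05, matching the order of magnitude in Table 1.
```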
In order to make the right choice in this table the following guidelines may be of help:
♦ Consequence classes
A classification into consequence classes is based on the ratio ρ, defined as the ratio between
total costs (i.e. construction costs plus direct failure costs) and construction costs.
Risk to life, given a failure, is high, or economic consequences are significant (e.g. main
bridges, theatres, hospitals, high-rise buildings): such cases belong to the class of large
consequences of failure.
If ρ is larger than 10 and the absolute value of H also is large, the consequences should be
regarded as extreme and a full cost-benefit analysis is recommended. The conclusion might be
that the structure should not be built at all.
One should be aware of the fact that failure consequences also depend on the type of failure,
which can be classified according to:
a) ductile failure with reserve strength capacity resulting from strain hardening
b) ductile failure with no reserve capacity
c) brittle failure
The classification in Table 1 is, moreover, based on the following assumptions:
• medium variabilities of the total loads and resistances (0.1 < V < 0.3),
• normal relative cost of safety measure,
• normal design life and normal obsolescence rate compared to construction costs, of the order
of 3%.
The given values are for structures or structural elements as designed (not as built). Failures
due to human error or ignorance and failures due to non-structural causes are not covered by
table 1.
Values outside the given ranges may lead to a higher or lower classification. In particular
attention may be given to the following aspects:
♦ Degree of Uncertainty
In the case of a large uncertainty in either loading or resistance (coefficients of variation larger
than 40%), as for instance in many accidental and seismic situations, a lower reliability class
should be used. The point is that for these large uncertainties the additional costs to achieve a
high reliability are prohibitive. If on the other hand both acting and resisting variables have
coefficients of variation smaller than 10%, like for most dead loads and well-known small
resistance variability, a higher class can be achieved by very little effort and this should be
done.
♦ Quality assurance and inspections
Quality assurance (for new structures) and inspections (for existing structures) have an
increasing effect on costs. This will lead to a lower reliability class. On the other hand, due to
QA and inspections the uncertainty will normally decrease and a higher class becomes
economically more attractive. General rules are difficult to give.
♦ Existing structures
For existing structures the costs of achieving a higher reliability level are usually high
compared to structures under design. For this reason the target level for existing structures
usually should be lower.
♦ Service life and/or obsolescence
For structures designed for a short service life or with an otherwise rapid obsolescence (say less
than 10 years) the beta values can be lowered by half a class or a whole class.
By definition serviceability failures are not associated with loss of human life or limb. For
existing structures the demand will be more related to the actual situation in performance and
use. No general rules are given in this document.
When setting target values for serviceability limit states (SLS) it is important to distinguish
between irreversible and reversible serviceability limit states. Target values for SLS can be
derived based on decision analysis methods.
For irreversible serviceability limit states tentative target values are given in Table 2. A
variation from the target serviceability indexes of the order of 0.3 can be considered. For
reversible serviceability limit states no general values are given.
8. Annex A: The Robustness Requirement
8.1. Introduction
In clause 3.1 the following robustness requirement has been formulated:
“A structure shall not be damaged by events like fire, explosions or consequences of human
errors, deterioration effects, etc. to an extent disproportionate to the severity of the
triggering event”.
This annex is intended to give some further guidance. No attention is being paid to terrorist
actions and actions of war. The general idea is that, whatever the design, sufficiently determined
destructive actions can always be successful.
The strategies 1, 2 and 5 are so called non-structural measures. These measures are considered
as being very effective for some specific accidental action.
The strategies 3 and 4 are so called structural measures. In general strategy 3 is extremely
expensive in most cases. Strategy 4, on the other hand, accepts that some members fail, but
requires that the total damage is limited. This means that the structure should have sufficient
redundancy and possibilities to mobilise so called alternative load paths.
In the ideal design procedure, the occurrence and effects of an accidental action (impact,
explosion, etc.) are simulated for all possible action scenarios. The damage effect of the
structural members is calculated and stability of the remaining structure assessed. Next the
consequences are estimated in terms of number of casualties and economic losses. Various
measures can be compared on the basis of economic criteria.
8.3. Simplified design procedure
The approach sketched in A2 has two disadvantages:
(1) it is extremely complicated
(2) it does not work for unforeseeable hazards
As a result other more global design strategies have been developed, like the classical
requirements on sufficient ductility and tying of elements.
Another approach is that one considers the situation that a structural element (beam, column)
has been damaged, by whatever event, to such an extent that its normal load bearing capacity
has vanished almost completely. For the remaining part of the structure it is then required that for
some relatively short period of time (repair period T) the structure can withstand the "normal"
loads with some prescribed reliability.
The probability that some element is removed by some cause, not yet considered in design,
depends on the sophistication of the design procedure and on the type of structure. For a
conventional structure it should, at least in theory, be possible to include all relevant collapse
origins in the design. Of course, it will always be possible to think of failure causes not covered
by the design, but those will have a remote likelihood and may be disregarded on the basis of
decision theoretical arguments. For unconventional structures this certainly will not be the case.
8.4. Recommendation
For unconventional structures, as for instance large structures, the probability of having some
unspecified failure cause is substantial. If in addition new materials or new design concepts are
used, unexpected failure causes become more likely. This would indicate that for
unconventional structures the simplified approach should be recommended.
Two lines of reasoning may be followed in this respect:
(1) one might argue that, as one never succeeds in dealing with all failure causes explicitly
in a satisfactory way, there is no point in making refined analyses including system effects,
accidental actions and so on; this leads to the use of the simplified procedure.
(2) one might also eliminate the use of an explicit robustness requirement (A1) as much as
possible by taking into account in the design as many aspects explicitly as possible.
Stated as such it seems that the second approach is more rational, as it offers the possibility to
reduce the risks in the most economical way, e.g. by sprinklers (for fire), barriers (for collision),
QA (for errors), relief openings (for explosions), artificial damping (for earthquakes),
maintenance (for deterioration) and so on.
9. Annex B: Durability
9.1. Probabilistic Formulations
Loads as well as material properties may vary in time as stationary or non-stationary
processes. Time may also be present in the limit state function as an explicit parameter. As a
result, the failure probability of a structure is also time dependent. The general formulation
for the failure probability for a period of time t may be presented as:
PF(t) = P[ minτ g(X(τ); τ) ≤ 0 for 0 ≤ τ ≤ t ] (B1)
The failure may be of ULS as well as SLS type. One should keep in mind that even in the case
of a non-deteriorating, time independent resistance and a stationary loading condition, the
failure probability is time dependent due to the random fluctuations of the load. This,
however, is usually not considered as a durability problem.
Given (B1), the conditional failure rate (also referred to as the risk function) at time t may be
found as:
r(t) = P( failure in [t, t + Δt] | survival up to t ) / Δt = pF(t) / (1 − PF(t)) (B2)
where
pF(t) = dPF(t)/dt (B3)
is the failure time density. For small values of t, the failure probability PF( t ) is close to zero,
which makes the conditional failure rate and the density almost numerically equal. For
durability problems, the conditional failure rate is usually increasing in time. Reliability limits
set in section 7 may be related to (B2) or (B3) whichever is appropriate.
For small t the result will be equal to (B2) and (B3). For large t the value of h will
asymptotically tend to 1/μ, where μ is the mean time to failure, defined as:
μ = ∫0∞ t pF(t) dt = ∫0∞ (1 − PF(t)) dt (B5)
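A numerical sketch of (B2), (B3) and (B5) is given below; the Weibull form and parameters assumed for PF(t) are illustrative only.

```python
# Numerical sketch of the conditional failure rate r(t) (B2), the failure time
# density pF(t) (B3) and the mean time to failure mu (B5) for an assumed,
# purely illustrative Weibull failure-time distribution PF(t).
import numpy as np

k, lam = 2.5, 50.0                       # assumed Weibull shape and scale (years)
t = np.linspace(0.01, 100.0, 2000)
PF = 1.0 - np.exp(-(t / lam) ** k)       # failure probability PF(t)
pF = np.gradient(PF, t)                  # failure time density, eq. (B3)
r = pF / (1.0 - PF)                      # conditional failure rate, eq. (B2)

mu = np.trapz(1.0 - PF, t)               # mean time to failure, eq. (B5)
print(f"r(10) = {np.interp(10, t, r):.4f} per yr, mu = {mu:.1f} yr")
# For shape k > 1 the rate r(t) increases in time, as is typical of durability.
```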
The calculation procedure for PF ( t ) depends on the nature of the limit state function g(.). If
g(.) is a smooth monotonically decreasing function not depending explicitly on random
process variables, the minimum value is reached at the end of the period, and we simply have:
PF(t) = P[ g(X; t) ≤ 0 ] (B6)
If g(.) depends on random process variables and, therefore, is not monotonically decreasing,
we have a first passage problem. In that case the following upper bound approximation may
be useful:
PF(t) = PF(0) + ∫0t ν−(τ) dτ (B7)
where PF(0) is the failure probability at the start and ν− is the outcrossing rate or
unconditional failure rate, which is given by:
ν−(τ) = limΔt→0 P[ g(X(τ); τ) > 0 ∩ g(X(τ + Δt); τ + Δt) ≤ 0 ] / Δt (B8)
In general, the limit state function g(.) may be quite complex due to a combination of
physical, chemical and mechanical processes. Take as an example the deterioration processes
due to carbonation and/or chloride ingress of concrete. After some period of time the
carbonation or chloride fronts may reach the reinforcement and corrosion may start, resulting
eventually in spalling and later even in failure by collapse due to some large mechanical load
(see figure B1). Many parameters like the outside climate, the cover of the concrete, the
diffusion properties, the corrosion speed and so on may play a role.
Figure B1: Failure due to a combination of physical and chemical processes and a variable
mechanical load
A general degradation model may be written as:
dy/dt = y^k h(z) (B9)
where y is a damage indicator and z a vector of disturbances. Separating the variables and
integrating gives:
∫y(0)y(t) y^-k dy = ∫0t h(z(τ)) dτ (B10)
Defining Ψ(y) as the integral function of y^-k and χ(t) as the right hand side integral of (B10),
this can be written as:
Ψ(y(t)) − Ψ(y(0)) = χ(t) (B11)
If z(t) is stationary and ergodic, χ(t) may asymptotically be taken as proportional to t,
implying that the damage increases smoothly:
χ(t) ≈ t E[h(z)] (B12)
Failure will occur if the damage y(t) exceeds some critical value ycr, which leads finally to the
following expression for the limit state function:
g(t) = Ψ(ycr) − Ψ(y(0)) − χ(t)
The critical value ycr may be a constant or time dependent. If ycr is a constant we may use
(B6) to find the failure probability. If ycr is time dependent we have a first passage problem.
Characteristic examples
1. Abrasion and corrosion
Abrasion and/or corrosion mechanisms can be modelled by k = 0 and h(z) = z. In that case
(B9) reduces to:
dy/dt = z(t)
For abrasion or corrosion the damage parameter y corresponds to the thickness of the lost
material and z represents the abrasion or corrosion rate. In this case Ψ is simply equal to y
itself. Assuming that z(t) is a stationary and ergodic random process with mean μz, we may
use (B12) and arrive at:
g(t) = ycr – yo – μz t
The value yo may be 0 (or random) and the critical value of ycr may be related to the load and
material strength, for instance:
ycr = do – S/f
where do is the original material thickness, S the load per unit length and f the material
rupture strength. It can easily be seen that ycr is constant in time for a constant load S and that
ycr is time dependent for a fluctuating load.
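A Monte Carlo sketch of this corrosion limit state is given below; all distributions and numerical values are illustrative assumptions, not values taken from this code.

```python
# Monte Carlo sketch of the corrosion limit state g(t) = ycr - yo - mu_z*t with
# ycr = do - S/f. All distributions and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, t = 200_000, 50.0                       # sample size, time in years

do = 10.0                                  # original thickness (mm), deterministic
f = rng.lognormal(np.log(300.0), 0.08, n)  # material rupture strength
S = rng.normal(900.0, 90.0, n)             # load per unit length
mz = rng.lognormal(np.log(0.05), 0.3, n)   # mean corrosion rate (mm/yr)

ycr = do - S / f                           # critical loss of thickness
g = ycr - mz * t                           # yo taken as 0
print("Pf(50 yr) ~", (g <= 0).mean())
```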
2. Duration of load
We consider again the case k = 0 and h(z) = z. Let now, however, y represent the relative
reduction of the material strength R, that is R(t) = Ro(1 − y). Let further the disturbance z be
proportional to the mechanical load S. In other words: the presence of a load will lead to
damage or strength reduction, the more so if the load is higher. Such a model can be used to
represent duration of load effects. If we define z = S/So, with So some random material
parameter, we arrive at:
g(t) = ycr − yo − μS t / So
Let yo = 0 and let ycr correspond to R(t) = Ro(1 − ycr) = S(t); we arrive finally at:
g(t) = 1 − S/Ro − μS t / So
or equivalently:
g(t) = Ro − S − Ro μS t / So
Again, if S is a constant load we may use (B6); if not we have a first passage problem. The
resulting time dependent strength for a constant load S is presented in figure B2.
Figure B2: Load duration dependent strength under constant load
3. Fatigue crack growth
Due to load fluctuations some initial small crack in a structure may grow and weaken the
cross section. Finally some large load amplitude may lead to collapse of the structural element
(see figure B3). The differential equation for the crack growth a is given by:
da/dn = C [ Y(a) ΔS(n) √(πa) ]^m
where ΔS represents the stress range, Y(a) a geometrical function, C and m are material
constants and n is the stress cycle number. Note that in this example the time t has been
replaced by the load cycle number n and that k in (B9) corresponds to m/2. The
functions Ψ and χ are then given by (assuming ΔS to be stationary and ergodic):
Ψ(a) = ∫ da / ( C [ Y(a) √(πa) ]^m )
χ = n E{(ΔS)^m}
and the limit state function reads:
g(n) = Ψ(acr) − Ψ(a0) − χ
where a0 is the initial crack length and acr the critical crack length, which again may be time
dependent or time independent. In the first case (B6) may be used, in the second case we have
a first passage problem.
Alternatively, one may formulate the limit state function in the crack domain:
g(n) = acr − a(n) with a(n) = { a0^((2−m)/2) + ((2−m)/2) C Y^m π^(m/2) n E{ΔS^m} }^(2/(2−m))
or in the load cycle domain:
g(n) = N − n with N = [ Ψ(acr) − Ψ(a0) ] / E{(ΔS)^m}
These alternative formulations are fully equivalent to the first one.
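The closed-form crack growth expression above can be evaluated directly; in the sketch below the Paris constants C and m, the geometry factor Y (taken as constant) and the stress range are illustrative assumptions.

```python
# Sketch evaluating the closed-form crack size a(n) and the cycles-to-failure N
# given above (Y taken constant); all constants are illustrative assumptions.
import numpy as np

C, m, Y = 5e-13, 3.0, 1.12          # assumed Paris constants and geometry factor
a0, acr = 1e-3, 25e-3               # initial and critical crack length (m)
EdSm = 80.0 ** m                    # E{dS^m} for a constant stress range of 80 MPa

def a_of_n(n):
    # a(n) = {a0^((2-m)/2) + ((2-m)/2) C Y^m pi^(m/2) n E{dS^m}}^(2/(2-m)), m != 2
    e = (2.0 - m) / 2.0
    return (a0**e + e * C * Y**m * np.pi**(m / 2) * n * EdSm) ** (1.0 / e)

def N_cycles():
    # cycles until a(n) reaches acr (limit state g = N - n)
    e = (2.0 - m) / 2.0
    return (acr**e - a0**e) / (e * C * Y**m * np.pi**(m / 2) * EdSm)

print(f"N to failure ~ {N_cycles():.2e} cycles, a(1e5 cycles) = {a_of_n(1e5):.4f} m")
```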
Figure B3: Resistance R(τ) decreasing under repeated load cycles k until failure
The sequence of events can be represented in an event tree as indicated in Figure B4. Let the
first inspection Ii be planned at time ti. In that case we may have three possibilities.
If the structure is repaired, one may usually assume that all variables are reset to the initial
situation. From every event R then a new event tree of the same type as the one in figure B4 is
started.
For reasons of simplicity we will start by having one inspection only. Using the total
probability theorem, the probability of failure for a period t may then formally be written as:
PF(t) = P[ F | Zi > 0 ] P(Zi > 0) + P[ F | Zi < 0 ] P(Zi < 0) (B13)
where
F = failure
Zi = inspection result of inspection at time ti (negative values correspond to the detection
of damage)
If we assume that in the case of a serious damage revealed at the inspection (that is Z<0) the
structure will be repaired adequately, (B13) may be reduced to (replacing F by minτ g(τ) < 0,
where g(.) is the limit state function and 0 < τ < t):
PF(t) = P[ minτ g(x(τ); τ) < 0 | Zi(x(ti); ti) > 0 ] P( Zi > 0 )
or simply:
PF(t) = P[ minτ g(x(τ); τ) < 0 ∩ Zi(x(ti); ti) > 0 ] for 0 < τ < t (B14)
9.3. Example
Figure B5 clarifies formula (B14) for the case of fatigue. As discussed before, the g-function
for the situation at the load cycle at time τ is given by:
g = acr - a(t)
Let the crack a(τ) be monitored by a yearly inspection. If the measured crack am is larger than
some limit alim the structure will be adequately repaired. An inspection failure may then be
modelled as Zins < 0 with:
Zins = alim − am(ti)
In present practice alim usually corresponds to the detection limit and the probability
distribution for alim is then equal to the so called POD-curve (probability of detection).
Failure will occur only if the measured value am(ti) is below the limit value alim at inspection
time ti but the crack grows beyond acrit before the next inspection. In this way the failure
probability can be reduced by shorter inspection intervals or by more refined or accurate
inspection techniques.
Note that an implication of this method is that these Probability of Detection curves (POD
curves) and measurement accuracies must be known to the designer in order to decide
whether or not a certain structure meets the reliability requirements. Note further that the
probability of repair is given by:
P = P[Zins < 0]
Repair may be considered like some serviceability limit state. The designer should also make
sure that the probability of repair is below some economic limit value.
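A Monte Carlo sketch of the role of inspection, in the spirit of (B14), is given below; the lognormal crack, growth and detection-limit models are illustrative assumptions standing in for a real POD curve.

```python
# Monte Carlo sketch of eq. (B14) for the fatigue example: failure in an
# interval requires that the crack was not detected (Z_ins > 0) at inspection
# and then grows beyond a_crit before the next inspection. All distributions
# and numbers are illustrative assumptions, not a real POD curve.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

a_ins = rng.lognormal(np.log(4.0), 0.5, n)     # crack size at inspection (mm)
growth = rng.lognormal(np.log(2.0), 0.4, n)    # growth until next inspection (mm)
a_lim = rng.lognormal(np.log(6.0), 0.3, n)     # random detection limit (POD-like)
a_crit = 12.0                                  # critical crack size (mm)

not_detected = a_ins < a_lim                   # Z_ins = a_lim - a_m > 0
fails_later = a_ins + growth > a_crit
print("P(failure in interval) ~", (not_detected & fails_later).mean())
print("P(repair)              ~", (~not_detected).mean())
```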
Figure B4: Event tree with inspections Ii, repair events R and failure events F
Figure B5: Fatigue failure in the interval ti, ti + Δti with a(τ) < alim at the beginning of the
interval.
10. Annex C: Reliability Analysis Principles
10.1. Introduction
In recent years, practical reliability methods have been developed to help engineers tackle the
analysis, quantification, monitoring and assessment of structural risks, undertake sensitivity
analysis of inherent uncertainties and make appropriate decisions about the performance of a
structure. The structure may be at the design stage, under construction or in actual use.
This Annex C summarizes the principles and procedures used in formulating and solving risk
related problems via reliability analysis. It is neither as broad nor as detailed as available
textbooks on this subject, some of which are included in the bibliography. Its purpose is to
underpin the updating and decision-making methodologies presented in part 2 of this
document.
Starting from the principles of limit state analysis and its application to codified design, the
link is made between unacceptable performance and probability of failure. It is important,
especially in assessment, to distinguish between components and systems. System concepts
are introduced and important results are summarized. The steps involved in carrying out a
reliability analysis, whose main objective is to estimate the failure probability, are outlined
and alternative techniques available for such an analysis are presented. Some
recommendations on formulating stochastic models for commonly used variables are also
included.
10.2. Concepts
The structural performance of a whole structure or part of it may be described with reference
to a set of limit states which separate acceptable states of the structure from unacceptable
states. The limit states are divided into the following two categories:
- ultimate limit states, which relate to the maximum load carrying capacity.
- serviceability limit states, which relate to normal use.
The boundary between acceptable (safe) and unacceptable (failure) states may be distinct or
diffuse but, at present, deterministic codes of practice assume the former. Verification of
a structure with respect to a particular limit state is carried out via a model describing the
limit state in terms of a function (called the limit state function) whose value depends on all
relevant design parameters. In general terms, attainment of the limit state can be expressed as:
g (s, r) = 0 (C.1)
where s and r represent sets of load (actions) and resistance variables. Conventionally, g (s, r)
≤ 0 represents failure; in other words, an adverse state.
The limit state function, g(s, r), can often be separated into one resistance function, r(.), and
one loading (or action effect) function, s(.), in which case equation (C.1) can be expressed as:
r(r) − s(s) = 0 (C.2)
Load, material and geometry parameters are subject to uncertainties, which can be classified
according to their nature, see section 3. They can, thus, be represented by random variables
(this being the simplest possible probabilistic representation, whereas more advanced models
might be appropriate in certain situations, such as random fields). The variables S and R are
often referred to as "basic random variables" (where the upper case letter is used for denoting
random variables) and may be collectively represented by a random vector X.
In this context, failure is a probabilistic event and its probability of occurrence, Pf, is given
by:
Pf = Prob{ g(X) ≤ 0 } = Prob{ M ≤ 0 } (C.3a)
where M = g(X). Note that M is also a random variable, called the safety margin.
If the limit state function is expressed in the form of eqn (C.2), eqn (C.3a) can be written as
Pf = Prob { r (R) ≤ s (S) } = Prob { R ≤ S }
where R = r (R) and S = s (S) are random variables associated with resistance and loading
respectively. This expression is useful in the context of the discussion in section 2.2 on code
formats and partial safety factors but will not be further used herein.
The failure probability defined in eqn (C.3a) can also be expressed as follows:
Pf = ∫g(x)≤0 fX(x) dx (C.3b)
where fX(x) is the joint probability density function of X.
The reliability, Ps, associated with the particular limit state considered is the complementary
event, i.e.
Ps = 1 - Pf (C.4)
In recent years, a standard reliability measure, the reliability index β, has been adopted, which
has the following relationship with the failure probability:
β = − Φ^-1(Pf) = Φ^-1(Ps) (C.5)
where Φ^-1(.) is the inverse of the standard normal distribution function, see Table C.1.
In most engineering applications, complete statistical information about the basic random
variables X is not available and, furthermore, the function g(.) is a mathematical model which
idealizes the limit state. In this respect, the probability of failure evaluated from eqn (C.3a) or
(C.3b) is a point estimate given a particular set of assumptions regarding probabilistic
modelling and a particular mathematical model for g(.). The uncertainties associated with
these models can be represented in terms of a vector of random parameters Q, and hence the
limit state function may be re-written as g(X, Q). It is important to note that the nature of
uncertainties represented by the basic random variables X and the parameters Q is different.
Whereas uncertainties in X cannot be influenced without changing the physical characteristics
of the problem (e.g. changing the steel grade), uncertainties in Q can be influenced by the use
of alternative methods and the collection of additional data.
The conditional probability of failure, given a set of parameter values, is then:
Pf(θ) = ∫g(x,θ)≤0 fX|Θ(x|θ) dx (C.6)
where Pf(θ) is the conditional probability of failure for a given set of values of the parameters
θ and fX|Θ(x|θ) is the conditional probability density function of X for given θ.
In order to account for the influence of parameter uncertainty on failure probability, one may
evaluate the expected value of the conditional probability of failure, i.e.
Pf = E[ Pf(Θ) ] = ∫ Pf(θ) fΘ(θ) dθ (C.7a)
where fΘ(θ) is the joint probability density function of Θ. The corresponding reliability index
is given by:
β = − Φ^-1(Pf) (C.7b)
The main objective of reliability analysis is to estimate the failure probability (or, the
reliability index). Hence, it replaces the deterministic safety check with a probabilistic
assessment of the safety of the structure, e.g. eqn (C.3) or eqn (C.7). Depending on the nature
of the limit state considered, the uncertainty sources and their implications for probabilistic
modeling, the characteristics of the calculation model and the degree of accuracy required, an
appropriate methodology has to be developed. In many respects, this is similar to the
considerations made in formulating a methodology for deterministic structural analysis but
the problem is now set in a probabilistic framework.
Structural design is, at present, primarily concerned with component behaviour. Each limit
state equation is, in most cases, related to a single mode of failure of a single component.
However, most structures are an assembly of structural components and even individual
components may be susceptible to a number of possible failure modes. In deterministic terms,
the former can be tackled through a progressive collapse analysis (particularly appropriate in
redundant structures), whereas the latter is usually dealt with by checking a number of limit
state equations.
However, the system behaviour of structures is not well quantified in limit state codes and
requires considerable innovation and initiative from the engineer. A probabilistic approach
provides a better platform from which system behaviour can be explored and utilised. This
can be of benefit in assessment of existing structures where strength reserves due to system
effects can alleviate the need for expensive strengthening.
(1) A series system is a system which fails if one or more of its components fail.
(2) A parallel system is a system which fails when all its components have failed.
The probability of system failure is given by:
Pf,sys = P( E1 ∪ E2 ∪ ... ∪ En ) for a series system
Pf,sys = P( E1 ∩ E2 ∩ ... ∩ En ) for a parallel system
where Ei (i = 1, ... n) is the event corresponding to failure of the ith component. In the case of
parallel systems, which are designed to provide some redundancy, it is important to define the
state of the component after failure. In structures, this can be described in terms of a
characteristic load-displacement response, see Fig. C.2, for which two convenient
idealisations are the 'brittle' and the 'fully ductile' case. Intermediate, often more realistic,
cases can also be defined.
The above expressions can be difficult to evaluate in the case of large systems with
stochastically dependent components and, for this reason, upper and lower bounds have been
developed, which may be used in practical applications. In order to appreciate the effect of
system behaviour on failure probabilities, results for two special systems comprising equally
correlated components with the same failure probability for each component are shown in Fig.
C.3(a) and C.3(b). Note that in the case of the parallel system, it is assumed that the
components are fully ductile.
More general systems can be constructed by combining the two fundamental types. It is fair
to say that system methods are more developed for skeletal rather than continuous structures.
Important results from system reliability theory are summarized in section 4.
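The effect of system type can be illustrated by simulation; the sketch below estimates series and parallel system failure probabilities for equally correlated components (cf. Fig. C.3), with the correlation, the β value and the component number chosen arbitrarily for illustration.

```python
# Monte Carlo sketch of series vs. parallel system failure for n equally
# correlated components with identical component reliability (cf. Fig. C.3);
# fully ductile behaviour is implied by using the same margins throughout.
# rho, beta and n_comp are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_comp, rho, beta, n_sim = 5, 0.5, 2.0, 1_000_000

# Equicorrelated standard normal margins: M_i = beta + sqrt(rho)*T + sqrt(1-rho)*U_i
T = rng.standard_normal((n_sim, 1))
U = rng.standard_normal((n_sim, n_comp))
M = beta + np.sqrt(rho) * T + np.sqrt(1 - rho) * U   # component safety margins

fail = M <= 0.0
print("component Pf:", fail[:, 0].mean())
print("series   Pf :", fail.any(axis=1).mean())      # one component failure suffices
print("parallel Pf :", fail.all(axis=1).mean())      # all components must fail
```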
The main steps of a reliability analysis are to:
(1) select the limit state(s) and the failure mode(s) to be considered
(2) specify the appropriate reference period
(3) develop probabilistic models for the basic random variables
(4) compute the failure probability with a suitable reliability method
(5) perform sensitivity studies
Step (1) is essentially the same as for deterministic analysis. Step (2) should be considered
carefully, since it affects the probabilistic modeling of many variables, particularly live
loading. Step (3) is perhaps the most important because the considerations made in
developing the probabilistic models have a major effect on the results obtained, see section
3.2. Step (4) should be undertaken with one of the methods summarized in section 3.3,
depending on the application. Step (5) is necessary insofar as the sensitivity of any results
(deterministic or probabilistic) should be assessed before a decision is taken.
For the particular failure mode under consideration, uncertainty modeling must be undertaken
with respect to those variables in the corresponding limit state function whose variability is
judged to be important (basic random variables). Most engineering structures are affected by
the following types of uncertainty:
- intrinsic physical uncertainty
- measurement uncertainty
- statistical uncertainty
- model uncertainty
For most commonly encountered basic random variables there have been studies (of varying
detail) that contain guidance on the choice of distribution and its parameters. If direct
measurements of a particular quantity are available, then existing, so-called a priori,
information (e.g. probabilistic models found in published studies) should be used as prior
statistics with a relatively large equivalent sample size (n' ≈ 50).
The following comments may also be helpful in selecting a suitable probabilistic model.
Material properties
- frequency of negative values is normally zero
- log-normal distribution can often be used
- distribution type and parameters should, in general, be derived from large homogeneous
samples and with due account of established distributions for similar variables (e.g. for a
new high strength steel grade, the information on properties of existing grades should be
consulted); tests should be planned so that they are, as far as possible, a realistic description
of the potential use of the material in real applications.
Geometric parameters
- variability in structural dimensions and overall geometry tends to be small
- dimensional variables can be adequately modelled by the normal or log-normal distribution
- if the variable is physically bounded, a truncated distribution may be appropriate (e.g.
location of reinforcement); such bounds should always be carefully considered to avoid
entering into physically inadmissible ranges
- variables linked to manufacturing can have large coefficients of variation (e.g.
imperfections, misalignments, residual stresses, weld defects).
Load variables
- loads should be divided according to their time variation (permanent, variable, accidental)
- in certain cases, permanent loads consist of the sum of many individual elements; they may
be represented by a normal distribution
- for single variable loads, the form of the point-in-time distribution is seldom of immediate
relevance; often the important random variable is the magnitude of the largest extreme load
that occurs during a specified reference period for which the probability of failure is
calculated (e.g. annual, lifetime)
- the probability distribution of the largest extreme could be approximated by one of the
asymptotic extreme-value distributions (Gumbel, sometimes Frechet)
- when more than one variable loads act in combination, load modelling is often undertaken
using simplified rules suitable for FORM/SORM analysis.
In selecting a distribution type to account for physical uncertainty of a basic random variable
afresh, the following procedure may be followed:
- based on experience from similar type of variables and physical knowledge, choose a set of
possible distributions
- obtain a reasonable sample of observations ensuring that, as far as possible, the sample
points are from a homogeneous group (i.e. avoid systematic variations within the sample)
and that the sampling reflects potential uses and applications
- evaluate by an appropriate method the parameters of the candidate distributions using the
sample data; the method of maximum likelihood is recommended but evaluation by
alternative methods (moment estimates, least-square fit, graphical methods) may also be
carried out for comparison.
- compare the sample data with the resulting distributions; this can be done graphically
(histogram vs. pdf, probability paper plots) or through the use of goodness-of-fit tests (Chi-
square, Kolmogorov-Smirnov tests)
If more than one distribution gives equally good results (or if the goodness-of-fit tests are
acceptable to the same significance level), it is recommended to choose the distribution that
will result in the smaller reliability. This implies choosing distributions with heavy left tails
for resistance variables (material properties, geometry excluding tolerances) and heavy right
tails for loading variables (manufacturing tolerances, defects and loads).
Fig. C.4(a) shows schematically a continuous stochastic process, e.g. wind pressure at a
particular point on a wall of a structure. The trace of this process over time is obtained
through successive realisations of the underlying phenomenon, in this case wind speed, which
is clearly a random variable taking on different values within each infinitesimally small time
interval, δt.
Fig. C.4(b) depicts a two-dimensional random field, e.g. the spatial variation of concrete
strength in a floor slab just after construction. Once again, a random variable, in this case
describing the possible outcomes of, say, a core test obtained from any given small area, δA,
is the basic kernel from which the random field is built up.
In considering either a random process or a random field, it is clear that, apart from the
characteristics associated with the random variable describing uncertainty within a small unit
(interval or area), laws describing stochastic dependence (or, in simpler terms, correlation)
between outcomes in time and/or in space are very important.
The other three types of uncertainty mentioned above (measurement, statistical, model) also
play an important role in the evaluation of reliability. As mentioned in section 2.3, these
uncertainties are influenced by the particular method used in, for example, strength analysis
and by the collection of additional (possibly, directly obtained) data. These uncertainties
could be rigorously analysed by adopting the approach outlined by eqns (C.6) and (C.7).
However, in many practical applications a simpler approach has been adopted insofar as
model (and measurement) uncertainty is concerned based on the differences between results
predicted by the mathematical model adopted for g(x) and some more elaborate model
believed to be a closer representation of actual conditions. In such cases, a model uncertainty
basic random variable Xm is introduced, where
Xm = actual value / predicted value
and the following comments offer some general guidance in estimating the statistics of Xm:
- the mean value of the model uncertainty associated with code calculation models can be
larger than unity, reflecting the in-built conservatism of code models
- the model uncertainty parameters of a particular calculation model may be evaluated vis-a-
vis physical experiments or by comparing the calculation model with a more detailed model
(e.g. finite element model)
- when experimental results are used, use of measured rather than nominal or characteristic
quantities is preferred in calculating the predicted value
- the use of numerical experiments (e.g. finite element models) has some advantages over
physical experiments, since the former ensure well-controlled input.
- the choice of a suitable probability distribution for Xm is often governed by mathematical
convenience and a normal distribution has been used extensively.
As mentioned above, the failure probability of a structural component with respect to a single
failure mode is given by
Pf = ∫g(x)≤0 fX(x) dx (C.3b)
where X is the vector of basic random variables, g(x) is the limit state (or failure) function for
the failure mode considered and fX(x) is the joint probability density function of X.
An important class of limit states are those for which all the variables are treated as time
independent, either by neglecting time variations in cases where this is considered acceptable
or by transforming time-dependent processes into time-invariant variables (e.g. by using
extreme value distributions). The methods commonly used for calculating Pf in such cases are
outlined below. Guidelines on how to deal with time-dependent problems are given in section
5. Note that after calculating Pf via one of the methods outlined below, or any other valid
method, a reliability index may be obtained from equation (C.5), for comparative or other
purposes.
Asymptotic approximate methods
Although these methods first emerged with basic random variables described through 'second-
moment' information (i.e. with their mean value and standard deviation, but without assigning
any probability distributions), it is nowadays possible in many cases to have a full description
of the random vector X (as a result of data collection and probabilistic modelling studies). In
such cases, the probability of failure could be calculated via first or second order reliability
methods (FORM and SORM respectively). Their implementation relies on:
(1) Transformation techniques:
T: X = (X1, X2, ... Xn) → U = (U1, U2, ... Un) (C.9)
where U1, U2, ... Un are independent standard normal variables (i.e. with zero mean value and
unit standard deviation). Hence, the basic variable space (including the limit state function) is
transformed into a standard normal space, see Fig. C.5. The special properties of the standard
normal space lead to several important results, as discussed below.
(2) Search techniques:
In standard normal space, the objective is to determine a suitable checking point: this is
shown to be the point on the limit-state surface which is closest to the origin, the so-called
'design point'. In this rotationally symmetric space, it is the most likely failure point, in other
words its co-ordinates define the combination of variables that are most likely to cause
failure. This is because the joint standard normal density function, whose bell-shaped peak
lies directly above the origin, decreases exponentially as the distance from the origin
increases. To determine this point, a search procedure is required in all but the most simple of
cases (the Rackwitz-Fiessler algorithm is commonly used).
Denoting the co-ordinates of this point by u* = (u1*, u2*, ..., un*), its distance from the origin is clearly equal to (Σi=1,n ui*²)^1/2. This scalar quantity is known as the Hasofer-Lind reliability index, βHL, i.e.

βHL = (Σi=1,n ui*²)^1/2        (C.10)
The first-order approximation to the failure probability is then

PfFORM = Φ(−βHL)        (C.11a)

where Φ(.) is the standard normal distribution function. The co-ordinates of the design point can be written as ui* = −αi βHL, where the αi are sensitivity factors expressing the influence of the uncertainty in basic random variables on the computed reliability. Their absolute value ranges between
zero and unity and the closer this is to the upper limit, the more significant the influence of
the respective random variable is to the reliability. The following expression is valid for
independent variables
Σi=1,n αi² = 1        (C.11b)
In some cases, a higher order approximation of the limit state surface at the design point is
merited, if only to check the accuracy of FORM. The result for the probability of failure
assuming a quadratic (second-order) approximation of the limit state surface is asymptotically
given by
PfSORM = Φ(−βHL) ∏j=1,n−1 (1 − βHL κj)^−1/2        (C.12b)
for βHL → ∞ , where κj are the principal curvatures of the limit state surface at the design
point. An expression applicable to finite values of βHL is also available.
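As an illustration only, the following Python sketch implements the transformation (C.9) and the Rackwitz-Fiessler (HL-RF) search for a linear limit state with two independent normal basic variables. The limit state g = R − S and all numerical parameters are assumed values chosen for the example, not part of this code.

import numpy as np
from scipy.stats import norm

mu = np.array([350.0, 200.0])      # assumed means of R (resistance) and S (load effect)
sigma = np.array([35.0, 40.0])     # assumed standard deviations

def g(x):                          # limit state function, failure for g(x) <= 0
    return x[0] - x[1]

def grad_g(x):                     # gradient of g in basic variable space
    return np.array([1.0, -1.0])

u = np.zeros(2)                    # start the search at the origin of U-space
for _ in range(20):
    x = mu + sigma * u             # inverse of the transformation (C.9)
    grad_u = grad_g(x) * sigma     # chain rule: dg/du_i = (dg/dx_i) * sigma_i
    norm_grad = np.linalg.norm(grad_u)
    a = grad_u / norm_grad
    u = a * (a @ u - g(x) / norm_grad)   # HL-RF update step

beta_HL = np.linalg.norm(u)        # Hasofer-Lind index, eq. (C.10)
alpha = -u / beta_HL               # sensitivity factors, |alpha_i| <= 1
print(beta_HL, norm.cdf(-beta_HL), alpha)

For this linear limit state the search converges in one step and reproduces the exact result β = (μR − μS)/(σR² + σS²)^1/2; for non-linear limit states several iterations are needed.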
Simulation Methods
In this approach, random sampling is employed to simulate a large number of (usually
numerical) experiments and to observe the result. In the context of structural reliability, this
means, in the simplest approach, sampling the random vector X to obtain a set of sample
values. The limit state function is then evaluated to ascertain whether, for this set, failure (i.e.
g(x)≤0) has occurred. The experiment is repeated many times and the probability of failure, Pf, is estimated as the number of trials leading to failure divided by the total number of trials. This so-called Direct or Crude Monte Carlo method is not likely to be of use in
practical problems because of the large number of trials required in order to estimate with a
certain degree of confidence the failure probability. Note that the number of trials increases as
the failure probability decreases. Simple rules may be found, of the form N > C/Pf, where N is
the required sample size and C is a constant related to the confidence level and the type of
function being evaluated.
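A minimal crude Monte Carlo sketch for the same illustrative limit state g = R − S with normal R and S (parameters assumed as above) is given below; it also shows why the N > C/Pf rule matters, since a failure probability of the order 10⁻³ already requires millions of samples for a small coefficient of variation of the estimate.

import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
R = rng.normal(350.0, 35.0, N)    # assumed resistance model
S = rng.normal(200.0, 40.0, N)    # assumed load-effect model
fail = (R - S) <= 0.0             # indicator of g(x) <= 0
pf = fail.mean()
se = (pf * (1.0 - pf) / N) ** 0.5  # standard error of the estimator
print(f"Pf ~ {pf:.2e}, c.o.v. of the estimate ~ {se / pf:.2%}")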
Thus, the objective of more advanced simulation methods, currently used for reliability
evaluation, is to reduce the variance of the estimate of Pf. Such methods can be divided into
two categories, namely indicator function methods and conditional expectation methods.
An example of the former is Importance Sampling, where the aim is to concentrate the
distribution of the sample points in the vicinity of likely failure points, such as the design
point obtained from FORM/SORM analysis. This is done by introducing a sampling function,
whose choice would depend on a priori information available, such as the co-ordinates of the
design point and/or any estimates of the failure probability. In this way, the success rate
(defined here as a probability of obtaining a result in the failure region in any particular trial)
is improved compared to Direct Monte Carlo. Importance Sampling is often used following
an initial FORM/SORM analysis. A variant of this method is Adaptive Sampling, in which
the sampling density is updated as the simulation proceeds. Importance Sampling could be
performed in basic variable or standard normal space, depending on the problem and the form
of prior information.
The two methods outlined above have also been used in combination, which indicates that
when simulation is chosen as the basic approach for reliability assessment, there is scope to
adapt the detailed methodology to suit the particular problem in hand.
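The sketch below illustrates Importance Sampling in standard normal space: the sampling density is a unit normal shifted to the FORM design point u*, and each failure sample is weighted by the ratio of the true density to the sampling density. The limit state and the design point values are the assumed ones from the earlier FORM sketch.

import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)
u_star = np.array([-1.859, 2.124])           # design point from the FORM sketch above

def g_u(u):                                   # limit state expressed in U-space
    return (350.0 + 35.0 * u[..., 0]) - (200.0 + 40.0 * u[..., 1])

N = 10_000
u = rng.normal(size=(N, 2)) + u_star          # samples centred at the design point
w = mvn.pdf(u, mean=[0.0, 0.0]) / mvn.pdf(u, mean=u_star)   # importance weights
pf = np.mean((g_u(u) <= 0.0) * w)
print("Pf (importance sampling) ~", pf)       # compare with Phi(-2.822) ~ 2.4e-3

Roughly half of the samples now fall in the failure region, so a few thousand trials suffice where Crude Monte Carlo needed millions.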
10.3.4. Recommendations
As with any other analysis, choosing a particular method must be justified through experience
and/or verification. Experience shows that FORM/SORM estimates are adequate for a wide
range of problems. However, these approximate methods have the disadvantage of not being
quantified by error estimates, except for a few special cases. As indicated, simulation may be
used to verify FORM/SORM results, particularly in situations where multiple design points
might be suspected. Simulation results should include the variance of the estimated
probability of failure, though good estimates of the variance could increase the computations
required. When using FORM/SORM, attention should be given to the ordering of dependent
random variables and the choice of initial points for the search algorithm. Not least, the
results for the design point should be assessed to ensure that they do not contradict physical
reasoning.
10.4. System Reliability Analysis
As discussed in section 3, individual component failure events can be represented by failure
boundaries in basic variable or standard normal space. System failure events can be similarly
represented, see Fig. C.6(a) and C.6(b), and, once more, certain approximate results may be
derived as an extension to FORM/SORM analysis of individual components. In addition,
system analysis is sometimes performed using bounding techniques and some relevant results
are given below.
The failure probability of a series system with m components is given by

Pfsys = P[∪j=1,m Fj]        (C.13)

where Fj is the event corresponding to the failure of the jth component. By describing this
event in terms of a safety margin Mj
P[Fj] = P[Mj ≤ 0] ≈ Φ(−βj)        (C.14)
where βj is its corresponding FORM reliability index, it can be shown that in a first-order
approximation
Pfsys = 1 − Φm[β; ρ]        (C.15a)
where Φm[.] is the multi-variate standard normal distribution function, β is the (m x 1) vector
of component reliability indices and ρ is the (m x m) correlation matrix between safety
margins with elements given by
ρjk = Σi=1,n αij αik ,  j, k = 1, 2, ..., m        (C.15b)
where αij is the sensitivity factor corresponding to the ith random variable in the jth margin.
In some cases, especially when the number of components becomes large, evaluation of
equation (C.15) becomes cumbersome and bounds to the system failure probability may prove
sufficient.
Simple first-order linear bounds are given by
⎡ m ⎤ (C.16a)
[ ]
m
Max P( F j ) ≤ Pfsys ≤ Min ⎢( ∑ P( F j )),1⎥
j =1
⎣ j =1 ⎦
(A.20b)
but these are likely to be rather wide, especially for large m, in which case second-order linear
bounds (Ditlevsen bounds) may be needed. These are given by
P[F1] + Σj=2,m max{ [ P(Fj) − Σk=1,j−1 P(Fj ∩ Fk) ] , 0 } ≤ Pfsys ≤
P[F1] + Σj=2,m { P(Fj) − maxk<j [ P(Fj ∩ Fk) ] }        (C.16b)
The narrowness of these bounds depends in part on the ordering of the events. The optimal
ordering may differ between the lower and the upper bound. In general, these bounds are
much narrower than the simple first-order linear bounds given by equation (C.16a). The
intersections of events may be calculated using a first-order approximation, which appears
below in the presentation of results for parallel systems.
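As a sketch of how the bounds may be evaluated, the code below computes the Ditlevsen bounds (C.16b) from assumed component reliability indices βj and an assumed correlation matrix ρ of the safety margins, using the first-order result P(Fj ∩ Fk) ≈ Φ2(−βj, −βk; ρjk) for the pairwise intersections.

import numpy as np
from scipy.stats import norm, multivariate_normal

beta = np.array([3.0, 3.2, 3.5])                 # assumed component indices
rho = np.array([[1.0, 0.6, 0.4],
                [0.6, 1.0, 0.5],
                [0.4, 0.5, 1.0]])                # assumed margin correlations
m = len(beta)
P = norm.cdf(-beta)                              # component failure probabilities

def p_joint(j, k):
    # bivariate normal first-order approximation of P(Fj and Fk)
    cov = [[1.0, rho[j, k]], [rho[j, k], 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-beta[j], -beta[k]])

lower = upper = P[0]
for j in range(1, m):
    joint = [p_joint(j, k) for k in range(j)]
    lower += max(P[j] - sum(joint), 0.0)
    upper += P[j] - max(joint)
print("Ditlevsen bounds:", lower, upper)

Note that the events are taken in the order given; reordering them may narrow one or the other bound, as remarked above.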
Following the same approach and notation as above, the failure probability of a parallel
system with m components is given by
Pfsys = P[∩j=1,m Fj] = P[∩j=1,m (Mj ≤ 0)]        (C.17)
and the corresponding first-order approximation is

Pfsys = Φm[−β; ρ]        (C.18)

Simple bounds, analogous to equation (C.16a), may also be derived for parallel systems, but these are usually too wide for practical applications. An improved upper bound is

Pfsys ≤ minj,k=1,m [ P(Fj ∩ Fk) ]        (C.19b)
The error involved in the first-order evaluation of the intersections, P[Fj ∩ Fk], is, to a large
extent, influenced by the non-linearity of the margins at their respective design points. In
order to obtain a better estimate of the intersection probabilities, an improvement on the
selection of linearisation points has been suggested.
10.5. Time-Dependent Reliability Analysis

Even in considering a relatively simple safety margin for component reliability analysis such
as M = R - S, where R is the resistance at a critical section in a structural member and S is the
corresponding load effect at the same section, it is generally the case that both S and
resistance R are functions of time. Changes in both mean values and standard deviations could
occur for either R(t) or S(t). For example, the mean value of R(t) may change as a result of
deterioration (e.g. corrosion of reinforcement in an RC bridge implies loss of area, hence a
reduction in the mean resistance) and its standard deviation may also change (e.g. uncertainty
in predicting the effect of corrosion on loss of area may increase as the periods considered
become longer). On the other hand, the mean value of S(t) may increase over time (e.g. due
to higher traffic flow and/or higher individual vehicle weights) and, equally, the estimate of
its standard deviation may increase due to lower confidence in predicting the correct mix of
traffic for longer periods. A time-dependent reliability problem could thus be schematically
represented as in Fig. C.7, the diagram implying that, on average, the reliability decreases
with time. Although this situation is usual, the converse could also occur in reliability
assessment of existing structures, for example through strengthening or favourable change in
use.
Thus, the elementary reliability problem described through equations (C.3a) and (C.3b) may now be formulated as:

Pf(t) = P[g(X(t), t) ≤ 0]        (C.20)

which is the instantaneous failure probability at time t, assuming that the structure was safe at time less than t.
Interest may also lie in predicting when S(t) crosses R(t) for the first time, see Figure C.8, or
the probability that such an event would occur within a specified time interval. These
considerations give rise to so-called ‘crossing’ problems, which are treated using stochastic
process theory. A key concept for such problems is the rate at which a random process X(t)
‘upcrosses’ (or crosses with a positive slope) a barrier or level ξ, as shown in Figure C.9. This
upcrossing rate is a function of the joint probability density function of the process and its
derivative, and is given by Rice’s formula
νξ+ = ∫0,∞ ẋ fXẊ(ξ, ẋ) dẋ        (C.21)
where the rate in general represents an ensemble average at time t. For a number of common
stochastic processes, useful results have been obtained starting from Equation (C.21). An
important simplification can be introduced if individual crossings can be treated as
independent events and the occurrences may be approximated by a Poisson distribution, which
might be a reasonable assumption for certain rare load events.
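For a stationary Gaussian process, Rice's formula (C.21) has the well-known closed form ν+(ξ) = (σẊ/2πσX) exp(−(ξ−μ)²/2σX²); combined with the Poisson assumption for independent crossings this gives the first-passage approximation Pf(t) ≈ 1 − exp(−ν+ t). The sketch below evaluates this; all numerical values are assumptions for illustration.

import numpy as np

mu_x, sigma_x = 0.0, 1.0         # assumed mean and std of the process X(t)
sigma_xdot = 2.0 * np.pi         # assumed std of the derivative process
xi = 3.5                         # barrier level
t = 50.0                         # duration of interest

# mean upcrossing rate of level xi (Rice's formula, Gaussian case)
nu_plus = (sigma_xdot / (2.0 * np.pi * sigma_x)) * np.exp(-(xi - mu_x) ** 2 / (2.0 * sigma_x ** 2))
# Poisson approximation: crossings treated as independent rare events
pf = 1.0 - np.exp(-nu_plus * t)
print(f"nu+ = {nu_plus:.3e} per unit time, Pf({t}) ~ {pf:.3e}")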
Another class of problems calling for time-dependent reliability analysis comprises those related to
damage accumulation, such as fatigue and fracture. This case is depicted in Fig. C.10 via a
fixed threshold (e.g. critical crack size) and a monotonically increasing time-dependent load
effect or damage function (e.g. actual crack size at any given time).
It is evident from the above remarks that the best approach for solving a time-dependent
reliability problem would depend on a number of considerations, including the time frame of
interest, the nature of the load and resistance processes involved, their correlation properties
in time, and the confidence required in the probability estimates. All these issues may be
important in determining the appropriate idealisations and approximations.
Although time variations are likely to be present in most structural reliability problems, the
methods outlined in Sections 3 and 4 have gained wide acceptance, partly due to the fact that,
in many cases, it is possible to transform a time dependent failure mode into a corresponding
time independent mode. This is especially so in the case of overload failure, where individual
time-varying actions, which are essentially random processes, p(t), can be modelled by the
distribution of the maximum value within a given reference period T, i.e. X = maxT{ p(t)}
rather than the point-in-time distribution. For continuous processes, the probability
distribution of the maximum value (i.e. the largest extreme) is often approximated by one of
the asymptotic extreme value distributions. Hence, for structures subjected to a single time-
varying action, a random process model is replaced by a random variable model and the
principles and methods given previously may be applied.
The theory of stochastic load combination is used in situations where a structure is subjected
to two or more time-varying actions acting simultaneously. When these actions are
independent, perhaps the most important observation is that it is highly unlikely that each
action will reach its peak lifetime value at the same moment in time. Thus, considering two
time-varying load processes p1(t), p2(t), 0 ≤ t ≤ T, acting simultaneously, for which their
combined effect may be expressed as a linear combination p1(t)+ p2(t), the random variable
of interest is:
X' = max{ maxT{p1(t)} + p2(t) ;  p1(t) + maxT{p2(t)} }        (C.22b)
This rule (Turkstra's rule) suggests that the maximum value of the sum of two independent
load processes occurs when one of the processes attains its maximum value. This result may
be generalised for several independent time-varying loads. The conditions which render this
rule adequate for failure probability estimation are discussed in standard texts. Note that the
failure probability associated with the sum of a special type of independent identically
distributed processes (so-called FBC process) can be calculated in a more accurate way, as
will be outlined below. Other results have been obtained for combinations of a number of
other processes, starting from Rice’s barrier crossing formula.
For an FBC process, in which the load intensity is constant during each of ni equal intervals within the reference period T and the intensities in successive intervals are independent and identically distributed, the distribution of the maximum follows directly as

Fmax,T,Xi(xi) = [FXi(xi)]^ni        (C.23)
When a number of FBC processes act in combination and the ratios of their repetition
numbers within a given reference period are given by positive integers it is, in principle,
possible to obtain the extreme value distribution of the combination through a recursive
formula. More importantly, it is possible to deal with the sum of FBC processes by
implementing the Rackwitz-Fiessler algorithm in a FORM/SORM analysis.
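A numerical sketch of this idea is given below for the sum of two FBC-type processes: within each interval of the slow process Q1 the fast process Q2 repeats r times, so the extreme of the combination over the reference period follows by conditioning on Q1 and raising to the number of slow intervals, in the spirit of eq. (C.23). Distributions and repetition numbers are assumptions for illustration.

import numpy as np
from scipy.stats import norm

n1, r_rep = 10, 5                    # assumed repetitions: Q2 repeats r_rep times per Q1 interval
q = np.linspace(-5.0, 15.0, 2001)    # integration grid for the slow process Q1
dq = q[1] - q[0]
x = np.linspace(0.0, 25.0, 501)      # levels at which the extreme cdf is evaluated

f1 = norm.pdf(q, loc=3.0, scale=1.0)               # assumed point-in-time density of Q1
F2 = lambda v: norm.cdf(v, loc=2.0, scale=1.5)     # assumed point-in-time cdf of Q2

# P(max of Q1+Q2 over one Q1 interval <= x) = E_Q1[ F2(x - Q1)^r_rep ]
F_int = np.array([np.sum(f1 * F2(xv - q) ** r_rep) * dq for xv in x])
F_max = F_int ** n1                                # n1 independent intervals, cf. eq. (C.23)
print("99% fractile of the combined extreme:", x[np.searchsorted(F_max, 0.99)])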
A deterministic code format, compatible with the above rules, leads to the introduction of
combination factors, ψoi, for each time-varying load i. In principle, these factors express
ratios between fractiles in the extreme value and point-in-time distributions so that the
probability of exceeding the design value arising from a combination of loads is of the same
order as the probability of exceeding the design value caused by one load. For time-varying
loads, they would depend on distribution parameters, target reliability and FORM/SORM
sensitivity factors and on the frequency characteristics (i.e. the base period assumed for
stationary events) of loads considered within any particular combination.
The determination of the first passage probability requires an understanding of the theory of
random processes. Herein, only some basic concepts are introduced in order to see how the
methods described above have to be modified in dealing with crossing problems.
The first-passage probability, Pf(tL), during a period [0, tL] is

Pf(tL) = Pf(0) + (1 − Pf(0)) (1 − P[N(tL) = 0])        (C.24b)

from which different approximations may be derived depending on the relative magnitude of the terms. A useful upper bound is Pf(tL) ≤ Pf(0) + E[N(tL)], where E[N(tL)] is the expected number of upcrossings in [0, tL].
10.6. Figures

Figure C.3: Effect of element correlation and system size on failure probability
(a) series system (b) parallel system

Figure C.4: Schematic representations
(a) random process (b) random field

Figure C.5: Limit state surface in basic variable and standard normal space

Figure C.6(a): Failure region as union of component failure events for series system
Figure C.6(b): Failure region as intersection of component failure events for parallel system

Figure C.8: Schematic representation of crossing problems
(a) slowly varying resistance (b) rapidly varying resistance

Figure C.11: Realization of an FBC process
10.7. Bibliography
[C3] Benjamin J R and Cornell C A, Probability, Statistics and Decision for Civil
Engineers, McGraw Hill, 1970.
[C9] Melchers R E, Structural Reliability: Analysis and Prediction, 2nd edition, J Wiley,
1999.
[C10] Thoft-Christensen P and Baker M J, Structural Reliability Theory and its Applications,
Springer-Verlag, 1982.
[C12] CEB, First Order Concepts for Design Codes, CEB Bulletin No. 112, 1976.
[C13] CEB, Common Unified Rules for Different Types of Construction and Materials, Vol.
1, CEB Bulletin No. 116, 1976.
[C14] Construction Industry Research and Information Association (CIRIA), Rationalisation
of Safety and Serviceability Factors in Structural Codes, Report 63, London, 1977.
11. Annex D: Bayesian Interpretation of Probabilities
11.1. Introduction
This JCSS Probabilistic Model Code offers distribution functions and corresponding
parameter models for loads and structural properties in order to carry out failure probability
calculations for comparison with specified reliability targets. This annex gives guidance on
the interpretation of both input and results of those calculations.
11.2. Discussion
The frequentist interpretation is quite straightforward. It means that if one observes, for a long period of time, say T, a large set of, say, N similar structures, all having a failure rate of p [1/year], one expects the number of failures not to deviate too far from pTN. The deviation should fall within the rules of combinatorial probabilistic calculations. Such an interpretation, however, can only be justified in a stationary world where the amount of statistical or theoretical evidence for all the distribution functions is very large. It should be clear that such a frequentist interpretation of the failure probabilities is out of the question in the field of structural design. In almost all cases the data is too scarce and often only of a very generic nature. Note, however, that a frequentist interpretation can still be used in a conditional
sense. The statement that, given a set of statistical models for the various variables, a
structure has some failure probability, is meaningful and helpful.
The second interpretation mentioned above, the formal approach, gives full credit to the fact that the numbers used in the analysis are based on (common) ideas rather than statistical evidence. The probabilistic design is considered as a strictly formal procedure without any meaning or interpretation. Such a procedure can be believed to be a richer and more consistent design procedure compared to, for instance, a Partial Factor Method or Allowable Stress method. The basic philosophy is that a probabilistic design procedure, yielding on average the same design results as its successful predecessors, is at least as good as or even better than the other methods. So calibration on the average result is the key point, and the absolute values of the distributions and the failure probabilities have no meaning at all. An alternative code, prescribing higher standard deviations (resulting in higher failure probabilities) and correspondingly higher target probabilities, is considered as fully equivalent.
This formal interpretation has many advantages, but it is difficult to maintain. In
many cases, it is at least convenient if the various values in the probabilistic calculations have
some meaning in the real world. It should be possible, for instance, to consider the
distribution functions in this code as the best estimate to describe our lack of knowledge and
use them as priors for Bayesian updating procedures in the case of new data. It should also be
possible to use the models for decision making in structural design and optimisation
procedures for target reliabilities. If this cannot be done the method loses many features of its
attraction.
This leads into the direction of a Bayesian probability interpretation, where probabilities are
considered as the best possible expression of the degree of belief in the occurrence of a
certain event. The Bayesian interpretation does not claim that probabilities are direct and
unbiased predictors of occurrence frequencies that can be observed in practice. The only
claim is that, if the analysis is carried out carefully, the probabilities will be correct if
averaged over a large number of decision situations. The requirement to fulfil that claim, of
course, is that the purely intuitive part is neither systematically too optimistic nor
systematically too pessimistic. The calibration to common practice on the average may be
considered as an adequate means to achieve that goal.
The above statement may sound vague and unsatisfactory at first sight. There seems to be an
almost unlimited freedom to make unproven assessments based on a very individual intuition
only. In this respect, one should keep in mind that:
(1) where data is lacking, statistical parameters like means and standard deviations are not
taken as deterministic point estimates but as random quantities usually with a wide scatter;
in this code the scatter is not the opinion of an individual engineer, but it is based on the
judgement of a group of engineers.
(2) where data is available, estimates can (and often should) be improved on the basis of it;
the minimum requirement is that intuitive probability models should not be in
contradiction with the data.
Within Bayesian probability theory these starting points have been rigorously formalised. As long as no data is available, a so-called uninformative or vague prior estimate is chosen. Given observations, the prior can be updated to a so-called posterior distribution, using Bayes’ Theorem. For details the reader is referred to Part 3.0, Material Properties, General Principles, Annex A. It should be noted that, in the case of sufficient data, this procedure will tend to probability statements that can be interpreted in a purely frequentist way.
Data may of course become available in blocks: in such a case the posterior distribution
resulting from the first block may be used as a prior distribution for the second data block.
That is, in fact, precisely what is present in the various chapters of Parts 2 and 3: the
distributions given can often be considered as “data based priors” based on data from a
generic world wide population. These models can be “updated” if data of a specific site or a
specific producer are available.
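The block-wise updating described above can be illustrated with a minimal conjugate sketch: a normal model with unknown mean and known standard deviation, where the "data based prior" plays the role of the generic models in Parts 2 and 3. All numerical values below are hypothetical.

import numpy as np

sigma = 5.0                      # known std of the observations (assumption)
mu0, tau0 = 30.0, 4.0            # generic prior for the mean: N(mu0, tau0^2)

def update(mu_prior, tau_prior, data):
    """Posterior of the mean after one block of observations."""
    n = len(data)
    prec = 1.0 / tau_prior**2 + n / sigma**2          # posterior precision
    mu_post = (mu_prior / tau_prior**2 + np.sum(data) / sigma**2) / prec
    return mu_post, (1.0 / prec) ** 0.5

block1 = np.array([33.1, 28.4, 31.7])                 # hypothetical site data
block2 = np.array([29.8, 30.9, 32.2, 30.1])
mu1, tau1 = update(mu0, tau0, block1)                 # posterior after block 1 ...
mu2, tau2 = update(mu1, tau1, block2)                 # ... serves as prior for block 2
print(f"posterior mean {mu2:.2f}, posterior std of the mean {tau2:.2f}")

As the amount of data grows, the posterior standard deviation of the mean shrinks and the probability statements approach a frequentist interpretation, as noted above.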
Practically speaking, lack of statistical data may lead to (1) uncertainties in statistical parameters (mean, standard deviation, etc.) and (2) uncertainty in the type of distribution (normal, lognormal, Weibull, etc.). It turns out that the latter type of uncertainty requires an unrealistically large amount of data to achieve a substantial reduction, while calculation results may be very sensitive to it. Moreover, large data sets fulfilling the stationarity requirement may hardly be available. It is exactly on this point that there is a need to standardise the input. It should be noted that in this code most distribution types have the nature of a “point estimate”, neglecting to some extent the distribution uncertainty.
11.3. Conclusion
The conclusion of the foregoing is that distributions and probabilities in this Model Code
should be given a Bayesian degree of belief type interpretation. One may use the distributions
as a start for updating in the presence of specific structure related data and as a basis for
optimisation.
Some additional remarks:

(1) The numbers given in this code do not include the effect of gross errors. This is one of the
main sources of the deviation between calculated probabilities and failure frequencies in
practice.
(2) The justification for a Bayesian probabilistic approach in decision making is that it makes the inevitable element of judgement explicit and minimises its influence. Returning to so-called deterministic procedures because of a lack of statistical data is not a realistic alternative.
(3) In the Bayesian procedure the prior, if no explicit data is available, is often referred to as “subjective” or “person dependent”. In the case of this code this would not be the right terminology. The priors given are not the result of the ideas and experience of a single individual, but of a large group of experts. This gives the distributions some flavour of objectivity, although, of course, still on an intuitive basis.
(4) The system of given distributions and their use in Bayesian updating and Bayesian
decision procedures may be considered as a formal procedure in itself.
Table of contents:
2.0.1 Introduction
2.0.2 Classifications
2.0.3 Modelling of actions
2.0.4 Models for fluctuations in time
2.0.4.1 Types of models
2.0.4.2 Distribution of extremes for single processes
2.0.4.3 Distribution of extremes for hierarchical processes
2.0.5 Models for Spatial variability
2.0.5.1 Hierarchical model
2.0.5.2 Equivalent uniformly distributed load (EUDL)
2.0.6 Dependencies between different actions
2.0.7 Combination of actions
ANNEX 1 - DEFINITIONS
ANNEX 2 - DISTRIBUTIONS FUNCTIONS
ANNEX 3 - MATHEMATICAL COMBINATION TECHNIQUES
2.0.1 Introduction
The environment in which structural systems function gives rise to internal forces, deformations,
material deterioration and other short-term or long-term effects in these systems. The causes of these
effects are termed actions. The environment from which the actions originate can be of a natural
character, for example, snow, wind and earthquake. It can also be associated with human activities
such as living in a domestic house, working in a factory, etc.
Action descriptions are in most cases based on suitably simple mathematical models, describing the
temporal, spatial and directional properties of the action across the structure. The choice of the level
of richness of details is guided by a balance between the quality of the available or obtainable
information and a reasonably accurate modelling of the action effect. The choice of the level of
realism and accuracy in predicting the relevant action effects is, in turn, guided by the sensitivity of
the implied design decisions to variations of this level and the economical weight of these decisions.
Thus the same action phenomenon may give rise to several very different action models dependent on
the effect and structure under investigation.
2.0.2 Classifications
Loads can be classified according to a number of characteristics. With respect to the type of the
loads, the following subdivision can be made:
This classification does not cover all possible actions, but most of the common types of actions can be
included in one or more classes. Some of the classes belong as a whole either to uncontrollable
actions or to controllable actions. Other actions may belong to both e.g. water pressure.
With respect to the variations in time the following classification can be made:
- permanent actions, whose variations in time around their mean are small and slow (e.g. self weight, earth pressure) or which monotonically approach a limiting value (e.g. prestressing, imposed deformation from construction processes, effects from temperature, shrinkage, creep or settlements)
- variable actions, whose variations in time are frequent and large (e.g. all actions caused by
the use of the structure and by most of the external actions such as wind and snow)
- exceptional actions, whose magnitude can be considerable but whose probability of occurrence for a given structure is small in relation to the anticipated time of use. Frequently the duration is short (e.g. impact loads, explosions, earth and snow avalanches).
As far as the spatial fluctuations are concerned it is useful to distinguish between fixed and free
actions. Fixed actions have a given spatial intensity distribution over the structure. They are
completely defined if the intensity is specified in a particular point of the structure (e.g. earth or water
pressure). For free actions the spatial intensity distribution is variable (e.g. regular occupancy loading).
There are two main aspects of the description of an action: one is the physical aspect, the other is the
statistical aspect. In most cases these aspects can be clearly separated. Then the physical description
gives the types of physical data which characterise the action model, for example, vertical forces
distributed over a given area. The statistical description gives the statistical properties of the
variables, for example, a probability distribution function. In some cases the physical and statistical
aspects are so integrated that they cannot be considered separately.
A complete action model consists, in general, of several constituents which describe the magnitude,
the position, the direction, the duration etc. of the action. Sometimes there is an interaction between
the components. There may in certain cases also be an interaction between the action and the
response of the structure.
One can in many cases distinguish between two kinds of variables (constituents) Fo and W describing
an action F (see also part 1, Basis of Design).
F = ϕ(Fo, W)        (2.0.3.1)
Fo is a basic action variable which is directly associated with the event causing the action and
which should be defined so that it is, as far as possible, independent of the structure. For
example, for snow load Fo is the snow load on ground, on a flat horizontal surface
W is a kind of conversion factor or model parameter appearing in the transformation from the
basic action to the action F which affects the particular structure. W may depend on the form
and size of the structure etc. For the snow load example W is the factor which transforms the
snow load on ground to the snow load on roof and which depends on the roof slope, the type
of roof surface etc.
The time variability is normally included in Fo, whereas W can often be considered as time
independent. A systematic part of the space variability of an action is in most cases included in W,
whereas a possible random part may be included in Fo or in W. Eq. (2.0.3.1) should be regarded as a
schematic equation. For one action there may be several variables Fo and several variables W.
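As a small sketch of the two-constituent model for the snow load example, the code below samples F0 (ground snow load) and a conversion factor W and forms F = F0·W. The distribution types and all parameter values are assumptions made only for this illustration.

import numpy as np

rng = np.random.default_rng(2)
N = 100_000
F0 = rng.lognormal(mean=np.log(0.8), sigma=0.4, size=N)   # assumed ground snow load [kN/m2]
W = rng.normal(0.8, 0.12, size=N)                          # assumed ground-to-roof conversion factor
F = F0 * W                                                 # snow load on the roof
print(f"mean {F.mean():.2f} kN/m2, c.o.v. {F.std() / F.mean():.2f}")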
Any action model contains a set of parameters and variables that must be evaluated before the model
can be used. In probabilistic modelling all action variables are in principle assumed to be random
variables or processes while other parameters may be time or spatial co-ordinates, directions etc.
Sometimes parameters may themselves be random variables, for example when the model allows for
statistical uncertainty due to small sample sizes.
An action model often includes two or more variables of different character as is described by eq.
(2.0.3.1). For each variable a suitable model should be chosen so that the complete action model
consists of a number of models for the individual variables.
The definition of the models for these quantities requires probability distributions (see annex 2) and a
description of the correlation patterns.
To describe time-dependent loads, one needs the probability distribution for the “arbitrary point in time values” and a description of the variations in time. Some typical process models are (see figure 2.0.4.1):
If the load intensities in subsequent time intervals of model (e) are independent, the model is referred
to as a FBC model (Ferry Borges Castanheta model).
In many applications a combination of models is used, e.g. for wind the long term average is often
modelled as an FBC model while the short term gust process is a continuous Gaussian process. Such
models are referred to as hierarchical models (see Part 1, Basis of Design, Section 5.4). Each term in
such a model describes a specific and independent part of the time variability. For a number of
further definitions and notions, reference is made to Annex 1.
Figure 2.0.4.1: Typical process models for load variation in time (panels (a) to (d), load intensity versus time t)
At the design stage the main interest is normally directed to the maximum value of the load in some reference period of time to. A quite general and useful upper-bound formula for the distribution of the maximum is Fmax,to(a) ≥ 1 − ν+(a) to, where ν+(a) is the mean upcrossing rate of the level a. For a stationary Gaussian process with standardised autocorrelation function ρ(τ), this rate is given by

ν+(a) = (1/2π) √(−ρ''(0)) exp(−β²/2)        ([Link])

where β is the level a expressed in standard deviations from the mean.
Consider the case that the load model contains slowly and rapidly varying parts, as well as random
variables that are constant in time (see figure [Link]).
F=R+Q+S ([Link])
In that case the following expression (see Annex 3, A.3.5) can be used:
νs+(a|RQ) = upcrossing rate of level “a” for process S, conditional upon R and Q
∆t = 1/λ = time interval for the rectangular process Q
ER and EQ denote the expectation operator over all variables R and Q respectively.
Figure [Link]: Realisations of the slowly varying process Q(t) and the rapidly varying process S(t)
As an example of the spatial modelling of actions using a hierarchical model, consider the live load in an office building:

q(x, y) = m + ∆F1 + ∆F2 + ∆F3(x, y)

where:
m is the overall mean load intensity for the user category under consideration
∆F1 is a stochastic variable which describes the variation between the load on different floors.
The distribution function for ∆F1 has the mean value zero and the standard deviation σ1
∆F2 is a stochastic variable which describes the variation between the load in rooms on the same
floor but with different floor areas. The distribution function for ∆F2 has the mean value zero
and the standard deviation σ2
∆F3 is a random field which describes the spatial variability of the load within a room.
The total variability of the samples taken from the total population is described by ∆F1 + ∆F2 + ∆F3.
The variability within the subpopulation of floors is described by ∆F2 + ∆F3.
In many cases the random field q is replaced by a so called Equivalent Uniformly Distributed Load
(EUDL). This load is defined as:
qEUDL(t) = [∫ q(x, y, t) i(x, y) dA] / [∫ i(x, y) dA]        ([Link])

where i(x, y) is the influence function for some specific load effect (e.g. the midspan bending moment).
For given statistical properties of the load field q(x,y) the mean and standard deviation of qEUDL can
be evaluated. For a homogeneous field, that is a random field where the statistical properties of
q(x,y) do not depend on the location, we give here the resulting formulas:
Here ρ(d) is the correlation function describing the correlation between the small scale load qloc, on
the two points (x,y) and (ξ,η). This function may be of the form:
with ∆r2 = (x-ξ)2 + (y-η)2, ∆r being the distance between the two points, and dc some scale distance.
The correlation function tends to zero for distances ∆r much larger than dc.
If the field can be schematised as an FBC-field, the formula for σ2(qEUDL) can be simplified to:

σ2(qEUDL) = σ2(qloc) κ (Ao/A)
Here Ao is the reference area of the FBC-field and A stands for the total area under consideration, the
so called tributary area. The formula is valid only for A>Ao.
The parameter κ is a factor depending on the shape of the influence line i(x,y). Values are presented
in Figure [Link]. The value κ = 1 corresponds to a constant value of i(x, y).
Figure [Link]: The factor κ for various shapes of the influence function i(x, y) over the tributary area (κ = 1.0, 1.4, 2.0 and 2.4)
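A minimal sketch of the simplified EUDL variance for an FBC field, σ²(qEUDL) = σ²(qloc)·κ·Ao/A (valid for A > Ao, with Ao/A taken as 1 otherwise, as noted for the corresponding live load formula in Part 2.2), is given below; the reference area, field standard deviation and κ are assumed values.

def eudl_std(sigma_loc, a, a0=20.0, kappa=2.0):
    """Std of the equivalent uniformly distributed load for an FBC field (sketch)."""
    ratio = a0 / a if a > a0 else 1.0   # Ao/A capped at 1 for small areas
    return (sigma_loc**2 * kappa * ratio) ** 0.5

for area in (10.0, 20.0, 50.0, 200.0):
    print(area, round(eudl_std(sigma_loc=0.5, a=area), 3))

The output illustrates the averaging effect: the larger the tributary area relative to the reference area, the smaller the variability of the equivalent load.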
Actions of the same nature are, for instance, floor loads on different floors in one building or the wind loads on the front and back wall. The combination of wind and snow is a typical example of the combination of actions of a different type. Note that the distinction may sometimes be less clear: it may be difficult to decide whether floor loads of a completely different type in one building (say office loads and storage loads) are loads of the same nature or of a different nature.
If the actions are of the same nature, one might better consider them as components of one action.
The various components are normally described by similar probabilistic models. The basic question is
then to model the statistical dependency between the processes. In general this is a purely
mathematical problem. Details of the mathematical description of the dependencies depend on the
nature of the physical relationship and the nature of the processes themselves. One possibility is to
construct a hierarchical model as has been explained in section 2.0.5.1. For two stationary continuous Gaussian processes x(t) and y(t) the correlation may be described either by the cross-correlation function Rxy(τ) or alternatively by the cross spectrum Sxy(ω) (see Annex 1). For pulse type
processes we may have to distinguish between the correlation in amplitude, arrival time and duration.
Floor loads in multi-storey buildings are a good example where all three correlations are of
importance.
If the actions are of a different nature, they sometimes may show quite complex physical interactions.
Typical examples are:
1. the probability of a fire starting given that an earthquake has occurred, and
2. the probability of collapse, given earthquake and fire.
The second analysis should take into account that the fire extinguishing devices may not be working and that the structure may already have been damaged by the earthquake.
In addition, of course, one still needs to consider the standard cases of collapse under earthquake alone and collapse under fire alone.
So in all the above examples one needs to build a more advanced physical model on the one hand, and conditional probability models of one load given the (extreme) condition of the other on the other hand. In most cases it may be convenient to define one of the processes as the “leading” one and describe arrival times and amplitudes of the second process conditional upon the occurrence and amplitude of the first one.
In this model code little or no guidance is given on this matter. The user of this model code should, however, always be aware of these possible correlations and interactions. It is stressed that these interactions may be of great importance to the reliability of the structure.
From a mathematical modelling point of view the load on a structure is a joint set F(t) of time varying random fields. This set of loads gives a vectorial load effect E(t) in a given cross section or point of the structure at time t as a function of F(t) (i.e. a random process); in the scalar case E(t) is a single such process. The reliability problem related to the considered point is to evaluate the probability Pf that E(t) leaves the non-failure domain V at some future time, where V is the domain defined by the strength properties at the considered point and the limit state.
The load combination problem is to formulate a reasonably simple but for the considered engineering
purpose sufficiently realistic mathematical model that defines F(t). The needed level of detailed
modelling of F(t) depends on the filtering effect of the function that maps F(t) into the load effect
E(t). This filtering effect is judged under due consideration of the sensitivity of the probability Pf to
the detailing of the models. The sensitivity question is tied to the last part of the load combination
problem which is actually to compute the value of Pf. Thus, to be operational, the modelling of F(t)
should be simple enough to enable at least a computer simulation of the scalar process E(t) to an
extent that allows an estimation of Pf.
First the relevant set of different action types is identified. This identification defines the number of
elements in the set F(t) and the subdivision of F(t) into stochastically independent subsets. The
modelling is next concentrated on each of these subsets with dependent components.
The mathematical difficulty in evaluating outcrossing probabilities for processes of the type ([Link]) lies in the possibly very different nature of the various contributors Fi, including all kinds of continuous and intermittent processes. Numerical solutions will often prove necessary, but analytical solutions may also prove very helpful. Reference is made to Annex 3 and to the literature.
ANNEX 1 - DEFINITIONS
Covariance function

r(t1, t2) = Cov[F(t1), F(t2)] = E[(F(t1) − m1)(F(t2) − m2)]

where m1 = E[F(t1)] and m2 = E[F(t2)].
Stationary processes
The process is defined for −∞ < t < ∞. If, for all values ti and for all values τ, chosen such that 0 < ti < to and 0 < ti + τ < to, the stochastic variable x(ti + τ) has the same distribution function as the stochastic variable x(ti), the stochastic process x(t) is stationary.
If the mean value function m(t) is constant and the covariance function r(t1, t2) depends solely on the difference τ = (t2 − t1), the process is said to be wide-sense stationary.
Thus the covariance function for a stationary or a wide-sense stationary process may be written as r(τ).
The concept of stationary applied to action processes should in most cases be interpreted as wide-
sense stationary.
Ergodic processes
A process is ergodic if averaging over several realisations and averaging with respect to time (or
another index parameter) give the same result.
For ergodic processes a relation between the point-in-time value distribution function FF and the excursion time t is determined, for a chosen reference period to, by

1 − FF(F) = t/to

The normalised autocorrelation function is defined as

ρ(τ) = r(τ) / r(0)
Spectrum
S(n) = ∫−∞,∞ e^(−i2πnτ) r(τ) dτ
S(n) may be regarded as a measure of how the process is built up of components with different
frequencies. The total variance of the process is:
Var Q = 2 ∫0,∞ S(n) dn
Gaussian processes

A process is Gaussian if all its finite-dimensional distributions are (multivariate) normal; a stationary Gaussian process is therefore completely described by its mean value and its covariance function or, equivalently, its spectrum.

Non-Gaussian processes

A special but important class of non-Gaussian, scalar and differentiable processes is built by a memoryless transformation from a normal process, i.e.
S (t) = h (U(t))
where U(t) is a standard normal process and h(u) is an arbitrary function. For S(t) any admissible
(unimodal) distribution function can be chosen thus defining a certain class of functions h(u). In
addition the autocorrelation function ρs(t1, t2) has to be specified. However, there are some restrictions
on the type of autocorrelation function.
The Hermite process is a special case of the Nataf process. All marginal distributions must be of Hermite type. For this process the solution of the integral equation occurring for the autocorrelation function of the equivalent (or better, generating) standard normal process is analytic. The standard Hermite process has the representation (i.e. a special case of the function h(u))

S(t) = κ [U(t) + h3 (U(t)² − 1) + h4 (U(t)³ − 3 U(t))]

with the coefficients depending on the first four moments of the marginal distribution of the non-
normal process. In addition, the Hermite process requires specification of the autocorrelation
function of S(t). Again, there are certain restrictions on the moments of the marginal distributions as
well as on the autocorrelation function.
Rectangular wave renewal processes

Scalar rectangular wave renewal processes are useful models for processes changing their amplitude at random renewal points in a random fashion. A scalar rectangular wave renewal process is characterised by the jump rate λ and the distribution function of the amplitude. The renewals occur independently of each other. No specific distribution is assigned to the interarrival times; therefore, a renewal process characterised only by a jump rate captures only long term statistics. The mean duration of pulses is asymptotically equal to 1/λ. For the special case of a Poisson rectangular wave process the interarrival times, and so the durations of the pulses, are exponentially distributed with mean 1/λ. In the special case of a Ferry Borges-Castanheta process the durations are constant and the repetition number r = (t2 − t1)/∆, with ∆ the duration of the pulses, is equal to λ(t2 − t1). Also, the sequence of amplitudes is an independent sequence.
The jump rate can be a function of time as well as the parameters of the distribution function of the
amplitudes.
It is assumed that a rectangular wave process jumps from a random value S−(t) to a new value S+(t + δ), δ → 0, at a renewal, without returning to zero. Rectangular wave renewal processes must be regular processes, i.e. the probability of occurrence of two or more renewals in a small time interval must be negligible (of o-order). Non-stationary rectangular wave renewal processes are processes which have either time-dependent parameters of the amplitude distributions and/or time-dependent jump rates.
Random fields
A random field may be regarded as a one-, two- or three-dimensional stochastic process. The time t is
substituted by the space co-ordinates x, y, z.
For the two-dimensional case the covariance function of a stationary (homogeneous) random field may be written as r(∆x, ∆y), depending only on the separations ∆x and ∆y between the two points considered.
The concepts of stationarity, ergodicity, etc. are in principle the same as for stochastic processes.
Vector processes
Two stationary Gaussian processes F1 and F2 are statistically completely described by their mean
values, auto-spectra and the cross spectrum. The latter is defined by:
S12(n) = ∫−∞,∞ e^(−i2πnτ) r12(τ) dτ
A vector of n stationary Gaussian processes can be described by n mean values, n auto-spectra and
n(n-1) cross spectra. Note that Sij is the complex conjugate of Sji.
ANNEX 3 - MATHEMATICAL COMBINATION TECHNIQUES
Consider the case that two actions Q1(t) and Q2(t) are to be combined. Assume that these actions can be described as rectangular or square wave models (Figure A3.1). The following assumptions are made about the processes:
Figure A3.1: Two square wave processes Q1(t) (pulse duration τ1, maximum Q1max) and Q2(t) (pulse duration τ2, maximum Q2max) within the reference period tr
Define Q2c as the maximum value of Q2 occurring during the interval τ1, with the probability distribution function:

FQ2c(Q) = [FQ2*(Q)]^(τ1/τ2)        (A3.1)
Assume a linear relationship between the action effect E and the actions:
E = c1 Q1 + c2 Q2 (A3.2)
The maximum action effect Emax from Q1 and Q2 during the reference period to can then be written
as:
The maximum should be taken over all intervals τ1 within the reference period to.
As an approximation, the resulting action effects could be calculated as the maximum of the
following two combinations (Turkstra's rule):
Emax = max{ c1 Q1,max + c2 Q2c ;  c1 Q1c + c2 Q2,max }        (A3.4)
It should be noted that the Turkstra Rule gives a lower bound for the failure probability.
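The following Monte Carlo sketch compares Turkstra's rule (A3.4) against a direct simulation of the maximum combined effect for two FBC-type loads. All distributions, repetition numbers and coefficients are assumed values for illustration.

import numpy as np

rng = np.random.default_rng(3)
c1 = c2 = 1.0
n1, r_rep = 10, 5                 # assumed repetition numbers within the reference period
N = 20_000                        # simulated reference periods

q1 = rng.normal(3.0, 1.0, (N, n1))          # assumed pulses of the slow load Q1
q2 = rng.normal(2.0, 1.5, (N, n1, r_rep))   # assumed pulses of the fast load Q2

# exact maximum of the combination (Q1 is constant within each of its intervals)
exact = (c1 * q1[:, :, None] + c2 * q2).max(axis=(1, 2))

# combination 1: Q1 at its maximum plus the companion value Q2c in that interval
i1 = q1.argmax(axis=1)
comb1 = c1 * q1.max(axis=1) + c2 * q2[np.arange(N), i1, :].max(axis=1)

# combination 2: Q2 at its maximum plus the value of Q1 in that interval
flat = q2.reshape(N, -1).argmax(axis=1)
comb2 = c1 * q1[np.arange(N), flat // r_rep] + c2 * q2.reshape(N, -1).max(axis=1)

turkstra = np.maximum(comb1, comb2)
level = 12.0                                 # arbitrary threshold for comparison
print((exact > level).mean(), (turkstra > level).mean())

Since each Turkstra combination corresponds to one particular instant of the true process, the rule never exceeds the exact maximum and hence slightly underestimates the failure probability, in line with the remark above.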
Outcrossing approach

Consider the more general event that the random state vector Z(τ), representative for a given problem, enters the failure domain {z | g(z) ≤ 0}, where g(.) is the limit state function. Z(τ) may conveniently be separated into three components as:

Z(τ)ᵀ = (Rᵀ, Q(τ)ᵀ, S(τ)ᵀ)

where R is a vector of random variables independent of time t, Q(τ) is a slowly varying random vector sequence and S(τ) is a vector of not necessarily stationary but sufficiently mixing random process variables having fast fluctuations as compared to Q(τ).
In the general case where all the different types of random variables R, Q(τ) and S(τ) are present, the failure probability not only must be integrated over the time invariant variables R, but an expectation operation must also be performed over the slowly varying variables Q(τ):

Pf(tmin, tmax) ≈ ER[ 1 − exp( − ∫tmin,tmax EQ[ν+(τ; R, Q)] dτ ) ]
It should be observed that the expectation operation with respect to Q is performed inside the
exponent, whereas the expectation operation with respect to R is performed outside the exponent
operator. If the point process of exits is a regular process which can be assumed in most cases, the
conditional expectation of the number of exits in the time interval [tmin, tmax] can be determined from:
E[N+(tmin, tmax; r, q)] = ∫tmin,tmax ν+(τ; r, q) dτ        (A3.6)

ν+(τ; r, q) = lim∆→0 (1/∆) P({S(τ) ∈ V} ∩ {S(τ + ∆) ∉ V} | r, q)        (A3.7)
If the vector S consists of n components (S1, ..., Sn), all of rectangular wave type, the following formula can be used:

ν+ = Σi=1,n νi P{ (S1, S2, ..., Si−, ..., Sn) ∈ V ∩ (S1, S2, ..., Si+, ..., Sn) ∉ V }        (A3.8)

where Si− and Si+ are two realisations of Si, one before and one after some particular jump, and νi is the jump rate of Si.
Intermittent processes
Intermittent processes are a practically important generalisation for all types of random processes.
Although more general forms are known, only the simplest type of intermittency is discussed below. The renewals of the times at which the process is "on" follow a Poisson renewal process with rate κ (or mean interarrival time 1/κ). At a renewal the process activates an "on"-state (state "1"). The "off"-states are denoted by "0". The initial durations of "on"-states have an exponential distribution with mean 1/µ, independent of the arrival times. However, it is assumed that an "on"-time also ends if a next renewal occurs, so that the durations have a truncated distribution. By assuming random
initial conditions the probabilities of the “on/off'-states are then determined by
poff(t) = µ/(κ + µ) + [(κ − µ)/(κ + µ)] exp[−(κ + µ) t]        (A.3.9)

pon(t) = 1 − poff(t) = κ/(κ + µ) − [(κ − µ)/(κ + µ)] exp[−(κ + µ) t]        (A.3.10)
In general it is assumed that the "on/off"-process is already in its stationary state, where the last terms in these equations vanish. In contrast to rectangular wave renewal processes, where a rectangular pulse lasts exactly until the next renewal and its duration is exponentially distributed with mean 1/λ for a Poissonian renewal process, the "on"-times are now truncated at the next renewal. It is easily shown that the effective duration of the "on"-times is then also exponential, but with mean 1/(κ + µ). The so-called interarrival-duration intensity is defined by ρ = κ/µ. For ρ = κ/µ → ∞ the processes are almost always active. For κ/µ → 0 one obtains spike-like processes.
Intermittencies can also be defined for differentiable processes. If this is a dependent vector process, the entire vector process must have a common ρ, that is, all components of the vector must have the same κ and µ. Independent differentiable vector processes, however, can have different ρ's.
In the case of a single intermittent process with κto > 1 and µto << 1, the periods where the intermittent load is present can conveniently be put together. The failure probability is then approximately given by:

Pf(tmin, tmax) ≈ νon T + νoff (to − T)        (A.3.11)
where T = κ t0 / µ = the total expected time that the intermittent load is active and to = tmax - tmin; νon and
νoff are the failure rates for present and absent intermittent load respectively.
In the case of two mostly absent uncorrelated intermittent loads, the same approximation principle can
be applied, leading to:
Pf(tmin, tmax) = (κ1/µ1)(κ2/µ2) νon,on to
               + (κ1/µ1)(1 − κ2/µ2) νon,off to
               + (1 − κ1/µ1)(κ2/µ2) νoff,on to
               + (1 − κ1/µ1)(1 − κ2/µ2) νoff,off to        (A.3.12)
where νon,on is the failure rate for both intermittent loads present, etc.
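A direct sketch of eq. (A.3.12) is given below; the interarrival rates κi, the mean "on" rates µi and the four conditional failure rates are all assumed values chosen for illustration.

def pf_two_intermittent(k1, m1, k2, m2, nu, t0):
    """Eq. (A.3.12); nu = (nu_on_on, nu_on_off, nu_off_on, nu_off_off)."""
    p1, p2 = k1 / m1, k2 / m2        # fraction of time each load is "on"
    return (p1 * p2 * nu[0] +
            p1 * (1 - p2) * nu[1] +
            (1 - p1) * p2 * nu[2] +
            (1 - p1) * (1 - p2) * nu[3]) * t0

pf = pf_two_intermittent(k1=0.5, m1=50.0, k2=1.0, m2=100.0,
                         nu=(1e-3, 1e-5, 2e-5, 1e-7), t0=50.0)
print(f"Pf ~ {pf:.2e}")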
2.1 Self weight

Table of contents:
2.1.1. Introduction
2.1.2 Basic model
2.1.3 Probability density distribution functions
2.1.4 Weight density
2.1.5 Volume
List of symbols:
d = correlation length
V = volume described by the boundary of the structural part
2.1.1 Introduction
The self weight concerns the weight of structural and non-structural components. The main characteristics of
the self weight can be described as follows:
The variability within a structural part is normally small and can often be neglected. However, for some types of problem (e.g. static equilibrium) it may be important.

2.1.2 Basic model

The self weight G of a structural part is given by

G = ∫V γ dV        (1)
where:
V is the volume described by the boundary of the structural part. The volume of V is Vol.
γ is the weight density of the material.
For a part where the material can be assumed to be reasonably homogeneous eq. (1) can be written
G = γav V (2)
where
γav is an average weight density for the part (see further section 2.1.4).
The weight density and the dimensions of a structural part are assumed to have Gaussian distributions. To
simplify the calculations the self weight, G, may as an approximation be assumed to have a Gaussian
distribution.
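A minimal sketch of this Gaussian approximation for G = γav·V is shown below, propagating the first two moments of the weight density and the volume under an independence assumption; all numerical values are assumed for illustration.

mu_gamma, v_gamma = 25.0, 0.04          # assumed mean density [kN/m3] and its c.o.v.
mu_V, v_V = 0.30 * 0.50 * 6.0, 0.02     # assumed beam volume [m3] and its c.o.v.

mu_G = mu_gamma * mu_V
# first-order c.o.v. of a product of independent variables
v_G = (v_gamma**2 + v_V**2) ** 0.5
print(f"G ~ N({mu_G:.1f}, {(v_G * mu_G):.2f}^2) kN")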
Total variability
Mean values, µ γ , and coefficients of variation, Vγ , for the total variability of the weight density of some
common building materials are given in table 2.1.1.
Table 2.1.1. Mean value and coefficient of variation for weight density 1)
Spatial correlations
Between the densities at two points within one member, the following correlation can be considered to be present:

ρ(∆r) = ρo + (1 − ρo) exp(−(∆r/d)²)

where ∆r is the distance between the two points and d is the correlation length.
Only correlations in the length dimensions of a structural part are of importance. For beams the weight density over the cross section, and for plates over the height, may be considered as fully correlated.
Between points in two different members, but within one building, a constant correlation ρm is assumed to be
present.
99-CON-DYN/M0037
Februari 2001
25
In the absence of more detailed information the following values can be used:
d = 10 m (beam/column); 6 m (plate); 3 m (volume)
ρo = 0.85
ρm = 0.70
Note: For large members the variability of the weight density may be taken as V√ρo; for a whole structure consisting of many members the variability may be taken as V√ρm, where V is the total variability
according to table 2.1.1.
2.1.5 Volume
In most cases it may be assumed that the mean values of the dimensions are equal to the nominal values i.e. the
dimensions given on drawings, in descriptions etc. The mean value of the volume, V, of the structural parts is
calculated directly from the mean values of the dimensions.
The standard deviation of the volume, V, is calculated directly from the values of the standard deviation for the
dimensions. Standard deviations for cross section dimensions are given in table 2.1.2 for some common
building materials and types of structural elements.
Table 2.1.2. Mean values and standard deviations for deviations of cross-section dimensions from their
nominal values.
                                        mean deviation    standard deviation
Rolled steel
  steel profiles, area A                0.01 Anom         0.04 Anom
  steel plates, thickness t             0.01 tnom         0.02 tnom
Concrete members 2)
  anom < 1000 mm                        0.003 anom        4 mm + 0.006 anom
  anom > 1000 mm                        3 mm              10 mm
Masonry members
  unplastered                           0.02 anom         0.04 anom
  plastered                             0.02 anom         0.02 anom
Structural timber
  sawn beam or strut                    0.05 anom         2 mm
  laminated beam, planed                ≈ 0               1 mm
1) The values refer to large populations. They are based on data from various sources and they concern members with currently acceptable dimensional accuracy.
2) The values are valid for concrete members cast in situ. For concrete members produced in a factory the deviations may be considerably smaller.
The variability within a component (e.g. the variability of the cross section area along a beam) may be treated according to the same principles as presented for the weight density in section 2.1.4.
Reference
CIB W81, Actions on Structures: Self Weight, CIB Report No. 115, Rotterdam.
2.2 Live load

Table of contents:
List of symbols:
A = area [m2]
dp = duration of intermittent load [year]
i = influence function
m = mean load intensity in [kN/m2]
p = intermittent load in [kN/m2]
q = sustained load in [kN/m2]
S = load effect in [kN/m2]
T = reference time in [year]
V = zero mean normal distributed variable in [kN/m2]
W = load intensity in [kN/m2]
The live loads on floors in buildings are caused by the weight of furniture, equipment, stored objects
and persons. Not included in this type of load are any structural or non-structural elements, partition walls or
extraordinary equipment. The live load is distinguished according to the intended user category of the building, i.e. domestic buildings, hotels, hospitals, office buildings, schools, and stores. At the design stage consideration should also be given to possible changes of use during the life-time. Areas dedicated to the storage of goods, materials,
etc. must be treated separately. Live loads vary in time and space in a random manner. The spatial variations
are assumed to be homogeneous in a first approximation. With respect to the variation in time, it is divided into
two components, the sustained load and the intermittent load.
The sustained load contains the weight of furniture and heavy equipment. The load magnitude according to the model represents the time average of the real fluctuating load. Changes are usually related to changes in use and of users in a building. Short term fluctuations are included in the uncertainties of this load part.
The intermittent load represents all kinds of live loads which are not covered by the sustained load. Typical sources are gatherings of people, crowded rooms during special events, or the stacking of furniture during remodelling. The relative duration of an intermittent load is fairly small.
The load intensity is represented by a stochastic field W(x,y), whereby the parameters depend on
the user category of the building.
W ( x, y ) = m + V + U ( x, y ) (1)
where m is the overall mean load intensity for a particular user category, V is a zero mean normal distributed
variable and U(x,y) is a zero mean random field with a characteristic skewness to the right. The quantities V
and U are assumed to be stochastically independent.
The load effects calculated from the model shall describe the load effects caused by the real load, with
sufficient accuracy. For linear elastic systems, where superposition is possible, the load effect S is written as:
S = ∫A W(x,y) i(x,y) dA (2)
where W(x,y) is the load intensity and i(x,y) is the influence function for the load effect over a considered area
A.
For non-linear structural response, stepwise linearity can be assumed, whereby the proposed relation
for the load effect can be used in each step. The load intensity W is substituted by the step ∆W, and the
influence function i(x,y) must reflect the total load situation, which results in a corresponding step ∆S for the
load effect. When applying the theory of plasticity, the influence function is proportional to the deflection
corresponding to the mechanism.
An equivalent uniformly distributed load for the sustained load per unit area is that load having the same
load effect as the original load field, i.e.

q = ∫A W(x,y) i(x,y) dA / ∫A i(x,y) dA (3)

The moments of q are:

E[q] = m
Var[q] = σV² + σU² (A0/A) κ (4)
whereby the factor κ is given in Figure ([Link]) in Part 2.0. Note that for A<A0 one should take A0/A = 1.
The variable V describes the variability of sustained loads related to areas A1 and A2, which are assumed
to be independent and non-overlapping. These areas can be either on the same floor or on different floors. The
covariance between the corresponding loads q1 and q2 is given as:
Cov[q1, q2] = σV² (5)
The variable V is assumed to be normally distributed. The random field U(x,y) has a
specific skewness to the right, and in consequence so have the load effect S and the sustained load q. A Gamma
distribution for the sustained load fits the actual observations best, with parameters defined through the
relations E[q] = k µU and Var[q] = k µU².
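As an illustration of relations (4) and of the Gamma fit, the following sketch computes the moments of the equivalent sustained load and the corresponding Gamma parameters. It is only a minimal sketch: the numerical values of m, σV, σU, A, A0 and κ are placeholders, not recommended values.

    # Sketch: moments of the equivalent sustained load q, Eqs. (4), and
    # Gamma parameters from E[q] = k*mu_U, Var[q] = k*mu_U**2.
    # All numerical inputs are illustrative placeholders.

    m, sigma_V, sigma_U = 0.5, 0.3, 0.6   # kN/m2, hypothetical user category
    A, A0, kappa = 50.0, 20.0, 2.0        # influence area, reference area, kappa from Part 2.0

    ratio = min(A0 / A, 1.0)              # for A < A0 one takes A0/A = 1
    E_q = m
    Var_q = sigma_V**2 + sigma_U**2 * ratio * kappa

    mu_U = Var_q / E_q                    # Gamma scale parameter
    k = E_q**2 / Var_q                    # Gamma shape parameter
    print(f"E[q]={E_q:.3f} kN/m2  Var[q]={Var_q:.3f}  shape k={k:.3f}  scale={mu_U:.3f}")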
The load intensity for the intermittent load p is represented by the same stochastic field as the sustained
load, whereby the parameters depend on the user category of the building. The intermittent load can generally
be considered as concentrated load. But, for design purposes, the same approach as for the sustained load is
used. The duration of the intermittent load dp is considered as deterministic.
The equivalent uniformly distributed load for intermittent loads p has the statistical properties as the
sustained load and can be evaluated in the same manner. Generally, there is a lack of data for this load. The
standard deviation normally gets values in the same magnitude as the mean value, E[p] = µp. Therefore, the
intermittent load is assumed to be exponentially distributed.
The time between load changes is assumed to be exponentially distributed, then the number of load
changes is Poisson distributed. The probability function for the maximum sustained load is given by:
Fqmax(x) = exp[ −λT (1 − Fq(x)) ] (6)
where Fq(x) is the probability function of the sustained load, T is the reference time, like the anticipated
lifetime of the building, and λ is the occurrence rate of sustained load changes. Thus λT is the mean of the
number of occupancy changes.
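A minimal numerical sketch of Eq. (6), assuming the Gamma model for Fq derived above and using SciPy for the Gamma CDF; λ, T and the Gamma parameters are placeholder values.

    # Sketch: distribution of the maximum sustained load, Eq. (6):
    # F_qmax(x) = exp(-lam*T*(1 - F_q(x))), with F_q a Gamma CDF.
    from math import exp
    from scipy.stats import gamma

    k, mu_U = 1.0, 0.5      # placeholder Gamma shape/scale of the sustained load q
    lam, T = 0.2, 50.0      # occupancy change rate [1/year], reference time [year]

    def F_qmax(x):
        return exp(-lam * T * (1.0 - gamma.cdf(x, a=k, scale=mu_U)))

    for x in (1.0, 2.0, 3.0):   # load level in kN/m2
        print(x, round(F_qmax(x), 4))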
The maximum of the intermittent load is defined to occur as a Poisson process in time with the mean
occurrence rate ν. The average duration of the intermittent load depends on the process, i.e. personnel,
emergency or remodeling.
The maximum load which will occur in a building is a combination of sustained load and intermittent
load. Assuming a stochastic independence between both load types, the maximum load during one occupancy
is obtained from the convolution integral. The total maximum load during the reference time T is obtained by
employing the extreme value theory.
In cases with a high share of sustained load, duration statistics become of interest, especially for creep
and shrinkage problems. Generally, the intermittent load will then be of little interest. From the assumed extreme
value distribution the statistical quantities of the excursion time τ above a certain level x can be derived:

E[τ(x)] = T (1 − Fq(x))
Var[τ(x)] = 2T (1 − Fq(x)) / λ (7)
References
[1] CIB W81. Actions on Structures - Live Loads in Buildings. Conseil International du Bâtiment pour la
Recherche l'Etude et la Documentation (CIB). Report 116, Rotterdam, 1989.
[2] EC 1-Part 2.1: Actions on structures - Densities, self-weight, imposed loads. Eurocode 1 - Basis of Design
and Actions on Structures. Comité Européen de Normalisation (CEN). Pre-standard draft, Brussels, 1994.
[3] Rackwitz R: Live Loads in Buildings. Manuscript, unpublished, Munich, 1995.
[4] PMC Part 1: Basis of Design. Probabilistic Model Code - third draft. Joint Committee on Structural Safety
(JCSS), 1995.
Table of contents:
List of symbols:
i = influence coefficient
td = busy time per year
ty = busy days per year
L = weight of car in kN
S = load effect
T = reference time in years
N = number of parking places
In car parks the loads on parking areas and on driveways may be distinguished. In general, the loads for
regulated parking dominate those for spatially free parking. Further, the entries and parking places are
such that only certain categories of vehicles can use the facility. It is sufficient to distinguish between facilities
for light vehicles, like normal passenger cars, station wagons and vans, and for heavy vehicles, like trucks and
busses. For each parking facility it can conservatively be assumed that the vehicles form an independent
sequence, each vehicle with a random weight that remains the same at arrival and when leaving the place. At the
beginning of busy periods it can conservatively be assumed that a parking place left by a car will
immediately be occupied by another car. Thus the loading process due to vehicles is a rectangular wave
renewal process.
With respect to the temporal fluctuations one can distinguish the following usage categories for light
vehicles:
• car parks belonging to residential areas
• car parks belonging to factories, offices etc.
• car parks belonging to commercial areas
• car parks belonging to assembly halls, sport facilities etc.
• car parks connected with railway stations, airports, etc.
The temporal fluctuations are summarized in table 1. For parking facilities for heavy vehicles similar
distinctions can be made.
The mean weight of light vehicles can be assumed to be about E[L] ≈ 15 kN with a coefficient of
variation of 15 to 30%, depending on the usage of the parking facility and the traffic mixture. A parking place
covers an area of about 2.4 m × 5.0 m. A normal distribution can be assumed. In general, light vehicles can be
modeled by point loads located in the middle of the parking places.
Calculation of load effects has to take proper account of influence functions according to

S(t) = Σj=1..n ij Lj (1)
If the negative parts of the influence functions can be neglected the distribution of extreme load effects can be
computed from
Fmax{S}(x) ≈ exp{ −λd ty td T P[ Σj=1..n ij Lj ≥ x ] } (2)

with

P[ Σj=1..n ij Lj > x ] ≈ Φ( −[ x − Σj=1..n ij E[Lj] ] / [ Σj=1..n ij² Var[Lj] ]^(1/2) ) (3)
T is the reference time. On driveways where only one vehicle determines the load effect one has
Fmax{S}(x) ≈ exp{ −λd ty T N [ 1 − Φ( (x − E[L]) / (Var[L])^(1/2) ) ] } (4)

where N is the number of parking places associated with the driveway.
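The driveway case, Eq. (4), is simple enough for a direct sketch; the normal model for the vehicle weight follows the paragraph above, while λd, ty, T and N are placeholder values (the renewal rate λd during busy time is not fixed by the symbol list).

    # Sketch: extreme load-effect distribution on a driveway, Eq. (4).
    from math import exp, erf, sqrt

    E_L, V_L = 15.0, 0.25                          # mean car weight [kN], CoV in the 15-30% range
    sd_L = V_L * E_L
    lam_d, t_y, T, N = 20.0, 250.0, 50.0, 100.0    # placeholders

    def Phi(z):                                    # standard normal CDF
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def F_max(x):                                  # distribution of the maximum load [kN]
        return exp(-lam_d * t_y * T * N * (1.0 - Phi((x - E_L) / sd_L)))

    for x in (25.0, 30.0, 35.0):
        print(x, round(F_max(x), 4))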
References
CIB W81, Actions on Structures: Live Load in Buildings, Report No. 116, Rotterdam, 1989
Table of contents:
List of symbols:
Ce = exposure coefficient
Cr = redistribution (due to wind) coefficient
Ct = deterministic thermal coefficient
d = snow depth
h = altitude of the building site
hr = reference altitude
k = coefficient for altitude conversion
r = conversion factor of snow load on ground to snow load on roofs
Sr = snow load on the roof
Sg = snow load on ground at the weather station
γ (d) = average weight density of the snow for depth d
ηa = shape coefficient
Sr = Sg r k^(h/hr) (1)

where Sg is the snow load on the ground at the weather station, r is the conversion factor of snow load on the
ground to snow load on the roof, k is the coefficient for altitude conversion, h is the altitude of the building site
and hr is a reference altitude.
The snow load Sr acts vertically and refers to a horizontal projection of the area of the roof. Sg is time
dependent but not space dependent within a specified region with similar climatic conditions and with
approximately the same altitude.
The characteristics of the ground snow load Sg should be determined on the basis of observations from
weather stations. The results of such observations are either water-equivalents of snow or depths of snow. In the
first case the values can be used directly to determine the ground snow load. In the second case the data on snow
depth must be converted to snow load by the relation
Sg = d γ(d) (2)

where d is the snow depth and

γ(d) = (λ γ(∞) / d) ln{ 1 + [γ(0)/γ(∞)] [exp(d/λ) − 1] } (3)

where γ(0) and γ(∞) are the weight densities of the snow at zero and at very large depth respectively and λ is a
parameter.
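A short sketch of the depth-to-load conversion, Eqs. (2)-(3); the values used for γ(0), γ(∞) and λ below are illustrative assumptions, not code recommendations.

    # Sketch: ground snow load from measured snow depth, Eqs. (2)-(3).
    from math import log, exp

    gamma_0, gamma_inf, lam = 1.7, 5.0, 0.85   # kN/m3, kN/m3, m -- assumed values

    def gamma_d(d):    # average weight density of the snow for depth d [m]
        return (lam * gamma_inf / d) * log(1.0 + (gamma_0 / gamma_inf) * (exp(d / lam) - 1.0))

    def S_g(d):        # ground snow load [kN/m2], Eq. (2)
        return d * gamma_d(d)

    for d in (0.2, 0.5, 1.0):
        print(d, round(gamma_d(d), 3), round(S_g(d), 3))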
The distribution function FSgmax, its mean µ and its coefficient of variation V are determined from these observations.
The probability distribution functions in these two cases are gamma distributions. The parameters should
be based on local observations. As prior distribution a vague prior should be used. In some cases data from
"similar stations" can be used as prior with n' = 3 and ν' = 2.
In those cases where the climate is a mixture of maritime and continental climate, a part p of the
observations is associated with a continental climate and a part 1−p with a maritime climate. The combined
probability distribution function for the mixed climate can then be written as Fs = (1 − p) Fs1 + p Fs2.
General
The conversion factor r is subdivided into a number of factors and terms according to the expression

r = ηa Ce Ct + Cr (6)

The exposure coefficient Ce and the shape factor ηa are reduction coefficients taking account of the
exposure to wind of a building and of the slope of the roof α.
u(H) is the wind speed, averaged over a period of one week, at roof level H.
For intermediate values of α linear interpolation should be used.
The thermal coefficient, Ct, accounts for the reduction of snow load on roofs with high thermal
transmittance, in particular glass-covered roofs. Ct is equal to 1.0 for buildings which are not heated and for
buildings where the roofs are highly insulated. A value of 0.8 shall be used for most other cases.
The redistribution coefficient, Cr , takes account of the redistribution of the snow on the roof caused by
wind but in some cases also by other causes.
For symmetrical duopitch roofs the coefficient Cr is assumed to be constant and equal to ±Cro for each
half of the roof according to FIG 1. Cro has a β-distribution with µ(Cro) according to FIG 2; the coefficient of
variation of Cr is equal to 1.0. For other types of roofs the numerical values given in ENV 1991-2-3 and ISO 4355
shall be used. These values can be assumed to correspond to the mean value plus one standard deviation.
FIG 1: Snow load distribution on a symmetrical duopitch roof: ρ1 = Ce Ct ηa + Cro on one half and ρ2 = Ce Ct ηa − Cro on the other.
FIG 2: Mean value µ(Cro) as a function of the roof angle α, taking values between about 0.05 and 0.15 for α between 0° and 60°.
2.13 WIND
Table of contents:
2.13.1 Introduction
2.13.2 Wind forces
2.13.3 Mean wind velocity
2.13.4 Terrain roughness (category)
2.13.5 Variation of the mean wind with height
2.13.6 Intensity of turbulence
2.13.7 Power spectral density and autocorrelation function of gustiness
2.13.8 Coherence function
2.13.9 Peak velocities
2.13.10 Mean velocity pressure and the roughness factor
2.13.11 Gust factor for velocity pressure
2.13.12 Exposure factor for peak velocity pressure
2.13.13 Aerodynamic shape factors
2.13.14 Uncertainties consideration
List of symbols:
2.13.1 Introduction
Wind effects on buildings and structures depend on the general wind climate, the exposure of
buildings, structures and their elements to the natural wind, the dynamic properties, the shape and dimensions
of the building (structure). The section presents basic data and procedures for the estimation of wind loads on
buildings and structures. Tropical cyclones, tornados, thunderstorms and orographic wind phenomena require
separate treatment.
The field of wind velocities over horizontal terrain is decomposed into a mean wind (averaged over 10
minutes) in the direction of the general air flow (x-direction) and a fluctuating, turbulent part with zero mean
and components in the longitudinal (x-) direction, the transversal (y-) direction and the vertical (z-) direction.
The wind force acting per unit area of structure is determined with the relations:
(i) For rigid structures of smaller dimensions:
w = ca ce Qref (1)
(ii) For structures sensitive to dynamic effects (natural frequency < 1 Hz) and for large rigid structures:
w = cd ca ce Qref (2)
where:
Qref = the reference (mean) velocity pressure
cr = roughness factor
cg = gust factor
ca = aerodynamic shape factor
cd = dynamic factor.
The reference wind velocity, Uref, is the mean velocity of the wind averaged over a time interval of 10
min = 600 s, determined at an elevation of 10 m above ground, in horizontal open terrain exposure (z0 = 0.03
m).¹
The distribution of the mean wind velocities (for any terrain category, height above ground and
averaging time interval) is the Weibull distribution:
FU(x) = 1 − exp[ −(1/2) (x/σ)^k ] (3)

with k close to 2.
¹ For other than 10 min averaging intervals, in open terrain exposure, the following relationships may
be used: 1.05 U1h = U10min = 0.84 U1min(fastest mile) = 0.67 U3sec.
The same distribution is valid for direction dependent mean wind flows. Generally, it can not be
assumed that the mean wind direction is uniformly distributed over the circle.
Mean wind velocities vary over the year. If no data are available it can be assumed in the northern hemisphere
that σ(t) ≈ σ[1 + a cos(2π(t − t0)/365)], with the constant a between 1/3 and 1/2 and t0 ≈ 15 to 45, with t in days.
The mean wind velocities are highly autocorrelated. Mean wind velocities with separation of about 4 to
12 (8 on average) hours can be considered as independent in most practical applications.
If seasonal variations are neglected, the mean total time during which the mean wind velocity stays between the
levels x1 and x2 (x2 ≥ x1) is asymptotically

E[Tx1,x2] = T [FU(x2) − FU(x1)] (4)
with T the reference time. For higher levels of x2 the distribution of individual times above x is approximately
[1 − FU ( x)] / ν( x) with ν( x) the mean upcrossing rate for level x.
The maximum mean wind speeds for longer periods follows a Gumbel distribution for maxima.
Generally, it is not possible to infer the maxima over more years from observations covering only a few years.
If the annual maxima are used, provided that the maximum annual data are homogenous as exposure and
averaging time, the distribution function is:
The mode u1 and the parameter α1 of the distribution are determined from the mean m1 and the standard
deviation σ1 of the set of maximum annual velocities: u1 = m1 − 0.577/α1, α1 = 1.282/σ1. The coefficient
of variation of maximum annual wind speed, V1 = σ1 / m1 depends on the climate and is normally between 0.10
and 0.35. For reliable results, the number of years of available records must be of the same order of
magnitude as the required mean recurrence interval.
The lifetime (N years) maxima of the wind velocity are also Gumbel distributed, and the mean and the
standard deviation of the lifetime maxima are functions of the mean and of the standard deviation of the annual
maxima: mN = m1 + 0.78 σ1 ln N, σN = σ1.
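A compact sketch of these moment relations for annual and lifetime maxima, assuming the Gumbel model; m1 and σ1 are placeholder statistics of the annual maxima.

    # Sketch: Gumbel parameters of annual maxima and N-year lifetime maxima.
    from math import log

    m1, s1 = 24.0, 3.5            # mean/std of the annual maximum wind speed [m/s], placeholders
    alpha1 = 1.282 / s1           # Gumbel parameter
    u1 = m1 - 0.577 / alpha1      # Gumbel mode

    N = 50                        # lifetime [years]
    mN = m1 + 0.78 * s1 * log(N)  # mean of the lifetime maxima
    sN = s1                       # standard deviation is unchanged
    print(f"u1={u1:.2f} m/s  alpha1={alpha1:.3f}  mN={mN:.2f} m/s  sN={sN:.2f} m/s")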
Under special climatic conditions, the distribution of mean wind speeds is a mixed distribution
reflecting different meteorological phenomena.
For load combination purposes it is proposed to model storms, for example those wind regimes where a
mean velocity > 10 m/s lasts for some time, as an intermittent rectangular wave renewal process. The number
of storms per year is approximately 50, corresponding to the frequency with which weather systems pass by, at
least in central Europe. The mean duration of a storm is approximately 8 hours. Consecutive storms are
independent. The representative mean wind velocity in a storm can also be modeled by a Weibull distribution.
The exponent of the Weibull distribution should be around 2. The location parameter should be based on local
data.
The roughness of the ground surface is aerodynamically described by the roughness length z0, which is
a measure of the size and spacing of obstacles on the ground surface. Alternatively, the terrain roughness can
be described by the surface drag coefficient, κ corresponding to the roughness length z0:
κ^(1/2) = k / ln(zref/z0) (6)
where k ≅ 0.4 is von Kármán's constant and zref is the reference height (Table 2, Table 3). Various terrain
categories are classified in Table 1 according to their approximate roughness lengths. The distribution of the
surface roughness with wind direction must be considered.
The variation of the mean wind velocity with height over horizontal terrain of homogenous roughness
can be described by the logarithmic law. The logarithmic profile is valid for moderate and strong winds (mean
hourly velocity > 10 m/s) in neutral atmosphere (where the vertical thermal convection of the air may be
neglected).
U(z) = 2.5 u*(z0) [ ln(z/z0) + 5.75 (z/δ) − 1.88 (z/δ)² − 1.33 (z/δ)³ + 0.25 (z/δ)⁴ ] (7)
where:
u*(z0) = U(z) / [2.5 ln(z/z0)] = friction velocity in m/s
δ = u*(z0) / (6 fc) = depth of the boundary layer in m
U(z) = mean velocity of the wind at height z above ground in m/s
z = height above ground in m
z0 = roughness length in m
k = von Kármán's constant (k ≅ 0.4)
d0 = the lowest height of validity of Eq.(7) in m
fc = 2Ω sin(φ) = Coriolis parameter in 1/s
Ω = 0.726·10⁻⁴ rad/s = angular rotation velocity
φ = latitude of the location in degrees
For the lowest 0.1 δ or 200 m of the boundary layer only the first term needs to be taken into account
(Harris and Deaves, 1981). The lowest height of validity for Eq.(7), d0, is close to the average height of
dominant roughness elements : i.e. from less than 1 m, for smooth flat country to more than 15 m, for centers of
cities. For z0 ≤ z ≤ d0 a linear interpolation is recommended. In engineering practice, Eq.(7) is conservatively
used with d0 = 0.
With respect to the reference (open terrain) exposure, the relation between wind velocities in two
different roughness categories at two different heights can be written approximately as (Bietry, 1976, Simiu,
1986):
U(z)/Uref = (z0/z0,ref)^0.07 · ln(z/z0) / ln(zref/z0,ref) (8)
At the reference height zref, the ratio of the mean wind velocity in various terrain categories to the mean
wind velocity in open terrain is given by the factor p in Table 2. The corresponding ratio for the mean velocity
pressure is p2 .
Table 2. Scale factors for the mean velocity (and the mean velocity pressure) at reference height in various
terrain exposure
The turbulent fluctuations of the wind velocity can be assumed to be normally distributed with mean
zero. The root mean squared value of the velocity fluctuations in the airflow, deviating from the longitudinal
mean velocity, may be normalised to the friction velocity as follows:
σu/u* = βu (1 − z/δ)    Longitudinal (9a)
σv/u* = βv (1 − z/δ)    Transversal (9b)
σw/u* = βw (1 − z/δ)    Vertical (9c)
The approximate linear variation with height (Hanna, 1982) can be used only in moderate and strong
winds. For a neutral atmosphere, the ratios σv/σu and σw/σu near the ground are constant irrespective of the
roughness of the terrain (ESDU 1993):

σv/σu = 1 − 0.25 cos⁴( (π/2) (z/δ) ) (10a)
σw/σu = 1 − 0.55 cos⁴( (π/2) (z/δ) ) (10b)
For z << δ the variance of the velocity fluctuations can be assumed independent of height above ground:

σu = βu u* (11a)
σv = βv u* (11b)
σw = βw u* (11c)
The variance of the longitudinal velocity fluctuations can also be expressed from non-linear regression
of measurement data, as function of terrain roughness (Solari, 1987):
The longitudinal intensity of turbulence is the ratio of the root mean squared value of the longitudinal
velocity fluctuations to the mean wind velocity at height z (i.e. the coefficient of variation of the velocity
fluctuations at height z):

Iu(z) = ⟨u²(z,t)⟩^(1/2) / U(z) = σu(z) / U(z) (14)

Iu(z) = βu / [2.5 ln(z/z0)] ≈ 1 / ln(z/z0) (15)
The transversal and vertical intensities of turbulence can be determined by multiplication of the
longitudinal intensity Iu(z) by the ratios σv/σu and σw/σu. Representative values for intensity of turbulence at the
reference height are given in Table 3.
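The sketch below evaluates the leading (logarithmic) term of the profile, Eq. (7), together with the turbulence intensity approximation, Eq. (15); u* and z0 are placeholder inputs.

    # Sketch: log-law mean wind profile (first term of Eq. (7)) and
    # longitudinal turbulence intensity, Eq. (15).
    from math import log

    z0 = 0.03        # roughness length [m], open terrain
    u_star = 1.8     # friction velocity [m/s], placeholder

    def U(z):        # mean wind speed [m/s]
        return 2.5 * u_star * log(z / z0)

    def I_u(z):      # longitudinal turbulence intensity
        return 1.0 / log(z / z0)

    for z in (10.0, 50.0, 100.0):
        print(f"z={z:6.1f} m  U={U(z):6.2f} m/s  Iu={I_u(z):.3f}")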
The normalised half-sided von Karman power spectral densities and autocorrelation functions of gust
velocity are given in Table 4.
Transversal (i = v) and vertical (i = w):

n Si(z,n)/σi² = 2 ni (1 + 188.6 ni²) / (1 + 70.8 ni²)^(11/6)

ρi(τi) = (2^(2/3)/Γ(1/3)) τi^(1/3) [ K1/3(τi) − (τi/2) K2/3(τi) ]
where the autocorrelation ρi(τi) is the Fourier transform of spectral density. An estimation of the length of the
integral scale of longitudinal turbulence, for heights up to 300 m is given by ESDU (1993), as:
Lu(z) = A^(3/2) (σu/u*)³ z / [ 2.5 Kz^(3/2) (1 − z/h)² (1 + 5.75 z/h) ] (17)

where
A = 0.115 [1 + 0.315 (1 − z/δ)⁶]^(2/3)
Kz = 0.188 [1 − (1 − z/zc)²]^(1/2)
zc/δ = 0.39 [u*/(fc z0)]^(−1/8)
The cross-spectral density for two separated points P1 and P2 with distance r perpendicular to direction
i are given in terms of the point spectra and the coherence function by:
with:
Longitudinal:
Cohuu^(1/2)(r,k) = [ψu^(5/6) / (2^(5/6) Γ(5/6))] [ 2 K5/6(ψu) − ψu K1/6(ψu) ] ≈ exp(−1.15 ψu^1.5) (20a)

Transversal:
Cohvv^(1/2)(r,k) = [ψv^(5/6) / (2^(5/6) Γ(5/6))] [ 2 K5/6(ψv) + (6 (rk)² / (3 ψv² + 5 (rk)²)) ψv K1/6(ψv) ] ≈ exp(−0.65 ψv^1.3) (20b)

Vertical:
Cohww^(1/2)(r,k) = [ψw^(5/6) / (2^(5/6) Γ(5/6))] [ 2 K5/6(ψw) − (6 (rk)² / (3 ψw² + 5 (rk)²)) ψw K1/6(ψw) ] ≈ exp(−0.65 ψw^1.3) (20c)

where k = 2πn/Um and ψi² = r² k² + r²/Li². All coherence functions Cohij^(1/2)(n, P1, P2) with i ≠ j can be
assumed to vanish.
Cohuu^(1/2)(n, r) ≈ exp{ −[ (r/Lu)² + (nr/Um)² (12 + 11 r/zm)² ]^(1/2) } (21)

zm = (z1 z2)^(1/2)
Um = [U(z1) U(z2)]^(1/2).
For structures of small dimension, i.e. r much smaller than Lu, r can be taken as zero.
Spectral moments λi of order higher than i = 0 formally do not exist for turbulence spectra
(including the von Kármán and other spectra) fulfilling the Kolmogorov asymptote (asymptotic f^(−5/3) behaviour).
However, for high frequencies the spectra fall off more rapidly so that truncation of these spectra at frequencies
of 5÷20 Hz makes them finite. Also, filtering by finite areas on which the wind blows removes this
mathematical inconvenience. Then, the distribution of extreme gust velocities, umax is asymptotically a Gumbel
distribution with mean:
E[umax | λ0, λ2, t] = [ (2 ln ν0t)^(1/2) + γ/(2 ln ν0t)^(1/2) ] λ0^(1/2) (22)

and variance:

Var[umax | λ0, λ2, t] = [ (π²/6) / (2 ln ν0t) ] λ0 (23)
γ = 0.5772 is Euler's constant, t = 600 s and ν0 is the mean frequency of zero upcrossings, in Hz:

ν0 = (λ2/λ0)^(1/2) (24)

The mean and standard deviation of the random peak factor for gust velocities, g, are defined as:

g = (2 ln ν0t)^(1/2) + 0.577/(2 ln ν0t)^(1/2) (25)

σg = (π/√6) / (2 ln ν0t)^(1/2) (26)
The calculation of g from turbulence spectra is sensitive to the choice of cut-off frequency (5-20 Hz).
Empirically and theoretically one can assume that the mean of g is about 3.2 for 1 hour (3.8 for 8 hours) and its
standard deviation about 0.4. Since the fluctuating velocity pressure is a linear function of fluctuating velocity
of gusts, the above values of g and σg also apply to the peak pressure.
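A sketch of Eqs. (24)-(26); the zero-upcrossing frequency ν0 below is a placeholder standing in for (λ2/λ0)^(1/2) of a truncated spectrum.

    # Sketch: peak factor statistics for gust velocities, Eqs. (25)-(26).
    from math import log, sqrt, pi

    nu0 = 0.05       # mean zero-upcrossing frequency [Hz], placeholder
    t = 3600.0       # period [s]; the text quotes g ≈ 3.2, sigma_g ≈ 0.4 for 1 hour

    x = sqrt(2.0 * log(nu0 * t))
    g_mean = x + 0.577 / x            # Eq. (25)
    sigma_g = (pi / sqrt(6.0)) / x    # Eq. (26)
    print(f"g = {g_mean:.2f}, sigma_g = {sigma_g:.2f}")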
Q(z) = (1/2) ρ U²(z) (27)
The coefficient of variation of the maximum annual velocity pressure is approximately twice the
coefficient of variation of the maximum annual velocity, V1: VQ ≅ 2 V1.
The roughness factor describes the variation of the mean velocity pressure with height above ground
and terrain roughness as a function of the reference velocity pressure. From Eq.(8) one gets:

cr(z) = Q(z)/Qref = U²(z)/U²ref = [ (z0/z0,ref)^0.07 ln(z/z0) / ln(zref/z0,ref) ]² (28)
Conversion of the open country velocity pressure for different averaging time intervals can be guided
by the following values obtained from Section 2.13.2:

1.1 Q1h = Q10min = 0.7 Q1min(fastest mile) = 0.44 Q3s
The gust factor for velocity pressure is the ratio of the peak velocity pressure to the mean velocity
pressure of the wind:
cg(z) = qpeak(z)/Q(z) = [Q(z) + g σq]/Q(z) = 1 + g VQ = 1 + 2 g Iu(z) (29)
where:
Q(z) = the mean velocity pressure of the wind
σq = ⟨q²(z,t)⟩^(1/2) = the root mean squared value of the longitudinal velocity pressure
fluctuations from the mean
VQ = coefficient of variation of the velocity pressure fluctuations (approximately equal
to twice the coefficient of variation of the velocity fluctuations): VQ ≅ 2 Iu(z)
g = the peak factor for velocity pressure.
Approximately, the longitudinal velocity pressure fluctuation, q(z,t) is a linear function of the velocity
fluctuation. Since:
(1/2) ρ [U(z) + u(z,t)]² = (1/2) ρ U²(z) + ρ U(z) u(z,t) + (1/2) ρ u²(z,t) ≅ (1/2) ρ U²(z) + ρ U(z) u(z,t)

it is:

Q(z) = (1/2) ρ U²(z)
q(z,t) ≅ ρ U(z) u(z,t)
and consequently the mean value and the standard deviation of the peak factor for the 10 min velocity pressure
are the same as those for the gust velocity: g ≅ 3.2 and σg ≅ 0.4. The values of the peak factor depend on the
averaging time interval of the reference velocity.³
The peak velocity pressure at the height z above ground is the product of the gust factor, the roughness
factor and the reference velocity pressure.
The exposure factor is defined as the product of the gust and roughness factors: ce(z) = cg(z) cr(z).
³ Since qpeak = cg,1min (cr Qref,1min) = cg,10min (cr Qref,10min) = cg,1h (cr Qref,1h), from Section 2.13.8 the
following approximate relations hold: cg,1min = 0.7 cg,10min and cg,1h = 1.1 cg,10min.
The aerodynamic shape factor, ca is the ratio of the aerodynamic pressure exerted by the wind on the
surface of a structure and its components to the velocity pressure. The aerodynamic pressure is acting normal to
the surface. By convention ca is assumed positive for pressures and negative for suctions.
As the pressure exerted on a surface is not uniformly distributed over the whole area of the surface or
on the different faces of a building, the aerodynamic coefficients should be assessed separately for the different
parts and faces of a building.
The aerodynamic shape factors refer either to the mean pressure or to the peak pressure of the wind.
The shape factors are dependent on the geometry and the dimensions of building, the angle of attack of
the wind i.e. the relative position of the body in the airflow, terrain category, Reynolds number, etc.
In certain cases the aerodynamic factors for external pressure must be combined with those for internal
pressure.
There are two different approaches to the practical assessment of wind effects on rigid structures: using
pressure coefficients and using force coefficients.
• In the former case the wind force is the result of the summation of the aerodynamic forces normal to a
certain surface. It is intended for parts of the structure.
• In the latter case the wind force is the product of the velocity pressure, the overall force coefficient and
the frontal area of the building. This approach is used within the procedures for calculating the structural
response.
Typical values of the aerodynamic shape factors can be selected from appropriate national and
international documents or from wind tunnel tests. The aerodynamic shape factors should be determined in
wind tunnels capable of modelling the atmospheric boundary layer.
The factors involved in the assessment of the wind forces on structures contain uncertainties.
The mean and the coefficient of variation of the wind forces, expressed through the product of
uncorrelated variables in Eq.(1) or Eq.(2), may be written as follows:
Table 5 Statistics of random variables involved in the assessment of the wind loading
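The expressions themselves did not survive in this copy. For a product of uncorrelated variables, as in Eq. (2), a standard second-moment result consistent with the text is E[w] = Π E[Xi] and 1 + Vw² = Π (1 + Vi²) ≈ 1 + Σ Vi² for small Vi; the sketch below uses placeholder statistics, not the Table 5 values.

    # Sketch: second-moment statistics of w = cd*ca*ce*Qref with uncorrelated factors.
    from math import prod, sqrt

    means = {"cd": 1.0, "ca": 1.2, "ce": 2.0, "Qref": 0.4}    # Qref in kN/m2, placeholders
    covs  = {"cd": 0.10, "ca": 0.15, "ce": 0.15, "Qref": 0.25}

    E_w = prod(means.values())                                 # mean of the product
    V_w = sqrt(prod(1.0 + V**2 for V in covs.values()) - 1.0)  # exact CoV for uncorrelated factors
    V_approx = sqrt(sum(V**2 for V in covs.values()))          # first-order approximation
    print(f"E[w]={E_w:.3f} kN/m2  V_w={V_w:.3f}  (approx {V_approx:.3f})")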
Arya S.P., 1993. Atmospheric boundary layer and its parametrization. Proceedings of the NATO Advanced Study Institute
on Wind Climate in Cities, Waldbronn, Germany, July 5-16, Kluwer Academic Publishers, Dordrecht/Boston/London,
p.41-66
ASCE 7-93, 1993 and Draft of ASCE7-95, 1995. Minimum design loads for buildings and other structures. American
Society of Civil Engineers, New York
CIB W81 Commission, 1994. Actions on structures. Wind loads, 6th draft, May
Davenport A.G., 1995. The response of slender structures to wind. In Wind Climate in Cities. Kluwer Academic
Publishers, p.209-239
Davenport A.G., 1987. Proposed new international (ISO) wind load standard. High winds and building codes. Proceedings
of the WERC/NSF Wind engineering symposium. Kansas City, Missouri, Nov., p.373-388
Davenport A.G., 1967. Gust loading factors. Journal of the Structural Division, ASCE, Vol.93, No.3, p.1295-1313
Davenport A.G., 1964. Note on the distribution of the largest value of a random function with application to gust loading.
Proceedings. Institution of Civil Engineering, London, England, Vol. 28 June, p.187-195
Davenport A.G., 1961. The application of statistical concepts to the wind loading of structures. Proceedings, Institution of
Civil Engineering, London, England, Vol.19, Aug., p.449-472
ESDU 85020, Characteristics of atmospheric turbulence near the ground. Part II: single point data for strong winds (neutral
atmosphere), April 1993, 36 p. London, U.K.
ESDU 86010, Characteristics of atmospheric turbulence near the ground. Part III: variation in space and time for strong
winds (neutral atmosphere), Sept. 1991, 33 p., London, U.K.
European Prestandard ENV 1991-2-4, 1994. EUROCODE 1: Basis of design and actions on structures, Part 2.4 : Wind
actions, CEN
Gerstoft P., 1986. An assessment of wind loading on tower shaped structures. Technical University of Denmark, Lingby,
Serie R, No.213
Ghiocel D., Lungu D., 1975. Wind, snow and temperature effects on structures, based on probability. Abacus Press,
Tunbridge Wells, Kent, U.K.
Harris R.I., Deaves D.M., 1980. The structure of strong winds. The wind engineering in the eighties. Proceedings of CIRIA
Conference 12/13 Nov., Construction Industry, Research and Information Association, London, p.4.1-4.93
ISO / TC 98 / SC3 Draft International Standard 4354, 1990. Wind actions on structures. International Organisation for
Standardisation
Joint Committee on Structural Safety CEB-CECM-CIB-FIP-IABSE, 1974. Basic data on loads. Second draft. Lisbon
Kareem, A., Wind Effects on Structures, Prob. Eng. Mech., 2, 4, 1987, pp. 166-200
Karman v., T., 1948. Progress in statistical theory of turbulence. Proceedings, National Academy of Science, Washington
D.C., p.530-539
99-CON-DYN/M0037
Februari 2001
55
Lumley J.L., Panofsky H.A., 1964. The structure of atmospheric turbulence. J. Wiley & Sons
Lungu D., Gelder P., Trandafir R., 1995. Comparative study of Eurocode 1, ISO and ASCE procedures for calculating wind
loads. IABSE Colloquium, Basis of design and actions on structures, Background and application of Eurocode 1, Delft, The
Netherlands, 1996
NBC of Canada, 1990. Code National du Bâtiment du Canada, 1990 and Supplement du Code, Comité Associé du Code
National du Bâtiment, Conseil National de Recherche, Canada
Plate E.J., 1993. Urban climates and urban climate modelling: An introduction. Proceedings of the NATO Advanced Study
Institute on Wind Climate in Cities, Waldbronn, Germany, July 5-16, Kluwer Academic Publishers,
Dordrecht/Boston/London, p.23-40
Plate E.J., Davenport A.G., 1993. The risk of wind effects in cities. Proceedings of the NATO Advanced Study Institute on
Wind Climate in Cities, Waldbronn, Germany, July 5-16, Kluwer Academic Publishers, Dordrecht/Boston/London, p.1-20
Ruscheweyh H., 1995. Wind loads on structures from Eurocode 1, ENV 1991-2-3. In Wind climate in cities. Kluwer
Academic Publishers, p.241-258
Schroers H., Lösslein H., Zilch K., 1990. Untersuchung der Windstructur bei Starkwind und Sturm. Meteorol. Rdsch., 42,
Oct., p.202-212
Simiu E., Scanlan R.H., 1986. Wind effects on structures. Second edition. J. Wiley & Sons
Simiu E., 1980. Revised procedure for estimating along-wind response. Journal of the Structural Division, ASCE, Vol.106,
No.1, p.1-10
Simiu E., 1974. Wind spectra and dynamic along-wind response. Journal of the Structural Division, ASCE, Vol.100, No.9,
p.1897-1910
Solari G., 1993. Gust buffeting. I Peak wind velocity and equivalent pressure. Journal of Structural Engineering, ASCE,
Vol.119, No.2, p.365-382
Solari G., 1993. Gust buffeting. II Dynamic along-wind response. Journal of Structural Engineering, Vol.119, No.2, p.383-
398
Solari G., 1988. Equivalent wind spectrum technique: theory and applications. Journal of Structural Engineering ASCE,
Vol.114, No.6, p.1303-1323
Solari G., 1987. Turbulence modelling for gust loading. Journal of Structural Engineering, ASCE, Vol.113, No.7, p.1150-
1569
Theurer W., Bachlin W., Plate E.J., 1992. Model study of the development of boundary layer above urban areas. Journal of
Wind Engineering and Industrial Aerodynamics, Vol. 41-44, p.437-448, Elsevier
Uniform Building Code, 1991 Edition. International Conference of Building Officials, Whittier, California
Vellozi J., Cohen E., 1968. Gust response factors. Journal of the Structural Division, ASCE, Vol.97, No.6, p.1295-1313
Vickery B.J., 1994. Across - wind loading on reinforced concrete chimneys of circular cross-section. ACI Structural
Journal, May-June, p.355-356
Vickery B.J., Basu R., 1983. Simplified approaches to the evaluation of the across-wind response of chimneys. Journal of
Wind Engineering and Industrial Aerodynamics, Vol.14, p. 153-166.
Vickery B.J., 1970. On the reliability of gust loading factors. Proceedings, Technical meeting concerning wind loads on
buildings and structures, Building Science Series 30, National Bureau of Standards, Washington D.C., p.93-104
Vickery B.J., 1969. Gust response factors. Discussion. Journal of the Structural Division, ASCE, ST3, March, p.494-501
Wieringa J., 1993. Representative roughness parameters for homogenous terrain. Boundary Layer Meteorology, Vol.63,
No.4, p.323-364
Wind loading and wind-induced structural response, 1987. State of the art report prepared by the Committee on Wind
effects of the Structural Division of ASCE. ASCE, N.Y.
Table of contents:
List of symbols:
a = deceleration
Ab = the area of the building including the shadow area
d = distance from the structural element to the road
fs(y) = distribution of initial object position in y direction
Fc(x) = static compression strength at a distance x from the nose
k = stiffness
m = mass
m'(x) = mass per unit length
n = number of vehicles, ships or planes per time unit
n(t) = number of moving objects per time unit (traffic intensity)
Pa = the probability that a collision is avoided by human intervention.
Pf q(xy) = the probability of structural failure given a mechanical or human failure on the ship,
vehicle, etc. at point (x,y).
r = d/sin α = the distance from "leaving point" to "impact point"
R = radius of airport influence circle
T = period of time under consideration
vc = the object velocity at impact
vc(t) = velocity of the crashed part
vc (xy) = object velocity at impact, given initial failure at point (x,y)
vo = velocity of the vehicle when leaving the track
x,y = coordinate system;
2.18.1.1 Introduction
The basic model for impact loading consists of (see Figure 2.18.1):
- potentially colliding objects (vehicles, ships, airplanes) that have an intended course, which may be the
centre line of a traffic lane, a shipping lane or an air corridor; the moving object will normally have some
distance to this centre line;
- the occurrence of a human or mechanical failure that may lead to a deviation from the intended course; these
occurrences are described by a homogeneous Poisson process;
- the course of the object after the initial failure, which depends on both object properties and environment;
- the mechanical impact between object and structure, where the kinetic energy of the colliding object is
partly transferred into elastic-plastic deformation or fracture of the structural elements in both the building
structure and the colliding object.
Figure 2.18.1: Basic impact model: an object deviates from its intended course at point Q and strikes the structure at x = 0.
2.18.1.2 Failure probability
The probability that a single object, moving in the x-direction, suffers a human or mechanical failure in
the square [dx, dy] (see figure [Link]) and causes collapse of some structure is given by (2.18.1). The
probability of structural failure for a period T can then be presented as (2.18.2).
In principle, impact is an interaction phenomenon between the object and the structure. It is not possible
to formulate a separate action and a separate resistance function. However, an upper bound for the impact load can
be found using the "rigid structure" assumption. If the colliding object is modelled as an elastic single degree of
freedom system, with equivalent stiffness k and mass m, the maximum possible resulting interaction force equals:

Fc = vc √(km) (2.18.3)

Note that (2.18.3) gives the maximum for the external load; dynamic effects within the structure still need
to be considered. Note further that simple upper bounds may also be obtained if the structure and/or the object
behaves plastically: Fc = min[Fys, Fyo], where Fys = yield force of the structure and Fyo = yield force of the object;
the duration of this load is ∆t = m vc / Fc.
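A minimal sketch of the elastic bound (2.18.3) and of the plastic bound with its duration; the stiffness, mass, velocity and yield forces are placeholder values.

    # Sketch: upper bounds for the impact interaction force, Eq. (2.18.3).
    from math import sqrt

    k = 15e6        # equivalent stiffness [N/m], cf. the ship value of 15 MN/m
    m = 4.0e6       # mass [kg], e.g. a medium ship
    v_c = 3.0       # impact velocity [m/s]

    F_elastic = v_c * sqrt(k * m)          # elastic SDOF bound [N]

    F_ys, F_yo = 30e6, 20e6                # yield forces of structure/object [N], placeholders
    F_plastic = min(F_ys, F_yo)            # plastic bound
    dt = m * v_c / F_plastic               # duration of the plastic load [s]
    print(f"elastic {F_elastic/1e6:.1f} MN, plastic {F_plastic/1e6:.1f} MN, dt = {dt:.2f} s")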
Based on formulation (2.18.3) the distribution function for the load Fc can be found:
Consider a structural element in the vicinity of a road or track. Impact will occur if some vehicle,
travelling over the track, leaves its intended course at some critical place with sufficient speed (see Figure 2.18.2).
Figure 2.18.2: A vehicle leaves the intended course at point Q with velocity v0 under an angle α. A structural
element at distance r is hit with velocity vr.
The collision force probability distribution based on (2.18.5), neglecting the variability in y-direction is
given by:
λ ∆x is the probability that a passing vehicle leaves the road at the interval ∆x, which is approximated by:
The value of b depends on the structural dimensions. However, for small objects such as columns a
minimum value of b follows from the width of the vehicle, so b > 2.5 m.
The collision force is a horizontal force; only the force component perpendicular to the structural surface
needs to be considered.
The collision force for passenger cars affects the structure at 0.5 m above the level of the driving surface;
for trucks the collision force affects it at 1.25 m above the level of the driving surface. The force application area
is 0.25 m (height) times 1.50 m (width).
For impact loads on horizontal structural elements above traffic lanes the following rules hold (see Figure
2.18.3):
a) on vertical surfaces the impact actions follow from [Link] and the height reduction as specified at c);
b) on horizontal lower side surfaces an upward inclination of 10% should be considered; the force application
area is 0.25 m (height) times 0.25 m (width);
c) for free heights h larger than 6.0 m the forces are equal to zero; for free heights between 4.0 m and 6.0 m a
linear interpolation should be used.
Figure 2.18.3: Impact loads on horizontal structural elements above traffic lanes
A co-ordinate system (x,y) is introduced as indicated in Figure 2.18.4. The x coordinate follows the centre
line of the traffic lane, while the y co-ordinate represents the (horizontal) distance of the ship to the centre. The
structure that potentially could be hit is located at the point with co-ordinates x=0 and y=d.
Figure 2.18.4: Co-ordinate system for the ship collision model: the object of mass m moves with velocity v0 in the x-direction, the initial lateral position y has probability density f0(y), and the structure is located at distance d from the centre line; initial failure occurs at point (x,y).
In case (a) a ship is on collision course which is not corrected, due to inattention, bad visibility, outdated
charts and so on. In case (b) the original course is correct but is changed, due to e.g. rudder problems or
misjudgement.
Both origins (a) and (b) are present in the following model which is a modification of (2.18.1):
For the evaluation in practical cases, it may be necessary to evaluate (2.18.8) for various ship types and
traffic lanes, and add the results in a proper way at the end of the analysis.
Table 2.18.2 gives a number of standard ship characteristics and velocities that could be chosen by the
designer.
variable   description              distribution   mean         standard deviation
Pna        avoidance probability
           - small                  -              0.045        -
           - medium                 -              0.003        -
           - large                  -              0.002        -
           - very large             -              0.001        -
λ          failure rate             -              10-6 km-1    -
v          velocity
           - harbour                lognormal      1.5 m/s      0.5 m/s
           - canal                  lognormal      3.0 m/s      1.0 m/s
           - sea                    lognormal      6.0 m/s      1.5 m/s
m          mass
           - small                  lognormal      1000 ton     2000 ton
           - medium                 lognormal      4000 ton     8000 ton
           - large                  lognormal      20000 ton    40000 ton
           - very large             lognormal      200000 ton   200000 ton
k          equivalent stiffness     lognormal      15 MN/m      3 MN/m
Bow, stern and broadside impact shall be considered where relevant; for side and stern impact the design
impact velocities may be reduced.
Bow impact shall be considered for the main sailing direction with a maximum deviation of 30°.
If a wall structure is hit under an angle α, the following forces should be considered:
- perpendicular to the wall: Fy = F sinα
- in wall direction: Fx = f F sinα
where F is the collision force at α = 90° and f = 0.3 is the friction coefficient.
Impact is to be considered as a free horizontal force; the point of impact depends on the geometry of the
structure and the size of the vessel. As a guideline one could take the most unfavourable point, ranging from 0.1 L
below to 0.1 L above the design water level. The impact area is 0.05 L × 0.1 L unless the structural element is
smaller.
L is the typical ship length (L = 15, 40, 100 and 300 m for small, medium, large and very large ship sizes
respectively).
The forces on the superstructure of a bridge depend on the height of the bridge and the type of ships to
be expected. In general the force on the superstructure of the bridge will be limited by the yield strength of the
ship's superstructure. A maximum of 10 000 kN for large and very large ships, and of 3000 kN for small and
medium ships, can be taken as a guideline average.
The probability of a structure being hit by an airplane is very small. Only for exceptional structures like
nuclear power plants, where the consequences of failure may be very large, is it mandatory to account for aircraft
impact during design.
n = number of planes passing per time unit through an air corridor (traffic intensity)
T = time period of interest (for instance reference period)
λ = probability of a crash per unit distance of flying
fs(y) = distribution of ground impact perpendicular to the corridor direction, given a crash
Ab = the area of the building including the shadow area
Pna = probability of not avoiding a collision, given an airplane on collision course
The area Ab is the area of the building itself, enlarged by a so called shadow area (see figure 2.18.5). The
strike angle α is random.
For the vicinity of an airport (at a distance r) the impact force distribution is based on:

Λ(r) = Λ̄ R / (2r) (2.18.11)

where
Λ̄ = average airplane collision rate for a circular area with radius R = 8 km
Λ(r) = collision rate for a crash at distance r from the airport, with r < R
n = number of planes approaching the airport per time unit
R = radius of the airport influence circle
r = distance to the airport
Figure 2.18.5: Building of height H; the shadow area is determined by a strike angle of 10°.
For airplanes the impact model (2.18.3) is not sufficient. A better model is given by:

F(t) = Fc[ξ(t)] + m'[ξ(t)] vc²(t) (2.18.12)

ξ(t) = ∫0t vc(τ) dτ (2.18.13)

Sometimes vc(t) is taken as constant and equal to vr for further simplification. Results from calculations
based on this model can be found in table 2.18.4.
It is recommended to make the analysis for each type of aircraft (small, large, civil, military) separately
and add the results afterwards.
λ Crash rate
- military plane 10-8 km-1
- civil plane 10-9 km-1
Table 2.18.3: Numerical values for the air plane impact model
Aircraft (characteristics)                 load-time pairs (t [ms], F [MN])
Cessna 210A                                (0, 0), (3, 7), (6, 7), (18, 4)
  m = 1.7 ton, v = 100 m/s, A = 7 m2
  engine: m = 0.2 ton, A = 0.5 m2          (18, 4)
Lear Jet 23A                               (0, 0), (20, 2), (35, 6), (50, 6), (70, 12), (80, 20), (100, 0)
  m = 5.7 ton, v = 100 m/s, A = 12 m2
Boeing 707-320                             (0, 0), (30, 20), (150, 20), (200, 90), (230, 90), (250, 20), (320, 10), (330, 0)
  m = 90 ton, v = 100 m/s, A = 36 m2

Table 2.18.4: Impact characteristics for various aircraft (perpendicular impact on immovable walls)
2.20 FIRE
Table of contents:
2.20 Fire
2.20.1 Fire ignition model
2.20.2 Flashover occurrence
2.20.3 Combustible material modelling
2.20.4 Temperature-time relationship
2.20.4.1 Scientific models
2.20.4.2 Engineering models
List of symbols:
Af = floor area
Ai = area of the vertical opening i in the fire compartment [m2]
At = total internal surface area
f = ventilation opening
Hi = specific combustible energy for material i
mi = derating factor between 0 and 1, describing the degree of combustion
Mki = combustible mass present at ∆A for material i
qo = fire load density per unit floor area
t = time
teq = equivalent time of fire duration
α = parameter
βf = coefficient (model uncertainty)
θ = temperature in the compartment
θo = temperature at the start of the fire
θA = parameter
2.20.1 Fire ignition model
The probability of a fire starting in a given building or area is modelled as a Poisson process with a constant
occurrence rate νfire (2.20.1).
The occurrence rate νfire can be written as a summation of local values over the floor area:

νfire = ∫Af λ(x,y) dx dy (2.20.2)

where λ(x,y) corresponds to the probability of fire ignition per year per m2 for a given occupancy type; Af is the
floor area of the fire compartment. As in most applications λ(x,y) can be simplified to a constant, (2.20.2) reduces
to:

νfire = Af λ (2.20.3)
Table 2.20.1: Example values of annual fire probabilities λ per unit floor area for several types of
occupancy.
After ignition there are various ways in which a fire can develop. The fire might extinguish itself after a
certain period of time because no other combustible material is present. The fire may be detected very early and be
extinguished by hand. An automatic sprinkler system may operate or the fire brigade may arrive in time to prevent
flash over. Only in a minority of cases does a fire develop into a fully developed room or compartment fire;
sometimes the fire may break through a barrier and start a fire in another compartment. From the structural point
of view only these fully developed or post flashover fires (see Figure 2.20.1) may lead to failure. For very large
fire compartments having a very large concentration of fire loads, e.g. industrial buildings, a local fire of high
intensity also may lead to (localised) structural damage.
The probability of a flashover once a fire has taken place, can obviously be influenced by the presence of
sprinklers and fire brigades. Numerical values for the analysis are presented in Table 2.20.2.
Figure 2.20.1: Schematic temperature-time curve of a compartment fire, with ignition phase, flame (post-flashover) phase and cooling phase; flashover marks the transition to the fully developed fire and teq the equivalent fire duration.
Table 2.20.2: Probability of flashover for given ignition, depending on the type of active protection measures
The available combustible material can be considered as a random field, which in general might be
nonhomogeneous as well as nonstationary. The intensity of the field q at some point in space and time is defined
as:
q = Σi mi Mki Hi / Af (2.20.5)

The non-dimensional factor mi is a function of the fuel type, the geometrical properties of the fuel, and the
position of the fuel in the fire compartment, among other things. For some types of fire load components, mi
depends on the time of fire duration and on the gas temperature-time characteristics of the compartment fire.
Probabilistic models for q are presented in table 2.20.3.
Table 2.20.3: Recommended values for the average fire load intensity qo
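A small sketch of Eq. (2.20.5); the inventory of masses Mki, specific energies Hi and derating factors mi is invented for illustration only.

    # Sketch: fire load density per unit floor area, Eq. (2.20.5).
    A_f = 200.0   # floor area [m2]

    # (m_i, M_ki [kg], H_i [MJ/kg]) -- hypothetical inventory
    materials = [
        (0.8, 1500.0, 17.5),   # wooden furniture
        (1.0,  300.0, 40.0),   # plastics
        (0.9,  500.0, 20.0),   # paper
    ]

    q = sum(m_i * M_ki * H_i for m_i, M_ki, H_i in materials) / A_f
    print(f"q = {q:.1f} MJ/m2")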
For known characteristics of both the combustible material and the compartment, the post flash over
period of the temperature time curve can be calculated from energy and mass balance equations.
In addition, the development of the fire may depend on events like collapse of windows or containments,
which may change the ventilation conditions or the available amount of combustible material respectively.
f = (Av/At) √h ;  with  h = Σi Ai hi / Av ;  Av = Σi Ai (2.20.7)
where:
At = total internal surface area of the fire compartment, i.e. the area of the walls, floor and ceiling,
including the openings [m2]
Ai = area of the vertical opening i in the fire compartment [m2]
hi = value of the height of opening i [m]
For a fire compartment which also contains horizontal openings, the opening factor can be calculated
from a similar expression. In calculating the opening factor it is assumed that ordinary window glass is
destroyed immediately when the fire breaks out.
In many cases it will be possible to indicate a physical maximum fmax. The actual value of f in a fire should
be modelled as a random quantity according to:
f = fmax (1 - ζ) (2.20.8)
To avoid negative values of f, this lognormal distribution should be cut off at ζ = 1. In addition one should
multiply the resulting temperatures by an overall model uncertainty factor θmodel.
with:

teq = βf qo Af / (At f) (2.20.10)
1) Values of ζ > 1 should be suppressed.
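A sketch combining the opening factor (2.20.7), the reduced ventilation model (2.20.8) and the equivalent fire duration (2.20.10); the geometry and parameter values are placeholders, and the form of (2.20.10) follows the reconstruction above.

    # Sketch: opening factor and equivalent fire duration, Eqs. (2.20.7)-(2.20.10).
    from math import sqrt

    openings = [(4.0, 1.5), (6.0, 2.0)]     # vertical openings: (area Ai [m2], height hi [m])
    A_t, A_f = 360.0, 100.0                 # total internal surface / floor area [m2]

    A_v = sum(A for A, _ in openings)
    h = sum(A * hi for A, hi in openings) / A_v
    f_max = (A_v / A_t) * sqrt(h)           # Eq. (2.20.7)

    zeta = 0.2                              # one realisation of the lognormal variable, cut off at 1
    f = f_max * (1.0 - min(zeta, 1.0))      # Eq. (2.20.8)

    beta_f, q_o = 1.0, 600.0                # model uncertainty [-] and fire load [MJ/m2], placeholders
    t_eq = beta_f * q_o * A_f / (A_t * f)   # Eq. (2.20.10); units depend on beta_f
    print(f"f_max = {f_max:.4f}, f = {f:.4f}, t_eq = {t_eq:.0f}")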
Table of contents:
3.0.1 Introduction
3.0.2 Material properties
3.0.3 Uncertainties in material modelling
3.0.4 Scales of modelling variations
3.0.5 Hierarchical modelling
3.0.6 Quality control strategies
3.0.6.1 Types of strategies
3.0.6.2 Sampling
3.0.6.3 Updating versus selecting
Annex A: Bayesian evaluation procedure for the normal and lognormal distribution – characteristic
values
Annex B: Bayesian evaluation procedure for regression – characteristic value
List of symbols:
3.0.1 Introduction
The description of each material property consists of a mathematical model (e.g. elastic-plastic
model, creep model, etc.) and random variables or random fields (e.g. modulus of elasticity, creep
coefficient). Functional relationships between the various variables may be part of the material model
(e.g. the relation between tensile stress and compressive stress for concrete).
In general, it is the response to static and time dependent mechanical loading that matters for
structural design. However, also the response to physical, chemical and biological actions is important
as it may affect the mechanical properties and behaviour.
It is understood that modelling is an art of reasonable simplification of reality, such that the
outcome is sufficiently explanatory and predictive in an engineering sense. An important aspect of an
engineering model is also its operationability, i.e. the ease of handling it in applications.
Models and values should follow from (standardised) tests, representing the actual
environmental and loading conditions as well as possible. The set of tested specimens should be
representative of the production of the relevant fabrication sites, cover a sufficiently long period of time
and may include the effect of standard quality control measures. Allowance should be made for possible
differences between test circumstances and the structural environment (conversion).
For the classical building materials, knowledge about the various properties is generally available from
experience and from tests in the past. For new materials, models and values should be obtained from an
extensive and well defined testing program.
Material properties are defined as the properties of material specimens of defined size and
conditioning, sampled according to given rules, subjected to an agreed testing procedure, the results of
which are evaluated according to specified procedures.
The main characteristics of the mechanical behaviour are described by the one-dimensional σ-ε
diagram, as presented in figure 3.0.1. As an absolute minimum for structural design, the modulus of
elasticity and the strength of the material, both in tension and in compression, should be known. Other
important parameters in the one-dimensional σ-ε diagram are:
The strain at rupture is a local phenomenon and the value obtained may heavily depend on the
shape and dimensions of the test specimen.
Figure 3.0.1: One-dimensional σ-ε diagram (with the strain at rupture εu).
Additional to the one dimensional σ-ε-diagram, information about a number of other quantities
and effects is of importance, such as:
In the present version of this JCSS model code not all properties will be considered.
Material properties vary randomly in space: the strength in one point of a structure will not be
the same as the strength in another point of the same structure or of another one. This item is further
developed in sections 3.0.4 and 3.0.5.
4. Different qualities of workmanship affecting the properties of (fictitious) material samples, i.e.
when modelling the material supply as a supply of material samples.
5. The effect of different qualities of workmanship when incorporating the material in actual
structures, not reflected in corresponding material samples.
6. Uncertainties related to alterations in time, predictable only by laboratory testing, field
observations, etc.
Material properties vary locally in space and, possibly, in time. As far as the spatial variations
are concerned, it is useful to distinguish between three hierarchical levels of variation: global (macro),
local (meso) and micro (see table 3.0.1).
For example, the variability of the mean and standard deviation of concrete cylinder strength
per job or construction unit as shown in figure 3.0.2 is a typical form of global parameter variation. This
variation primarily is the result of production technology and production strategy of the concrete
producers. Parameter variations between objects are conveniently denoted as macroscale variations. The
unit of that scale is in the order of a structure or a construction unit. Parameter variations may also be
due to statistical uncertainties.
Given a certain parameter realisation in a system the next step is to model the local variations
within the system in terms of random processes or fields. Characteristically, spatial correlations
(dependencies) become negligible at distances comparable to the size of the system. This is a direct
consequence of the hierarchical modelling procedure where it is natural to assume that the variation
within the system is conditional on the variations between systems and the first type of variation is
conditionally independent of the second. At this level one may speak of meso-scale variations.
Examples are the spatial variation of soils within a given (not too large) foundation site or the number,
size and spatial distribution of flaws along welding lines given a welding factory (or welding operator).
The unit of this scale is in the order of the size of the structural elements and probably most
conveniently measured in meters.
At the third level, the micro-level, one focuses on rapidly fluctuating variations and
inhomogeneities which basically are uncontrollable as they originate from physical facts such as the
random distribution of spacing and size of aggregates, pores or particles in concrete, metals or other
materials. The scale of these variations is measured in particle sizes, i.e. in centimeters down to the size
of crystals.
The modelling process normally uses physical arguments as far as possible. Quite generally, the
object is taken as an arrangement of a large number of small elements. The statistical properties of these
elements usually can only be assessed qualitatively as well as their type of interaction. This, however, is
sufficient to perform some basic operations such as extreme value, summation or intersection operations
which describe the overall performance. The large number of elements greatly facilitates such
operations because one can make use of certain limit theorems in probability theory. The advantage of
using asymptotic concepts rests on the fact that the description of the element properties can then be
reduced to some few essential characteristics. The central limit theorem of probability theory,
asymptotic extreme value concepts, convergence theorems to the Poisson distribution, etc. will play an
important role. In particular, size effects have to be taken into account at this level.
A useful concept is to introduce a reference volume of the material which in general is chosen
on rather practical grounds. It most frequently corresponds to some specified test specimen volume at
which material testing is carried out. This volume generally neither corresponds to the volume of the
virtual strength elements introduced at the micro-scale modelling level nor to a characteristic volume for
in situ strength. It needs to be related to the latter one and these operations can involve not only simple
size scaling but more complicated functional relationships if the material produced is subject to further
uncertain factors once put in place. Concrete is the most obvious example for the necessity of such
additional consideration. Of course, scale effects may also be present at the meso-scale level of
modelling.
The reason for this concept of modelling at several levels (steps) is the requirement of
operability, not only in the probabilistic manipulations but also in sampling, estimation and quality
control. This way of modelling and the considerations below are, of course, only valid under certain
technical standards for production and its control. At the macro-scale level it is assumed that the
production process is under control. This simply means that the outcome of production is stationary in
some sense. Should a trend develop, production control corrects for it immediately or with some
sufficiently small delay. Therefore, it is assumed that at least for some time interval (or spatial region)
whose length (size) is to be selected carefully, approximate stationarity on the meso- and micro-scale is
guaranteed. Quite frequently, the operational, mathematical models available so far even require
ergodicity. Variations at the macro scale level, therefore, can be described by stationary sequences. If
the sequences are or can be assumed independent, it is possible to handle macro-scale variations by the
concept of random variables. Stationarity is also assumed at the lower levels. However, it may be
necessary to use the theory of random processes (fields) at the lower levels, especially in order to take
into account significant effects of dependencies in time or space.
When applying these models to systems with an increasing number of elements, they generally
lead to specific distributions for the properties of the system at the meso-scale level. The weakest-link
model leads to a Weibull distribution, the other two models to a normal distribution. For larger
coefficients of variation the normal distribution must be replaced by a lognormal distribution in order to
avoid physically impossible, negative strength values.
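To illustrate how these limit operations work in practice, the following minimal Python sketch (all numbers hypothetical, for demonstration only) simulates many systems of micro-elements: the minimum over the elements (weakest-link, series model) tends to a Weibull-type distribution, while the average (summation model) tends to a normal distribution.

    import numpy as np

    rng = np.random.default_rng(1)
    n_elem, n_sys = 200, 10_000          # elements per system, simulated systems

    # hypothetical i.i.d. element strengths
    elem = rng.gamma(shape=5.0, scale=2.0, size=(n_sys, n_elem))

    weakest_link = elem.min(axis=1)      # series system: Weibull-type limit
    additive = elem.mean(axis=1)         # summation model: normal limit

    for name, x in (("weakest link", weakest_link), ("additive", additive)):
        print(f"{name:13s} mean = {x.mean():6.3f}, CoV = {x.std() / x.mean():.3f}")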
In the next step (see Table 3.0.1) a unit (a structural member) is considered as meso-scale (local
variations). The respective unit is regarded as being constituted from a sequence of finite volumes.
Hence, a property in this unit is modelled by a random sequence X1, X2, X3 ... of reference volume
properties. The Xi may have to be considered as correlated, with a coefficient of correlation depending
on the distance ∆rij and correlation parameters ρ0 and dc, for example:

ρij = ρ0 + (1 − ρ0) exp(−∆rij^2/dc^2) (3.0.1)

In general ρ0 = 0.
In the subsequent step the complete structure or some relevant part of it is considered as a lot.
A lot is defined as a set of units, produced by one producer, in a relatively short period, with no obvious
changes in production circumstances and intended for one building site. In practice lots correspond to
e.g.:
As a lot is a set of units it can also be conceived as a set of reference volumes Xi. Normally the
parameters q defined before are defined on the lot level. The correlation between the Xi values within
different members normally can be modelled by a single parameter
ρij = ρ0 (3.0.2)
Finally, at the highest macro scale level, we have a sequence of lots, represented by a random sequence
of lot parameters (in space or in time). Here we are concerned with the estimation of the distribution of
lot parameters, either from one source or several sources. The individual lots may be interpreted as
random samples taken from the enlarged population or gross supply. The gross supply comprises all
materials produced (and controlled) according to given specifications, within a country or groups of
countries. The macro-scale model may be used if the number of producers and structures is large or
differences between producers can be considered as approximately random.
The gross supply is described by f(q). Type and parameters should follow from a statistical
survey of the fluctuations of the various lots which belong to the production under consideration. It will
be assumed here that f(q) is known without statistical uncertainty. If statistical uncertainties cannot be
neglected, they can be incorporated. The distribution f(q) should be monitored more or less continuously
to find possible changes in production characteristics.
The probability density function (predictive density function) for an arbitrary unit (arbitrary
means that the lot is not explicitly identified) can be found from the total probability theorem:

f(x) = ∫ f(x|q) f(q) dq (3.0.3)

The density function f(x) may be conceived as the statistical description of x within a large
number of randomly selected lots. For some purposes one could also identify f(x) directly.
The characteristic value of a material with respect to a given property X is defined as the px-quantile of
the predictive distribution, i.e.

xc = FX^-1(px) (3.0.4)
Examples for predictive distributions can be found in Annex A. Others may be found in [1], [2] and [3].
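As a minimal numerical illustration of (3.0.3) and (3.0.4), the following Python sketch assumes, purely hypothetically, a normal within-lot distribution f(x|q) and a normal lot-parameter distribution f(q), for which the predictive distribution is available in closed form:

    from scipy.stats import norm

    # hypothetical hierarchical model: X | q ~ N(q, s) within a lot,
    # lot parameter q ~ N(mu_q, tau) over the gross supply
    s, mu_q, tau = 4.0, 30.0, 2.0
    p_x = 0.05

    # (3.0.3): for this normal-normal case the predictive density is again
    # normal, with variance s^2 + tau^2
    predictive = norm(loc=mu_q, scale=(s**2 + tau**2) ** 0.5)
    x_c = predictive.ppf(p_x)            # (3.0.4): px-quantile
    print(f"characteristic value x_c = {x_c:.1f}")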
Normally the statistical parameters of the material properties are based on general tests, taking
into account standard production methods. For economic reasons it might be advantageous to have more
specific forms of quality control for a particular work or a particular factory.
Quality control may be of a total (all units are tested) or a statistical nature (a sample is tested).
Quality control will lead to more economical solutions, but has in general the disadvantage that the
result is not available at the time of design. In those cases, the design value has to be based on the
combination of the unfiltered production characteristics and the expected effect of the quality control
selection rules.
Various quality control procedures can be activated, each one leading to a different design
value. In Figure 3.0.3 an overview is presented. The easiest procedure is to perform no additional
activities (option "no tests"). This means that the units, lots and production should be defined, their
descriptions f(x|q) and f(q) should be established, and only f(q) should be checked for long-term changes
in production characteristics.
If on the other hand tests are performed one may distinguish between a total (unit by unit)
control and sampling on the one hand and between selection and updating on the other hand. The
various options will be discussed.
[Figure 3.0.3: overview of quality control options: no test; test by sampling or total (unit by unit) testing, each either for updating or for selection]
Both for updating and for selection one may test all units which go into a structure (total testing)
or one may test a (random) sample only (statistical testing).
If the control is total, every produced unit is inspected. The acceptance rules imply that a unit is
judged as good (accepted) or bad (not accepted). This type of control is also referred to as unit by unit
control. Typically, testing all units requires a non-destructive testing technique. Therefore some kind of
measurement error has to be included, resulting in a smooth truncation of the distribution.
If the control is statistical, only a limited number of units is tested. The procedure generally
consists of the following parts:
One normally takes a random sample. In a random sample each unit of the lot has the same
probability of being sampled. Where knowledge on the inherent structure of the lot is available, this
could be utilised, rendering more efficient sampling techniques, e.g.:
The larger efficiency results in smaller sample sizes for obtaining the same filtering capability
of a test. No further guidance, however, will be given here.
Testing may serve one of two purposes:
(1) to update the probability density function f(x) or f(q) of some particular lot or item (updating);
(2) to identify and reject inadequate lots or units on the basis of predefined sampling procedures
and selection rules (selection).
The first option can only be used after production of the lot or item under consideration. This data may
not be known at the time of the design (e.g. ready mix concrete). The second option, on the other hand,
offers the possibility to predict the posterior f”(q) for the filtered supply for a given combination of
f(x|q), f(q) and a selection rule d. In such a case the control may lead to two possible outcomes: the lot is
accepted if d ∈ A and rejected otherwise. Here d is a function of the test result of a single unit or of the
combined test result of the units in a sample, and A is the acceptance domain.
One may then calculate the posterior distribution for an arbitrary accepted lot:

f"(q) = P(d ∈ A | q) f(q) / ∫ P(d ∈ A | q) f(q) dq

Here f(q) is the distribution for the unfiltered supply and the acceptance probability P(d ∈ A | q)
should be calculated from the decision rule.
The updated distribution for X can be obtained through (3.0.3) with f(q) replaced by f ”(q).
More information about the effect of quality control on the distribution of material properties can be
found in [4].
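The following Python sketch shows, for hypothetical numbers, how the posterior distribution of an accepted lot can be evaluated numerically from the unfiltered supply f(q) and the acceptance probability P(d ∈ A | q) of a simple sample-mean selection rule:

    import numpy as np
    from scipy.stats import norm

    # hypothetical lot parameter: q = lot mean strength, discretised
    q = np.linspace(20.0, 40.0, 401)
    f_q = norm.pdf(q, loc=30.0, scale=3.0)          # unfiltered supply f(q)

    # selection rule d: accept the lot if the mean of n tests exceeds x_lim
    s_x, n, x_lim = 4.0, 3, 27.0
    p_accept = 1.0 - norm.cdf(x_lim, loc=q, scale=s_x / n**0.5)  # P(d in A | q)

    f_q_post = p_accept * f_q
    f_q_post /= np.trapz(f_q_post, q)               # normalised posterior f"(q)
    print(f"mean lot strength, unfiltered: {np.trapz(q * f_q / np.trapz(f_q, q), q):.2f}")
    print(f"mean lot strength, accepted lots: {np.trapz(q * f_q_post, q):.2f}")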
Annex A: Bayesian evaluation procedure for the normal and lognormal distribution – characteristic
values
f′(µ, σ) = k σ^-(ν′+δ(n′)+1) exp{−[ν′s′^2 + n′(µ − m′)^2] / (2σ^2)} (1)
k = normalizing constant
δ(n') = 0 for n' = 0
δ(n') = 1 for n' > 0
This special choice enables a further analytical treatment of the various operations. The prior
distribution (1) contains four parameters: m', n', s' and ν'.
Using Bayes' theorem one may combine the prior information characterised by (1) and a test
result of n observations with sample mean m and sample standard deviation s. The result is a posterior
distribution for the unknown mean and standard deviation of X, which is again given by (1), but with
parameters given by the following updating formulas:

n" = n' + n (2)

ν" = ν' + ν + δ(n') with ν = n − 1 (3)

m" = (n'm' + nm)/n" (4)

s"^2 = [ν's'^2 + n'm'^2 + νs^2 + nm^2 − n"m"^2]/ν" (5)
Then, using equation (3.0.3), the predictive value of X can be found from:

X = m" + tν" s" (1 + 1/n")^0.5 (6)
In case of known standard deviation σ, eqs. (2) and (4) still hold for the posterior mean. The predictive
value of X is

X = m" + u σ (1 + 1/n")^0.5 (7)
xc = m" + u(px) σ (1 + 1/n")^0.5 for σ known
xc = m" + tν"(px) s" (1 + 1/n")^0.5 for σ unknown (8)
If X has a lognormal distribution, Y = ln(X) has a normal distribution. One may then use the
former formulas on Y and use X = exp(Y) for results on X.
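A minimal Python sketch of the updating formulas (2)-(5) and the characteristic value (8) for σ unknown, with hypothetical prior parameters and test data:

    from scipy.stats import t as student_t

    def update(mp, n_p, sp, vp, m, n, s):
        """Combine prior parameters (m', n', s', v') with n test results of
        sample mean m and sample standard deviation s; eqs. (2)-(5)."""
        nn = n_p + n                                        # (2)
        nu = n - 1
        vn = vp + nu + (1 if n_p > 0 else 0)                # (3)
        mn = (n_p * mp + n * m) / nn                        # (4)
        sn2 = (vp * sp**2 + n_p * mp**2 + nu * s**2
               + n * m**2 - nn * mn**2) / vn                # (5)
        return mn, nn, sn2**0.5, vn

    def x_char(mn, nn, sn, vn, p):
        """Characteristic value for sigma unknown, eq. (8)."""
        return mn + student_t.ppf(p, vn) * sn * (1 + 1 / nn) ** 0.5

    # hypothetical prior and test data
    mn, nn, sn, vn = update(mp=30.0, n_p=5, sp=5.0, vp=5, m=33.0, n=8, s=4.0)
    print(f'm" = {mn:.2f}, n" = {nn}, s" = {sn:.2f}, v" = {vn}')
    print(f"x_c (p = 0.05) = {x_char(mn, nn, sn, vn, 0.05):.2f}")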
If only indirect measurements for the quantity of interest are possible and a linear regression model
y = a0 + a1 x is suitable, the predictive value of y at a given x0 also has a t-distribution, given by

y = a0 + a1 x0 + tν s [1 + 1/n + (x0 − x̄)^2 / Σ(xi − x̄)^2]^0.5
where:

a0 = ȳ − a1 x̄
a1 = [Σ xi yi − n x̄ ȳ] / [Σ xi^2 − n x̄^2]
x̄ = (1/n) Σ xi
ȳ = (1/n) Σ yi
s^2 = [1/(n − 2)] Σ (yi − a0 − a1 xi)^2
ν = n − 2

with all sums running over i = 1 to n.
yc = a0 + a1 x0 + T^-1(p, ν) s [1 + 1/n + (x0 − x̄)^2 / Σ(xi − x̄)^2]^0.5
For example, for S-N curves, y = ln(N), x = ln(∆σ), a1 = −m and a0 = ln a. The characteristic value
of N for given ln(∆σE) = x0 is Nc = exp[yc].
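Applied to hypothetical S-N test data, the above regression formulas can be evaluated as in the following Python sketch:

    import numpy as np
    from scipy.stats import t as student_t

    def sn_characteristic(delta_sigma, N, dse, p=0.05):
        """Characteristic fatigue life N_c at stress range dse from test pairs
        (delta_sigma, N), via linear regression of y = ln N on x = ln(delta_sigma)."""
        x, y = np.log(delta_sigma), np.log(N)
        n, xb, yb = len(x), x.mean(), y.mean()
        a1 = ((x * y).sum() - n * xb * yb) / ((x**2).sum() - n * xb**2)
        a0 = yb - a1 * xb
        s = (((y - a0 - a1 * x) ** 2).sum() / (n - 2)) ** 0.5
        x0 = np.log(dse)
        yc = a0 + a1 * x0 + student_t.ppf(p, n - 2) * s * (
            1 + 1 / n + (x0 - xb) ** 2 / ((x - xb) ** 2).sum()) ** 0.5
        return np.exp(yc)

    # hypothetical fatigue test data: stress ranges (MPa), cycles to failure
    ds = np.array([200.0, 180.0, 160.0, 140.0, 120.0, 100.0])
    Nf = np.array([2.1e5, 3.4e5, 6.0e5, 1.1e6, 2.3e6, 5.5e6])
    print(f"N_c at 150 MPa: {sn_characteristic(ds, Nf, 150.0):.3g}")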
References
[1] Aitchison, J., Dunsmore, I.R., Statistical Prediction Analysis, Cambridge University Press,
Cambridge, 1975
[2] Raiffa, H., Schlaifer, R., Applied Statistical Decision Theory, MIT Press, Cambridge, 1968
[3] Engelund, S., Rackwitz, R., On Predictive Distribution Functions for the Three Asymptotic
Extreme Value Distributions, Structural Safety, Vol. 11, 1992, pp. 255-258
[4] Kersken-Bradley, M., Rackwitz, R., Stochastic Modeling of Material Properties and Quality
Control, JCSS Working Document, IABSE-publication, March 1991
Table of contents:
List of symbols:
The reference property of concrete is the compressive strength fco of standard test specimens
(cylinder of 300 mm height and 150 mm diameter) tested according to standard conditions and at a
standard age of 28 days (see ISO/DIS 2736 and ISO 3893). Other concrete properties are related to
the reference strength of concrete according to:
In situ compressive strength: fc = α(t, τ) fco^λ [MPa] (1)

Modulus of elasticity: Ec = 10.5 fc^(1/3) / (1 + βd ϕ(t, τ)) [GPa] (3)

Ultimate compression strain: εu = 6·10^-3 fc^(-1/6) (1 + βd ϕ(t, τ)) [m/m] (4)
λ is a factor taking into account the systematic variation of in situ compressive strength and
strength of standard tests (see 3.1.3)
α(t,τ) is a deterministic function which takes into account the concrete age at the loading time t [days]
and the duration of loading τ [days]. The function is given by α(t,τ) = α1(τ) α2(t) with
α2(t) = a + b ln(t). In most applications α1(τ) = 0.8 can be used. The coefficients a and b in α2(t)
depend on the type of cement and the climatic environment; under normal conditions a = 0.6 and b = 0.12.
ϕ(t,τ) is the creep coefficient according to some modern code, assumed to be deterministic.
βd is the ratio of the permanent load to the total load and depends on the type of the structure;
generally βd is between 0.6 and 0.8.
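A small Python sketch of relations (1), (3) and (4); the values chosen here for λ, βd and ϕ(t,τ) are hypothetical placeholders, while α1, a and b take the values quoted above:

    import math

    def alpha(t, a=0.6, b=0.12, alpha1=0.8):
        # alpha(t, tau) = alpha1(tau) * alpha2(t), with alpha2(t) = a + b ln(t)
        return alpha1 * (a + b * math.log(t))

    def concrete_props(f_co, t=28.0, lam=0.96, beta_d=0.7, phi=2.0):
        """Relations (1), (3) and (4); lam, beta_d and phi are hypothetical."""
        f_c = alpha(t) * f_co**lam                                   # (1)  [MPa]
        E_c = 10.5 * f_c ** (1.0 / 3.0) / (1.0 + beta_d * phi)       # (3)  [GPa]
        eps_u = 6.0e-3 * f_c ** (-1.0 / 6.0) * (1.0 + beta_d * phi)  # (4)  [m/m]
        return f_c, E_c, eps_u

    f_c, E_c, eps_u = concrete_props(30.0)
    print(f"f_c = {f_c:.1f} MPa, E_c = {E_c:.1f} GPa, eps_u = {eps_u:.4f}")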
3.1.2 Stress-strain-relationship
For concrete under compression the following simplified stress-strain relationship holds:
For calculations where the form of the stress-strain relationships is important the following
relationship should be used:
σ = fc [1 − (1 − ε/εs)^k] (9)

εs = 0.0011 fc^(1/6) (10)

k = Ec εs / fc (11)
in which
The variable Y1,j can also be taken as a spatially varying random field whose mean value
function takes account of systematic influences in space.
where the variables Y2,j to Y4,j mainly reflect variations due to factors not well accounted for by
concrete compressive strength (e.g., gravel type and size, chemical composition of cement and other
ingredients, climatic conditions).
The variables Uij and Ukj within one member are correlated by:

ρ(Uij, Ukj) = ρ + (1 − ρ) exp(−(rij − rkj)^2 / dc^2) (17)
where dc = 5 m and ρ = 0.5. For different jobs Uij and Ukj are uncorrelated.
Unless direct measurements are available, the parameters of the variables Yk,j can be taken from
Table 3.1.1. The variables are distributed according to the log-normal distribution. The variability of
the variables Yk,j can further be split into a part depending only on the job under consideration and a
part representing spatial variability.
If direct measurements are available, the parameters in Table 3.1.1 are taken as parameters of
an equivalent prior sample with size n' = 10 (see Part 1 for the details of updating).
The distribution of xij = ln(fco,ij) is normal provided that its parameters M and Σ are obtained from an
ideal infinite sample. In general it must be assumed that concrete production varies between production
units, sites, construction periods, etc., and that sample sizes are limited. Therefore, the parameters M and
Σ must also be treated as random variables. Then, xij has a Student distribution according to:
Fx(x) = Ftν" [ ((ln x − m")/s") (1 + 1/n")^-0.5 ]

where Ftν" is the Student distribution with ν" degrees of freedom. fco,ij can be represented as

fco,ij = exp(m" + tν" s" (1 + 1/n")^0.5)
The values of m”, n”, s” and ν” depend on the amount of specific information. Table 3.1.2
gives the values if no specific information is available (prior information).
Table 3.1.2: Prior parameters for concrete strength distribution (fco in MPa) [1, 2]
The prior parameters may depend on the geographical area and the technology with which concrete is
produced.
If n", ν" > 10, a good approximation of the concrete strength distribution is the log-normal distribution
with mean m" and standard deviation s" [n"/(n" − 1)]^0.5 [ν"/(ν" − 2)]^0.5.
References
[1] Kersken-Bradley, M., Rackwitz, R., Stochastic Modeling of Material Properties and Quality
Control, JCSS Working Document, IABSE-publication, March 1991
[2] Rackwitz, R., Predictive Distribution of Strength under Control, Materials & Structures, 16, 94,
1983, pp. 259 - 267
PROBABILISTIC MODEL CODE, PART 3, RESISTANCE MODELS
Properties Considered
A probabilistic model is proposed for the random vector X = (fy, fu, E, ν, εu) to be used for
any particular steel grade, which may be defined in terms of nominal values verified by
standard mill tests (e.g. following the procedures of EN 10025 for sampling and selection of
test pieces and the requirements of EN 10002-1 for testing) or in terms of minimum
(hereinafter referred to as code specified) values given in material specifications (e.g. EN
10025: 1990).
Only distinct points or parts of the full stress-strain curve are considered, thus the proposed
model can be used in applications where this type of information is compatible with the
parameters of the mechanical model used for strength analysis.
In applications where strain-hardening (and in particular the extent of the yield plateau and
the initial strain-hardening) is important (e.g. inelastic local buckling), a more detailed
model, which describes the full stress-strain behaviour, may be warranted. Several
deterministic models exist in the literature which would allow a probabilistic model to be
developed. The parameters of the model chosen to describe the full stress-strain curve should
be selected in a way that does not invalidate the statistics given below for the key points of
the stress-strain diagram.
In certain cases, where an absence of a yield phenomenon is noted, the values given for the
yield strength may be used for the 0.2% proof strength instead. However, it should be
emphasised that most of the data examined refers to steels exhibiting a yield phenomenon,
hence this is only a tentative proposal.
Mean values and coefficients of variation for the above vector are given in Table A whereas
the correlation matrix is given in Table B. A multi-variate log-normal distribution is
recommended. The values given are valid for static loading.
The values in Table A may be used for steel grades and qualities given in EN 10025: 1990,
which have code specified yield strength of up to 380 MPa. Some studies suggest that it is the
standard deviation of the yield strength, rather than its coefficient of variation (CoV), that
remains constant, whilst others point to the converse.
A practice which creates problems with sample homogeneity, and hence with consistency of
estimated statistical properties, is downgrading of material, i.e. re-classifying higher grade
steel to a lower grade if it fails to meet the code specified values for the higher grade on the
basis of quality control tests. This practice produces bi-modal distributions and is clearly seen
in some of the histograms reported in the studies referenced below. Higher mean values but
also significantly higher CoV’s than those given in Table A are to be expected in such cases.
The values given in Tables A and B should not be used for ultra high strength steels (e.g. with
code specified fy = 690 MPa) without verification. In any case, ultra high strength carbon steel
(and stainless steel) grades are characterised by a non-linear uniaxial stress-strain response,
usually modelled through the Ramberg-Osgood expression. Practically no statistical data have
been found for the three parameters describing the Ramberg-Osgood law (initial modulus,
0.2% proof stress and non-linearity index).
The CoV values refer to total steel production and are based primarily on European studies
from 1970 onwards. In the US and Canada higher CoVs have been used (on average, about 50%
higher). The main references on which these estimates are based are given below.
The estimates for ultimate strain, εu, are very sensitive to test instrumentation and rate of
loading up to the point of failure. Both significantly higher and lower CoV’s have, on
occasions, been reported.
Within-batch CoVs can be taken as one fourth of the values given in Table A, but within-
batch variability for the modulus of elasticity, E, and Poisson's ratio, ν, may be neglected.
Variations along the length of a rolled section are normally small and can be neglected.
If direct measurements are available, the numbers in Table A should be used as prior statistics
with a relatively large equivalent sample size (e.g. n' ≈ 50).
For applications involving seismic loads, a random variable called ‘yield ratio’, denoted by r
and defined as the ratio of yield to ultimate strength, is often of interest. The statistical
properties of this ratio can be derived from those given in Tables A and B for the two basic
random variables. Given the positive correlation between fy and fu , it follows that there is also
a positive correlation between r and fy. It can also be shown that the CoV for r lies between
the CoV’s for fy and fu.
- the suffix (sp) is used for the code specified or nominal value for the variable considered
- α is a spatial position factor (α = 1.05 for webs of hot rolled sections and α = 1 otherwise)
- u is a factor related to the fractile of the distribution used in describing the distance
between the code specified or nominal value and the mean value; u is found to be in the
range of -1.5 to -2.0 for steel produced in accordance with the relevant EN standards; if
nominal values are used for fysp the value of u needs to be appropriately selected.
- C is a constant reducing the yield strength as obtained from usual mill tests to the static
yield strength; a value of 20 MPa is recommended but attention should be given to the
rate of loading used in the tensile tests.
- B = 1.5 for structural carbon steel
= 1.4 for low alloy steel
= 1.1 for quenched and tempered steel
References
Table of contents:
List of symbols:
Reinforcing steel generally is classified and produced according to grades, for example
S300, S400 and S500, the numbers denoting a specified (minimum) yield stress limit.
The basic mechanical property is the static yield strength fy defined at a strain offset of 0.2%. The
stress-strain curve for hot rolled steels can be approximated by a bi-linear relationship up
to strains of 1% to 2%. The (initial) modulus of elasticity can be taken as constant,
Ea = 205 GPa. The stress-strain relationship for cold worked steel can also be represented
by a bi-linear law but more realistically by a continuous curve for which several
convenient analytical forms exist.
The yield stress, denoted by X1, can be taken as the sum of three independent Gaussian
variables
X1(d) = X11 + X12 + X13 [MPa] (1)
where X11 ~ N(µ11(d), σ11) represents the variations in the global mean of different mills,
X12 ~ N(0, σ12) the variations in a mill from batch (melt) to batch, and X13 ~ N(0, σ13) the
variations within a melt. d is the nominal bar diameter in [mm]. For high standard steel
production the following values have been found: σ11 = 19 MPa, σ12 = 22 MPa, σ13 = 8 MPa,
resulting in an overall standard deviation σ1 of about 30 MPa. Under controlled conditions the
mean µ11 = µ1 is Sxxx + 2σ1, where Sxxx denotes the grade number. Strength fluctuations along
bars are negligible. The value of µ1(d) is defined as the overall mean from the entire production
given a particular bar diameter:
µ1(d) = µ1 (0.87 + 0.13 exp[−0.08 d])^-1 [MPa] (2)
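A Monte Carlo sketch of the three-component yield stress model in Python; the way eq. (2) is applied to the grade mean here is an interpretation and should be read as an assumption:

    import numpy as np

    rng = np.random.default_rng(7)

    def yield_stress_samples(S_nom, d, n=100_000):
        """Sample X1(d) = X11 + X12 + X13 for bar diameter d [mm]; the grade
        mean S_nom + 2*sigma1 is converted to mu1(d) via eq. (2) - an
        interpretation, stated here as an assumption."""
        s11, s12, s13 = 19.0, 22.0, 8.0
        sigma1 = (s11**2 + s12**2 + s13**2) ** 0.5          # about 30 MPa
        mu1_d = (S_nom + 2 * sigma1) / (0.87 + 0.13 * np.exp(-0.08 * d))
        return (rng.normal(mu1_d, s11, n)      # mill-to-mill
                + rng.normal(0.0, s12, n)      # batch-to-batch
                + rng.normal(0.0, s13, n))     # within-melt

    fy = yield_stress_samples(S_nom=500.0, d=16.0)
    print(f"mean = {fy.mean():.0f} MPa, sd = {fy.std():.0f} MPa")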
Statistical parameters of some other relevant properties are given in the following table:
Quantity                  Mean        σ [MPa]  C.o.V.   ρij
Bar area [mm2]            Nom. area   -        0.02     1.00  0.50  0.35   0
Yield stress [MPa]        Snom + 2σ   30       -              1.00  0.85  -0.50
Ultimate strength [MPa]   -           40       -        sym         1.00  -0.55
δ10 [%]                   -           -        0.09                        1.00
Tests of the lot of reinforcing steel to be used can considerably diminish steel variations,
if the lot is known to belong to the production of a specific mill and if it originates from
the same batch. Very few direct tests are necessary. Acceptance control for a given lot
can be very efficient to eliminate bad quality lots.
The yield force of a bundle of bars under static loading is the sum of the yield forces of
each contributing bar (full plasticity model). In general, it can be assumed that all
reinforcing steel used at a job originates from a single (but unknown) mill. The
correlation coefficient between yield forces of individual bars of the same diameter can
then be taken as 0.9. The correlation coefficient between yield forces of bars of different
diameter and between the yield forces in different cross-sections in different beams in a
structure can be taken as 0.4. Along structural members the correlation is unity within
distances of roughly 10m (representative for bar length) and vanishes outside.
Table of Contents
3.9.1 General
3.9.2 Types of models for structural analysis
3.9.3 Recommendations for practice
List of Symbols
3.9.1 General
In order to calculate the response of a structure with certain (random) properties under certain
(random) actions use is made of models (see Part I, section 5). In general such a model can be
described as a functional relation of the type:
Y = f (X1,X2,…Xn) (3.9.1)
The model function f(..) is usually not complete and exact, so that the outcome Y cannot be
predicted without error, even if the values of all random basic variables are known. The real
outcome Y′ of the experiment can formally be written down as:

Y′ = f′(X1, X2, … Xn, θ1, θ2, … θm) (3.9.2)

The variables θi are referred to as parameters which contain the model uncertainties and are
treated as random variables. The model uncertainties account for:
In many cases, however, a good and consistent set of experiments is lacking and statistical
properties for model uncertainties are purely based on engineering judgement. Sometimes a
comparison between various models may help to defend certain propositions.
The most common way of introducing the model uncertainty into the calculation model is as
follows:
Y′ = θ1 ƒ (X1… Xn ) (3.9.3)
or
Y′ = θ1 + ƒ (X1… Xn ) (3.9.4)
It should be kept in mind that in this way the statistical properties of the model uncertainties
depend on the exact definition of the model output. A theoretically elegant way to avoid this
definition dependency is to link model uncertainties directly to the basic variables, that is,
to introduce X′i = θi Xi.
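The multiplicative form (3.9.3) can be exercised with a Monte Carlo sketch like the following (all distributions are hypothetical, chosen only for demonstration):

    import numpy as np

    rng = np.random.default_rng(11)
    n = 10_000

    # hypothetical basic variables and a multiplicative model uncertainty
    x1 = rng.normal(10.0, 1.0, n)
    x2 = rng.normal(5.0, 0.5, n)
    theta1 = rng.lognormal(mean=0.0, sigma=0.10, size=n)   # mean ~ 1, CoV ~ 0.10

    y = theta1 * (x1 + x2)        # eq. (3.9.3) with f(X1, X2) = X1 + X2
    print(f"mean = {y.mean():.2f}, CoV = {y.std() / y.mean():.3f}")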
[Figure: observed model uncertainty θ = Y′ / f(X1, … Xn) plotted against experiment number]
For the model uncertainties in the load models reference is made to Part 2.
The load effect calculation models have to do with the linear or nonlinear calculation of
stresses, axial forces, shear forces and bending and torsional moments in the various structural
elements. The model uncertainties are usually the result of neglecting, for example, 3D
effects, inhomogeneities, interactions, boundary effects, simplifications of connection
behaviour, imperfections and so on. The scatter of the model uncertainty will also depend on
the type of structure (frame, plates, shells, solids, etc.).
The local models are used to define the behaviour of an element, a typical cross section or
even of the material in a single point. One may think in this respect of the visco-elastic model,
the elastic plastic model, the yield condition (Von Mises, Tresca, Coulomb), the hardening
and softening behaviour, the thermal properties and so on.
The model uncertainties are assumed to be partly correlated throughout the structure: at one
point of the structure the circumstances will usually differ from those at another point, which
makes full correlation unlikely. For that reason Table 3.9.1 also includes an estimate of the
degree of correlation between various points or critical cross-sections in one structure.
3.10 DIMENSIONS
Table of contents:
In the following, only time-independent effects are considered. The dimensional deviation
of a dimension X is described by the statistical characteristics of its deviation Y from
the nominal value Xnom:

Y = X − Xnom (1)
Concerning external (perimeter) dimensions of reinforced concrete cross-section of
horizontal members (beams, plates), available data are quite extensive, although not
convincing. The following general remarks follow from recent analysis of large samples of
measurements [1,2,3,4]. It has been observed that the following aspects do not significantly
affect dimensional deviations of reinforced concrete cross-section:
- the type of the elements (reinforced, prestressed),
- the shape of the cross-section (rectangular, I, T, L),
- the class of concrete (strength of concrete),
- dimension orientation (depth, width),
- position of the cross-section (mid-span, support).
It has been found [4] that external dimensions of concrete cross-sections are only
slightly dependent on the mode of production (precast, cast in situ).
When precast and cast in situ elements are taken together [2], the mean and standard
deviation of Y (the normal distribution seems to be satisfactory) may be expected
within the limits:

0 < µy = 0.003 Xnom < 3 mm (2)
Top Steel
According to the data reported in [1], the average concrete cover to the top steel of
beams and slabs is systematically greater than the nominal value (by about 10 mm); the
standard deviation is also around 10 mm (within an interval from 5 to 15 mm). Reasonable
average formulae (with a great uncertainty) for the cover to beam and plate top steel may be
written in an approximate form as:

5 mm < µy < 15 mm (4)
σy ≅ 5 mm (7)
Effective depth
Obviously, the above relations provide only gross estimates and particular values
must be chosen taking into account other specific conditions. Nevertheless, they are in
reasonable agreement with observations concerning the effective depth of the cross-section (the
depth and concrete cover could be highly correlated). If no further information is available, it
is indicated in [2] that the characteristics may be assessed by:
µy ≅ 10 mm (8)
σy ≅ 10 mm (9)
Further experimental measurements (related to specified production procedure) with a
special emphasis on internal dimensions of horizontal, as well as vertical elements are
obviously needed.
According to the data in Table 1 the following characteristics of concrete cover may be
considered as first approximations (the intervals indicated for the mean and standard deviation
represent reasonable bounds which depend on particular conditions and quality of
production):
- column and wall:
µY = 0 to 5 mm (10)
σY = 5 to 10 mm (11)
- slab bottom steel:
µY = 0 to 10 mm (12)
σY = 5 to 10 mm (13)
- beam bottom steel:
µY = − 10 to 0 mm (14)
σY = 5 to 10 mm (15)
- slab and beams top steel:
µY = 0 to 10 mm (16)
σY = 10 to 15 mm (17)
Obviously, these values represent only very gross estimates of the basic statistical
characteristics of concrete cover, and particular values should be chosen in accordance with
the relevant production conditions.
Note that the recent European document [6] on the execution of concrete structures is in good
agreement with the above-mentioned data. The minimum permitted deviation of
concrete cover is −10 mm (corresponding to σy ≅ 6 mm); the maximum permitted deviation ranges from
10 mm up to 20 mm (corresponding to σy from about 6 to 13 mm).
σY ≤ 1.0 mm (19)
For the cross-section area and modulus it has been found that, independently of the profile
height, the means of both quantities differ from their nominal values insignificantly (the
differences are practically zero); the coefficient of variation is about 3.2% for the
cross-section area and about 4.0% for the cross-section modulus. The normal distribution seems
to be a fully satisfactory model for all geometrical properties.
Several theoretical models were considered in previous studies [1] and [2]. It appears
that, unless further data are available, the normal distribution provides a good general model for
external dimensions of both reinforced concrete and steel elements and also for the effective
depth of reinforced concrete cross-sections.
However, the concrete cover to reinforcement in cross-sections of various concrete
elements is a special random variable which can hardly be described by a normal
distribution. In this case different types of one- or two-side-limited distributions should be
considered.
Taking into account various combinations of the coefficient of variation w = σ/µ and
skewness α (the subscripts are omitted here), the following commonly used distributions
could be considered:
- for all w and α: beta distribution with general lower and upper bounds a and b, denoted
Beta(µ;σ;a;b),
- for all w and α > 0: shifted lognormal distribution with lower bound a, denoted sLN(µ;σ;a),
- for all α < 2w: beta distribution with the lower bound a at zero (a = 0) and a general upper
bound b, denoted Beta(µ;σ;0;b),
- for α = 3w + w^3: lognormal distribution with the lower bound a at zero (a = 0),
- for α = 2w: gamma distribution (which has the lower bound a at zero (a = 0) by
definition), denoted Gamma(µ;σ).
3.10.6 Correlations
No significant correlation (the correlation coefficients being around 0.12) has been found
between vertical and horizontal dimensions. No data are available concerning the correlation
of internal (concrete cover) and external dimensions, even though the depth and concrete cover
of some elements could be highly correlated. There may be a strong auto-correlation along the
element; the correlation distance may be assessed as a multiple (say 3 to 5) of the cross-section
height or as a part of the span (say 1/4 to 1/2).
3.10.7 References
[1] Casciati, F., Negri, I, Rackwitz, R. Geometrical Variability in Structural Members and
Systems, JCSS Working Document, January 1991.
[2] Tichý, M. Dimensional Variations. In: Quality Control of Concrete Structures, RILEM,
Stockholm, June 1979, pp. 171-180.
[3] Tichý, M. Variability of Dimensions of Concrete Elements. In: Quality Control of
Concrete Structures, RILEM, Stockholm, June 1979, pp. 225-227.
[4] Bouska, P., Holický, M. Statistical analysis of geometric parameters of concrete cross
sections. Research report, Building Research Institute, Prague, 1983 (in Czech -
Summary in English is provided).
[5] Fajkus, M., Holický, M., Rozlívka L., Vorlíček M.: Random Properties of Steel
Elements Produced In Czech Republic. Proc. Eurosteel'99, Paper No. 90. Prague 1999.
[6] ENV 13670-1 Execution of concrete structures - Part 1: Common, Brussels, 2000.
3.11 ECCENTRICITIES
Table of contents:
3.11.1 Introduction
3.11.2 Basic model
3.11.3 Probability modelling
3.11.4 References
List of symbols:
3.11.1 Introduction
The bearing capacity of slender elements depends to some extent on the difference between
the actual and the theoretical alignment, the so-called eccentricity. In this section we present the models
for the eccentricities of columns in braced and unbraced frameworks.
In the analysis three types of eccentricities can be distinguished (see Figure 3.11.1):
For the braced frame the out-of-plumbness is only relevant for the bracing system, but not for the
column under consideration; for the unbraced frame the out-of-plumbness is usually
dominant over the end-point eccentricity and the curvature.
[Figure 3.11.1: end-point eccentricity e, out-of-straightness f and out-of-plumbness φ]
The probabilistic models for the three basic parameters are presented in Table 3.11.1. For all three
cases it is assumed that the distribution is symmetrical around zero and that small eccentricities are
more likely than large ones, although large ones are more dangerous. Note that in special cases non-
symmetrical cross-sections may have µ(f) ≠ 0 due to the fabrication process.
In many cases only the absolute values of the eccentricities are important. From the table it
can be derived that these absolute values follow a truncated normal distribution, the
truncation point being the mean of the untruncated distribution. The absolute value has a mean of
about 0.80 times the standard deviation of the untruncated distribution; the coefficient of variation is
0.75.
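The quoted factors 0.80 and 0.75 follow from the folded normal distribution and can be checked with a short Python simulation:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 1.0                                  # sd of the untruncated distribution
    abs_e = np.abs(rng.normal(0.0, sigma, 1_000_000))

    # mean |e| = sigma * sqrt(2/pi) ~ 0.80 sigma; CoV ~ 0.75
    print(f"mean/sigma = {abs_e.mean() / sigma:.3f}, "
          f"CoV = {abs_e.std() / abs_e.mean():.3f}")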
Table 3.11.1: Statistical properties of eccentricities (for steel and concrete columns)
X  description  type  µ  σ
For the spatial fluctuation the dependency between the various columns in one building is
important. In this code the average eccentricity e as well as the out-of-straightness f are
considered to be uncorrelated between members. For φ the following correlation pattern is
recommended:
In this model a possible negative correlation between columns in the vertical direction,
resulting from (over)corrections of the out-of-plumbness on lower storeys, is not considered. This is a
conservative assumption.
Note on applications
The limit state function for a simple slender column, clamped at the bottom and free at the top, may
be presented as:

Z = Mp − P φ h PE / (PE − P)

Mp = plastic moment
P = vertical load
PE = Euler buckling load
h = height of the column
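For a deterministic check of this limit state function, a one-line Python sketch with hypothetical input values:

    def Z_column(Mp, P, PE, phi, h):
        """Limit state Z = Mp - P*phi*h * PE/(PE - P); consistent units assumed."""
        return Mp - P * phi * h * PE / (PE - P)

    # hypothetical values: Mp = 0.30 MNm, P = 1.0 MN, PE = 4.0 MN,
    # phi = 0.003 rad, h = 3 m
    print(f"Z = {Z_column(0.30, 1.0, 4.0, 0.003, 3.0):.3f} MNm")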
3.11.4 References
Ellingwood, B., Galambos, Th., MacGregor, J., Cornell, A.: Development of a probability based load
criterion for American national standard A58. Building code requirements for minimum design loads
in buildings and other structures. NBS special publication 577, June 1980
Geometrical and cross-sectional properties of steel structures. Chapter 2. European Convention for
constructional steelwork. Second international colloquium on stability. Introductory report, sept.
1976, pp. 19-46, 58-59.
Edlund, B., Leopoldson, U.: Monte Carlo simulation of the load carrying capacity of steel beams.
Chalmers university of technology, division of steel and timber structures, publ. S71-3 & S71-5,
Göteborg, 1973.
Alpsten, G.A.: Statistical Investigation of the strength of rolled and welded structural steel shapes.
Report 39.4, Swedish institute of steel construction, Stockholm
Hardwick, T.R., Milner, R.M.: Dimensional Variations - Frame structures for schools.
The architects, Journal information library, 20 September 1967, vol. 146, technical study, AJ SfB
Ba4, pp. 745-748.
Klingberg, L.: Studies on the dimensional accuracy of a column and beam framework. National
Swedish building research summaries, R38:1970.
Klingberg, L.: Studies of dimensional accuracy in prefab building with flexible joints.
National Swedish building research summaries, R28:1971.
Fiorato, A.E.: Geometric imperfections in concrete structures. National Swedish building research.
Document D5: 1973.
JCSS PROBABILISTIC MODEL CODE
EXAMPLE APPLICATIONS
Ton Vrouwenvelder
Milan Holicky
Jana Markova
EXAMPLE 1: REINFORCED CONCRETE SLAB
Figure 1.1 Simply supported reinforced concrete slab and its cross-section.
Table 1.1 Probabilistic models for the reinforced concrete slab example (acc. to JCSS Probabilistic Model Code
2001).
Basic variable                   Symbol  Distr. type  Dimension  Mean     Std. dev.  V      λ         ρ
Concrete compressive strength    fc      lognormal    MPa        30       5          0.17
Yield strength                   fy      lognormal    MPa        560      30         0.05
Span of the slab                 L       determin.    m          5        -
Reinforcement area               As      determin.    m2         nom.     -
Slab depth                       h       normal       m          0.2      0.005      0.025
Distance of bars to slab bottom  a       gamma        m          c + φ/2  0.005      0.17
Density of concrete              γcon    normal       MN/m3      0.025    0.00075    0.03
Imposed long-term load           qlt     gamma        kN/m2      0.5      0.75       1.5    0.2/year  perm.
Imposed short-term load          qst     exponential  kN/m2      0.2      0.46       2.3    1/year    1/365
Uncertainty of resistance        θR      lognormal    -          1.1      0.077      0.07
Uncertainty of load effect       θE      lognormal    -          1        0.2        0.2
The simply supported reinforced concrete slab has the span of 5 m and cross-sectional depth
of 0.20 m. The slab carries permanent load g and imposed load q (office areas) which cause
the bending moment. The model of permanent load is determined as the weight of a concrete
floor of a uniform equivalent thickness of 0.25 m (including weight of the slab and floor
layers). The following material characteristics for concrete and reinforcing steel are
considered: concrete class C 20/25 and reinforcing steel S 500.
The reliability of the designed slab is verified using probabilistic methods. The limit state
function Z for the slab may be expressed as
where a is the axial distance of the reinforcement to the slab bottom (a = c + φ/2, c is the concrete
cover, φ is the bar diameter). The basic variables applied in the reliability analysis are listed in
Table 1.1. Statistical properties of the random variables are further described by the moment
characteristics, the mean and standard deviation. The models of the variables follow the
recommendations of JCSS [1]. Some of the basic variables are assumed to be deterministic
(As, L), while the others are considered as random variables having normal, lognormal,
exponential or gamma distributions.
The coefficients of model uncertainty θR and θE are random variables covering the imprecision
and incompleteness of the relevant theoretical models for resistance and load effects. The imposed
load q is represented by the imposed long-term load qlt and the imposed short-term load qst for
office areas [1].
The mean and standard deviation of the imposed long-term load correspond to the
distribution of the 5-year maximum. This is expressed in Table 1.1 by means of the renewal rate
λ = 1/5 per year. The interarrival-duration intensity ρ is considered as permanent. Following the
recommendations of JCSS [1] for office areas, the mean of the long-term load is m = 0.5 kN/m2,
the standard deviations are σ(v) = 0.3 kN/m2 and σ(u) = 0.6 kN/m2, and the reference area is
A0 = 20 m2. The influence area in this example is assumed to be A = 30 m2 and the factor κ for
the shape of the influence line i(x,y) is κ = 2. Following JCSS [1], Clause 2.2.2, the standard
deviation of the long-term load is given as

σ(qlt) = [σ(v)^2 + σ(u)^2 κ A0/A]^0.5 = [0.3^2 + 0.6^2 · 2 · 20/30]^0.5 = 0.75 kN/m2 (1.2)
For the short-term imposed load the renewal rate is λ = 1 (one occurrence per year) and the
interarrival-duration intensity ρ = 1/365 corresponds to an arrival rate of one per year and a
mean duration of one day. For the short-term imposed load JCSS [1] gives m = 0.2 kN/m2 and
σ(u) = 0.4 kN/m2. The standard deviation of the short-term load is assessed as

σ(qst) = [σ(u)^2 κ A0/A]^0.5 = [0.4^2 · 2 · 20/30]^0.5 = 0.46 kN/m2 (1.3)
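Both standard deviations (1.2) and (1.3) come from the same expression and can be wrapped in a small Python helper (a sketch for checking the numbers, not part of the Comrel analysis):

    from math import sqrt

    def sigma_live(s_v, s_u, kappa, A0, A):
        """Standard deviation of the JCSS live load used in (1.2) and (1.3):
        sqrt(s_v^2 + s_u^2 * kappa * A0/A); s_v = 0 for the short-term part."""
        return sqrt(s_v**2 + s_u**2 * kappa * A0 / A)

    print(f"sigma(q_lt) = {sigma_live(0.3, 0.6, 2, 20, 30):.2f} kN/m2")  # 0.75
    print(f"sigma(q_st) = {sigma_live(0.0, 0.4, 2, 20, 30):.2f} kN/m2")  # 0.46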
The model of the reinforcement cover follows Section 3.10 of JCSS [1]: µ = 0.03 m (nominal
value), σ = 0.005 m (bottom steel), coefficient of variation v = 0.17, gamma distribution.
The models of uncertainty are considered according to Section 3.9 of JCSS [1]. The model
uncertainty for the bending moment capacity θR has the mean µ = 1.1 and standard deviation
σ = 0.077; the model uncertainty for the load effect θE has the mean µ = 1 and standard deviation
σ = 0.2; both are lognormally distributed.
The software product Comrel [2] is used for the time-dependent reliability analysis of the
reinforced concrete slab. A reference period of fifty years is taken into account. Figure 1.2
shows the reliability index β1 (lower bound on Pf) increasing from 1.9 to 4.5, and β2 (upper
bound on Pf) increasing from 0.4 to 3.9, for a slab depth of 0.20 m, depending on the designed
reinforcement ratio As/[b(h − a)] considered in the interval from 0.2% to 0.5%. The
reliability index β of the slab is considered to lie between the lower and upper bounds β1 and β2.
The target value βt = 3.8 for a fifty-year reference period and the ultimate limit states is
recommended for common cases in the JCSS Probabilistic Model Code [1], in ISO 2394 General
principles on reliability for structures and in EN 1990 Basis of structural design.
[Figure 1.2: reliability indices β1 and β2 versus reinforcement ratio As/[b(h − a)] in %, with the target level βt = 3.8 indicated]
Fig. 1.2 The reliability of the reinforced concrete slab versus the reinforcement ratio.
The selected sensitivity factors α shown in Table 1.2 express the influence of the basic
variables on the resulting reliability of the reinforced concrete slab. The active imposed load is
considered.
Table 1.2 Sensitivity factors α of selected basic variables for active imposed load.
References
[1] JCSS Probabilistic Model Code. 2001.
[2] Comrel, RCP Consulting software, version 7.10, Munich, 1999.
EXAMPLE 2: STEEL BEAM
Table 2.1 Probabilistic models for the steel beam example (acc. to JCSS Probabilistic Model Code 2001).
Basic variable                     Symbol  Distr. type  Dimension  Mean    Std. dev.  V     λ         ρ
Yield strength                     fy      lognormal    MPa        280     19.6       0.07
Span of the beam                   L       determin.    m          5       -
Section modulus                    W       determin.    m3         param.  -
Concrete density                   γcon    normal       MN/m3      0.024   0.00096    0.04
Slab depth                         h       normal       m          0.25    0.01       0.04
Distance of beams                  d       determin.    m          3       -
Imposed long-term load, categ. D   qlt     gamma        kN/m2      0.9     2.15             0.2/year  perm.
Imposed short-term load, categ. D  qst     exponential  kN/m2      0.4     1.42             1/year    14/365
Uncertainty of resistance          θR      lognormal    -          1       0.05       0.05
Uncertainty of load effect         θE      lognormal    -          1       0.2        0.2
The simply supported beam of rolled steel I-section is a load-bearing floor element in
shopping areas of category D [1]; its span is 5 m. The beam carries a permanent load g due
to its self-weight, the weight of the concrete slab and the floor layers. The distance between
beams is d = 3 m. The model of the permanent load is considered here as the weight of a
reinforced concrete floor of a uniform equivalent thickness of 0.25 m, including the self-weight
of the steel beam. The beam is made of steel grade S 235 (nominal yield strength fyk = 235 MPa).
The reliability of the designed steel beam is verified using probabilistic methods. The limit
state function Z for the beam is expressed as
The basic variables applied in the reliability analysis are listed in Table 2.1. The models of
the variables follow the recommendations of JCSS [2]. Some of the basic variables are assumed
to be deterministic (d, L, W), while the others are considered as random variables having
normal, lognormal, exponential or gamma distributions.
The coefficients of model uncertainty θR and θE are random variables covering the imprecision
and incompleteness of the relevant theoretical models for resistance and load effects. The
imposed load q for shopping areas is represented by the imposed long-term load qlt and the imposed
short-term load qst following JCSS, Clause 2.2.4 [2], for live load models.
The mean and standard deviation of the imposed long-term load for shopping areas
correspond to the distribution of the maximum over a period in the range from 1 to 5 years (here
5 years is assumed). This is expressed in Table 2.1 by means of the renewal rate λ = 0.2 per year.
The interarrival-duration intensity ρ is considered as permanent. Following the recommendations
for shopping areas indicated in JCSS [2], the mean of the long-term load is m = 0.9 kN/m2, the
standard deviations are σ(v) = 0.6 kN/m2 and σ(u) = 1.6 kN/m2, and the reference area is
A0 = 100 m2. The influence area in this example is assumed to be A = 120 m2 and the factor κ
for the shape of the influence line i(x,y) is κ = 2. Following JCSS, Clause 2.2.2 [2], the standard
deviation of the long-term load is given as

σ(qlt) = [σ(v)^2 + σ(u)^2 κ A0/A]^0.5 = [0.6^2 + 1.6^2 · 2 · 100/120]^0.5 = 2.15 kN/m2 (2.2)
For the short-term imposed load, 14 occurrences per year are considered (the range may be
from 1 to 14 occurrences per year depending on the shopping process). Thus, the interarrival-
duration intensity is ρ = 14/365. For the short-term load m = 0.4 kN/m2 and σ(u) = 1.1 kN/m2.
The standard deviation of the short-term load is assessed as

σ(qst) = [σ(u)^2 κ A0/A]^0.5 = [1.1^2 · 2 · 100/120]^0.5 = 1.42 kN/m2 (2.3)
The models of uncertainty are considered according to Table 3.9.1 of JCSS [2]. The model
uncertainty for the bending moment capacity has, for steel, the mean µ = 1 and standard
deviation σ = 0.05; the model uncertainty for the load effect has the mean µ = 1 and standard
deviation σ = 0.1; both are lognormally distributed.
The software product Comrel [3] is used for the time-dependent reliability analysis of the steel
beam. A reference period of fifty years is taken into account. The reliability index β1 (lower
bound on Pf) increases from 3.1 to 4.9, and β2 (upper bound on Pf) increases from 2.3 to 4.3, with
increasing section modulus W of the steel beam (see Figure 2.2). The reliability index β of
the beam is assumed to lie between the lower and upper bounds β1 and β2.
The horizontal dashed line in Figure 2.2 indicates the recommended reliability index βt = 3.8 for the
ultimate limit states following the recommendations of JCSS [2].
[Figure 2.2: reliability indices β1 and β2 versus section modulus W in m3, with the target level βt = 3.8 indicated]
Fig. 2.2 The reliability index β of a steel beam versus section modulus W.
The selected sensitivity factors α shown in Table 2.2 express the influence of the basic
variables on the resulting reliability of the steel beam. The active imposed load is considered.
References
[1] prEN 1991 Actions on Structures, Part 1.1 Densities, Self-weight and Imposed Loads on
Buildings. European Committee for Standardisation, CEN/TC 250 Final Draft, July 2001.
[2] JCSS Probabilistic Model Code, 2001.
[3] Comrel, RCP Consulting software, version 7.10, Munich, 1999.
EXAMPLE 3: TWO STOREY STEEL FRAME
[Figure 3.1: two storey steel frame with storey height h, self weight G, live load Q and wind load W]
Consider the simple two storey steel frame of Figure 3.1. The floors are supposed to be of
concrete. Let the limit state function for a particular member failure be given by:
Z = R – 0.16 mE h (G + Q + W) (3.1)
where R is the resisting bending moment, G the self weight, Q the live load and W the wind
load. The factor 0.16 is the result of a structural analysis. The details of that analysis are not
relevant for this example. Here mE is the model factor and h is the storey height. The resistance R
and the forces G, Q and W are respectively given by:
R = mR Zp fy (3.2a)
G = a b t ρc g (3.2b)
Q = a b (qlong + qshort) (3.2c)
W = 2 h b ca cg cr (0.5 mq ρa U2) (3.2d)
The designation of the variables as well as their deterministic values or probabilistic models as
derived from the JCSS Probabilistic Model Code are given in Table 3.1.
Table 3.1 Probabilistic models for the steel frame example (according to the JCSS Probabilistic Model Code 2001)

X       Designation                      Distribution   Mean        V      λ
a       in plane column distance         deterministic  6 m         -
b       frame to frame distance          deterministic  5 m         -
h       storey height                    deterministic  3 m         -
t       thickness concrete floor slab    normal         0.20 m      0.03
Zp      plastic section modulus          normal         0.0007 m3   0.02
fy      steel yield stress               lognormal      300 MPa     0.07
g       acceleration of gravity          deterministic  10 m/s2     -
ρc      mass density concrete            normal         2.4 ton/m3  0.04
qlong   long term live load (sustained)  gamma          0.50 kN/m2  1.15   0.2/year
qshort  short term live load (1 day)     exponential    0.20 kN/m2  1.60   1.0/year
ρa      mass density air                 deterministic  1.25 kg/m3  -
ca      aerodynamic shape factor         normal         1.10        0.12
cg      gust factor                      normal         3.05        0.12
cr      roughness factor                 normal         0.58        0.15
u       ref wind speed (8 hours)         Weibull        5 m/s       0.60   3.0/day
U       ref wind speed (one year)        Gumbel         30 m/s      0.10   1.0/year
mq      model factor wind pressure       normal         0.80        0.20
mR      model factor resistance          normal         1.00        0.05
mE      model factor load effect         normal         1.00        0.10
The information in Table 3.1 can be derived from the JCSS Probabilistic Model Code. For some
of the variables clarification is given below.

The gust factor is given by cg = 1 + 2 gp Iu(z), where z is the building height, gp is the peak
factor and Iu(z) = 1/ln(z/z0) is the turbulence intensity. The building height in this example is
equal to two times the storey height, so z = 2h. The peak factor gp for a storm period of 8 hours
is about 4.2. Finally, the roughness parameter z0 is assumed to be 0.10 m. The coefficient of
variation depends primarily on the variability of gp.
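The table value for the gust factor can be reproduced from the relation above with a one-line Python check:

    from math import log

    def c_g(z, z0=0.10, g_p=4.2):
        # c_g = 1 + 2 g_p I_u(z), with I_u(z) = 1 / ln(z / z0)
        return 1.0 + 2.0 * g_p / log(z / z0)

    print(f"c_g(z = 2h = 6 m) = {c_g(6.0):.2f}")   # about 3.05, as in Table 3.1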
The roughness factor cr is given by:

cr(z) = 0.8 [(ln z − ln z0) / (ln zref − ln z0ref)]^2 (z0/z0ref)^0.07 (3.4)
The reference values zref and z0ref have the standard values 10 m and 0.03 m respectively. The
factor 0.8 is an average correction factor. For the coefficient of variation of the roughness
model the JCSS code recommends 0.15.
For the steel resistance the Model Code gives:
Starting from a nominal yield stress of 290 MPa we arrive at µ(fy) = 300 MPa. Normally for steel
no job specific tests are performed, so the “gross supply estimates” for µ(fy) and V are used.
For the long-term live load the Model Code gives for offices m = 0.50 kN/m2, σ(v) = 0.30 kN/m2,
σ(u) = 0.60 kN/m2, and A0 = 20 m2. The influence area in this example is A = 2ab = 60 m2 and let
κ = 2. In that case we have:

σ(qlong) = [σ(v)^2 + σ(u)^2 κ A0/A]^0.5 = [0.3^2 + 0.6^2 · 2 · 20/60]^0.5 = 0.57 kN/m2

According to the code the average renewal time is 5 years. This is expressed in the last column of
Table 3.1 by means of the renewal rate λ = 1/5 per year.

For the short-term load the Model Code gives m = 0.2 kN/m2 and σ(u) = 0.4 kN/m2. In that case
we find:

σ(qshort) = [σ(u)^2 κ A0/A]^0.5 = [0.4^2 · 2 · 20/60]^0.5 = 0.33 kN/m2
According to the code the average renewal time for the short-term load is one year and each time
the duration of the short-term load is 1 day. For the wind speed u the Model Code recommends a
2-parameter Weibull distribution to describe the daily fluctuations. A certain wind condition is
supposed to last for about 8 hours. For yearly extremes the Gumbel distribution is recommended.
For both distributions the parameters depend on the local wind climate.
Given these data, the failure probability for an assumed design lifetime of 50 years can be
determined. The loading scheme, presented as FBC models, is given in Figure 3.2. We will follow
a simplified procedure comprising two load cases:
Load case 1: Self weight, long-term live load and wind

The short-term live load is neglected in this part of the analysis. First the probability is
calculated that the structure fails in a period of 5 years, this being the average renewal period of
the long-term live load. This means that we can directly use the data from Table 3.1 for the
live load, the self weight and the resistance. The wind speed distribution in Table 3.1 is
defined for the one-year extreme, so an adjustment to 5 years has to be made. According to the
theory of extreme values the mean value for the 5-year extreme should be taken as µ[5 yr]
= 30 m/s + 0.78 σ ln(5) = 34 m/s; the standard deviation σ of the wind does not change, and there
is no need to adjust the distributions of the permanent variables. A standard time-independent
FORM analysis for this case leads to a reliability index β = 4.1 and a failure probability
PF = 2.3·10^-5. For the assumed design life of 50 years, using a simple upper bound
approximation for convenience, we find PF = 10 · 2.3·10^-5 = 2.3·10^-4. The FORM influence
coefficients α can be found in Table 3.2.
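The two numerical steps of load case 1 (the 5-year shift of the Gumbel mean and the upper-bound extrapolation to 50 years) are reproduced in this short Python sketch; the FORM analysis itself is not repeated here:

    from math import log

    def gumbel_T_year_mean(mu1, sigma, T):
        # mu[T] = mu[1] + 0.78 * sigma * ln(T), the shift used in load case 1
        return mu1 + 0.78 * sigma * log(T)

    mu5 = gumbel_T_year_mean(30.0, 0.10 * 30.0, 5)   # about 34 m/s
    pf_50 = 10 * 2.3e-5                              # simple upper bound over 10 periods
    print(f"mu[5 yr] = {mu5:.1f} m/s, upper-bound PF(50 yr) = {pf_50:.1e}")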
Load case 2: Self weight, long- and short-term live load and wind

Next we look at a single day on which the short-term load is active. Now the short-term floor
load is present in the limit state equation with the distribution given in Table 3.1. For the
long-term load we also take the distribution from the table. The wind speed model assumes an
FBC model with ∆t = 8 hrs, so we have to consider the maximum of three Weibull distributed
variables with parameters as given in the table. Using standard FORM again we arrive at β =
4.4 and PF = 5.0·10^-6. Recall that this result holds for a single day with the short-term floor
load present. According to the Model Code there is one such day every year, so during the
design period of 50 years we have 50 days of short-term live load activity. An upper bound
approximation, neglecting correlation due to the resistance parameters and the permanent and
sustained loads, gives PF = 50 · 5.0·10^-6 = 2.5·10^-4.
Of course, this analysis could have been performed in a more accurate way. In principle, the
Model Code recommends the outcrossing approach to deal with time-fluctuating phenomena.
Figure 3.2: Self weight, sustained floor load and wind load (yearly maximum) as a function of time, based on FBC models.
Table 3.2 Probabilistic influence coefficients (Alfa values) from the FORM analysis for the
case with the short term load absent
X Designation α
EXAMPLE 4: REINFORCED CONCRETE COLUMN IN MULTI STOREY FRAME
[Figure 4.1: multi-storey frame; bottom column of height L with 9 storeys of height hs above it, rectangular cross-section b × h with reinforcement 0.5 As at each face, column spacing 3 × a1]
The multi-storey concrete structure considered in this study is schematically shown in
Figure 4.1. Each planar frame in the transversal direction of the structure may be considered
as an unbraced sway frame. These transversal sway frames consist of four columns at a constant
distance a1; in the longitudinal direction of the structure they are located at a constant
distance a2. The edge bottom column, having the height L and a rectangular cross-section with
h/b = 2, is considered. The column is considered as fully clamped at the top and bottom ends.

The axial column force N is considered as a simple sum of the axial forces due to all the
considered actions:

N = NW + Nimp + Nwind (4.1)

where NW is the axial force due to self weight, Nimp is the axial force due to the long- and/or
short-term imposed load and Nwind is the axial force due to wind action (positive values correspond
Table 4.1 Probabilistic models for the concrete column example (acc. to JCSS Model Code 2001)

X       Designation                       Distribution    Mean         V            λ
a1      column distance in plane          deterministic   5 m          -            -
a2      perpendicular column distance     deterministic   5 m          -            -
t       plate thickness (conventional)    deterministic   0.30 m       0.03         -
hs      storey height                     deterministic   3 m          -            -
L       height of bottom column           deterministic   6 m          -            -
n       number of floors                  deterministic   10           -            -
b       width of cross section            normal          350 mm       0.007        -
h       height of cross section           normal          700 mm       0.014        -
As      reinforcement area                deterministic   0.01 bh      -            -
d1, d2  distance of bars from edge        normal          75 mm        0.07         -
ζ       initial overall sway              normal          0            σ = 15 mrad  -
g       acceleration of gravity           deterministic   10 m/s²      -            -
ρc      mass density concrete             normal          2.4 ton/m³   0.04         -
qlong   long term live load (sustained)   gamma           0.50 kN/m²   0.75 (1)     0.2/year
qshort  short term live load (1 day)      exponential     0.20 kN/m²   1.60         1.0/year
ρa      mass density air                  deterministic   1.25 kg/m³   -            -
ca      aerodynamic shape factor          normal          1.10         0.12         -
cg      gust factor                       normal          4.06         0.12         -
cr      roughness factor                  normal          0.45         0.15         -
u       ref wind speed (8 hours)          weibull         4 m/s        0.60         3.0/day
U       ref wind speed (one year)         gumbel          24 m/s       0.10         1.0/year
fc      concrete strength (C35)           logstudent (2)  30 MPa       0.18         -
α       long term reduction factor        normal          0.85         0.10         -
fy      yield strength                    normal          560 MPa      0.06         -
Es      modulus of elasticity of steel    deterministic   200 GPa      -            -
mq      model factor wind pressure        normal          0.80         0.20         -
mR      model uncertainty of column       normal          1.10         0.11         -
mE      model factor load effect          normal          1.00         0.10         -

(1) Including the effect of correlation over the various floors (ρ = 0.5)
(2) More precisely: the logarithm of fc in MPa has a Student distribution with m = 3.85, s = 0.12, n = 3 and ν = 6
The individual contributions are given by:
NW = (n + 1) a1 a2 t ρc g / 2    (4.2)

Nimp = n a1 a2 qimp / 2    (4.3)

Nwind = (1/2)(L + n hs)² a2 ca cg cr mq qref / (3 a1)    (4.4)
For the designation of all the variables as well as for their values and probabilistic models,
reference is made to Table 4.1. In this analysis the weight of beams and columns is
incorporated in the plate thickness for convenience. In (4.4) qref stands for 0.5 ρa U². The
bending end moment M is given by:
M = M0 + N ( ea + e2) (4.5)
where M0 is the first order moment. Assume that the total horizontal wind force

W = ca cg cr mq qref (L + n hs) a2

is taken equally by the 4 columns, so that each column carries a shear force W/4 and, being
clamped at both ends, an end moment of 0.5 (W/4) L at top and bottom. In that case the wind
part of the first order moment is:

M0 = WL/8    (4.6)
The first order moments due to self weight are zero if we sum them over the four columns. As
a consequence there is no need to take them into account if plastic redistribution is assumed. A
similar argument holds for the contribution of the imposed load. Therefore, (4.6) represents
the total first order bending moment M0.
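Purely as an illustration of equations (4.1)-(4.4) and (4.6), a mean-value evaluation in Python might look as follows; the means are taken from Table 4.1, while the reliability analysis itself of course treats these quantities as random variables:

# Mean-value evaluation of the axial force (4.1)-(4.4) and the first order
# wind moment (4.6); all means from Table 4.1, SI units throughout.
n, a1, a2, t, hs, L = 10, 5.0, 5.0, 0.30, 3.0, 6.0     # -, m, m, m, m, m
rho_c, g = 2400.0, 10.0                                # kg/m3, m/s2
q_imp = 0.50e3                                         # N/m2, sustained live load
ca, cg, cr, mq = 1.10, 4.06, 0.45, 0.80                # wind factors
rho_a, U = 1.25, 24.0                                  # kg/m3, m/s (one-year mean)

q_ref = 0.5 * rho_a * U**2                             # reference wind pressure
N_w    = (n + 1) * a1 * a2 * t * rho_c * g / 2                              # (4.2)
N_imp  = n * a1 * a2 * q_imp / 2                                            # (4.3)
N_wind = 0.5 * (L + n * hs)**2 * a2 * ca * cg * cr * mq * q_ref / (3 * a1)  # (4.4)
N = N_w + N_imp + N_wind                                                    # (4.1)

W  = ca * cg * cr * mq * q_ref * (L + n * hs) * a2     # total horizontal wind force
M0 = W * L / 8                                         # (4.6)
print(f"N = {N/1e3:.0f} kN, M0 = {M0/1e3:.1f} kNm")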
The additional eccentricity ea and the second order eccentricity e2 (according to the Eurocode
2, Design of Concrete Structures) in (4.5) are given by:
ea = ζ L/2 (4.7)
e2 = 0.2 L² K2 fy / (0.9 Es (h - d1))    (4.8)
K2 = (Nu - N) / ( Nu - Nbal) ≤ 1 (4.9)
In K2 the symbol N stands for the normal force according to (4.1); Nu and Nbal are given by
Nu = α b h fc + As fy and Nbal = α b h fc / 2, respectively. The limit state function Z for the
lower right-hand edge column may be expressed as the difference between the resistance
moment and the load induced end moment about the centroid:
Z = mR MR - mE M    (4.10)
The two coefficients mR and mE are the model uncertainties. The bending moment M is
according to (4.5). Using a calibrated approximation for the resistance model according to
Eurocode 2, we can elaborate MR to:
MR = As fy (h - 2d1)/2 + h N (1 - N/(α b h fc))/2    (4.11)

MR = K2 [As fy (h - 2d1)/2 + α b h² fc / 8]    (4.12)

for N ≤ α b h fc / 2 and for N > α b h fc / 2, respectively. All basic variables applied in the model
are listed in Table 4.1.
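The resistance side, equations (4.5) and (4.7)-(4.12), can be sketched in the same spirit; this is again a mean-value illustration only, since mR, mE and ζ enter the actual analysis as random variables:

# Column resistance model (4.5), (4.7)-(4.12), evaluated at the Table 4.1
# mean values for illustration; SI units.
b, h, d1 = 0.350, 0.700, 0.075          # m
fc, fy = 30e6, 560e6                    # Pa
alpha, Es = 0.85, 200e9                 # -, Pa
L, zeta = 6.0, 0.0                      # m, rad (mean initial sway is zero)
A_s = 0.01 * b * h                      # reinforcement area per Table 4.1

N_u   = alpha * b * h * fc + A_s * fy   # squash load
N_bal = alpha * b * h * fc / 2          # balance point

def K2(N):
    return min((N_u - N) / (N_u - N_bal), 1.0)                        # (4.9)

def M_R(N):
    if N <= alpha * b * h * fc / 2:                                   # (4.11)
        return A_s * fy * (h - 2 * d1) / 2 + h * N * (1 - N / (alpha * b * h * fc)) / 2
    return K2(N) * (A_s * fy * (h - 2 * d1) / 2 + alpha * b * h**2 * fc / 8)  # (4.12)

def Z(N, M0, mR=1.10, mE=1.00):
    e_a = zeta * L / 2                                                # (4.7)
    e_2 = 0.2 * L**2 * K2(N) * fy / (0.9 * Es * (h - d1))             # (4.8)
    M = M0 + N * (e_a + e_2)                                          # (4.5)
    return mR * M_R(N) - mE * M                                       # (4.10)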
Given these data, the failure probability for a design lifetime of 50 years can be determined as
follows. First the probability that the structure fails in a period of 5 years is calculated,
assuming the short term live load to be absent all the time. For a more detailed explanation,
see Example 1. The wind speed distribution in Table 4.1 is defined for the one year extreme,
so an adjustment has to be made. According to the theory of extreme values the mean value
should be taken as µ = 30 m/s + 0.78 σ ln(5) = 34 m/s. A FORM analysis for this case leads to
a reliability index β = 3.8 and a failure probability PF = 6.9·10⁻⁵. The alfa values are presented
in Table 4.2. For a period of 50 years, using a simple upper bound approximation, we find PF =
6.9·10⁻⁴. In this case the short term live load is of little significance, which means that the final
result is PF = 1.4·10⁻⁵ per year, corresponding to β = 3.6. This is an acceptable result according to
the target values in Part 1, Basis of Design.
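The bookkeeping behind these last numbers can be checked with the first-order relation PF = Φ(-β); small differences with the quoted figures are due to rounding of β:

from statistics import NormalDist

def pf_from_beta(beta):
    # First-order relation between reliability index and failure probability.
    return NormalDist().cdf(-beta)

pf_5yr    = pf_from_beta(3.8)    # ~7.2e-5, quoted as 6.9e-5
pf_50yr   = 10 * pf_5yr          # simple upper bound over ten 5-year periods
pf_annual = pf_50yr / 50         # ~1.4e-5 per year, as quoted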
Table 4.2 Probabilistic influence coefficients (Alfa values) from the FORM analysis for the
case with the short term load absent
X Designation α