Event counts are a powerful diagnostic for earthquake analysis. We report on a bimodal distribution of earthquake event counts in the U.S. since 2013. The new peak is at about magnitude M1, with a distribution very similar to that of induced earthquakes in Groningen, The Netherlands, whose event count has grown exponentially since 2001. The Groningen counts show a doubling time of 6.24 years alongside a relatively constant rate of land subsidence. We model the internal shear stresses as a function of dimensionless curvature and illustrate the resulting exponential growth in a tabletop crack-formation experiment. Our study proposes a new method of parameter-free statistical forecasting of induced events that circumvents the need for a magnitude cut-off in the Gutenberg-Richter relationship.
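As an illustration of the doubling-time estimate, here is a minimal sketch in Python that fits a log-linear trend to annual event counts and converts the growth rate to a doubling time via ln(2)/b. The counts below are made-up illustrative values, not the actual catalogue.

```python
import numpy as np

# Hypothetical annual event counts (illustrative values only, not the
# Groningen or U.S. data); one count per calendar year starting in 2001.
years = np.arange(2001, 2021)
counts = np.array([ 5,  6,  7,  8, 10, 11, 13, 15, 17, 20,
                   22, 26, 30, 34, 40, 46, 53, 61, 70, 81])

# Fit log(count) = a + b * (year - 2001); for exponential growth the
# doubling time is ln(2) / b.
t = years - years[0]
b, a = np.polyfit(t, np.log(counts), 1)

doubling_time = np.log(2) / b
print(f"growth rate b = {b:.3f} per year")
print(f"doubling time = {doubling_time:.2f} years")
```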
Geophysical Journal International, 2000
We present long-term and short-term forecasts for magnitude 5.8 and larger earthquakes. We discuss a method for optimizing both procedures and testing their forecasting effectiveness using the likelihood function. Our forecasts are expressed as the rate density (that is, the probability per unit area and time) anywhere on the Earth. Our forecasts are for scientific testing only; they are not to be construed as earthquake predictions or warnings, and they carry no official endorsement. For our long-term forecast we assume that the rate density is proportional to a smoothed version of past seismicity (using the Harvard CMT catalogue). This is in some ways antithetical to the seismic gap model, which assumes that recent earthquakes deter future ones. The estimated rate density depends linearly on the magnitude of past earthquakes and approximately on a negative power of the epicentral distance out to a few hundred kilometres. We assume no explicit time dependence, although the estimated rate density will vary slightly from day to day as earthquakes enter the catalogue. The forecast applies to the ensemble of earthquakes during the test period. It is not meant to predict any single earthquake, and no single earthquake or lack of one is adequate to evaluate such a hypothesis. We assume that 1 per cent of all earthquakes are surprises, assumed uniformly likely to occur in those areas with no earthquakes since 1977. We have made specific forecasts for the calendar year 1999 for the Northwest Pacific and Southwest Pacific regions, and we plan to expand the forecast to the whole Earth. We test the forecast against the earthquake catalogue using a likelihood test and present the results. Our short-term forecast, updated daily, makes explicit use of statistical models describing earthquake clustering. Like the long-term forecast, the short-term version is expressed as a rate density in location, magnitude and time. However, the short-term forecasts will change significantly from day to day in response to recent earthquakes. The forecast applies to main shocks, aftershocks, aftershocks of aftershocks, and main shocks preceded by foreshocks. However, there is no need to label each event, and the method is completely automatic. According to the model, nearly 10 per cent of moderately sized earthquakes will be followed by larger ones within a few weeks.
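The smoothed-seismicity idea can be sketched as follows, assuming a simple power-law distance kernel and a linear magnitude weight; both functional forms, the kernel constants, and the toy catalogue are placeholders for illustration, not the authors' calibrated model. The forecast is scored with a per-cell Poisson log-likelihood.

```python
import numpy as np

# Hypothetical past catalogue: epicentres (deg) and magnitudes.
past_lon = np.array([140.2, 141.0, 139.5, 142.3])
past_lat = np.array([ 35.1,  36.4,  34.8,  37.0])
past_mag = np.array([  6.1,   5.9,   6.8,   6.3])

def rate_density(lon, lat, r0_km=10.0, p=1.5):
    """Smoothed-seismicity rate density at a point: each past event
    contributes a weight that grows linearly with magnitude and decays
    as a negative power of epicentral distance (kernel form assumed)."""
    # Rough flat-Earth distances in km (adequate for a sketch).
    dx = (lon - past_lon) * 111.0 * np.cos(np.radians(lat))
    dy = (lat - past_lat) * 111.0
    r = np.sqrt(dx**2 + dy**2)
    return np.sum(past_mag / (r + r0_km) ** p)

def log_likelihood(grid, observed_counts, exposure=1.0):
    """Poisson log-likelihood of observed per-cell counts given the
    forecast rates (up to an additive constant)."""
    rates = np.array([rate_density(lon, lat) for lon, lat in grid]) * exposure
    k = np.asarray(observed_counts)
    return np.sum(k * np.log(rates) - rates)

# Example: score two grid cells with 1 and 0 observed test-period events.
grid = [(140.0, 35.0), (143.0, 38.0)]
print(log_likelihood(grid, [1, 0]))
```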
Journal of Geophysical Research: Solid Earth
Reports on Progress in Physics
Charles Richter's observation that 'only fools and charlatans predict earthquakes' reflects the fact that, despite more than 100 years of effort, seismologists remain unable to do so with reliable and accurate results. Meaningful prediction involves specifying the location, time, and size of an earthquake before it occurs to greater precision than expected purely by chance from the known statistics of earthquakes in an area. In this context, 'forecasting' implies a prediction with a specification of a probability of the time, location, and magnitude. Two general approaches have been used. In one, the rate of motion accumulating across faults and the amount of slip in past earthquakes are used to infer where and when future earthquakes will occur and the shaking that would be expected. Because the intervals between earthquakes are highly variable, these long-term forecasts are accurate to no better than a hundred years. They are thus valuable for earthquake hazard mitigation, given the long lives of structures, but have clear limitations. The second approach is to identify potentially observable changes in the Earth that precede earthquakes. Various precursors have been suggested, and may have been real in certain cases, but none have yet proved to be a general feature preceding all earthquakes or to stand out convincingly from the normal variability of the Earth's behavior. However, new types of data, models, and computational power may provide avenues for progress using machine learning that were not previously available. At present, it is unclear whether deterministic earthquake prediction is possible. The frustrations of this search have led to the observation that (echoing Yogi Berra) 'it is difficult to predict earthquakes, especially before they happen'.
Geophysical Research Letters, 2013
Volcanic eruptions are commonly preceded by increased rates of earthquakes.
Geosciences
Nearly 20 years ago, the observation that major earthquakes are generally preceded by an increase in the seismicity rate on a timescale from months to decades was embedded in the “Every Earthquake a Precursor According to Scale” (EEPAS) model. EEPAS has since been successfully applied to regional real-world and synthetic earthquake catalogues to forecast future earthquake occurrence rates with time horizons up to a few decades. When combined with aftershock models, its forecasting performance is improved for short time horizons. As a result, EEPAS has been included as the medium-term component in public earthquake forecasts in New Zealand. EEPAS has been modified to advance its forecasting performance despite data limitations. One modification is to compensate for missing precursory earthquakes. Precursory earthquakes can be missing because of the time-lag between the end of a catalogue and the time at which a forecast applies or the limited lead time from the start of the catalogue...
Geophysical Journal International, 2007
We present a strategy for estimating the recurrence times between large earthquakes and the associated seismic hazard on a given fault section. The goal of the analysis is to address two fundamental problems: (1) the lack of sufficient direct earthquake data, and (2) the existence of 'subgrid' processes that cannot be accounted for in any model. We deal with the first problem by using long simulations (some 10 000 yr) of a physically motivated 'coarse-grain' model that reproduces the main statistical properties of seismicity on individual faults. We address the second problem by adding stochasticity to the macroscopic model parameters. A small number N of observational earthquake times (2 ≤ N ≤ 10) can be used to determine the values of model parameters that are most representative for the fault. As an application of the method, we consider a model set-up that produces the characteristic earthquake distribution, and where the stress drops are associated with some uncertainty. Using several model realizations with different values of stress drops, we generate a set of corresponding synthetic earthquake catalogues. The recurrence time distributions in the simulated catalogues are fitted approximately by a gamma distribution. A superposition of appropriately scaled gamma distributions is then used to construct a distribution of recurrence intervals that incorporates the assumed uncertainty of the stress drops. Combining such synthetic data with observed recurrence times between the observational ∼M6 earthquakes on the Parkfield segment of the San Andreas fault allows us to constrain the distribution of recurrence intervals and to estimate the average stress drop of the events. Based on this procedure, we calculate for the Parkfield region the expected recurrence time distribution, the hazard function, and the mean waiting time to the next ∼M6 earthquake. Using five observational recurrence times from 1857 to 1966, the recurrence time distribution has a maximum at 22.2 yr and decays rapidly for longer intervals. The probability for the post-1966 large event to occur on or before 2004 September 28 is 94 per cent. The average stress drop of ∼M6 Parkfield earthquakes is in the range τ = (3.04 ± 0.27) MPa.
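A minimal sketch of the gamma-fit step, using scipy and made-up recurrence intervals rather than the Parkfield record: it fits the distribution, evaluates the hazard function h(t) = f(t)/(1 − F(t)), and computes a conditional probability of the next event by a target date. All numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical recurrence intervals (years) between ~M6 events on a fault
# segment (illustrative values, not the observed Parkfield record).
intervals = np.array([24.1, 20.3, 32.0, 21.5, 12.9])

# Fit a gamma distribution to the intervals (location fixed at zero).
shape, loc, scale = stats.gamma.fit(intervals, floc=0.0)
dist = stats.gamma(shape, loc=loc, scale=scale)

def hazard(t):
    """Hazard function h(t) = f(t) / (1 - F(t)): the instantaneous
    conditional rate of the next event given no event up to time t."""
    return dist.pdf(t) / dist.sf(t)

# Probability that the next event occurs within 38 years of the last one,
# given that 30 years have already elapsed without an event.
t_elapsed, t_target = 30.0, 38.0
p = (dist.cdf(t_target) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)
print(f"gamma shape={shape:.2f}, scale={scale:.2f}")
print(f"hazard at t=25 yr: {hazard(25.0):.3f} per yr")
print(f"P(event by {t_target} yr | none by {t_elapsed} yr) = {p:.2f}")
```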
Nature, 1999
As anyone who has ever spent any time in California can attest, much public attention is being focused on the great earthquake-prediction debate. Unfortunately, this attention focuses on deterministic predictions on the day-to-week timescale. But as some of the participants in this debate have pointed out [1, 2], current efforts to identify reliable short-term precursors to large earthquakes have been largely unsuccessful, suggesting that earthquakes are such a complicated process that reliable (and observable) precursors might not exist. That is not to say that earthquakes do not have some 'preparatory phase', but rather that this phase might not be consistently observable by geophysicists on the surface. But does this mean that all efforts to determine the size, timing and locations of future earthquakes are fruitless? Or are we being misled by human scales of time and distance?
Earthquakes and Structures, 2016
The Groningen gas field shows exponential growth in earthquake event counts around magnitude M1, with a doubling time of 6-9 years since 2001. This behavior is identified with dimensionless curvature in land subsidence, which has been evolving at a constant rate over the last few decades, essentially uncorrelated with gas production. We demonstrate our mechanism in a tabletop crack-formation experiment. The observed skewed distribution of event magnitudes is matched by that of maxima of event clusters drawn from a normal distribution. The model predicts about one event < M5 per day in 2025, pointing to increasing stress on human living conditions.
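The cluster-maxima argument can be illustrated with a short Monte Carlo sketch (the mean, spread, and cluster size below are hypothetical, not fitted values): drawing clusters from a symmetric normal parent distribution and keeping each cluster's maximum produces a right-skewed distribution of reported magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parent distribution of magnitudes within a cluster
# (normal, with assumed mean and spread) and an assumed cluster size.
mu, sigma, cluster_size, n_clusters = 1.0, 0.5, 20, 10_000

# The magnitude reported per cluster is the maximum of its members; the
# distribution of such maxima is skewed to the right even though the
# parent distribution is symmetric.
maxima = rng.normal(mu, sigma, size=(n_clusters, cluster_size)).max(axis=1)

skew = np.mean(((maxima - maxima.mean()) / maxima.std()) ** 3)
print(f"mean of maxima: {maxima.mean():.2f}")
print(f"skewness of maxima: {skew:.2f}")  # positive: right-skewed
```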
2017
This study describes three earthquake occurrence models applied to the whole Italian territory to assess the occurrence probabilities of future (M ≥ 5.0) earthquakes: two short-term (24-hour) models and one long-term (5- and 10-year) model. The first short-term model is a purely stochastic epidemic-type earthquake sequence (ETES) model. The second short-term model is an epidemic rate-state (ERS) forecast, physically constrained by applying the Dieterich rate-state constitutive law to earthquake clustering. The third forecast is based on a long-term stress transfer (LTST) model that considers the perturbations of earthquake probability for interacting faults by static Coulomb stress changes. These models have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) for forecast testing for Italy (ETH Zurich), and they were locked down to test their validity prospectively on real data starting from August 1, 2009.
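A minimal sketch of the rate calculation underlying an epidemic-type (ETES/ETAS-style) model: a constant background rate plus Omori-Utsu-decaying, magnitude-weighted contributions from earlier events. The parameter values are placeholders for illustration, not the calibrated Italian model.

```python
import numpy as np

def etes_rate(t, event_times, event_mags, mu=0.1, K=0.02,
              alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity of an epidemic-type model at time t (days):
    background rate mu plus Omori-Utsu-decaying contributions from every
    earlier event, scaled exponentially by its magnitude. All parameter
    values here are placeholders, not fitted constants."""
    rate = mu
    for ti, mi in zip(event_times, event_mags):
        if ti < t:
            rate += K * np.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return rate

# Example: rate one day after a M5.5 event that followed a M4.2 event.
times = [0.0, 3.5]   # days
mags  = [4.2, 5.5]
print(f"rate at t=4.5 d: {etes_rate(4.5, times, mags):.3f} events/day")
```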