2010, Pure and Applied Geophysics
We present a simple method for long- and short-term earthquake forecasting (estimating earthquake rate per unit area, time, and magnitude). For illustration we apply the method to the Pacific plate boundary region and the Mediterranean area surrounding Italy and Greece. Our ultimate goal is to develop forecasting and testing methods to validate or falsify common assumptions regarding earthquake potential. Our immediate purpose is to extend the forecasts we made starting in 1999 for the northwest and southwest Pacific to include somewhat smaller earthquakes and then adapt the methods to apply in other areas. The previous forecasts used the CMT earthquake catalog to forecast magnitude 5.8 and larger earthquakes. Like our previous forecasts, the new ones here are based on smoothed maps of past seismicity and assume spatial clustering. Our short-term forecasts also assume temporal clustering. An important adaptation in the new forecasts is to abandon the use of tensor focal mechanisms. This permits the use of earthquake catalogs that reliably report many smaller quakes with no such mechanism estimates. The result is that we can forecast earthquakes at higher spatial resolution and down to a magnitude threshold of 4.7. The new forecasts can be tested far more quickly because smaller events are considerably more frequent. Also, our previous method used the focal mechanisms of past earthquakes to estimate the preferred directions of earthquake clustering; however, that method made assumptions that generally hold only in subduction zones. The new approach escapes those assumptions. In the northwest Pacific the new method gives an estimated earthquake rate density very similar to that of the previous forecast.
Geophysical Journal International, 2000
We present long-term and short-term forecasts for magnitude 5.8 and larger earthquakes. We discuss a method for optimizing both procedures and testing their forecasting effectiveness using the likelihood function. Our forecasts are expressed as the rate density (that is, the probability per unit area and time) anywhere on the Earth. Our forecasts are for scientific testing only; they are not to be construed as earthquake predictions or warnings, and they carry no official endorsement. For our long-term forecast we assume that the rate density is proportional to a smoothed version of past seismicity (using the Harvard CMT catalogue). This is in some ways antithetical to the seismic gap model, which assumes that recent earthquakes deter future ones. The estimated rate density depends linearly on the magnitude of past earthquakes and approximately on a negative power of the epicentral distance out to a few hundred kilometres. We assume no explicit time dependence, although the estimated rate density will vary slightly from day to day as earthquakes enter the catalogue. The forecast applies to the ensemble of earthquakes during the test period. It is not meant to predict any single earthquake, and no single earthquake or lack of one is adequate to evaluate such a hypothesis. We assume that 1 per cent of all earthquakes are surprises, assumed uniformly likely to occur in those areas with no earthquakes since 1977. We have made specific forecasts for the calendar year 1999 for the Northwest Pacific and Southwest Pacific regions, and we plan to expand the forecast to the whole Earth. We test the forecast against the earthquake catalogue using a likelihood test and present the results. Our short-term forecast, updated daily, makes explicit use of statistical models describing earthquake clustering. Like the long-term forecast, the short-term version is expressed as a rate density in location, magnitude and time. However, the short-term forecasts will change significantly from day to day in response to recent earthquakes. The forecast applies to main shocks, aftershocks, aftershocks of aftershocks, and main shocks preceded by foreshocks. However, there is no need to label each event, and the method is completely automatic. According to the model, nearly 10 per cent of moderately sized earthquakes will be followed by larger ones within a few weeks.
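The long-term rate construction described above (each past earthquake contributing a weight that grows with its magnitude and decays as a negative power of epicentral distance out to a few hundred kilometres) can be sketched as follows. This is a minimal illustration in Python; the exponent, cutoff distance, and magnitude weighting are placeholder assumptions, not the authors' calibrated parameters.

```python
import numpy as np

def smoothed_rate_density(eq_lon, eq_lat, eq_mag, cell_lon, cell_lat,
                          power=1.5, r0_km=10.0, r_max_km=300.0, m_min=5.8):
    """Toy long-term rate density: each past earthquake contributes a weight
    that increases linearly with magnitude and decays as a negative power of
    epicentral distance, truncated at a few hundred kilometres."""
    km_per_deg = 111.32
    dx = (cell_lon[None, :] - eq_lon[:, None]) * km_per_deg * np.cos(np.radians(cell_lat[None, :]))
    dy = (cell_lat[None, :] - eq_lat[:, None]) * km_per_deg
    r = np.hypot(dx, dy)                              # epicentral distance (flat-Earth approximation)
    weight = 1.0 + (eq_mag - m_min)                   # linear-in-magnitude weighting (illustrative)
    kernel = (r0_km + r) ** (-power)                  # power-law distance decay with a short-range cap
    kernel[r > r_max_km] = 0.0
    return (weight[:, None] * kernel).sum(axis=0)     # unnormalized rate density per grid cell
```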
Geophysical Journal International, 2010
We have constructed daily worldwide long- and short-term earthquake forecasts. These forecasts specify the earthquake rate per unit area, time and magnitude on a 0.5° grid for a global zone between 75°N and 75°S latitude (301 by 720 grid cells). We use both the Global Centroid Moment Tensor (GCMT) and Preliminary Determinations of Epicenters (PDE) catalogues. Like our previous forecasts, the new forecasts are based largely on smoothed maps of past seismicity and assume spatial and temporal clustering. The forecast based on the GCMT catalogue, with a magnitude completeness threshold of 5.8, includes an estimate of focal mechanisms of future earthquakes and of the mechanism uncertainty. The forecasted tensor focal mechanism makes it possible in principle to calculate an ensemble of seismograms for each point of interest on the Earth's surface. We also introduce a new approach that circumvents the need for focal mechanisms. This permits the use of the PDE catalogue, which reliably documents many smaller quakes with higher location accuracy. The result is a forecast at a higher spatial resolution and down to a magnitude threshold below 5.0. Such new forecasts can be prospectively tested within a relatively short time, such as a few years, because smaller events occur with greater frequency. The forecast's efficiency can be measured by its average probability gain per earthquake compared to the spatially or temporally uniform Poisson distribution. For the short-term forecast the gain is about 2.0 for the GCMT catalogue and 3.7 for the PDE catalogue relative to a temporally random but spatially localized null hypothesis. Preliminary tests indicate that for the long-term global spatial forecast the gain is of the order of 20-25 compared to a uniform event distribution over the Earth's surface. We can also prospectively test the long-term forecast to check whether it can be improved.
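The probability gains quoted above are, in the usual information-score sense, the exponential of the mean log-likelihood difference per earthquake between the forecast and a reference model. A minimal sketch (the function and variable names are illustrative):

```python
import numpy as np

def probability_gain_per_eq(log_like_forecast, log_like_reference, n_events):
    """Average probability gain per earthquake of a forecast relative to a
    reference model (e.g. a spatially or temporally uniform Poisson forecast)."""
    return np.exp((log_like_forecast - log_like_reference) / n_events)

# A log-likelihood advantage of about 1.3 nats per event corresponds to a gain of ~3.7:
print(probability_gain_per_eq(-500.0, -630.0, 100))
```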
Research in Geophysics, 2012
The Every Earthquake a Precursor According to Scale (EEPAS) long-range earthquake forecasting model has been shown to be informative in several seismically active regions, including New Zealand, California and Japan. In previous applications of the model, the tectonic setting of earthquakes has been ignored. Here we distinguish crustal, plate interface, and slab earthquakes and apply the model to earthquakes with magnitude M ≥ 4 in the Japan region from 1926 onwards. The target magnitude range is M ≥ 6; the fitting period is 1966-1995; and the testing period is 1996-2005. In forecasting major slab earthquakes, it is optimal to use only slab and interface events as precursors. In forecasting major interface events, it is optimal to use only interface events as precursors. In forecasting major crustal events, it is optimal to use only crustal events as precursors. For the smoothed-seismicity component of the EEPAS model, it is optimal to use slab and interface events for earthquakes in t...
Seismological Research Letters
The Global Earthquake Activity Rate (GEAR1) seismicity model uses an optimized combination of geodetic strain rates, hypotheses about converting strain rates to seismicity rates from plate tectonics, and earthquake-catalog data to estimate global Mw ≥ 5.767 shallow (≤ 70 km) seismicity rates. It comprises two parent models: a strain-rate-based model and a smoothed-seismicity-based model. The GEAR1 model was retrospectively evaluated and calibrated using earthquake data from 2005-2012, resulting in a preferred log-linear, multiplicative combination of the parent forecasts. Since October 1, 2015, the GEAR1 model has undergone prospective evaluation within the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center.
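A log-linear (multiplicative) combination of two parent rate forecasts, of the kind described above, can be sketched as below. The exponent and the renormalization step are illustrative assumptions, not the published GEAR1 calibration.

```python
import numpy as np

def multiplicative_hybrid(strain_parent, seismicity_parent, exponent=0.5, total_rate=None):
    """Log-linear blend of two gridded parent forecasts: each cell rate is
    proportional to strain_parent**exponent * seismicity_parent**(1 - exponent),
    optionally rescaled so the hybrid sums to a chosen total earthquake rate."""
    hybrid = strain_parent ** exponent * seismicity_parent ** (1.0 - exponent)
    if total_rate is not None:
        hybrid = hybrid * total_rate / hybrid.sum()
    return hybrid
```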
Pure and Applied Geophysics, 2010
The objective of this paper is to quantify the use of past seismicity to forecast the locations of future large earthquakes and to introduce optimization methods for the model parameters. To achieve this, the binary forecast approach is used, in which the surface of the Earth is divided into 1° × 1° cells. The cumulative Benioff strain of m ≥ m_c earthquakes that occurred during the training period, ΔT_tr, is used to retrospectively forecast the locations of large target earthquakes with magnitudes ≥ m_T during the forecast period, ΔT_for. The success of a forecast is measured in terms of hit rates (fraction of earthquakes forecast) and false alarm rates (fraction of alarms that do not forecast earthquakes). This binary forecast approach is quantified using a receiver operating characteristic diagram and an error diagram. An optimal forecast can be obtained by taking the maximum value of Peirce's skill score.
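A minimal sketch of the binary-forecast bookkeeping described above: hit rate, false alarm rate, and Peirce's skill score computed from a contingency table over grid cells. The definitions follow the standard conventions for the skill score rather than any paper-specific variant.

```python
import numpy as np

def peirce_skill_score(alarm_cells, event_cells):
    """Contingency-table scores for a binary (alarm / no-alarm) forecast over
    grid cells; returns hit rate, false alarm rate, and Peirce's skill score."""
    a = np.asarray(alarm_cells, dtype=bool)
    e = np.asarray(event_cells, dtype=bool)
    hits = np.sum(a & e)
    misses = np.sum(~a & e)
    false_alarms = np.sum(a & ~e)
    correct_negatives = np.sum(~a & ~e)
    hit_rate = hits / (hits + misses)                               # fraction of event cells alarmed
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate, hit_rate - false_alarm_rate  # PSS = H - F
```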
Bulletin of the Seismological Society of America, 2011
We present two models for estimating the probabilities of future earthquakes in California, to be tested in the Collaboratory for the Study of Earthquake Predictability (CSEP). The first, time-independent model, modified from Helmstetter et al. [2007], provides five-year forecasts for magnitudes m ≥ 4.95. We show that large quakes occur on average near the locations of small m ≥ 2 events, so that a high-resolution estimate of the spatial distribution of future large quakes is obtained from the locations of the numerous small events. We employ an adaptive spatial kernel of optimized bandwidth and assume a universal, tapered Gutenberg-Richter distribution. In retrospective tests, we show that no Poisson forecast could capture the observed variability. We therefore also test forecasts using a negative binomial distribution for the number of events. We modify existing likelihood-based tests to better evaluate the spatial forecast. Our time-dependent model, an Epidemic Type Aftershock Sequence (ETAS) model modified from Helmstetter et al. [2006], provides next-day forecasts for m ≥ 3.95. The forecasted rate is the sum of a background rate, proportional to our time-independent model, and of the triggered events due to all prior earthquakes. Each earthquake triggers events with a rate that increases exponentially with the magnitude of the triggering earthquake.
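The tapered Gutenberg-Richter magnitude distribution assumed above has a simple closed form when expressed in seismic moment. The sketch below uses the standard parameterization; the beta value and corner magnitude are illustrative, not the paper's fitted numbers.

```python
import numpy as np

def seismic_moment(m):
    """Scalar seismic moment in N*m from moment magnitude."""
    return 10.0 ** (1.5 * m + 9.05)

def tapered_gr_survival(m, m_threshold=4.95, beta=0.65, m_corner=8.0):
    """P(magnitude >= m) for the tapered Gutenberg-Richter law: a power law in
    moment with an exponential taper at a corner moment (illustrative parameters)."""
    M, Mt, Mc = seismic_moment(m), seismic_moment(m_threshold), seismic_moment(m_corner)
    return (Mt / M) ** beta * np.exp((Mt - M) / Mc)

print(tapered_gr_survival(6.0))   # fraction of m >= 4.95 events expected to reach m >= 6.0
```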
Seismological Research Letters, 2007
Bulletin of the Seismological Society of America, 2011
We constructed 5- and 10-yr smoothed-seismicity forecasts of moderate-to-large California earthquakes, and we examined the importance of several assumptions and choices. To do this, we divided the available catalog into learning and testing periods and optimized parameters to best predict earthquakes in the testing period. Fourteen different 5-yr testing periods were considered, in which the number of earthquakes varies from 18 to 63. We then compared the likelihood gain per target earthquake for the various choices. In this study, we assumed that the spatial, temporal, and magnitude distributions were independent of one another, so that the joint probability distribution could be factored into those three components. We compared several disjoint test periods of the same length to determine the variability of the likelihood gain. The variability is large enough to mask the effects of some modeling choices. Stochastic declustering of the learning catalog produced a significantly better forecast, and representing larger earthquakes by their rupture surfaces provided a slightly better result, all other choices being equal. Inclusion of historical earthquakes and the use of an anisotropic smoothing kernel based on focal mechanisms failed to improve the forecast consistently. We chose a lower threshold magnitude of 4.7 for our learning catalog so that our results could be compared in the future to other forecasts relying on shorter catalogs with a smaller magnitude threshold. Online Material: Probability gain based on different conditions and earthquake rate forecast map.
Bulletin of the Seismological Society of America, 2012
We present new methods for time-independent earthquake forecasting that employ space-time kernels to smooth seismicity. The major advantage of the methods is that they do not require prior declustering of the catalog, circumventing the relatively subjective choice of a declustering algorithm. Past earthquakes are smoothed in space and time using adaptive Gaussian kernels. The bandwidths in space and time associated with each event are a decreasing function of the seismicity rate at the time and location of each earthquake. This yields a better resolution in space-time volumes of intense seismicity and a smoother density in volumes of sparse seismicity. The long-term rate in each spatial cell is then defined as the median value of the temporal history of the smoothed seismicity rate in this cell. To calibrate the model, the earthquake catalog is divided into two parts: the early part (the learning catalog) is used to estimate the model, and the latter one (the target catalog) is used to compute the likelihood of the model's forecast. We optimize the model's parameters by maximizing the likelihood of the target catalog. To estimate the kernel bandwidths in space and time, we compared two approaches: a coupled near-neighbor method and an iterative method based on a pilot density. We applied these methods to Californian seismicity and compared the resulting forecasts with our previous method based on spatially smoothing a declustered catalog (Werner et al., 2011). All models use small M ≥ 2 earthquakes to forecast the rate of larger earthquakes and use the same learning catalog. Our new preferred model slightly outperforms our previous forecast, providing a probability gain per earthquake of about 5 relative to a spatially uniform forecast.
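A minimal sketch of adaptive kernel smoothing in the spirit described above: each event gets a Gaussian bandwidth that shrinks where events are dense. Here the distance to the k-th nearest neighbouring event stands in for the local seismicity rate, which is an illustrative shortcut rather than the paper's coupled near-neighbor or pilot-density estimators, and the example smooths in space only.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_smoothed_rate(event_xy, grid_xy, k=5, min_bandwidth_km=1.0):
    """Sum of 2-D Gaussian kernels, one per event, with each bandwidth set to
    the distance to the k-th nearest other event: narrow where seismicity is
    dense, broad where it is sparse. Coordinates are assumed to be in km."""
    tree = cKDTree(event_xy)
    d_k, _ = tree.query(event_xy, k=k + 1)           # k+1: the nearest point is the event itself
    bandwidth = np.maximum(d_k[:, -1], min_bandwidth_km)
    d = np.linalg.norm(grid_xy[None, :, :] - event_xy[:, None, :], axis=2)
    kernels = np.exp(-0.5 * (d / bandwidth[:, None]) ** 2) / (2 * np.pi * bandwidth[:, None] ** 2)
    return kernels.sum(axis=0)                       # unnormalized rate per grid node
```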
Pure and Applied Geophysics, 2010
We extend existing branching models for earthquake occurrences by incorporating potentially important estimates of tectonic deformation and by allowing the parameters in the models to vary across different tectonic regimes. We partition the Earth's surface into five regimes: trenches (including subduction zones and oceanic convergent boundaries and earthquakes in the outer rise or overriding plate); fast spreading ridges and oceanic transforms; slow spreading ridges and transforms; active continental zones; and plate interiors (everything not included in the previous categories). Our purpose is to specialize the models to give them the greatest possible predictive power for use in earthquake forecasts. We expected the parameters of the branching models to be significantly different in the various tectonic regimes, because earlier studies (Bird and Kagan, Bull Seismol Soc Am 94(6):2380-2399, 2004) found that the magnitude limits and other parameters differed between similar categories. We compiled subsets of the CMT and PDE earthquake catalogs corresponding to each tectonic regime and optimized the parameters for each, and for the whole Earth, using a maximum likelihood procedure. We also analyzed branching models for California and Nevada using regional catalogs. Our estimates of parameters that can be compared to those of other models were consistent with published results. Examples include the proportion of triggered earthquakes and the exponent describing the temporal decay of triggered earthquakes. We also estimated epicentral location uncertainty and rupture zone size, and our results are consistent with independent estimates. Contrary to our expectation, we found no dramatic differences in the branching parameters for the various tectonic regimes. We did find some modest differences between regimes that were robust under changes in earthquake catalog and lower magnitude threshold. Subduction zones have the highest earthquake rates, the largest upper magnitude limit, and the highest proportion of triggered events. Fast spreading ridges have the smallest upper magnitude limit and the lowest proportion of triggered events. The statistical significance of these variations cannot be assessed until methods are developed for estimating confidence limits reliably. Some results apparently depend on arbitrary decisions adopted in the analysis. For example, the proportion of triggered events decreases as the lower magnitude limit is increased, possibly because our procedure for assigning independence probability favors larger earthquakes. In some tests we censored earthquakes occurring near and just after a previous event, to account for the fact that most such earthquakes will be missing from the catalog. Fortunately the branching model parameters were hardly affected, suggesting that the inability to measure immediate aftershocks does not cause a serious estimation bias. We compare our branching model with the ETAS model and discuss the differences in the models' parametrization and the results of the earthquake catalog analyses.
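The branching-model ingredients compared above (a background rate plus triggered events whose rate decays in time and whose productivity grows with the magnitude of the parent shock) can be sketched with an ETAS-style temporal conditional intensity. The parameter values below are placeholders, not the optimized values reported for any tectonic regime.

```python
import numpy as np

def conditional_intensity(t, event_times, event_mags, mu=0.1,
                          K=0.02, alpha=1.0, c=0.01, p=1.2, m_min=4.7):
    """ETAS-style intensity at time t (events per day): a constant background
    rate mu plus Omori-law contributions from all prior events, with
    productivity increasing exponentially with the parent magnitude."""
    past = event_times < t
    dt = t - event_times[past]
    productivity = K * 10.0 ** (alpha * (event_mags[past] - m_min))
    return mu + np.sum(productivity * (dt + c) ** (-p))
```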
Seismological Research Letters, 2007
Geosciences
Nearly 20 years ago, the observation that major earthquakes are generally preceded by an increase in the seismicity rate on a timescale from months to decades was embedded in the “Every Earthquake a Precursor According to Scale” (EEPAS) model. EEPAS has since been successfully applied to regional real-world and synthetic earthquake catalogues to forecast future earthquake occurrence rates with time horizons up to a few decades. When combined with aftershock models, its forecasting performance is improved for short time horizons. As a result, EEPAS has been included as the medium-term component in public earthquake forecasts in New Zealand. EEPAS has been modified to advance its forecasting performance despite data limitations. One modification is to compensate for missing precursory earthquakes. Precursory earthquakes can be missing because of the time-lag between the end of a catalogue and the time at which a forecast applies or the limited lead time from the start of the catalogue...
Nature, 1999
As anyone who has ever spent any time in California can attest, much public attention is being focused on the great earthquake-prediction debate. Unfortunately, this attention focuses on deterministic predictions on the day-to-week timescale. But as some of the participants in this debate have pointed out (refs 1, 2), current efforts to identify reliable short-term precursors to large earthquakes have been largely unsuccessful, suggesting that earthquakes are such a complicated process that reliable (and observable) precursors might not exist. That is not to say that earthquakes do not have some 'preparatory phase', but rather that this phase might not be consistently observable by geophysicists on the surface. But does this mean that all efforts to determine the size, timing and locations of future earthquakes are fruitless? Or are we being misled by human scales of time and distance?
Pure and Applied Geophysics, 2010
We present estimates of future earthquake rate density (probability per unit area, time, and magnitude) on a 0.1-degree grid for a region including California and Nevada, based only on data from past earthquakes. Our long-term forecast is not explicitly time-dependent, but it can be updated at any time to incorporate information from recent earthquakes. The present version, founded on several decades' worth of data, is suitable for testing without updating over a five-year period as part of the experiment conducted by the Collaboratory for the Study of Earthquake Predictability (CSEP). The short-term forecast is meant to be updated daily and tested against similar models by CSEP. The short-term forecast includes a fraction of our long-term one plus time-dependent contributions from all previous earthquakes. Those contributions decrease with time according to the Omori law: proportional to the reciprocal of the elapsed time. Both forecasts estimate rate density using a radially symmetric spatial smoothing kernel decreasing approximately as the reciprocal of the square of epicentral distance, weighted according to the magnitude of each past earthquake. We made two versions of both the long- and short-term forecasts, based on the Advanced National Seismic System (ANSS) and Preliminary Determinations of Epicenters (PDE) catalogs, respectively. The two versions are quite consistent, but for testing purposes we prefer those based on the ANSS catalog since it covers a longer time interval, is complete to a lower magnitude threshold, and has more precise locations. Both forecasts apply to shallow earthquakes only (depth 25 km or less) and assume a tapered Gutenberg-Richter magnitude distribution extending to a lower threshold of 4.0.
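A sketch of the short-term construction described above: a fraction of the long-term rate plus contributions from recent earthquakes that decay as the reciprocal of elapsed time and approximately as the reciprocal of the squared epicentral distance, weighted by magnitude. The background fraction, smoothing distance, and magnitude weighting are assumed placeholder values.

```python
import numpy as np

def short_term_rate(grid_xy_km, long_term_rate, event_xy_km, event_mags, days_since,
                    background_fraction=0.2, r_smooth_km=5.0, m_ref=4.0):
    """Short-term rate density on a grid: a fraction of the long-term rate plus
    magnitude-weighted contributions from past earthquakes, decaying as 1/t in
    time (Omori-like) and approximately as 1/r**2 in epicentral distance."""
    r2 = np.sum((grid_xy_km[None, :, :] - event_xy_km[:, None, :]) ** 2, axis=2)
    spatial = 1.0 / (r2 + r_smooth_km ** 2)
    temporal = 1.0 / np.maximum(days_since, 0.5)
    weight = 10.0 ** (event_mags - m_ref)
    triggered = (weight * temporal)[:, None] * spatial
    return background_fraction * long_term_rate + triggered.sum(axis=0)
```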
Bulletin of the Seismological Society of America, 2013
Among scoring methods employed to determine the performance of probability predictions, the log-likelihood method is the most common and useful. Although the log-likelihood score evaluates the comprehensive power of forecasts, we need to further evaluate the topical predictive powers of respective factors of seismicity, such as total numbers, occurrence times, locations, and magnitudes. For this purpose, we used the conditional- or marginal-likelihood function based on the observed events. Such topical scores reveal both strengths and weaknesses of a forecasting model and suggest the necessary improvements. We applied these scores to the probability forecasts during the devastating period of March 2011, during which the Mw 9.0 Off the Pacific Coast of Tohoku (Tohoku-Oki) earthquake struck. However, the evaluations did not suggest that any of the prospective forecast models were consistently satisfactory. Hence, we undertook two additional types of retrospective forecasting experiments to investigate the reasons, including the possibility that the seismicity rate pattern changed after the M 9 mega-earthquake. In addition, our experiments revealed a technical difficulty in the one-day forecasting protocol adopted by the Collaboratory for the Study of Earthquake Predictability (CSEP). Results of further experiments lead us to recommend specific modifications to the CSEP protocols, leading to real-time forecasts and their evaluations.
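The topical scores described above amount to splitting the joint log-likelihood of a gridded forecast into separate components. A minimal Poisson version of that bookkeeping is sketched below: a "number" score for the total count and a "spatial" score conditioned on that total. This is the generic decomposition, not the paper's exact conditional-likelihood construction.

```python
import numpy as np
from scipy.stats import poisson

def split_scores(forecast_rates, observed_counts):
    """Split the score of a gridded Poisson forecast into a number term (how
    well the total count is forecast) and a spatial term (multinomial
    log-likelihood of the cell locations given the total)."""
    rates = np.asarray(forecast_rates, dtype=float)
    counts = np.asarray(observed_counts, dtype=int)
    number_score = poisson.logpmf(counts.sum(), rates.sum())
    p_cell = rates / rates.sum()                 # forecast spatial probability per cell
    occupied = counts > 0
    spatial_score = np.sum(counts[occupied] * np.log(p_cell[occupied]))
    return number_score, spatial_score
```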
Annals of Geophysics, 2010
We present a five-year, time-independent, earthquake-forecast model for earthquake magnitudes of 5.0 and greater in Italy using spatially smoothed seismicity data. The model is called HAZGRIDX, and it was developed based on the assumption that future earthquakes will occur near locations of historical earthquakes; it does not take into account any information from tectonic, geological, or geodetic data. Thus HAZGRIDX is based on observed earthquake occurrence from seismicity data, without considering any physical model. In the present study, we calculate earthquake rates on a spatial grid platform using two declustered catalogs: 1) the parametric catalog of Italian earthquakes (Catalogo Parametrico dei Terremoti Italiani, CPTI04), which contains the larger earthquakes from Mw 7.0 since 1100; and 2) the catalog of Italian seismicity (Catalogo della Sismicità Italiana, CSI 1.1), which contains the small earthquakes down to ML 1.0, with a maximum of ML 5.9, over the past 22 years (1981-2003). The model assumes that earthquake magnitudes follow the Gutenberg-Richter law, with a uniform b-value. The forecast rates are presented in terms of the expected numbers of ML > 5.0 events per year for each grid cell of about 10 km × 10 km. The final map is derived by averaging the earthquake potentials that come from these two different catalogs: CPTI04 and CSI 1.1. We also describe the earthquake occurrences in terms of probabilities of occurrence of one event within a specified magnitude bin, ΔM = 0.1, in a five-year time period. HAZGRIDX is one of several forecasting models, scaled to five and ten years, that have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) forecasting center at ETH Zurich to be tested for Italy.
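Converting the forecast rates above into per-cell occurrence probabilities for a magnitude bin is a short Poisson and Gutenberg-Richter calculation; a minimal sketch with a placeholder rate and b-value:

```python
import numpy as np

def rate_in_bin(rate_above_m, b=1.0, dm=0.1):
    """Annual rate in the magnitude bin [m, m + dm) from the annual rate of
    events at or above m, assuming a Gutenberg-Richter law with b-value b."""
    return rate_above_m * (1.0 - 10.0 ** (-b * dm))

def prob_at_least_one(annual_rate, years=5.0):
    """Poisson probability of at least one event in the cell over `years`."""
    return 1.0 - np.exp(-annual_rate * years)

# e.g. a cell with 0.02 events/yr at M >= 5.0 and a 0.1-magnitude bin:
print(prob_at_least_one(rate_in_bin(0.02), years=5.0))
```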
Geophysical Journal International, 2012
Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of M w ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
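The forecast comparisons described above can be sketched as paired tests on per-earthquake log-likelihood differences between two models. The snippet below mirrors the spirit of those T- and W-tests using standard SciPy routines; it is not the exact published implementation.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

def compare_forecasts(log_like_a, log_like_b):
    """Compare two forecasts via their per-earthquake log-likelihood scores:
    a paired Student's t-test and a Wilcoxon signed-rank test on the differences,
    plus the average probability gain of model A over model B per earthquake."""
    a = np.asarray(log_like_a, dtype=float)
    b = np.asarray(log_like_b, dtype=float)
    t_stat, t_p = ttest_rel(a, b)
    w_stat, w_p = wilcoxon(a - b)
    return {"t_p": t_p, "wilcoxon_p": w_p, "gain_per_eq": np.exp(np.mean(a - b))}
```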
2005
No proven method is currently available for the reliable short-term prediction of earthquakes (minutes to months). However, it is possible to make probabilistic hazard assessments for earthquake risk. These are primarily based on the association of small earthquakes with future large earthquakes. In this paper we discuss a new approach to earthquake forecasting. This approach is based on a pattern informatics (PI) method which quantifies temporal variations in seismicity. The output is a map of areas in a seismogenic region ("hotspots") where earthquakes are forecast to occur in a future 10-year time span. This approach has been successfully applied to California, to Japan, and on a worldwide basis. These forecasts are binary: an earthquake is forecast either to occur or not to occur. The standard approach to the evaluation of a binary forecast is the use of the relative operating characteristic (ROC) diagram, which is a more restrictive test and less subject to bias than maximum likelihood tests. To test our PI method, we made two types of retrospective forecasts for California. The first is the PI method and the second is a relative intensity (RI) forecast based on the hypothesis that future earthquakes will occur where earthquakes have occurred in the recent past. While both retrospective forecasts are for the ten-year period 1 January 2000 to 31 December 2009, we performed an interim analysis 5 years into the forecast. The PI method outperforms the RI method under most circumstances.
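The relative intensity (RI) benchmark mentioned above is straightforward to sketch: count past earthquakes per cell and alarm the most active fraction of cells. The alarm fraction here is an illustrative choice.

```python
import numpy as np

def relative_intensity_alarms(past_counts, alarm_fraction=0.1):
    """Binary RI forecast: alarm the top `alarm_fraction` of grid cells ranked
    by the number of past earthquakes (cells tied at the threshold are all alarmed)."""
    counts = np.asarray(past_counts, dtype=float)
    n_alarm = max(1, int(round(alarm_fraction * counts.size)))
    threshold = np.sort(counts)[::-1][n_alarm - 1]
    return counts >= threshold
```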
Applied Sciences, 2021
Seismicity-based earthquake forecasting models have been primarily studied and developed over the past twenty years. These models mainly rely on seismicity catalogs as their data source and provide forecasts in time, space, and magnitude in a quantifiable manner. In this study, we present a technique to better determine the locations of future earthquakes based on spatially smoothed seismicity. The improvement's main objective is to use foreshock and aftershock events together with their mainshocks. Time-independent earthquake forecast models are often developed using declustered catalogs, in which events smaller than their mainshocks are removed from the catalog. Declustered catalogs are required in probabilistic seismic hazard analysis (PSHA) to uphold the Poisson assumption that events are independent in time and space. However, as highlighted and presented by many recent studies, removing such events from seismic catalogs may lead to underestimating seismicity rates ...