2009, Journal of Business and Economic Statistics
This paper addresses the issue of combining forecasts from experts with varying availability and biases, particularly in the context of real-time forecasting. It proposes several methods for combining expert opinions, including traditional techniques such as least squares and novel approaches such as an affine transformation of the equal-weighted forecast. Monte Carlo simulations demonstrate the performance of these methods, revealing that certain modifications can lead to optimal forecasting outcomes even under missing data and heterogeneity among forecasters.
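As an illustration of the combination schemes this abstract mentions, the sketch below (Python; variable names and the simulated data are ours, not the paper's) contrasts least-squares combination weights with an affine correction of the equal-weighted forecast.

```python
import numpy as np

def ls_combination_weights(F, y):
    """Least-squares combination: regress realizations y on the
    individual forecasts (columns of F) plus an intercept."""
    X = np.column_stack([np.ones(len(y)), F])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # beta[0] = intercept, beta[1:] = forecast weights

def affine_equal_weight(F, y):
    """Affine transformation of the equal-weighted forecast:
    average the experts first, then estimate a single intercept and
    slope to correct the average's bias and scale."""
    m = F.mean(axis=1)
    X = np.column_stack([np.ones(len(y)), m])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b  # combined forecast for a new mean m_new is a + b * m_new

# toy example with simulated data, for illustration only
rng = np.random.default_rng(0)
y = rng.normal(size=200)
F = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(200, 3))  # three noisy experts
print(ls_combination_weights(F, y))
print(affine_equal_weight(F, y))
```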
International Journal of Forecasting, 2013
This paper explores the gains from combining expert forecasts from the ECB Survey of Professional Forecasters (SPF). The analysis encompasses combinations based on principal components and trimmed means, performance-based weighting, and least squares estimates of optimal weights, as well as Bayesian shrinkage. For GDP growth and the unemployment rate, only a few of the individual forecast combination schemes outperform the simple equally weighted average forecast in a pseudo-out-of-sample analysis, while there is stronger evidence of improvement over this benchmark for the inflation rate. Nonetheless, when we account for the effect of multiple model comparisons through White's reality check, the results caution against any assumption that the improvements identified would persist in the future.
2014
Combining multiple forecasts provides gains in prediction accuracy, and several combination techniques aimed at finding an optimal weighting scheme have been proposed in the forecasting literature. In this paper we propose the use of sparse partial least squares (SPLS) as a method for combining selected individual forecasts from economic surveys. SPLS chooses the forecasters with the most predictive power for the target variable, discarding panelists with redundant information. We employ the Survey of Professional Forecasters dataset to explore the performance of different combination methods: the simple average, the trimmed mean, regression-based methods, and regularized regression methods. The results show that selecting and then combining forecasts yields improvements in forecasting accuracy over the hard-to-beat average of forecasters.
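Sparse PLS is not part of the standard scientific-Python stack, but the selection idea described here — shrinking the loadings of uninformative or redundant panelists exactly to zero — can be sketched for a single latent component as follows (a simplified illustration under our own assumptions, not the paper's estimator):

```python
import numpy as np

def sparse_pls_one_component(F, y, lam=0.5):
    """One-component sparse-PLS-style combination: soft-threshold the
    covariance between each forecaster and the target, so weakly
    informative or redundant panelists receive a zero weight."""
    Fc = F - F.mean(axis=0)          # center the forecasts
    yc = y - y.mean()                # center the target
    cov = Fc.T @ yc / len(y)         # covariance of each forecaster with y
    w = np.sign(cov) * np.maximum(np.abs(cov) - lam, 0.0)  # soft-thresholding
    if not np.any(w):
        return np.full(F.shape[1], 1.0 / F.shape[1])        # fall back to equal weights
    t = Fc @ w                       # latent component built from selected forecasters
    b = (t @ yc) / (t @ t)           # regress the target on the component
    return b * w                     # implied weights on the original forecasts
```

Forecasters whose implied weight is exactly zero have effectively been deselected, which mirrors the panelist-selection role the abstract attributes to SPLS.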
2019
Combining multiple forecasts in order to generate a single, more accurate one is a well-known approach. A simple average of forecasts has been found to be robust despite theoretically better approaches, the increasing availability of expert forecasts, and improved computational capabilities. The dominance of the simple average is related to small sample sizes and to the estimation errors associated with more complex methods. We study the role that expert correlation, the number of experts, and their relative forecasting accuracy have on the weight estimation error distribution. The distributions we find are used to identify the conditions under which a decision maker can confidently estimate weights rather than using a simple average. We also propose an improved expert weighting approach that is less sensitive to covariance estimation error while providing much of the benefit of a covariance-optimal weight. These two improvements create a new heuristic for better forecast aggregation that is simple to use. This heuristic appears new to the literature and is shown to perform better than a simple average in a simulation study and in an application to economic forecast data.
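The "covariance-optimal weight" referred to here is, in the standard Bates-Granger sense, the minimum-variance weight vector computed from the forecast-error covariance matrix. A minimal sketch follows; the shrinkage toward equal weights is shown only as one plausible way to limit sensitivity to covariance estimation error, and the paper's own heuristic may differ.

```python
import numpy as np

def optimal_weights(errors):
    """Minimum-variance (Bates-Granger) combination weights
    w = S^{-1} 1 / (1' S^{-1} 1), where S is the sample covariance
    matrix of the experts' forecast errors (rows = periods)."""
    S = np.cov(errors, rowvar=False)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

def shrunk_weights(errors, alpha=0.5):
    """Shrink the estimated optimal weights toward equal weights;
    alpha is an illustrative shrinkage factor."""
    k = errors.shape[1]
    return alpha * optimal_weights(errors) + (1 - alpha) * np.full(k, 1.0 / k)
```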
International Journal of Forecasting, 2014
We combine the probability forecasts of a real GDP decline from the US Survey of Professional Forecasters, after trimming the forecasts that do not have "value", as measured by the Kuiper Skill Score and in the sense of . For this purpose, we use a simple test to evaluate the probability forecasts. The proposed test does not require the probabilities to be converted to binary forecasts before testing, and it accommodates serial correlation and skewness in the forecasts. We find that the number of forecasters making valuable forecasts decreases sharply as the horizon increases. The beta-transformed linear pool combination scheme, based on the valuable individual forecasts, is shown to outperform the simple average for all horizons on a number of performance measures, including calibration and sharpness. The test helps to identify the good forecasters ex ante, and therefore contributes to the accuracy of the combined forecasts.
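A rough sketch of the two ingredients named in this abstract — a Kuiper Skill Score for screening forecasters and a beta-transformed linear pool for combining the surviving probability forecasts — is given below. The classification threshold and the Beta parameters are illustrative placeholders, not the paper's estimates.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def kuiper_skill_score(prob, outcome, threshold=0.5):
    """KSS = hit rate - false alarm rate, after converting the
    probability forecasts to binary calls at the given threshold."""
    call = prob >= threshold
    hits = np.mean(call[outcome == 1]) if np.any(outcome == 1) else 0.0
    false_alarms = np.mean(call[outcome == 0]) if np.any(outcome == 0) else 0.0
    return hits - false_alarms

def beta_transformed_linear_pool(P, weights=None, a=2.0, b=2.0):
    """Combine a panel of probability forecasts (columns of P) with a
    weighted linear pool, then recalibrate through a Beta(a, b) cdf.
    In practice a and b would be estimated rather than fixed."""
    if weights is None:
        weights = np.full(P.shape[1], 1.0 / P.shape[1])
    pooled = P @ weights
    return beta_dist.cdf(pooled, a, b)
```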
In this paper, we explore the potential gains from alternative combinations of the surveyed forecasts in the ECB Survey of Professional Forecasters. Our analysis encompasses a variety of methods, including statistical combinations based on principal components analysis and trimmed means, performance-based weighting, least squares estimates of optimal weights, as well as Bayesian shrinkage. We provide a pseudo real-time out-of-sample performance evaluation of these alternative combinations and check the sensitivity of the results to possible data-snooping bias. The latter robustness check is also informed by a novel real-time meta selection procedure which is not subject to the data-snooping critique. For GDP growth and the unemployment rate, only a few of the forecast combination schemes are able to outperform the simple equal-weighted average forecast. Conversely, for the inflation rate there is stronger evidence that more refined combinations can lead to improvement over this bench...
Applied Economics Letters, 2007
… Research Paper No. …, 2010
We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based forecasts. We also provide an analysis of the importance of model instability for explaining gains from forecast combination. Analytical and simulation results uncover break scenarios where forecast combinations outperform the best individual forecasting model.
International Journal of Forecasting, 2007
Quantification techniques are popular methods in empirical research for aggregating qualitative predictions at the micro level into a single figure. In this paper, we analyze the forecasting performance of various methods that are based on the qualitative predictions of financial experts for major financial variables and macroeconomic aggregates. Based on the Centre for European Economic Research's Financial Markets Survey, a monthly qualitative survey of around 330 financial experts, we analyze the out-of-sample predictive quality of probability methods and regression methods. Using the modified Diebold-Mariano test of Harvey, Leybourne and Newbold (1997), we compare the survey-based forecasts with the forecasting performance of standard linear time series approaches and simple random walk forecasts. JEL classification: G10, E30, E31, E37, C10, C42
International Journal of Forecasting, 2005
Much research shows that combining forecasts improves accuracy relative to individual forecasts. In this paper we present experiments, using the 3003 series of the M3-competition, that challenge this belief: on average across the series, the best individual forecasts, based on post-sample performance, perform as well as the best combinations. However, this finding lacks practical value, since it requires that we identify the best individual forecast or combination using post-sample data. We therefore propose a simple model-selection criterion to select among forecasts, and we show that, using this criterion, the accuracy of the selected combinations is significantly better and less variable than that of the selected individual forecasts. These results indicate that the advantage of combining forecasts is not that the best possible combinations perform better than the best possible individual forecasts, but that it is less risky in practice to combine forecasts than to select an individual forecasting method.
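The selection criterion itself is not spelled out in this abstract; the sketch below merely illustrates the general recipe of choosing among individual forecasts and combinations using only information available before the forecast period, with in-sample MSE as a stand-in criterion.

```python
import numpy as np

def select_forecast(candidates, y_insample):
    """Pick, among candidate in-sample forecast series (individual
    methods and combinations such as their simple average), the one
    with the smallest in-sample mean squared error. Only pre-sample
    information is used, so the choice is feasible in real time."""
    mse = {name: np.mean((f - y_insample) ** 2) for name, f in candidates.items()}
    return min(mse, key=mse.get)
```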
Communications in Statistics - Simulation and Computation, 2013
Combining forecasts can be based on different data, different methods, or both. In practice, obtaining many different data sets can be difficult. In this paper we propose a forecast combination method that uses both simulated and observed time series. Several combining methods have been studied, and the combined forecasts have been compared with those obtained from the model fitted to the observed time series only. Simulation studies and applications of the proposed method to real data sets show that, on average, the combined forecasts are more accurate than those obtained without combining.
Socio-Economic Planning Sciences, 1989
Research Papers in Economics, 1998
During the past thirty years, there has been considerable interest in the combination of forecasts. Many of the articles and books dedicated to this area explain and demonstrate that combining multiple individual forecasts can improve forecast accuracy. The improvement in accuracy depends mainly on the forecast combination technique, which ranges from simple combinations such as averaging the forecasts to more complex ones that use the Bayesian approach. This paper provides a bibliography of selected articles and books related to the combination of forecasts in various disciplines and is intended to be a catalog for locating contributions in research areas focusing on the theory and applications of combining forecasts. The bibliography includes recent articles and is as up-to-date as possible.
2008
Decision makers often rely on expert opinion when making forecasts under uncertainty. In doing so, they confront two methodological challenges: the elicitation problem, which requires them to extract meaningful information from experts; and the aggregation problem, which requires them to combine expert opinion by resolving disagreements. Linear averaging is a justifiably popular method for addressing aggregation, but its robust simplicity imposes two requirements on elicitation. First, each expert must offer probabilistically coherent forecasts; second, each expert must respond to all our queries. In practice, human judges (even experts) may be incoherent, and may prefer to assess only the subset of events about which they are comfortable offering an opinion. In this paper, a new methodology is developed for combining expert assessments of chance. The method retains the conceptual and computational simplicity of linear averaging, but generalizes the standard approach by relaxing the requirements on expert elicitation. The method also enjoys provable performance guarantees, and in experiments with real-world forecasting data is shown to offer both computational efficiency and competitive forecasting gains compared to rival aggregation methods. This paper is relevant to the practice of decision analysis, for it enables an elicitation methodology in which judges have the freedom to choose the events they assess.
Entropy, 2019
Forecast combination methods reduce the information in a vector of forecasts to a single combined forecast by using a set of combination weights. Although there are several methods, a typical strategy is to use the simple arithmetic mean to obtain the combined forecast. A priori, the use of this mean can be justified when all the forecasters have performed equally well in the past or when they do not have enough information. In this paper, we explore the possibility of using entropy econometrics as a procedure for combining forecasts that makes it possible to discriminate between bad and good forecasters, even with little information. For this purpose, the data-weighted prior (DWP) estimator proposed by Golan (2001) is used for forecaster selection and simultaneous parameter estimation in linear statistical models. In particular, we examine the ability of the DWP estimator to effectively select relevant forecasts among all forecasts. We test the accuracy of the proposed...
Working Paper, 2009
Journal of Forecasting, 1994
A general Bayesian approach to combining n expert forecasts is developed. Under some moderate assumptions on the distributions of the expert errors, it leads to a consistent, monotonic, quasi-linear average formula. This generalizes Bordley's results.
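The quasi-linear average referred to here has the general form below, where g is a monotone link function and the w_i are nonnegative weights; the ordinary linear average is the special case g(x) = x.

```latex
\hat{y} \;=\; g^{-1}\!\left(\sum_{i=1}^{n} w_i\, g(\hat{y}_i)\right),
\qquad w_i \ge 0,\quad \sum_{i=1}^{n} w_i = 1 .
```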
SSRN Electronic Journal, 2000
Macro-economic forecasts are often based on the interaction between econometric models and experts. A forecast that is based only on an econometric model is replicable and may be unbiased, whereas a forecast that is not based only on an econometric model, but also incorporates an expert's touch, is non-replicable and is typically biased. In this paper we propose a methodology to analyze the qualities of combined non-replicable forecasts. One part of the methodology seeks to retrieve a replicable component from the non-replicable forecasts, and compares this component against the actual data. A second part modifies the estimation routine due to the assumption that the difference between a replicable and a non-replicable forecast involves a measurement error. An empirical example to forecast economic fundamentals for Taiwan shows the relevance of the methodological approach.
SSRN Electronic Journal, 2015
Economic forecasts play an essential role in everyday decision making, which is why many research institutions periodically produce and publish forecasts of the main economic indicators. We ask (1) whether we can consistently obtain a better prediction by combining multiple forecasts of the same variable and (2) if so, what the optimal method of combination is. We linearly combine multiple linear combinations of existing forecasts to form a new forecast ("combination of combinations"), with the weights given by Bayesian model averaging. In the case of forecasts of Germany's real GDP growth rate, this new forecast dominates any single forecast in terms of root-mean-square prediction error.
Economic Modelling, 2008
The Bank of England has constructed a 'suite of statistical forecasting models' (the 'Suite') to provide judgement-free statistical forecasts of inflation and output growth as one of many inputs into the forecasting process, and to offer measures of relevant news in the data. The Suite combines a small number of forecasts generated using different sources of information and methodologies. The main combination methods employ weights that are equal or based on the Akaike information criterion (using likelihoods built from estimation errors). This paper sets a general context for this exercise and describes some features of the Suite as it stood in May 2005. The forecasts are evaluated over the period of Bank independence (1997 Q2 to 2005 Q1) by a mean square error criterion. The forecast combinations generally lead to a reduction in forecast error, although over this period some of the benchmark models are hard to beat.
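For the AIC-based combination mentioned here, a standard way to turn information criteria into combination weights is shown below; this is our illustration of the usual Akaike-weight formula, while the Suite's exact construction builds the likelihoods from estimation errors as described in the abstract.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min)/2),
    normalized to sum to one across the suite's models."""
    delta = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# the combined forecast would then be the weight-averaged model forecasts,
# e.g. combined = model_forecasts @ akaike_weights(aic_values)
```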