International Journal of Forecasting, 2013
ABSTRACT This paper explores the gains from combining expert forecasts from the ECB Survey of Professional Forecasters (SPF). The analysis encompasses combinations based on principal components and trimmed means, performance-based weighting, and least squares estimates of optimal weights, as well as Bayesian shrinkage. For GDP growth and the unemployment rate, only a few of the individual forecast combination schemes outperform the simple equally weighted average forecast in a pseudo-out-of-sample analysis, while there is stronger evidence of improvement over this benchmark for the inflation rate. Nonetheless, when we account for the effect of multiple model comparisons through White's reality check, the results caution against any assumption that the improvements identified would persist in the future.
In this paper, we explore the potential gains from alternative combinations of the surveyed forecasts in the ECB Survey of Professional Forecasters. Our analysis encompasses a variety of methods, including statistical combinations based on principal components analysis and trimmed means, performance-based weighting, and least squares estimates of optimal weights, as well as Bayesian shrinkage. We provide a pseudo real-time out-of-sample performance evaluation of these alternative combinations and check the sensitivity of the results to possible data-snooping bias. The latter robustness check is also informed by a novel real-time meta-selection procedure which is not subject to the data-snooping critique. For GDP growth and the unemployment rate, only a few of the forecast combination schemes are able to outperform the simple equal-weighted average forecast. Conversely, for the inflation rate there is stronger evidence that more refined combinations can lead to improvement over this benchmark.
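As a concrete illustration of two of the schemes compared above, the sketch below contrasts the equal-weighted benchmark with inverse-MSE performance weighting on simulated data. The panel size, sample split, and noise levels are illustrative assumptions, not the SPF data or the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 40, 5                         # quarters, panel of forecasters
target = rng.normal(2.0, 0.5, T)     # e.g. a toy inflation series
noise_sd = np.linspace(0.2, 1.0, N)  # forecasters differ in accuracy
forecasts = target[:, None] + rng.normal(0.0, noise_sd, (T, N))

train, test = slice(0, 30), slice(30, T)

# benchmark: simple equal-weighted average
eq = forecasts[test].mean(axis=1)

# performance weighting: weights proportional to inverse training-sample MSE
mse = ((forecasts[train] - target[train, None]) ** 2).mean(axis=0)
w = (1.0 / mse) / (1.0 / mse).sum()
pw = forecasts[test] @ w

for name, f in [("equal weights", eq), ("inverse-MSE weights", pw)]:
    rmse = np.sqrt(((f - target[test]) ** 2).mean())
    print(f"{name}: RMSE = {rmse:.3f}")
```

With long samples and genuinely heterogeneous forecasters the performance-weighted combination tends to win; with short samples the estimated weights are noisy, which is exactly why the equal-weighted average is so hard to beat.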
2007
Official forecasts of international institutions are never purely model-based. Preliminary results of models are adjusted with expert opinions. What is the impact of these adjustments on the forecasts? Are they necessary to get 'optimal' forecasts? When model-based forecasts are adjusted by experts, the loss function of these forecasts is not a mean squared error loss function. In fact, the overall loss function is unknown. To examine the quality of these forecasts, one can rely on the tests for forecast optimality under unknown loss functions developed in Patton and Timmermann (2007). We apply one of these tests to ten variables for which we have both model-based forecasts and expert-adjusted forecasts, all generated by the Netherlands Bureau for Economic Policy Analysis (CPB). For almost all variables the added expertise yields better forecasts in terms of fit. In terms of optimality, the effect of adjustments on the forecasts is limited, because for most variables the assu...
Eight years have passed since the European Central Bank (ECB) launched its Survey of Professional Forecasters (SPF). The SPF asks a panel of approximately 75 forecasters located in the European Union (EU) for their short- to longer-term expectations for macroeconomic variables such as euro area inflation, growth and unemployment. This paper provides an initial assessment of the information content of this survey. First, we consider shorter-term (i.e., one- and two-year ahead rolling horizon) forecasts. The analysis suggests that, over the sample period, in common with other private and institutional forecasters, the SPF systematically under-forecast inflation but that there is less evidence of such systematic errors for GDP and unemployment forecasts. However, these findings, which generally hold regardless of whether one considers the aggregate SPF panel or individual responses, should be interpreted with caution given the relatively short sample period available for the analysis. ...
2019
Combining multiple forecasts in order to generate a single, more accurate one is a well-known approach. A simple average of forecasts has been found to be robust despite theoretically superior alternatives, the increasing availability of expert forecasts, and improved computational capabilities. The dominance of the simple average is related to small sample sizes and to the estimation errors associated with more complex methods. We study the role that expert correlation, the number of experts, and their relative forecasting accuracy play in the distribution of weight estimation errors. The distributions we find are used to identify the conditions under which a decision maker can confidently estimate weights rather than use a simple average. We also propose an improved expert weighting approach that is less sensitive to covariance estimation error while providing much of the benefit of a covariance-optimal weight. These two improvements create a new heuristic for better forecast aggregation that is simple to use. This heuristic appears new to the literature and is shown to perform better than a simple average in a simulation study and in an application to economic forecast data.
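For reference, the covariance-optimal weights discussed here have a closed form, and a common way to tame their estimation error is to shrink them toward the equal-weighted average. The sketch below shows both; the shrinkage rule is a generic illustration of the idea, not the dissertation's own heuristic, which this abstract does not specify.

```python
import numpy as np

def optimal_weights(errors):
    """Covariance-optimal weights w* = S^{-1} 1 / (1' S^{-1} 1),
    where S is the covariance matrix of past forecast errors (T x N)."""
    S = np.cov(errors, rowvar=False)
    w = np.linalg.solve(S, np.ones(S.shape[0]))
    return w / w.sum()

def shrunk_weights(errors, lam=0.5):
    """Shrink the noisy estimated optimal weights toward equal weights."""
    n = errors.shape[1]
    return lam * optimal_weights(errors) + (1.0 - lam) * np.full(n, 1.0 / n)
```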
2014
Combining multiple forecasts provides gains in prediction accuracy. Therefore, with the aim of finding an optimal weighting scheme, several combination techniques have been proposed in the forecasting literature. In this paper we propose the use of sparse partial least squares (SPLS) as a method to combine selected individual forecasts from economic surveys. SPLS chooses the forecasters with the most predictive power for the target variable, discarding panelists with redundant information. We employ the Survey of Professional Forecasters dataset to explore the performance of different methods for combining forecasts: average forecasts, trimmed means, regression-based methods, and regularized regression methods. The results show that selecting and combining forecasts yields improvements in forecasting accuracy compared to the hard-to-beat average of forecasters.
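SPLS itself is not available in scikit-learn, so as a stand-in the sketch below uses the LASSO, one of the regularized regression combinations the abstract also evaluates, to select and weight panelists in a single step. The function name and alpha value are illustrative.

```python
from sklearn.linear_model import Lasso

def lasso_combination(train_forecasts, train_target, test_forecasts, alpha=0.1):
    """Fit sparse combination weights on a (T x N) panel of forecasts;
    panelists with redundant information get a coefficient of exactly zero."""
    model = Lasso(alpha=alpha)
    model.fit(train_forecasts, train_target)
    return model.predict(test_forecasts), model.coef_
```

The zeroed coefficients mimic the selection-plus-combination behavior the abstract attributes to SPLS: redundant forecasters are dropped rather than merely down-weighted.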
Journal of Economic Surveys, 2014
Macroeconomic forecasts are frequently produced, widely published, intensively discussed and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle on the evaluation of forecasts has emerged, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are typically based on econometric model forecasts combined with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations.
2006
Similar to other central banks, the BCRA publishes a monthly REM that summarizes the forecasts and projections of a group of economic analysts and consultants who volunteer to participate in the program. Of the sample received, the BCRA publishes only the median and the standard deviation. The logic for using these statistics is that all participants are to be treated equally. Under the assumption that some forecasters have better underlying models than others, one might be able to improve the accuracy of the aggregate forecast by giving greater priority to those who have historically predicted better. The BCRA does not have access to the models used to make the predictions; only the forecasts are provided. An averaging method that puts higher weights on the predictions of those forecasters who have done best in the past should be able to produce a better aggregate forecast. The problem is how to determine these weights. In this paper, we develop a Bayesian averaging method that can ...
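The abstract is cut off before the Bayesian method itself is specified, so the sketch below shows only the generic idea of weighting by past track record: squared errors are discounted geometrically so that recent performance counts most. The discount factor and function name are illustrative assumptions, not the paper's method.

```python
import numpy as np

def discounted_performance_weights(errors, delta=0.9):
    """errors: (T, N) array of past forecast errors, oldest row first.
    Squared errors are discounted by delta per period, so the most recent
    performance counts most; weights are inverse discounted MSE, normalized."""
    T = errors.shape[0]
    d = delta ** np.arange(T - 1, -1, -1)  # delta^(T-1), ..., delta^0
    dmse = (d[:, None] * errors ** 2).sum(axis=0) / d.sum()
    w = 1.0 / dmse
    return w / w.sum()
```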
Economic Modelling, 2008
The Bank of England has constructed a 'suite of statistical forecasting models' (the 'Suite') providing judgement-free statistical forecasts of inflation and output growth as one of many inputs into the forecasting process, and to offer measures of relevant news in the data. The Suite combines a small number of forecasts generated using different sources of information and methodologies. The main combination methods employ weights that are equal or based on the Akaike information criterion (using likelihoods built from estimation errors). This paper sets a general context for this exercise, and describes some features of the Suite as it stood in May 2005. The forecasts are evaluated over the period of Bank independence (1997 Q2 to 2005 Q1) by a mean square error criterion. The forecast combinations generally lead to a reduction in forecast error, although over this period some of the benchmark models are hard to beat.
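AIC-based combination weights of the kind described here are typically computed as Akaike weights, w_i proportional to exp(-ΔAIC_i / 2). A minimal sketch with made-up AIC values; this is the standard formula, not necessarily the Suite's exact implementation:

```python
import numpy as np

def aic_weights(aic):
    """Akaike weights: w_i proportional to exp(-(AIC_i - min AIC) / 2)."""
    delta = np.asarray(aic, dtype=float) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

print(aic_weights([100.0, 102.0, 110.0]))  # lowest-AIC model gets the largest weight
```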
International Journal of Forecasting, 2011
The Netherlands Bureau for Economic Policy Analysis (CPB) uses a large macroeconomic model to create forecasts of various important macroeconomic variables. The outcomes of this model are usually filtered by experts, and it is the expert forecasts that are made available to the general public. In this paper we re-create the model forecasts for the period 1997-2008 and compare the expert forecasts with the pure model forecasts. This is the first time this unique database has been analyzed, and our key findings are that (i) experts adjust upwards more often; (ii) expert adjustments are not autocorrelated, but their sizes do depend on the value of the model forecast; (iii) the CPB model forecasts are biased for a range of variables, but (iv) at the same time, the associated expert forecasts are more often unbiased; and (v) expert forecasts are far more accurate than the model forecasts, particularly when the forecast horizon is short. In summary, the final CPB forecasts de-bias the model forecasts and lead to higher accuracy than the initial model forecasts.
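A standard tool behind bias findings like (iii) and (iv) is the Mincer-Zarnowitz regression of outcomes on forecasts, testing an intercept of 0 and a slope of 1 jointly. The sketch below (using statsmodels) is a generic version of such a test, not the CPB study's exact procedure.

```python
import numpy as np
import statsmodels.api as sm

def mincer_zarnowitz(outcomes, forecasts):
    """Regress outcomes on a constant and the forecasts;
    unbiasedness corresponds to intercept 0 and slope 1 jointly."""
    X = sm.add_constant(np.asarray(forecasts, dtype=float))
    fit = sm.OLS(np.asarray(outcomes, dtype=float), X).fit()
    joint = fit.f_test((np.eye(2), np.array([0.0, 1.0])))  # H0: a = 0, b = 1
    return fit.params, float(joint.pvalue)
```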