1996, Handbook of Statistics
AI-generated Abstract
This paper discusses forecast evaluation methods relevant to economics and finance, focusing on direction-of-change forecasts, probability forecasts, and volatility forecasts. It addresses the trade-off between generality and complexity in forecast evaluation, concentrating on linear least-squares forecasts of univariate covariance stationary processes. Key topics include the properties of optimal forecasts, accuracy measures such as mean squared error and mean absolute error, and how the choice of accuracy measure depends on the context in which forecasts are used, particularly in decision-making environments.
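The two accuracy measures named above can be computed directly. A minimal sketch in plain Python (the example series are invented for illustration):

```python
def mse(actual, forecast):
    """Mean squared error of a forecast series."""
    errors = [a - f for a, f in zip(actual, forecast)]
    return sum(e * e for e in errors) / len(errors)

def mae(actual, forecast):
    """Mean absolute error of a forecast series."""
    errors = [a - f for a, f in zip(actual, forecast)]
    return sum(abs(e) for e in errors) / len(errors)

# Toy example: four actuals and the corresponding forecasts.
actual = [1.0, 2.0, 3.0, 4.0]
forecast = [1.5, 1.5, 3.5, 3.5]
print(mse(actual, forecast))  # 0.25
print(mae(actual, forecast))  # 0.5
```

Note that MSE penalizes the occasional large error more heavily than MAE, which is one reason the appropriate measure depends on the decision-making context.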
2014
We propose to assess the performance of k forecast procedures by exploring the distributions of forecast errors and error losses. We argue that non-systematic forecast errors are minimized when their distributions are symmetric and unimodal, and that forecast accuracy should be assessed through stochastic loss order rather than expected loss order, as has customarily been done in previous work. Moreover, since forecast performance evaluation can be understood as a one-way analysis of variance, we propose to explore loss distributions under two circumstances: when a strict (but unknown) joint stochastic order exists among the losses of all forecast alternatives, and when such an order holds only among subsets of alternative procedures. Although loss stochastic order is stronger than loss moment order, our proposals are at least as powerful as competing tests and are robust to the correlation, autocorrelation, and heteroskedasticity settings those tests consider. In addition, since our proposals do not require samples of the same size, their scope is wider; and because they test the whole loss distribution rather than just loss moments, they can also be used to study forecast distributions. We illustrate the usefulness of our proposals by evaluating a set of real-world forecasts.
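As an illustration of loss stochastic order (not the authors' actual tests), first-order dominance between two empirical loss distributions can be checked by comparing their empirical CDFs on a common grid. The function names and sample losses below are invented:

```python
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(1 for s in sample if s <= x) / len(sample)

def stochastically_smaller(losses_a, losses_b):
    """True if procedure A's losses are (weakly) stochastically smaller
    than B's: F_A(x) >= F_B(x) at every observed loss value.
    Note that the two samples need not have the same size."""
    grid = sorted(set(losses_a) | set(losses_b))
    return all(ecdf(losses_a, x) >= ecdf(losses_b, x) for x in grid)

losses_a = [0.1, 0.2, 0.3]
losses_b = [0.2, 0.3, 0.4, 0.5]
print(stochastically_smaller(losses_a, losses_b))  # True
```

A stochastically smaller loss distribution implies a smaller expected loss, but not conversely, which is why the stochastic order is the stronger criterion.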
Journal of Forecasting, 1993
Linear models are invariant under non-singular, scale-preserving linear transformations, whereas mean square forecast errors (MSFEs) are not. Different rankings may result across models or methods from choosing alternative yet isomorphic representations of a process. One approach can dominate others for comparisons in levels, yet lose to another for differences, to a second for cointegrating vectors and to a third for combinations of variables. The potential for switches in ranking is related to criticisms of the inadequacy of MSFE against encompassing criteria, which are invariant under linear transforms and entail MSFE dominance. An invariant evaluation criterion which avoids misleading outcomes is examined in a Monte Carlo study of forecasting methods.
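The ranking switch can be seen in a toy numeric example (the error values are invented): two methods issue a two-step-ahead forecast path, and the MSFE ordering reverses when the same forecasts are evaluated in first differences rather than in levels:

```python
def msfe(errors):
    """Mean square forecast error from a list of forecast errors."""
    return sum(e * e for e in errors) / len(errors)

# Two-step-ahead forecast errors of the level, for two methods.
# Method A makes the same error at both horizons; method B's errors
# are smaller in magnitude but alternate in sign.
err_a = [1.0, 1.0]
err_b = [0.9, -0.9]

# Errors of the implied first-difference forecasts: the first is the
# level error e1, the second is e2 - e1.
diff_a = [err_a[0], err_a[1] - err_a[0]]
diff_b = [err_b[0], err_b[1] - err_b[0]]

print(msfe(err_a), msfe(err_b))    # levels: A ~ 1.0, B ~ 0.81 -> B wins
print(msfe(diff_a), msfe(diff_b))  # diffs:  A ~ 0.5, B ~ 2.025 -> A wins
```

Because the errors of method A are positively correlated across horizons, they largely cancel in differences, while method B's alternating errors are amplified, reversing the ranking.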
Journal of Statistical Planning and Inference, 1981
Journal of Statistical Planning and Inference, 2010
We present and show applications of two new test statistics for deciding if one ARIMA model provides significantly better h -step-ahead forecasts than another, as measured by the difference of approximations to their asymptotic mean square forecast errors. The two statistics differ in the variance estimate whose square root is the statistic's denominator. Both variance estimates are consistent even when the ARMA components of the models considered are incorrect. Our principal statistic's variance estimate accounts for parameter estimation. Our simpler statistic's variance estimate treats parameters as fixed. The broad consistency properties of these estimates yield improvements to what are known as tests of type. These are tests whose variance estimates treat parameters as fixed and are generally not consistent in our context.
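To fix ideas, the general shape of such a loss-differential statistic can be sketched. This toy version uses squared errors and a naive i.i.d. variance estimate; it does not implement the paper's asymptotic-MSFE approximations or its estimation-aware variance estimates:

```python
import math

def loss_differential_stat(err1, err2):
    """t-type statistic for H0: equal expected squared forecast error.
    d_t = e1_t**2 - e2_t**2; the statistic is mean(d) / se(mean(d)).
    Positive values favor the second forecast method."""
    d = [a * a - b * b for a, b in zip(err1, err2)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Invented errors: method 2's errors are uniformly much smaller.
err1 = [1.1, -0.9, 1.0, -1.2, 0.8]
err2 = [0.2, -0.1, 0.3, -0.2, 0.1]
print(loss_differential_stat(err1, err2))  # large positive value
```

Treating parameters as fixed corresponds to a variance estimate of this simple kind; the paper's principal statistic instead corrects the denominator for the effect of parameter estimation.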
International Journal of Forecasting, 2006
Clements and Hendry (1993) proposed the Generalized Forecast Error Second Moment (GFESM) as an improvement on the Mean Square Error for comparing forecasting performance across data series, on the grounds that rankings based on GFESM remain unaltered when the series are linearly transformed. In this paper we argue, first, that this evaluation ignores other important criteria; second, that their conclusions were illustrated by a simulation study whose relationship to real data was not obvious; and third, that prior empirical studies show the mean square error to be an inappropriate basis for comparison. Together these points undermine the claims made for the GFESM.
This paper investigates multistep prediction errors for non-stationary autoregressive processes with both model order and true parameters unknown. We give asymptotic expressions for the multistep mean squared prediction errors and accumulated prediction errors of two important methods, plug-in and direct prediction. These expressions not only characterize how the prediction errors are influenced by the model orders, prediction methods, values of parameters and unit roots, but also inspire us to construct some new predictor selection criteria that can ultimately choose the best combination of the model order and prediction method with probability 1. Finally, simulation analysis confirms the satisfactory finite sample performance of the newly proposed criteria.
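The two prediction methods compared here can be sketched for a zero-mean AR(1) fitted by least squares through the origin (a deliberately simplified setting; the paper treats unknown model orders and unit roots, which this sketch omits):

```python
def ols_slope(x, y):
    """Least-squares slope of y on x through the origin."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def plug_in_forecast(series, h):
    """Plug-in: estimate the one-step AR(1) coefficient once,
    then iterate it h steps ahead."""
    phi = ols_slope(series[:-1], series[1:])
    return (phi ** h) * series[-1]

def direct_forecast(series, h):
    """Direct: regress y[t+h] on y[t] and forecast in a single step."""
    beta = ols_slope(series[:-h], series[h:])
    return beta * series[-1]

# On a noiseless AR(1) with phi = 0.5 the two methods coincide.
series = [1.0, 0.5, 0.25, 0.125, 0.0625]
print(plug_in_forecast(series, 2))  # 0.015625
print(direct_forecast(series, 2))   # 0.015625
```

With noisy data and a misspecified order the two forecasts generally differ, which is exactly the trade-off the selection criteria proposed in the paper are designed to resolve.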
Foresight: The International Journal of Applied Forecasting, 2013
With this article, Foresight continues its examination of forecastability (the potential accuracy of our forecasting efforts), which is one of the most perplexing yet essential issues for the business forecasting profession. We first tackled the subject with a special feature section in our Spring 2009 issue. The introduction there indicated that assessing the forecastability of a historical time series can give us a basis for judging how successful our modeling has been (benchmarking), and how much improvement we can still hope to attain. Foresight's Summer 2012 issue advanced the discussion with a feature article showing how to use a product's DNA (product and market attributes such as the length, variability, and market concentration of sales) to develop benchmarks for forecast accuracy. The essential idea here is to better understand the specifics of those items we are trying to forecast and to set expectations accordingly. Certain key concepts emerged from the articles in tha...
Economics Research Institute Study Paper, 1991
suppliers, wholesalers and retailers to make faulty decisions regarding production, marketing and inventory carryovers. Wider knowledge of alternative forecasting procedures could help to increase predictive accuracy and enhance the efficiency of the forecasting function. Given the range of forecasting approaches and methods in use today, it is important to understand how procedures differ from each other and for what applications they are best suited. Performance evaluation of alternative forecasting procedures can provide a guide to relative predictive accuracy, costs, information requirements, and tradeoffs between those criteria. Forecasts can be obtained by (a) purely judgmental approaches, (b) causal or explanatory methods, (c) extrapolative (time series) methods, or (d) any combination of the above (Makridakis et al. 1982). Choice of the appropriate technique for a particular forecasting application is based on criteria such as the cost of a methodology (e.g., modeler and computer time), data requirements, end-user needs and technical sophistication, and forecast horizon. The characteristics of an individual commodity market and the availability of relevant data will also influence model selection. A technique appropriate to one commodity or time horizon may be unsuitable for forecasting another commodity or over a different horizon. A "best" forecasting method appropriate to all applications probably does not exist. The objectives of this research effort were: (1) to develop an analytical framework for conducting this and future forecasting competitions using agricultural variables; and (2) to apply that framework to evaluate the forecasting performance of selected procedures used to generate out-of-sample predictions of two agricultural price series. The study was organized as a forecasting competition. Nine alternative forecasting procedures were
Technological Forecasting and Social Change, 1980
The Box-Jenkins approach to time series analysis, which is an efficient way of analyzing stationary time series, recommends differencing as a general method for transforming a nonstationary time series into a stationary one. This paper gives a methodological discussion of some other ways of transforming a nonstationary series, in particular removing linear trends. It is argued that in many cases removing trends is superior to differencing in several respects. For example, when the process generating the time series is an ARMA(p,q) process added to a linear trend, differencing will produce an ARMA(p,q+1) process that violates the invertibility conditions and is therefore difficult to estimate. The discussion is extended to time series with seasonal patterns. PETER GARDENFORS is an Assistant Professor at the Department of Philosophy, University of Lund, Sweden. He is working on a research project on the methodology of forecasting sponsored by the Planning Division of the Research Institute of Swedish National Defense (FOA P). BENGT HANSSON holds a research position in decision theory with the Swedish Research Council for Humanities and Social Sciences. He also leads a project on "Efficient use of knowledge," sponsored by the Bank of Sweden Tercentenary Foundation. The main reference is Box and Jenkins [2]. A good representation can also be found in Anderson [1].
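The contrast between the two transformations can be illustrated with a minimal sketch; the least-squares detrending here is the simplest case of the trend removal the abstract discusses, and the example series is invented:

```python
def difference(series):
    """First differences; removes a unit root, but applied to a
    trend-stationary series it over-differences the noise term."""
    return [b - a for a, b in zip(series, series[1:])]

def detrend(series):
    """Subtract a least-squares linear trend, leaving the stationary
    residual process intact."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
             / sum((t - t_mean) ** 2 for t in range(n)))
    intercept = y_mean - slope * t_mean
    return [y - (intercept + slope * t) for t, y in enumerate(series)]

# A pure linear trend: detrending recovers zeros exactly, while
# differencing leaves a constant series.
trend = [3.0 + 2.0 * t for t in range(5)]
print(difference(trend))  # [2.0, 2.0, 2.0, 2.0]
print(detrend(trend))     # [0.0, 0.0, 0.0, 0.0, 0.0]
```

When the series is an ARMA process plus a linear trend, the residuals from `detrend` keep the original ARMA structure, whereas `difference` introduces the extra (non-invertible) moving-average component described above.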
International Journal of Forecasting, 2013
Journal of the Royal Statistical Society. Series A (General), 1979
Jahrbucher Fur Nationalokonomie Und Statistik, 2011
Journal of Econometrics, 2011
International Journal of Forecasting, 1992
International Journal of Forecasting, 1992
Microelectronics Reliability, 1996
Journal of Econometrics, 2007