Atsushi Inoue

1 February 2003
WORKING PAPER SERIES - No. 214
Abstract
It is standard in applied work to select forecasting models by ranking candidate models by their prediction mean square error (PMSE) in simulated out-of-sample (SOOS) forecasts. Alternatively, forecast models may be selected using information criteria (IC). We compare the asymptotic and finite-sample properties of these methods in terms of their ability to minimize the true out-of-sample PMSE, allowing for possible misspecification of the forecast models under consideration. We first study a covariance stationary environment. We show that under suitable conditions the IC method will be consistent for the best approximating models among the candidate models. In contrast, under standard assumptions the SOOS method will select overparameterized models with positive probability, resulting in excessive finite-sample PMSEs. We also show that in the presence of unmodelled structural change both methods will be inadmissible in the sense that they may select a model with strictly higher PMSE than the best approximating models among the candidate models.
JEL Code
C22 : Mathematical and Quantitative Methods→Single Equation Models, Single Variables→Time-Series Models, Dynamic Quantile Regressions, Dynamic Treatment Effect Models • Diffusion Processes
C52 : Mathematical and Quantitative Methods→Econometric Modeling→Model Evaluation, Validation, and Selection
C53 : Mathematical and Quantitative Methods→Econometric Modeling→Forecasting and Prediction Methods, Simulation Methods
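The two selection rules compared in the abstract can be illustrated with a small sketch: choose an AR lag order either by ranking candidates on their PMSE in simulated out-of-sample recursive forecasts, or by an information criterion (BIC here). This is only a hedged illustration of the general idea, assuming NumPy; the data-generating process, sample sizes, and split point are invented for the example and are not the paper's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
y = np.zeros(T)
for t in range(1, T):                 # illustrative DGP: a zero-mean AR(1)
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

def fit_ar(y, p):
    """OLS fit of a zero-mean AR(p); returns coefficients and residual variance."""
    Y = y[p:]
    X = np.column_stack([y[p - j:len(y) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return beta, np.mean(resid ** 2)

def soos_pmse(y, p, split=150):
    """Simulated out-of-sample PMSE: recursive one-step forecasts after `split`."""
    errs = []
    for t in range(split, len(y)):
        beta, _ = fit_ar(y[:t], p)
        x = y[t - 1:t - 1 - p:-1]     # most recent p observations, newest first
        errs.append(y[t] - x @ beta)
    return np.mean(np.square(errs))

def bic(y, p):
    """Schwarz information criterion for the AR(p) fit."""
    _, s2 = fit_ar(y, p)
    n = len(y) - p
    return n * np.log(s2) + p * np.log(n)

lags = range(1, 5)
p_soos = min(lags, key=lambda p: soos_pmse(y, p))   # SOOS selection rule
p_bic = min(lags, key=lambda p: bic(y, p))          # IC selection rule
print(p_soos, p_bic)
```

The abstract's result is about the distributions of these selections: across repeated samples, the IC rule is consistent for the best approximating model, while the SOOS rule selects overparameterized models with positive probability.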
1 November 2002
WORKING PAPER SERIES - No. 195
Abstract
It is widely known that significant in-sample evidence of predictability does not guarantee significant out-of-sample predictability. This is often interpreted as an indication that in-sample evidence is likely to be spurious and should be discounted. In this paper we question this conventional wisdom. Our analysis shows that neither data mining nor parameter instability is a plausible explanation of the observed tendency of in-sample tests to reject the no predictability null more often than out-of-sample tests. We provide an alternative explanation based on the higher power of in-sample tests of predictability. We conclude that results of in-sample tests of predictability will typically be more credible than results of out-of-sample tests.
JEL Code
C12 : Mathematical and Quantitative Methods→Econometric and Statistical Methods and Methodology: General→Hypothesis Testing: General
C22 : Mathematical and Quantitative Methods→Single Equation Models, Single Variables→Time-Series Models, Dynamic Quantile Regressions, Dynamic Treatment Effect Models • Diffusion Processes
C52 : Mathematical and Quantitative Methods→Econometric Modeling→Model Evaluation, Validation, and Selection
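The contrast drawn in the abstract, that in-sample tests use the whole sample while out-of-sample tests sacrifice part of it to forecast evaluation, can be sketched for a simple predictive regression y_{t+1} = a + b·x_t + e. Everything here (the sample size, the split point, the Diebold-Mariano-style statistic against a historical-mean benchmark) is an illustrative assumption, not the paper's actual experimental design.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 240
x = rng.standard_normal(T)
b_true = 0.2                          # weak but genuine predictability
y = np.empty(T)
y[0] = rng.standard_normal()
for t in range(1, T):
    y[t] = b_true * x[t - 1] + rng.standard_normal()

def in_sample_tstat(y, x):
    """Full-sample t-statistic on the slope of y[t+1] regressed on x[t]."""
    X = np.column_stack([np.ones(len(y) - 1), x[:-1]])
    Y = y[1:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (len(Y) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def oos_dm_stat(y, x, split=120):
    """Diebold-Mariano-style statistic on recursive one-step forecasts:
    predictive model versus the historical-mean benchmark."""
    d = []
    for t in range(split, len(y)):
        X = np.column_stack([np.ones(t - 1), x[:t - 1]])
        beta, *_ = np.linalg.lstsq(X, y[1:t], rcond=None)
        f_model = beta[0] + beta[1] * x[t - 1]
        f_bench = np.mean(y[:t])
        d.append((y[t] - f_bench) ** 2 - (y[t] - f_model) ** 2)
    d = np.asarray(d)
    return np.mean(d) / (np.std(d, ddof=1) / np.sqrt(len(d)))

print(in_sample_tstat(y, x), oos_dm_stat(y, x))
```

The in-sample statistic is computed from all T observations, while the out-of-sample statistic only averages forecast-error differentials over the evaluation window, which is the power asymmetry the paper points to as the more plausible explanation for the two tests' differing rejection rates.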