Kirstin Hubrich; Kenneth D. West

Forecast evaluation of small nested model sets (replication data)

We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require re-estimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for US inflation.
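To make the procedure concrete, the following is a minimal Python sketch of the Clark and West (2007) adjustment and the two statistics described in the abstract (the maximum t-statistic and the chi-squared statistic). The function names, the simple i.i.d. variance estimator, and the overall structure are illustrative assumptions on my part; the authors' actual code is contained in the replication files below, and multi-step forecasts would call for a HAC variance estimator.

import numpy as np

def clark_west_adjusted_diff(y, f_bench, f_alt):
    # Clark-West (2007) adjusted MSPE difference series for one alternative
    # model that nests the benchmark; a positive mean favours the alternative.
    e_bench = y - f_bench
    e_alt = y - f_alt
    return e_bench**2 - (e_alt**2 - (f_bench - f_alt)**2)

def max_t_and_chi2(y, f_bench, f_alts):
    # Hypothetical helper (not the authors' code): compares the benchmark to
    # all alternatives simultaneously via a max-t and a chi-squared statistic.
    P = len(y)
    F = np.column_stack([clark_west_adjusted_diff(y, f_bench, f) for f in f_alts])
    fbar = F.mean(axis=0)
    # Covariance matrix of the sample means; a simple i.i.d. estimator is
    # assumed here for one-step-ahead forecasts.
    V = np.atleast_2d(np.cov(F, rowvar=False, ddof=1)) / P
    t_stats = fbar / np.sqrt(np.diag(V))
    chi2 = float(fbar @ np.linalg.solve(V, fbar))
    return t_stats.max(), chi2

Applied to an out-of-sample target series y, benchmark forecasts f_bench, and a small list of alternative-model forecasts f_alts, the first return value would be compared to a critical value for the maximum of correlated t-statistics and the second to a chi-squared distribution with as many degrees of freedom as there are alternatives.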

Data and Resources

Suggested Citation

Hubrich, Kirstin; West, Kenneth D. (2010): Forecast evaluation of small nested model sets (replication data). Version: 1. Journal of Applied Econometrics. Dataset. http://dx.doi.org/10.15456/jae.2022319.1309792779