Kirstin Hubrich; Kenneth D. West

Forecast evaluation of small nested model sets (replication data)

We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require re-estimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for US inflation.
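To make the procedures concrete, the sketch below illustrates the Clark and West (2007) MSPE adjustment for a single nested pair and the maximum t-statistic over a small set of alternatives. This is a minimal illustration, not the authors' replication code: the function names are invented, a simple (non-HAC) standard error is used, and the critical values the paper obtains for the max-t statistic (which account for correlation across the alternatives) are not reproduced here.

```python
import numpy as np

def clark_west_adjusted_diff(y, f_bench, f_alt):
    """Clark-West (2007) MSPE-adjusted loss differential for one nested pair.

    y       : realized values, shape (P,)
    f_bench : forecasts from the benchmark (nested) model, shape (P,)
    f_alt   : forecasts from an alternative model that nests the benchmark
    Returns the adjusted differential series; its mean is tested against
    zero, with positive values favouring the alternative model.
    """
    e_bench = y - f_bench
    e_alt = y - f_alt
    # MSPE difference, adjusted for the extra noise the larger model
    # introduces when the benchmark is the true data-generating process.
    return e_bench**2 - (e_alt**2 - (f_bench - f_alt)**2)

def max_t_statistic(y, f_bench, f_alts):
    """Maximum t-statistic over a small set of alternative models.

    f_alts : list of forecast arrays, one per alternative model.
    Uses a simple standard error for illustration only; the paper's
    procedure supplies appropriate critical values for the maximum.
    """
    t_stats = []
    for f_alt in f_alts:
        d = clark_west_adjusted_diff(y, f_bench, f_alt)
        P = d.size
        t_stats.append(np.sqrt(P) * d.mean() / d.std(ddof=1))
    return max(t_stats)
```

Because the max-t statistic is taken over several correlated adjusted differentials, its null distribution is not standard normal; the paper evaluates it against critical values that reflect that dependence.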

Data and Resources

This dataset has no data

Suggested Citation

Hubrich, Kirstin; West, Kenneth D. (2010): Forecast evaluation of small nested model sets (replication data). Version: 1. Journal of Applied Econometrics. Dataset. https://journaldata.zbw.eu/dataset/forecast-evaluation-of-small-nested-model-sets?activity_id=8c8a75eb-46be-4f69-89d2-3e83f64b154b