M4 and M5 Forecasting Competitions

The M (for Makridakis) competitions let participants test their time series forecasting methods in an objective fashion: competitors submit forecasts for future values of the series, which are later scored against the actual outcomes.  I find this similar in spirit to other forecasting competitions I’ve worked with, where participants must put themselves on the record by submitting predictions that can later be scored.

The most recent completed competition in the M series was the M4 competition, which challenged participants to forecast 100,000 time series of varying periodicities.  The overall winning method combined a traditional statistical method (Holt-Winters exponential smoothing) with a neural network. Interestingly, machine learning methods (including neural networks) generally did not do well in the competition, failing to beat benchmarks based on exponential smoothing. As Michael Gilliland of SAS observed in the latest issue of Foresight (free article available at https://foresight.forecasters.org/product/foresight-issue-57/), combining multiple models is usually most effective: 9 of the 10 best performers used combinations or hybrid methods, while the 10 worst performers were all individual models.
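To illustrate the combination idea (this is not the winner's actual exponential smoothing/neural network hybrid, just a minimal sketch), the snippet below averages forecasts from two exponential smoothing variants using statsmodels. The synthetic monthly series and the equal weights are my own assumptions for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series with trend and seasonality, standing in for one series.
rng = np.random.default_rng(0)
t = np.arange(120)
y = pd.Series(10 + 0.1 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120))

h = 12  # forecast horizon

# Model 1: Holt-Winters exponential smoothing (additive trend and seasonality).
f_hw = ExponentialSmoothing(y, trend="add", seasonal="add",
                            seasonal_periods=12).fit().forecast(h)

# Model 2: simple exponential smoothing, a common benchmark in the M competitions.
f_ses = ExponentialSmoothing(y).fit().forecast(h)

# Equal-weight combination: average the two forecasts point by point.
f_combined = (f_hw + f_ses) / 2
print(f_combined)
```

Even this naive equal-weight average often beats either component model on its own, which is consistent with the pattern Gilliland describes.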

Some 35 articles about the competition and the methods used are available for free at:
<https://www.sciencedirect.com/journal/international-journal-of-forecasting/vol/36/issue/1>

There is now an M5 competition underway <https://mofc.unic.ac.cy/m5-competition/>. M1 through M4 focused on point forecasts; M5 will also ask for probabilistic forecasts, in the form of prediction intervals around the point forecasts.
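To show the difference between the two kinds of output, here is a minimal sketch (my own example, not M5 code or data) that produces both a point forecast and a 95% prediction interval from an ARIMA model in statsmodels; the random-walk-with-drift series is a placeholder.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic series: a random walk with drift (a placeholder, not M5 data).
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.2, 1.0, 200))

res = ARIMA(y, order=(1, 1, 0)).fit()
fc = res.get_forecast(steps=12)

point = fc.predicted_mean              # point forecasts, as scored in M1-M4
interval = fc.conf_int(alpha=0.05)     # 95% prediction intervals, the kind of
                                       # probabilistic output M5 also asks for
print(point)
print(interval)
```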