So you've built a model and the predictive plots look nice, but you want to synthesize this information down to a single number. This is where measures of forecast accuracy come in. In this post we will recall some simple measures of forecast accuracy, and then I'll explain why I don't like them unless you intend to use them to compare more than one model (i.e. as a relative performance measure).

In the following descriptions, let the actuals be a_t and the forecasts be f_t, for t = 1, ..., n. In the R snippets below, `a` and `f` are the vectors of actuals and forecasts.

**Root Mean Squared Error (RMSE):**

RMSE <- function(a,f){sqrt(mean((a-f)^2))}

**Mean Forecast Error (MFE):**

The ideal value for the MFE is 0. If MFE > 0, the model tends to under-forecast the actuals. If MFE < 0, the model tends to over-forecast the actuals.

MFE <- function(a,f){mean(a-f)}

**Mean Absolute Error (MAE):**

MAE <- function(a,f){mean(abs(a-f))}

**Tracking Signal (TS):**

The TS has a general rule-of-thumb: if the TS stays within a fixed band (a common choice is -4 <= TS <= 4), then the model is assumed to be producing accurate forecasts.

TS <- function(a,f){sum(a-f)/MAE(a,f)}

**Mean Percentage Error (MPE):**

MPE <- function(a,f){mean((a-f)/a)}

**Mean Absolute Percentage Error (MAPE):**

MAPE <- function(a,f){mean(abs((a-f)/a))}
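To see these measures side by side, here is a quick worked example using the functions defined above on a small invented series (the numbers are illustrative only, not from any real data set):

    # Illustrative actuals and forecasts (made-up numbers)
    a <- c(100, 102, 98, 105, 110)
    f <- c( 98, 101, 99, 107, 108)

    RMSE(a, f)
    MFE(a, f)   # positive here, so the model under-forecasts on average
    MAE(a, f)
    TS(a, f)
    MPE(a, f)
    MAPE(a, f)

Note that MFE, TS, and MPE keep the sign of the errors (so they measure bias), while RMSE, MAE, and MAPE discard it (so they measure spread).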

The problem with using many of the aforementioned measures on a single model is that it can be difficult to specify a threshold for accuracy that makes intuitive sense. While these measures may be great for synthesizing a model's errors into a single statistic, how truly 'accurate' is the model? An MAPE of 0.25 is obviously better than an MAPE of 0.55 (*ceteris paribus*), but can we specify an appropriate threshold for a general class of models? In theory, perhaps; in practice, I'm not aware of any that I would confidently apply. Rules of thumb can be defined (as in the case of the TS), but since many of these measures are not normalized, their interpretation depends on the magnitude of the model's target variable. The root of the problem is that all of these error measures are unbounded (at least in one direction). If we know a value is bounded within a range, we can assess how extreme the value is within that range. No such analogous relationship exists for these forecast measures. The smaller the error, the better, but it is unclear how large a model's error can or should be before one should re-evaluate the model...and thus is the nature of statistics.

While it can be argued that there are ways to introduce a forecast criterion given a particular measure (and I would be interested in any papers that establish an appropriate criterion), it is wiser to avoid this in practice. Simple forecast measures are better left for making model-to-model comparisons of performance.