Measuring forecast error (MAPE, WMAPE, Bias)
Forecast error measures how far off a forecast is from the actual value. There are three commonly used metrics to quantify this error:
1. Mean Absolute Percentage Error (MAPE)
MAPE expresses the average forecast error as a percentage of actual demand: for each period, take the absolute difference between the actual and the forecast, divide by the actual, and average those percentages across periods.
For example, if your forecast predicts 100 units but the actual demand is 120, the absolute percentage error for that period is 20/120, or roughly 16.7%.
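The definition above can be sketched in a few lines of Python (the function name and the example data are illustrative, not from the original):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: average of |actual - forecast| / actual, as a percent."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100

# A forecast of 100 units against an actual demand of 120:
# |120 - 100| / 120 = 0.1666..., i.e. about 16.7%
print(round(mape([120], [100]), 1))  # → 16.7
```

Note that dividing by the actual makes MAPE undefined when actual demand is zero, which is one reason practitioners often prefer WMAPE.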
2. Weighted Mean Absolute Percentage Error (WMAPE)
WMAPE weights each error by the size of the actual demand: it is the sum of absolute errors divided by the sum of actuals. Errors on high-volume items therefore count for more than equal-sized percentage errors on low-volume items.
Imagine two products with similar percentage errors, one selling thousands of units and one selling a handful. WMAPE is dominated by the high-volume product, which is usually where forecast accuracy matters most, and it remains well defined even when individual actuals are small or zero.
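A minimal sketch of this formula, with made-up demand figures chosen to show how WMAPE and plain MAPE can disagree:

```python
def wmape(actual, forecast):
    """WMAPE: total absolute error divided by total actual demand, as a percent."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual) * 100

# High-volume product: actual 1000, forecast 900 (10% error, 100 units off).
# Low-volume product:  actual 10,   forecast 20  (100% error, 10 units off).
actual, forecast = [1000, 10], [900, 20]

# WMAPE = (100 + 10) / (1000 + 10) ≈ 10.9% — dominated by the high-volume item.
# A plain per-item MAPE would average 10% and 100% to 55%, letting the
# tiny product dominate the score.
print(round(wmape(actual, forecast), 1))  # → 10.9
```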
3. Bias
Bias measures the systematic, signed difference between the forecast and the actual value. Unlike MAPE and WMAPE, errors in opposite directions cancel out, so bias reveals a consistent tendency to over- or under-forecast.
For example, if your forecast consistently predicts higher demand than actually occurs, the bias is positive (over-forecasting); consistently predicting too low gives a negative bias.
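A simple sketch, using the common convention that bias is the mean of (forecast − actual); the sign convention varies between teams, so treat this as one reasonable choice rather than a standard:

```python
def bias(actual, forecast):
    """Mean forecast bias: average of (forecast - actual).
    Positive means systematic over-forecasting, negative means under-forecasting."""
    return sum(f - a for a, f in zip(actual, forecast)) / len(actual)

# A forecast that runs 10 units high every period:
print(bias([100, 110, 90], [110, 120, 100]))  # → 10.0

# Offsetting errors cancel, so bias can be zero even when MAPE is large:
print(bias([100, 100], [120, 80]))  # → 0.0
```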
Understanding these errors:
Lower MAPE and WMAPE indicate a better fit between the forecast and the actual values; for bias, a value close to zero is best, since bias can be negative as well as positive.
Higher MAPE and WMAPE, or a bias far from zero in either direction, indicate a worse fit.
Choosing the right metric depends on the forecasting context: whether you care most about per-period percentage accuracy (MAPE), volume-weighted accuracy across many items (WMAPE), or a systematic tendency to over- or under-forecast (bias).
Examples:
A company might use MAPE to compare the performance of different forecasting models.
Another company might use WMAPE when aggregating accuracy across products with very different volumes (e.g., demand for different product categories), so that errors on high-volume items carry the most weight.
A forecasting model that consistently predicts high values will show a positive bias, a problem that MAPE and WMAPE alone can mask because they ignore the direction of the error.