Forecast Accuracy Measurement


Rick Blair

What’s your forecast accuracy telling you? Stop and ask a few questions
Forecast accuracy is an important performance metric in any effective S&OP process, but it can be measured in various ways. Comparing your company’s accuracy to an industry standard will be difficult to impossible if you don’t know the details behind the measurement. More importantly, the metric needs to resonate within your organization as a meaningful indicator of forecast relevance. So then…
What details should one consider for forecast accuracy measurement?

Here’s the Steelwedge Top Six:
1. Aggregation level: Are you measuring accuracy at a product SKU or family level? What about other hierarchy levels? Odds are your accuracy will appear better at an aggregated level such as family. This happens because the variability in forecasts and actuals tends to cancel out as data is combined, smoothing the results and lowering the error calculations. Recommendation: Measure accuracy at the same level at which the majority of forecasts are captured.
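A quick sketch of the cancellation effect, using two hypothetical SKUs in one family whose errors run in opposite directions:

```python
# Hypothetical data: one SKU over-forecast, one under-forecast.
sku_forecast = {"SKU-A": 100, "SKU-B": 100}
sku_actual = {"SKU-A": 120, "SKU-B": 80}

def pct_error(forecast, actual):
    """Absolute percent error of a single forecast."""
    return abs(forecast - actual) / actual

# SKU-level errors: each SKU is 17-25% off.
sku_errors = {k: pct_error(sku_forecast[k], sku_actual[k]) for k in sku_forecast}

# Family-level error: the over- and under-forecast cancel completely.
family_error = pct_error(sum(sku_forecast.values()), sum(sku_actual.values()))
```

Here the family-level error comes out to zero even though neither SKU forecast was close, which is exactly why aggregated accuracy can flatter the process.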
2. Error Calculation: In its most basic form, accuracy is a measure of the difference between a prediction and what actually happened. How far off were we? Error is the difference between forecast and actual, often captured as a percentage value called percent error. Mean absolute percent error (MAPE) averages those percent errors; since we don't want over- and under-forecasts to cancel each other out, we use the absolute value of each error. There are other methods, but MAPE is fairly common. Weighted MAPE gives greater importance (weight) to items with greater activity, where activity is typically defined as an item's proportion of the total. Recommendation: Keep it simple. Make sure people understand the measurement and how they can impact it.
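The two calculations above can be sketched as follows (a minimal illustration, not any particular vendor's implementation):

```python
def mape(forecasts, actuals):
    """Mean absolute percent error: average of |forecast - actual| / actual."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def weighted_mape(forecasts, actuals):
    """Weight each item's percent error by its share of total actual volume."""
    total = sum(actuals)
    return sum((a / total) * abs(f - a) / a for f, a in zip(forecasts, actuals))

plain = mape([100, 100], [120, 80])          # simple average of the two errors
weighted = weighted_mape([100, 100], [120, 80])
```

Note that the weighting algebraically simplifies to total absolute error divided by total actuals, which is why weighted MAPE is sometimes written that way.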
3. Unit of Measure: "We forecast in both units and dollars. Which should we use for measuring accuracy?" Weighted accuracy measures, such as weighted MAPE, give greater influence to items that constitute a greater portion of sales volume. High-dollar but low-unit-volume items will contribute much more to a measurement in dollars. Conversely, high-unit-volume, low-dollar items will factor in more prominently using a unit-based forecast. Which is preferable? It really depends on your business. Recommendation: Consider the important business decisions made in the Executive S&OP meeting. Are they usually focused on $ or units?
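To see how the unit of measure shifts the result, here is a hypothetical two-item portfolio: a premium item (high dollar, low unit volume) and a commodity item (low dollar, high unit volume), where only the premium item misses:

```python
# Hypothetical prices and volumes for illustration only.
price = {"premium": 1000.0, "commodity": 1.0}
forecast_units = {"premium": 100, "commodity": 10000}
actual_units = {"premium": 80, "commodity": 10000}  # only the premium item missed

def weighted_mape(forecasts, actuals):
    """Weighted MAPE as total absolute error over total actuals."""
    total = sum(actuals.values())
    return sum(abs(forecasts[k] - actuals[k]) for k in actuals) / total

unit_wmape = weighted_mape(forecast_units, actual_units)

forecast_dollars = {k: forecast_units[k] * price[k] for k in price}
actual_dollars = {k: actual_units[k] * price[k] for k in price}
dollar_wmape = weighted_mape(forecast_dollars, actual_dollars)
```

In units, the miss is buried by the commodity volume (error well under 1%); in dollars, the same miss dominates (roughly 22%). Same forecasts, same actuals, very different story.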
4. Offset period: If we measure accuracy using the most recent forecast for a given period, it will likely be more accurate than a forecast made three months prior. That's because we have better information as we get closer to the current period. But how valuable is a forecast made in the very near term if the organization cannot act upon it? It has virtually no value. The offset period defines how many periods before the actual period the forecast being measured was captured. For example, with an offset of 3 months, we measure accuracy using actuals from August and the forecast for August that was captured in May. Recommendation: Set the offset period to most closely match the organization's planning horizon.
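The May-for-August example might be scored like this (a sketch with made-up numbers; forecasts keyed by the month they were made and the month they target):

```python
# Hypothetical forecast snapshots: (month made, month forecasted) -> value.
forecasts = {
    ("May", "Aug"): 900,   # 3-month-ahead forecast for August
    ("Jul", "Aug"): 980,   # 1-month-ahead forecast, usually closer but less actionable
}
actuals = {"Aug": 1000}

# With a 3-month offset policy, score August against the May snapshot.
made_in, target = "May", "Aug"
offset_error = abs(forecasts[(made_in, target)] - actuals[target]) / actuals[target]
```

Scoring the July snapshot instead would show a smaller error, but that number says little about whether Operations could actually act on the plan.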
5. Time buckets: Should we measure accuracy using weeks, months or quarters? Typically, you will want to measure accuracy in the same period buckets used to forecast. In some cases, where demand patterns follow a "hockey stick" pattern with high demand in the last month of a quarter, it may be more appropriate to use quarterly buckets. Recommendation: Measure accuracy using the same buckets you use to forecast unless there's a compelling reason to move to a bigger bucket.
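A small illustration of why the bucket matters for hockey-stick demand, using invented numbers where orders slip into month 3 but the quarterly total lands on plan:

```python
# Hypothetical quarter: demand slipped toward month 3, totals unchanged.
monthly_forecast = [100, 100, 400]
monthly_actual = [50, 80, 470]

def mape(forecasts, actuals):
    """Mean absolute percent error across the given buckets."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(forecasts)

monthly_mape = mape(monthly_forecast, monthly_actual)
quarterly_mape = mape([sum(monthly_forecast)], [sum(monthly_actual)])
```

Monthly measurement punishes the timing misses heavily, while the quarterly bucket shows the total demand was forecast perfectly; which view is "right" depends on whether the business plans by month or by quarter.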
6. Which Forecast?: In a collaborative S&OP process, there may be several forecasts captured (Sales, Marketing, Demand Planning, Consensus, etc.). Which should we use for accuracy measurement? If you're only going to use one, go with the forecast Operations uses to build or procure product; a typical example is the Consensus Plan. Measuring accuracy against multiple forecasts will provide greater insight into potential areas for improvement. Recommendation: Measure accuracy using the forecast provided to Operations. Publish results throughout the organization. Also, measure accuracy across other forecasts to isolate areas for improvement.
Are there other aspects you’d add to this list? Please let us know.