Planning and forecasting throws up some major challenges – ensuring the shelves are stocked with gin and tonic, for example. Sam Tulip looks at how companies are rising to the challenge of generating and acting on reliable forecasts in an increasingly fast-paced and apparently unpredictable world…
Investment in forecasting has declined as a priority over the past three years, according to the report “Dynamic Distribution Disruption 2019 – State of Retail Supply Chain Report”.
“In line with less focus on forecasting by retailers in this year’s survey, there appears to be a regression in capability also,” says Alex Hadwick, head of research at EyeforTransport. “There was a slight rise in the percentage reporting that they had no forecasting capabilities and a decline in those reporting more advanced forecasting systems. While this may reflect a stronger focus on streamlining the logistics process itself in an environment where spending priorities must be carefully weighed up, it nonetheless represents a capability gap. Stronger predictive analytics promises reduced wastage and greater efficiencies on top of existing technologies, although achieving accurate modelling can be a challenge for many companies”.
Curiously, investment in improved visibility of inventory is a much stronger priority. It isn’t obvious how better sight of inventory helps, if forecasting deficiencies mean that the firm still doesn’t know how much inventory it ought to have. Nonetheless, Hadwick expects forecasting to “return in importance in the future as it remains something that companies in the space are aware has a role to play”.
Some role. According to the US Census Bureau, quoted in a JDA Software paper “The future of forecasting”, the ratio of inventory to sales has been on the rise since 2011, “in response to new, more volatile, sales channels” – internet shopping. JDA also quotes figures from the research firm IHL Group showing that “in 2015 the cost to companies of overstocking was around $470 billion and of under-stocking $630 billion worldwide”.
Such numbers seem incredible, until the full range of cost impacts is considered. It isn’t just about stock mark-downs and write-offs on the one hand, and lost sales on the other. There are the inefficiencies in warehousing, handling and transport; sub-optimal procurement terms; increased administrative burdens; distorted promotion and marketing priorities; wasteful use of working capital – the list goes on.
But it is also a reminder that there are many different customers for demand forecasts, with different requirements for timescales and granularity, and indeed differing views of what is significant in a measurement of demand.
So, for example, the distribution network needs an item-level forecast of what is needed on the shelves tomorrow. Production operations may need a range of forecasts, from precise numbers of individual SKUs to be made next week to much less specific forecasts of, say, the labour required on a line in the next quarter.
Procurement may need forecasts to support supply contracts of a year or more in duration. Meanwhile, a finance department has quite different needs – it probably doesn’t care precisely which products are being manufactured or sold, but is keenly interested in, for example, the implications of seasonal patterns of inventory build-up and sales for cash flow requirements, or in which currencies payment will be received.
The upshot is that many companies in practice require several different demand forecasts, or at least several different ways of slicing and interpreting the forecast. In a 2015 paper on supply chain metrics, Gartner placed demand forecasts at the top of the hierarchy. “After all, a forecast is not simply a projection of future business; it is a request for product and resources that ultimately impacts almost every business decision the company makes across sales, finance, production management, logistics and marketing”.
That last quote is from a paper by Logility, “Eight Methods to Improve Forecast Accuracy in 2019”. Forecasting models are classically qualitative, quantitative, or hybrid. Qualitative models tap supposedly knowledgeable individuals for their subjective experience of how products, customers and markets are likely to behave in current and anticipated future conditions.
Quantitative models objectively apply maths to data sets, principally historic performance data, using defined and agreed logic. In practice, most firms have to develop some sort of hybrid model because, for example, there is insufficient data around new products (or new promotion strategies, or new channels to market) and/or there is a need to include information that is itself uncertain because it is also a forecast – future weather conditions being the obvious example.
The Logility paper says: “For many supply chain scenarios, it’s typically best to employ a variety of methods to obtain optimal forecasts. Ideally, managers should take advantage of several different methods and build them into the foundation of the forecast. The best practice is to use automated method switching to accommodate selection and deployment of the most appropriate forecast method for optimal results.
“Advanced demand planning and forecasting systems automate many of the functions required to select, model and generate multi-echelon forecasts, lifting the burden of manually intensive approaches and accelerating sensitivity to model changes as market conditions evolve. A best practices approach also must include the ability to incorporate personal expertise and weight the various factors in generating forecasts”.
The good news is that there are many good systems on the market that can automate the production of forecasts and feed them into planning systems while allowing forecasters to tweak the variables to play “what if?”, and to test for sensitivity.
However, there are also many different mathematical and statistical techniques that can be applied, and they need to be chosen according to the characteristics of the products and markets in question. A moving average, for example, works for products where demand varies randomly with no significant seasonality or trend, whereas the more exotic “Modified Holt-Winters” is a best-fit technique that works well with seasonal demand. There are approaches that work for new products, for end-of-lifecycle products, for slow-moving or intermittent demand, and for products where demand is to some extent dependent on another product (the gin AND the tonic).
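As a rough illustration of how the choice of technique follows the demand pattern, the sketch below contrasts a simple moving average with standard Holt-Winters (triple exponential smoothing) using the open-source statsmodels library rather than any vendor’s “modified” variant. The window length, weekly seasonality and synthetic demand series are assumptions for illustration only.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def moving_average_forecast(history, window=28):
    """Flat forecast for demand that varies randomly with no trend or
    seasonality: tomorrow looks like the average of the recent past."""
    return float(np.mean(history[-window:]))

def holt_winters_forecast(history, season_length=7, horizon=7):
    """Standard Holt-Winters for demand with a repeating seasonal pattern,
    e.g. a weekly cycle: fits level, trend and seasonal components."""
    model = ExponentialSmoothing(history, trend="add", seasonal="add",
                                 seasonal_periods=season_length)
    return model.fit().forecast(horizon)

# Synthetic daily demand with a weekly peak-and-trough cycle
rng = np.random.default_rng(0)
days = np.arange(364)
demand = 100 + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, days.size)

print(moving_average_forecast(demand))   # ignores the weekly cycle
print(holt_winters_forecast(demand))     # reproduces the weekly cycle
```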
There is a whole range of newly emerging techniques and technologies around short-range “demand sensing”, applying things like machine learning pattern recognition to real-time ePOS data, or natural language analysis to spot influential trends in social media. New, external data sets can also be automatically incorporated into the forecast – weather and traffic predictions, or economic forecasts. Relex Solutions customer WH Smith, for example, uses machine learning to interpret passenger number figures supplied by the airports where many of its stores are located, which is helping to improve sales forecast accuracy and drive down wastage in fresh goods.
One company may therefore apply different techniques to different products and markets, or to the same product at different stages in the lifecycle. For some demand patterns the choice of technique is really critical to generating a robust forecast; in other cases, less so. And the rough-cut methods that may be adequate to create a three-month production forecast may not be suitable to support daily distribution and replenishment activities. But even with automation, there are limits to how many different techniques can sensibly be employed, and the law of diminishing returns kicks in.
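The “automated method switching” that Logility describes can be pictured, in a deliberately simplified form, as a back-test: fit each candidate method to all but the most recent history, score it against what actually happened, and deploy the method with the lowest error for that SKU. The candidate methods and error metric below are illustrative assumptions, not any particular vendor’s logic.

```python
import numpy as np

def naive(history, horizon):
    """Repeat the last observation."""
    return np.repeat(history[-1], horizon)

def moving_average(history, horizon, window=28):
    """Repeat the mean of the recent past."""
    return np.repeat(history[-window:].mean(), horizon)

def seasonal_naive(history, horizon, season=7):
    """Repeat the most recent weekly cycle."""
    return np.resize(history[-season:], horizon)

CANDIDATES = {"naive": naive, "moving_average": moving_average,
              "seasonal_naive": seasonal_naive}

def pick_method(history, holdout=28):
    """Back-test every candidate on a holdout period and return the name
    of the method with the lowest mean absolute error for this SKU."""
    train, test = history[:-holdout], history[-holdout:]
    scores = {name: np.abs(fn(train, holdout) - test).mean()
              for name, fn in CANDIDATES.items()}
    return min(scores, key=scores.get)

# Example: a SKU with a strong weekly cycle should select the seasonal method
rng = np.random.default_rng(1)
weekly = 100 + 30 * np.sin(2 * np.pi * np.arange(365) / 7) + rng.normal(0, 5, 365)
print(pick_method(weekly))   # expected: 'seasonal_naive'
```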
And while forecasting systems may be far better than a human in discerning emerging trends and correlations, even with AI that is not the same thing as detecting and understanding causation. Whether using qualitative, quantitative, or hybrid approaches, managers have to base these on underlying business logics that properly represent the real world.
The logic has to be used to design the appropriate forecasting regime, and also to “reality check” the output. It is for example all too easy to design a system with an abnormal sensitivity to a small change in a variable, or that extends a trend line beyond what is feasible, or that spots a pattern in a random series of events (although human forecasters are rather good at that as well).
One of the challenges here is in distinguishing between dependent and independent variables. For example, a food retailer may find in the summer that sales of both steak and salad are positively correlated with hot weather, presumably because of an increase in barbecues.
So steak sales and salad sales are dependent on temperature – forecast that and you can forecast sales? But in January, increased steak sales may be correlated with cold weather, while salad sales slump. That is a simple example, but even so it is not straightforward to develop the appropriate logic.
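A toy calculation, with entirely synthetic numbers chosen only to illustrate the point above, makes the trap concrete: the correlation between steak sales and temperature is strongly positive in summer and strongly negative in winter, so a model using temperature alone, pooled across the year, would average two opposite effects into nonsense.

```python
import numpy as np

rng = np.random.default_rng(1)

# Summer: hotter days -> more barbecues -> more steak (and salad)
summer_temp = rng.uniform(15, 32, 200)
summer_steak = 50 + 3.0 * summer_temp + rng.normal(0, 10, 200)

# Winter: colder days -> more hearty meals -> more steak, less salad
winter_temp = rng.uniform(-5, 12, 200)
winter_steak = 120 - 2.5 * winter_temp + rng.normal(0, 10, 200)

for label, temp, steak in [("summer", summer_temp, summer_steak),
                           ("winter", winter_temp, winter_steak)]:
    print(label, "corr(steak, temp) =", round(np.corrcoef(temp, steak)[0, 1], 2))

# Pooling both seasons and forecasting steak from temperature alone would
# blend a positive and a negative relationship into a meaningless average.
```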
But if robust logics can be developed from deep understanding of how the relevant markets really work, then these can be used to create multi-variate demand signal management systems which don’t just predict future demand from past and current data and events, but can discover and analyse causal relationships. That means that it starts being possible not just to react to a demand signal, but to begin to assess its likely importance over the longer term. That in turn allows a more reasoned approach to managing and exploiting future demand.
According to JDA Software, machine learning and cognitive computing can take things a few stages further by enabling “probabilistic forecasting”. Any forecast has, by definition, some likelihood of error. Past experience may or may not give the firm some idea of how big that error might be, or in which direction, but often errors are hidden – if 50 per cent of SKUs are under-stocked at store level, and 50 per cent overstocked, it may look as though the forecast for the product family was 100 per cent accurate, when actually it was wrong for every single line!
Using machine learning, “hundreds of different data sources can be analysed to evaluate the influence of all input data – such as events like promotions, weather, Facebook posts and more – on customer demand, down to the stock keeping unit, location or time. Each calculated prediction quantifies the likelihood of different demand outcomes. This information then enables demand planners and advanced demand-supply matching algorithms to make informed decisions and ensure KPI alignment, optimally weighing the risks associated with different demand outcomes”.
In practice this means that the firm not only has a better idea of how likely the forecast is to be wrong by a given factor, but also in which direction – the risk of demand being significantly greater, or significantly less, than forecast is often not evenly distributed on a classic bell curve. Sometimes that is obvious – there may be a baseline of firm or regular orders that are highly unlikely to disappear, whereas more new orders might be received than are expected.
But often the imbalance goes undetected even in hindsight – a persistent over-stock soon becomes embarrassingly obvious (“Oops, the manager over-ordered” said the Reduced Price stickers in a supermarket that, oddly, is no longer with us), but recurring stock-outs and the resulting lost sales are probably under-reported – how do you know what a shopper didn’t buy? Probabilistic forecasting can suggest to managers where the risks lie, and where to focus attention on risk mitigation.
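To make the idea of probabilistic forecasting more tangible, the sketch below assumes a model that outputs a sample of possible demand outcomes rather than a single number; here the sample is simply simulated. The quantiles show that upside and downside risk need not be symmetric, which a single point forecast would never reveal.

```python
import numpy as np

rng = np.random.default_rng(2)
# Skewed demand scenario (assumed): a firm baseline of regular orders plus
# an occasional burst of new orders, giving a long upside tail.
baseline = 80
demand_scenarios = baseline + rng.lognormal(mean=3.0, sigma=0.6, size=10_000)

point_forecast = demand_scenarios.mean()
p10, p50, p90 = np.percentile(demand_scenarios, [10, 50, 90])

print(f"point forecast ~ {point_forecast:.0f}")
print(f"P10 {p10:.0f} | P50 {p50:.0f} | P90 {p90:.0f}")
# The gap between P90 and the median is much wider than between the median
# and P10: under-forecasting is the bigger risk in this scenario.
```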
All of which should help make demand forecasting more useful (as a diagnostic tool rather than just as an instruction to production or distribution) as well as more accurate, whatever that means. As you might expect, there are, according to George E Palmatier in a paper on “Forecast Measurement and Evaluation” for Oliver Wight Associates, “literally an infinite number of ways to measure and present forecast measurements”. It rather depends who is asking the question, and as seen above, gross and systemic inaccuracies can be effectively smoothed away by higher-level or aggregate measurements.
So beyond looking at error or deviation at individual item, product family, and higher levels, Palmatier suggests establishing tolerances for individual item level forecast deviation and reporting the number of items with forecasts outside tolerance. This “also helps to identify bias in the forecast by observing the number of items above forecast versus the number of items below”.
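A simple worked example of Palmatier’s point (the per-SKU figures and the ±15 per cent tolerance band are made up for illustration): the product-family total looks healthy even though every individual item is well outside tolerance, and the count of items above versus below forecast gives a crude signal of bias.

```python
import numpy as np

forecast = np.array([100, 120,  80,  60, 150])   # per-SKU forecast
actual   = np.array([ 70, 150,  60,  90, 180])   # per-SKU actual demand

# The product-family aggregate looks respectable...
family_error = (actual.sum() - forecast.sum()) / forecast.sum()
print(f"family-level error: {family_error:+.1%}")

# ...but every individual item breaches an assumed +/-15% tolerance band
pct_error = (actual - forecast) / forecast
tolerance = 0.15
outside = np.abs(pct_error) > tolerance
print(f"items outside tolerance: {outside.sum()} of {forecast.size}")

# Counting items above versus below forecast hints at bias in the forecast
print(f"above forecast: {(pct_error > 0).sum()}, below: {(pct_error < 0).sum()}")
```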
But he raises another important point. “The objective in forecasting is to provide sufficient information in sufficient detail in sufficient time for manufacturing [in this case] to economically respond to change”. As cycle times reduce “time fences move closer in and the need to forecast accurately far into the future is diminished”. The corollary of that is that there is little virtue in making a next day sales forecast a few percentage points more accurate if the distribution network can only respond in three days.
On the other hand, remarkable improvements can be made even in very short-cycle operations. Relex Solutions’ customer Booths, the high-end north-west England supermarket chain, competes on freshness, quality and regional specialities, but lacks the negotiating power of its larger rivals and can’t insist on suppliers providing daily delivery for all items.
Nonetheless, improved demand forecasting, including better account taken of weather, and improved handling of product introductions and promotions, has reduced overall spillage by 10 per cent (over 20 per cent in chilled products) while improving shelf availability. This despite a challenging environment with very early order cut-offs for 24 hour delivery, and often very small volumes to be forecast.
These developments help explain why vendors are less likely now to talk about distinct forecasting and S&OP functions (even if that is how their systems still work) and more about Integrated Business Planning or, more generally, optimising.
Alexandra Sevelius of Relex summarises the current state of play. “There is a lot of talk about using machine learning and AI in supply chain, but the discussion should revolve around finding the best possible solution to tackle the challenge we’re trying to solve. Machine learning may not always be the best choice. Rather, use a combination of different methods and algorithms, because real-life problems cannot be solved with just one:
• Machine learning and/or statistical analysis is used to find patterns and make predictions based on data;
• Optimisation is used to make the best possible decision based on the predictions;
• Heuristics and rules are needed since there are always situations where the quality or amount of data available for training a model or performing statistical analysis is too small;
• The best option is to use a combination of all of these approaches and constantly evaluate methods to further improve results.
But always remember: the value you get from any type of method or algorithm boils down to the quality and quantity of the data you feed in”.
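As a closing illustration of that combination of approaches, here is a deliberately simplified sketch; all thresholds, costs and the demand model are assumptions for illustration, not Relex’s method. A statistical prediction is used where there is enough history, a heuristic fallback where there is not, and a small optimisation step, a classic newsvendor calculation, turns the resulting demand distribution into an order quantity.

```python
import numpy as np

def predict_demand(history, min_history=60):
    """Prediction step: use the data when there is enough of it,
    otherwise fall back to a simple rule (heuristic)."""
    if len(history) >= min_history:
        recent = history[-min_history:]
        return recent.mean(), recent.std()      # statistical prediction
    # Heuristic for new products with little data: assume category-average
    # demand with generous uncertainty (both numbers are assumptions)
    return 50.0, 25.0

def order_quantity(mu, sigma, unit_cost=2.0, unit_price=5.0, n_samples=10_000):
    """Optimisation step: pick the stock level that maximises expected profit
    (a newsvendor-style calculation over simulated demand outcomes)."""
    rng = np.random.default_rng(0)
    demand = np.maximum(rng.normal(mu, sigma, n_samples), 0)
    candidates = np.arange(0, mu + 4 * sigma)
    expected_profit = [
        (unit_price * np.minimum(demand, q) - unit_cost * q).mean()
        for q in candidates
    ]
    return int(candidates[int(np.argmax(expected_profit))])

history = np.array([48, 55, 61, 40, 52] * 20)   # synthetic sales history
mu, sigma = predict_demand(history)
print("order quantity:", order_quantity(mu, sigma))
```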
This article first appeared in Logistics Manager, April 2019.