Multi-echelon Inventory Optimization and Lean/Six Sigma

The Emerging Role of Optimization in Business Decisions

For many, the idea of “optimization” once summoned images of Greek letters juxtaposed in odd arrangements, kept in black boxes that spewed out inscrutable results.  Optimization was sometimes considered a subject best left to impractical theorists, sequestered in small cubicles deep in the bowels of the building, to which few paths led and from which there were no paths out.  From that perspective, optimization had to be reserved for special cases of complex decisions with little relevance for day-to-day operations.

That perception was never reality, and today growing numbers of business managers understand the role of optimization.  Leaders who leverage it intelligently are not just valuable assets, but essential to achieving and sustaining a more valuable enterprise.  Global competition mandates that executives never “settle” in their decisions, but constantly make higher-quality decisions in less time.  Optimization helps decision-makers do just that.  Exponential increases in computing power, along with advances in software, have enabled the use of optimization in an ever-widening array of business decisions.

 

How Lean Thinking Helps

Lean principles are applied to drive out waste.  One of the most prominent lean tools for identifying waste is Value Stream Mapping, which helps surface the eight wastes: overproduction, waiting, over-processing, unnecessary inventory, unnecessary motion, handling and transportation, defects, and underutilized talent.  In inventory management, waste reduction often comes through shorter lead times and smaller lot sizes.

The reduction of lead times and lot sizes through lean in manufacturing has focused on reducing setup time to eliminate waiting and work-in-process inventory, as well as on the frequent use of physical, visible signals for replenishment of consumption.  One challenge is that consumption, or “true demand”, at the end of the value network is never uniform from period to period, despite efforts to level demand upstream.

Acting and deciding are closely related and need to be carefully coordinated so that the end result does not favor faster execution over optimizing complex, interdependent tradeoffs.

 

The Importance of Six Sigma

Six sigma pursues reduced variability in processes.  In manufacturing, this relates most directly to controlling a production process so that defective lots or batches do not result.  It has been encapsulated in the acronym DMAIC:  define, measure, analyze, improve, control.

There has been a natural interest in the convergence of lean and six sigma in manufacturing and inventory management so that fixed constraints like lead time and lot size can be continuously attacked while, at the same time, identifying the root causes of variability and reducing or eliminating them.

There are obvious limitations to both efforts, of course.  The physics and economics of reducing lot size and lead time place limits on lean efforts, while six sigma is bounded by physics and market realities (the marketplace is never static).

Until it is possible to economically produce a lot size of one with a lead time of zero and infinite capacity, manufacturers will need to optimize crucial tradeoffs. 
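To make those lot-size economics concrete, here is a minimal sketch (illustrative numbers only, not anyone’s production data) of the textbook economic order quantity, which shows why the economic lot size shrinks toward one as lean drives setup cost toward zero:

```python
import math

def eoq(annual_demand, setup_cost, holding_cost_per_unit):
    """Textbook economic order quantity: the lot size that balances
    fixed setup/order cost against inventory holding cost."""
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

# As lean drives setup cost down, the economic lot size shrinks toward one.
for setup_cost in (500.0, 50.0, 5.0, 0.5):
    print(f"setup cost {setup_cost:6.1f} -> EOQ {eoq(12000, setup_cost, 2.0):7.1f}")
```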

 

Crucial Tradeoffs for Manufacturers

In a manufacturing organization, 60% to 70% of all cash flow is often spent on the cost of goods sold – purchasing raw materials, shipping and storing inventory, transforming materials or components into finished goods, and distributing the final product to customers.  So, deciding just how much to spend on which inventory in what location and when to do it is crucial to success in a competitive global economy.  Uncertain future demand and variations in supply chain processes mandate continuous lean efforts to reduce lead times and lot/batch sizes as well as six sigma efforts to reduce and control variability.

As long as we operate in a dynamic environment, manufacturing executives will continue to face decisions regarding where (across facilities and down the bill of material) to make-to-order vs. make-to-stock and how much buffer inventory to position between operations to adequately compensate for uncertainty while minimizing waste.

Taken in complete isolation, determining a buffer for a make-to-stock finished good at the point of fulfillment for independent demand, measured by service level (not fill rate), is not trivial, but it is tractable.  But for almost every manufacturer, the combination of processes that link levels in the BOM with geographically dispersed suppliers, facilities and customers means that many potential buffer points must be considered.  Suddenly, the decision seems almost impossible, but advances in inventory theory and multi-echelon inventory optimization have proven effective in addressing these tradeoffs, improving working capital position and growing cash flow.
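For that tractable, isolated case, a minimal sketch of the standard single-echelon calculation might look like the following (the normality assumption and every number are purely illustrative; the multi-echelon problem just described demands much more):

```python
from math import sqrt
from scipy.stats import norm

def safety_stock(service_level, mean_demand, sd_demand, mean_lt, sd_lt):
    """Single-echelon safety stock for a target cycle service level,
    assuming normally distributed demand and lead time (illustrative)."""
    z = norm.ppf(service_level)  # e.g. ~1.645 for a 95% service level
    sd_demand_during_lt = sqrt(mean_lt * sd_demand**2 + (mean_demand * sd_lt)**2)
    return z * sd_demand_during_lt

# Hypothetical item: 100 units/period mean demand, 4-period mean lead time.
print(round(safety_stock(0.95, 100, 30, 4, 1)))
```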

 

So What?

In many cases, the key levers for eliminating waste and variability in any process are the decision points.  When decisions consider all the constraints, multiple objectives, and dependencies with other decisions, significant amounts of wasted time and effort are eliminated, reducing the variability inherent in a process whose tradeoffs among conflicting goals and limitations are left unoptimized.

Intuition, or incomplete and inadequate analysis, will only result in decisions permeated with additional cost, time and risk.  Optimization not only delivers a better starting point; it gives decision-makers insight into the inputs that are most critical to a given decision.  Put another way, a planner or decision-maker needs to know the inputs (e.g. resource constraints, demand, cost) for which a small change will change the plan, and the inputs for which a change will have little impact.
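As a toy illustration of that sensitivity insight, you can re-solve a small plan with perturbed inputs and watch how much the plan moves.  Everything here is hypothetical, a sketch of the idea rather than anyone’s planning model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy plan: maximize profit on two products sharing one capacity
# constraint; linprog minimizes, so profits are negated.
profit = np.array([40.0, 30.0])          # profit per unit
A_ub = np.array([[2.0, 1.0]])            # hours per unit
bounds = [(0, 60), (0, 60)]              # demand caps per product

base = linprog(-profit, A_ub=A_ub, b_ub=[100.0], bounds=bounds)

# Finite-difference sensitivity: how much does the plan shift per input change?
for extra_hours in (1.0, 5.0):
    bumped = linprog(-profit, A_ub=A_ub, b_ub=[100.0 + extra_hours], bounds=bounds)
    shift = np.abs(bumped.x - base.x).sum()
    print(f"+{extra_hours:.0f} hours of capacity -> plan shifts by {shift:.2f} units")
```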

Multi-echelon inventory optimization perfectly complements lean and six sigma programs to eliminate waste by optimizing the push/pull boundary (between make-to-stock and make-to-order) and inventory buffers as lean/six sigma programs drive down structural supply chain barriers (e.g. lead time and lot/batch size) and reduce variability (in lead times, internal processes and demand).

Given constant uncertainty in end-user demand and the economics of manufacturing in an extremely competitive global economy, business leaders cannot afford not to make the most of all the tools at their disposal, including lean, six sigma, and optimization.

Update on Forecasting vs. Demand Planning

Often, the terms “forecasting” and “demand planning” are used interchangeably. 

The confusion arises from the fact that one concept is actually a subset of the other. 

Forecasting is the process of mathematically predicting a future event.

As a component of demand planning, forecasting is necessary, but not sufficient.

Demand planning is that process by which a business anticipates market requirements.  

This certainly involves both quantitative and qualitative forecasting.  But demand planning requires a holistic process that includes the following steps:

1. Profiling SKUs with respect to volume and variability in order to determine the appropriate treatment (a minimal profiling sketch in code follows this list):

For example, high-volume, low-variability SKUs will be easy to forecast mathematically and may be suited to lean replenishment techniques.  Low-volume, low-variability items may be best suited to a simple re-order point.  High-volume, high-variability SKUs will be difficult to forecast and may require a sophisticated approach to safety stock planning.  Low-volume, high-variability SKUs may require a thoughtful postponement approach, resulting in an assemble-to-order process.  This analysis is complemented nicely by a Demand Plan Sanity Check, which should be an ongoing part of your forecasting process.

2. Validation of qualitative forecasts from functional groups such as sales, marketing, and finance
3. Estimation of the magnitude of previously unmet demand
4. Prediction of underlying causal factors, where necessary and appropriate, through predictive analytics
5. Development of the quantitative forecast, including determination of the following:

  • Level of aggregation
  • Correct lag
  • Appropriate forecasting model(s)
  • Best settings for forecasting model parameters
  • Deduction of relevant forecast consumption

6. Rationalization of qualitative and quantitative forecasts and development of a consensus expectation
7. Planning for the commercialization of new products
8. Calculation of the impact of special promotions
9. Coordination of demand-shaping requirements with promotional activity
10. Determination of the range and confidence level of expected demand
11. Collaboration with customers on future requirements
12. Monitoring of actual sales and adjustment of the demand plan for promotions and new product introductions
13. Identification of sources of forecast inaccuracy (e.g. sales or customer forecast bias, a change in the data that requires a different forecasting model or different settings on an existing model, or a promotion or new product introduction that greatly exceeded or failed to meet expectations).
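As promised in step 1, here is a minimal profiling sketch.  The demand history, thresholds and treatment labels below are all hypothetical; the point is only to show the volume/variability quadrants in code:

```python
import pandas as pd

# Hypothetical demand history: one row per SKU per period.
history = pd.DataFrame({
    "sku":    ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "demand": [100, 105,  98,   5,  40,   2,   7,   6,   8],
})

profile = history.groupby("sku")["demand"].agg(volume="sum", mean="mean", sd="std")
profile["cv"] = profile["sd"] / profile["mean"]  # coefficient of variation

vol_cut, cv_cut = profile["volume"].median(), 0.5  # illustrative thresholds
treatment = {
    (True,  False): "lean replenishment",     # high volume, low variability
    (False, False): "simple re-order point",  # low volume, low variability
    (True,  True):  "safety stock planning",  # high volume, high variability
    (False, True):  "postponement / assemble-to-order",  # low vol, high var
}
profile["treatment"] = [
    treatment[(v > vol_cut, cv > cv_cut)]
    for v, cv in zip(profile["volume"], profile["cv"])
]
print(profile)
```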

The proficiency with which an organization can anticipate market requirements has a direct and significant impact on revenue, margin and working capital, and potentially market share.  However, as an organization invests in demand planning, the gains tend to be significant at the beginning of the effort, but diminishing returns set in much more quickly than in many other process improvement efforts.

This irony should not disguise the fact that significant ongoing effort is required simply to maintain a high level of performance in demand planning, once it is achieved.

It may make sense to periodically undertake a profiling exercise (see #1 above) to determine whether the results are reasonable, whether the inputs are being properly collected and integrated, and whether additional value could be gained through improved analysis, additional collaboration, or other means.

I’ll leave you once again with a thought for the weekend – this time from Ralph Waldo Emerson, “You cannot do a kindness too soon, for you never know how soon it will be too late.”

Thanks for stopping by and have a wonderful weekend!

The Winding Road toward the “Autonomous” Supply Chain (Part 2)

[Figure: 3d-matrix]

Last week, I began this train of thought with “The Winding Road toward the ‘Autonomous’ Supply Chain (Part 1)”.  Now, as this weekend approaches, I conclude my piece, but I hope to spur your ideas.

Detect, Diagnose, Decide with Speed, Precision & Advanced Analytics

Detection of incidental challenges (e.g. a shipment that is about to arrive late, a production shortfall) in your value network can be significantly automated to take place in near real-time.  Detection of systemic challenges will be more gradual and is based on the metrics that matter to your business, capturing customer service, days of supply, and so on.  But it is the speed, and therefore the scope, now possible that drives more value from detection today.

Diagnosing the causes of incidental problems is only limited by the organization and detail of your transactional data.  Diagnosing systemic challenges requires a hierarchy of metrics with respect to cause and effect (such as, or similar to, the SCOR® model).  Certainly, diagnosis can now happen with new speed, but it is the combination of speed and precision that makes a new level of knowledge and value possible through diagnosis.

With a clean, complete, synchronized data set and a proactive view of what is happening and why, you need to decide the next best action in a timeframe where it is still relevant.  You must optimize your tradeoffs and perform scenario (“what-if”) and sensitivity analysis.

Ideally, your advanced analytics will be on the same platform as your wrangled supra data set.  The Opalytics Cloud Platform (OCP) not only gives you state of the art data wrangling, but also provides pre-built applications for forecasting, value network design and flow, inventory optimization, transportation routing and scheduling, clustering and more.  OCP also delivers a virtually unlimited ability to create your own apps for decision modeling, leveraging the latest and best algorithms and solver engines.

Speed in detection, speed and precision in diagnosis, and the culmination of speed, precision and advanced analytics in decision-making give you the power to lift the performance of your value network to levels not previously possible (see Figure above).  Much of the Detect, Diagnose, Decide cycle, and the prerequisite data synchronization, can and will be automated by industry leaders.  Just how “autonomous” those decisions become remains to be seen.

As yet another week slips into our past, I leave you with a thought from Ralph Waldo Emerson, “There is properly no history, only biography.”

Have a wonderful weekend and thank you, again, for stopping by.

The Winding Road toward the “Autonomous” Supply Chain (Part 1)

There is a lot of buzz about the “autonomous” supply chain these days.  The topic came up at a conference I recently attended, where one topic of discussion was the supply chain of 2030.  But, before we turn out the lights and lock the door to a fully automated, self-aware, supply chain decision machine, let’s take a moment and put this idea into some perspective.  I’ve heard the driverless vehicle used as an analogy for the autonomous supply chain.  However, orchestrating a value network, where goods, information and currency pulse between facilities and organizations along the path of least resistance, may prove considerably more complex than driving a vehicle.  Most sixteen-year-olds can successfully drive a car, but you may not want to entrust your global value network to them.

Before you can have an autonomous supply chain, you need to accelerate what I call the Detect, Diagnose, Decide cycle.  In fact, as you accelerate the cycle you may learn just how much autonomy may be possible and/or wise.

Detect, Diagnose, Decide

The work of managing the value network has always been to detect challenges and opportunities, diagnose the causes, and decide what to do next –

  1. Detect (and/or anticipate) market requirements and the challenges in meeting them
  2. Diagnose the causes of the challenges, both incidental and systemic
  3. Decide the next best action within the constraints of time and capital in relevant time

The Detect, Diagnose, Decide cycle used to take a month.  Computing power, better software, and availability of data shortened it to a week.  Routine, narrowly defined, short-term changes are now addressed even more quickly under a steady state – and a lot of controlled automation is not only possible in this case, but obligatory.  However, no business remains in a steady state, and changes from that state require critical decisions which add or destroy significant value.

Data Is the Double-edged Sword

[Figure 1: digital-value-network-matrix]

The universe of data is exploding exponentially from networks of organizations, people and things.  Yet many companies are choking on their own ERP data as they struggle to make decisions on incomplete, incorrect and disparate data.  So, while the need for the Detect, Diagnose, Decide cycle to keep pace grows ever more imperative, some organizations struggle to do anything but watch.  The winners will be those who can capitalize on the opportunities the data explosion affords by making better decisions through advanced analytics (see Figure 1).  The time required just to collect, clean, and synchronize data for analysis remains the fundamental barrier to better detection, diagnosis and decisions in the value network.

A consolidated data store that can connect to source systems, and on which data can be programmatically “wrangled” into a supra data set, would be helpful in the extreme.  While this may seem an almost insurmountable challenge, the capability exists today.  For example, the Opalytics Cloud Platform enables you to use Python to automatically validate, reconcile and synchronize data from various sources, forming the foundation of a better Detect, Diagnose, Decide cycle.
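Opalytics’ own APIs are beyond the scope of this post, but a generic pandas sketch of the kind of validation and reconciliation I mean might look like this (the source extracts and tolerance are hypothetical):

```python
import pandas as pd

# Hypothetical extracts from two source systems (e.g. ERP and WMS).
erp = pd.DataFrame({"sku": ["A", "B", "C"], "on_hand": [120, 40, 0]})
wms = pd.DataFrame({"sku": ["A", "B", "D"], "on_hand": [118, 40, 15]})

# Validate: flag SKUs present in one system but not the other.
merged = erp.merge(wms, on="sku", how="outer", suffixes=("_erp", "_wms"),
                   indicator=True)
orphans = merged[merged["_merge"] != "both"]

# Reconcile: flag quantity mismatches beyond a tolerance.
both = merged[merged["_merge"] == "both"]
mismatch = both[(both["on_hand_erp"] - both["on_hand_wms"]).abs() > 1]

print("Unmatched SKUs:\n", orphans[["sku", "_merge"]])
print("Quantity mismatches:\n", mismatch[["sku", "on_hand_erp", "on_hand_wms"]])
```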

Thanks for taking a moment to stop by.  As we enter this weekend, remember that life is short, so we should live it well.

I’ll be back next week with Part 2.

Do You Need a Network Design CoE?


Whether you formally create a center of excellence or not, an internal competence in value network strategy is essential.  Let’s look at a few of the reasons why.

Weak Network Design Limits Business Success

From an operational perspective, the greatest leverage for revenue, margin, and working capital lies in the structure of the supply chain or value network.*

It’s likely that more than half of the cost and capabilities of your value network remain cemented in its structure, limiting what you can achieve through process improvements or even world-class operating practices.

You can improve the performance of existing value networks through an analysis of their structural costs, constraints, and opportunities to address common maladies like these:

  • Overemphasis on a single factor.  For example, many companies have minimized manufacturing costs by moving production to China, only to find that the “hidden” cost associated with long lead times has hurt their overall business performance.
  • Incidental Growth.  Many value networks have never been “designed” in the first place.  Instead, their current configuration has resulted from neglect and from the impact of mergers and acquisitions.
  • One size fits all.  If a value network was not explicitly designed to support the business strategy, then it probably doesn’t.  For example, stable products may need to flow through a low-cost supply chain while seasonal and more volatile products, or higher value customers, require a more responsive path.

It’s Never One and Done

At the speed of business today, you must not only choose the structure of your value network and the flow of product through that network, you must continuously evaluate and evolve both.  

Your consideration of the following factors and their interaction should be ongoing:

  1. Number, location and size of factories and distribution centers
  2. Qualifications, number and locations of suppliers
  3. Location and size of inventory buffers
  4. The push/pull boundary
  5. Fulfillment paths for different types of orders, customers and channels
  6. Range of potential demand scenarios
  7. Primary and alternate modes of transportation
  8. Risk assessment and resiliency planning

The best path through your value network structure for each product, channel and/or customer segment combination can be different.  It can also change over the course of the product life-cycle.

In fact, the best value network structure for an individual product may itself be a portfolio of multiple supply chains.  For example, manufacturers sometimes combine a low-cost, long lead-time source in Asia with a higher cost, but more responsive, domestic source.
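A minimal sketch of such a portfolio, with purely illustrative costs and demand scenarios, might split volume between the two sources like this:

```python
# Dual-sourcing sketch: commit predictable base demand to the low-cost,
# long lead-time source; cover surges from the responsive domestic source.
# All figures are illustrative.
offshore_cost, domestic_cost = 8.0, 12.0   # unit cost by source
base_demand = 800                          # committed offshore months ahead
demand_scenarios = [700, 900, 1200]

for demand in demand_scenarios:
    domestic_units = max(0, demand - base_demand)  # quick-response top-up
    leftover = max(0, base_demand - demand)        # excess if demand tanks
    cost = base_demand * offshore_cost + domestic_units * domestic_cost
    print(f"demand {demand}: domestic top-up {domestic_units}, "
          f"leftover {leftover}, total cost {cost:,.0f}")
```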

Focus on the Most Crucial Question – “Why?”

The dynamics of the marketplace mandate that your value network cannot be static, and the insights into why a certain network is best will enable you to monitor the business environment and adjust accordingly.

Strategic value network analysis must yield insight on why the proposed solution is optimal.  This will always be more important than the “optimal” recommendation.

In other words, the context is more important than the answer.

The Time Is Always Now

For all of these reasons, value network design is more than an ad hoc, one-time, or even periodic project.  At today’s speed of competitive global business, you must embrace value network design as an essential competency applied to a continuous process.

You may still want to engage experienced and talented consultants to assist you in this process from time to time, but the need for continuous evaluation and evolution of your value network means that delegating the process entirely to other parties will definitely cost you money and market share.  

Competence Requires Capability

Developing your own competence in network design will require that you have access to enabling software.  The best solution will be a platform that facilitates flexible modeling with powerful optimization, easy scenario analysis, intuitive visualization, and collaboration.  

The right solution will also connect to multiple source systems, while helping you cleanse and prepare data. 

Through your analysis, you may find that you need additional “apps” to optimize particular aspects of your value network such as multi-stage inventories, transportation routing, and supply risk.  So, apps like these should be available to you on the software platform to use or tailor as required.  

The best platform will also accelerate the development of your own additional proprietary apps (with or without help), giving you maximum competitive advantage.  

You need all of this in a ubiquitous, scalable and secure environment.  That’s why cloud computing has become such a valuable innovation.  

If you found some of these thoughts helpful, and you are looking for value network capability to support your internal competence, you may want to have a look at the Opalytics Cloud Platform.  Yes, I work for Opalytics, but the Opalytics Cloud Platform has been built from the ground up to deliver all of this.  

A Final Thought

I leave you with this final thought from Socrates:  “The shortest and surest way to live with honor in the world is to be in reality what we appear to be.”

 

*I prefer the term “value network” to “supply chain” because it more accurately describes the dynamic collection of suppliers, plants, outside processors, fulfillment centers, and so on, through which goods, currency and data flow along the path of least resistance (seeking the lowest price, shortest time, etc.) as value is exchanged and added to the product en route to the final customer.

A Demand Plan Sanity Check: Five Best Practices

There is a process that is fast becoming a necessary and key component of both demand planning and sales and operations planning.  I have heard it described as “forecastability” and “demand curve analysis”, among other terms, but here I will call it a “Demand Plan Sanity Check”, or DPSC for short.  I am seeing this across industries, but particularly in consumer products.  The concept is simple: how does one identify the critical few forecasts that require the skill and experience of demand planners, so that planner brainpower is spent making a difference and not hunting for a place to make a difference?

At a minimum, a DPSC must consider the following components:

  1. Every level and combination of the product and geographical hierarchies
  2. A very high-quality quantitative forecast
  3. A statistically developed range of “sanity” out through time
  4. Metrics for measuring “sanity”
  5. Tabular and graphical displays that are interactive, intuitive, always available, and current

If you are going to attempt to establish a DPSC, then you need to incorporate the following five best practices:

1.  Eliminate duplication.  When designing a DPSC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a statistical “sanity check”, so that planners can focus their energy on asking critical questions only about those cases.

2. Minimize human time and effort by maximizing the power of cloud computing.  Leverage the fast, ubiquitous computing power of the cloud to deliver results that are self-explanatory and always available everywhere.  An immediately understood context identifies invalid forecasts and minimizes the need for planners to sort through and compare massive amounts of data manually and individually.

3. Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.  A DPSC tool should present the minimum essential data to flag forecasts of questionable validity, so that planners can apply their skill, experience and intelligence to these exceptions rather than sifting through many different sets of data to identify the exceptions themselves.

4. Reflect statistical realities.  Any calculation of the upper and lower bounds of “sanity” should reflect the fact that uncertainty grows with each extension of the forecast into a future time period.  For example, the “sanity” limits one period into the future should usually be narrower than the limits two or three periods out, which in turn should be narrower than those for more distant periods.  Respecting statistical realities also means reflecting seasonality and cyclical demand in addition to month-to-month variation, and capturing the actual variability in demand and forecast error so that you do not force assumptions of normality onto the sanity range(s).  Among other things, this lets you predict the likelihood of over- and under-shipment.  (A minimal sketch of such horizon-dependent, empirical ranges follows this list.)

5. Illustrate business performance, not just forecasting performance, with “sanity” ranges.  The upper and lower “sanity” intervals should be applied not only from time period to time period, but also cumulatively across periods, such as the months of the fiscal year.
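Following on best practice #4 above, here is a minimal sketch of horizon-dependent sanity ranges built from empirical quantiles of backtest errors.  The errors below are synthetic stand-ins; in practice you would use your actual lag-by-lag forecast error history:

```python
import numpy as np

# Synthetic backtest errors by forecast lag: errors_by_lag[k] holds
# (actual - forecast) for forecasts made k periods in advance.
rng = np.random.default_rng(0)
errors_by_lag = {k: rng.normal(0, 10 * np.sqrt(k), 500) for k in (1, 2, 3, 6)}

point_forecast = {1: 100, 2: 102, 3: 105, 6: 110}  # illustrative forecasts

for lag, errors in errors_by_lag.items():
    lo, hi = np.quantile(errors, [0.05, 0.95])  # empirical, no normality forced
    print(f"lag {lag}: sanity range "
          f"[{point_forecast[lag] + lo:6.1f}, {point_forecast[lag] + hi:6.1f}]")
```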

If you are engaged in demand planning or sales and operations planning, I’d like to know your thoughts on performing a Demand Plan Sanity Check.

Thanks again for stopping by Supply Chain Action.  As we leave the work week and recharge for the next, I leave you with the words of John Ruskin, “When skill and love work together, expect a masterpiece.”

Have a wonderful weekend!

The Time-to-Action Dilemma



If you can’t answer these three questions in less than 10 minutes (and I suspect that you can’t), then your supply chain is not the lever it could be to drive more revenue with better margin and less working capital:

1) What are inventory turns by product category (e.g. finished goods, WIP, raw materials, ABC category, etc.)?  How are they trending?  Why?

2) What is the inventory coverage?  What will projected inventory be at the start of a promotion or season?  Within sourcing, manufacturing or distribution constraints, what options do I have if my demand spikes or tanks?

3) What proportion (and how many) of your customer orders (or margin or revenue) shipped 99% on-time and in-full?  How many at 98%?  And so on.  Do you understand the drivers?
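For question 1, the arithmetic is simple once the data is consolidated; the hard part is the consolidation.  A minimal sketch with hypothetical monthly figures for one category:

```python
import pandas as pd

# Hypothetical monthly figures for one product category.
df = pd.DataFrame({
    "month": pd.period_range("2024-01", periods=4, freq="M"),
    "cogs": [900_000, 950_000, 980_000, 1_020_000],
    "avg_inventory": [450_000, 460_000, 455_000, 440_000],
})

# Annualized inventory turns and the month-over-month trend.
df["turns"] = 12 * df["cogs"] / df["avg_inventory"]
df["trend"] = df["turns"].diff()
print(df)
```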

The slack time that global competition allows between planning and execution is collapsing at an accelerating rate.

You need to know the “What?” and the “Why?” so you can determine what to do before it’s too late.  

You need to answer the questions that your ERP and APS can’t so your supply chain makes your business more valuable.

Since supply chain decisions are all about managing interrelated goals and trade-offs, the data may need to come from various ERP systems, OMS, APS, WMS, MES, and more.  Unless you have a platform that consolidates and blends data end-to-end, at every level of granularity and along all dimensions, you will always be reinventing the wheel when it comes to finding and collecting the data for decision support.  It will always take too long.  It will always be too late.

You need the kind of platform that will deliver diagnostic insights so that you can know not just what, but why.  And, once you know what is happening and why, you need to know what to do — your next best action, or at least viable options and their risks . . . and you need that information in context and “in the moment”.

In short, you need to detect opportunities and challenges in your execution and decision-making, diagnose the causes, and direct the next best action in a way that brings execution and decision-making together.

If you don’t have all three now – Detect, Diagnose and Direct – in a way that covers your end-to-end value network, you need to explore how you can get there.

As we approach the weekend, I’ll leave you with this thought to ponder:  “Leadership comes from a commitment to something greater than yourself that compels maximum contribution, whether that is leading, following, or just getting out of the way.”