Manufacturing Breakthrough Blog
Friday June 3, 2016
In my last post we completed our discussion of paths of variation by demonstrating how much process variation can be reduced by cutting the total number of potential paths. As a refresher, in our case study on paths of variation, reducing the total paths from thirty-two down to two cut the standard deviation (i.e. σ) of the pinion diameters by nearly fifty percent, with the welcome side effect that scrap was reduced by forty percent!
In today’s post we will begin a discussion on performance metrics and why it is so important to select the correct ones.
One of the major differences between Theory of Constraints (TOC) thinking and traditional approaches to manufacturing (or any industry for that matter) is the use of performance metrics. Many companies still hold onto metrics like efficiency and utilization in non-constraints and probably will continue to do so. Deborah Smith and Jeff Herman have written an excellent chapter in the TOC Handbook (i.e. Chapter 14, entitled Resolving Measurement/Performance Dilemmas), and I encourage everyone to read it.
Smith and Herman tell us that metrics need to encourage the right behavior, but when you’re dealing with organizations of significant size and complexity, it’s always a challenge to construct a system of local metrics that:
- Encourages the local parts to do what is in the interest of the global objective.
- Provides relatively clear conflict resolution between and within the local parts.
- Provides clear and visible signals to management about local progress and status relative to the organizational objectives.
Smith and Herman present a “simple set of six general measurements” that all assume that a valid TOC model has been implemented. These six measurements are:
- Strategic Contribution
- Reliability
- Stability
- Speed/Velocity
- Local OE (i.e. Operating Expense)
- Local Improvements/Waste
In this post I want to focus on the Stability metric. Not that the others aren’t important, but to me, getting control of stability presents a huge opportunity for improvement. I may touch on a couple of the other measurements to help make a point, but the focus will be squarely on the stability measurement.
The objective of the stability metric is to measure, or at least get an idea of, the amount of variation that is being passed throughout the system in question. As I’ve touched on in some of my recent posts, we should all agree that variation and volatility in the system are not conducive to stability. This is especially true when we’re talking about the system constraint, otherwise known as the drum (i.e. in Drum-Buffer-Rope scheduling), simply because the drum is, or at least should be, the anchor point of our scheduling system. Any disruption to the drum schedule creates a lack of synchronization in the rest of the system and reduces both the capacity of the constraint and the revenue stream.
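The post doesn’t prescribe a formula for stability, but one simple way to put a number on the variation passing through the drum is the coefficient of variation of its daily output. The function and the daily counts below are my own illustrative assumptions, not something from Smith and Herman’s chapter:

```python
from statistics import mean, stdev

def coefficient_of_variation(daily_output):
    """Return stdev/mean of daily drum output (lower = more stable)."""
    return stdev(daily_output) / mean(daily_output)

# Illustrative daily pinion counts from the drum over two weeks
daily_output = [118, 124, 95, 130, 88, 121, 119, 102, 127, 111]
cv = coefficient_of_variation(daily_output)
print(f"Drum output CV: {cv:.1%}")
```

Tracking this ratio week over week gives a quick, unit-free signal of whether the drum is becoming more or less stable.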
One important measure is drum utilization, which is simply a measure of how well the constraint is being used to produce throughput compared to how well it could be used. Utilization, usually expressed as a percentage, compares the actual time the constraint spends producing throughput to the total time available. In other words, utilization is 100% minus the time lost to things like starvation, blockage and downtime due to breakdowns. Keep in mind that every time the utilization of the constraint falls below 100%, we are losing potential revenue, so it’s very important to track this metric and to record the causes of the reduction. Let’s look at some of the causes we might experience.
- Starvation of the constraint occurs when it runs out of material being fed to it by an upstream process. The cause and the length of time the starvation lasted are very important, so record them.
- Unnecessary/Over-Production is simply a waste of the constraint’s capacity on things that aren’t required.
- Unplanned and Planned Downtime in the constraint takes away the opportunity to produce throughput. The cause and length of the downtime should be recorded.
- Blockages of the constraint occur when the constraint is prevented from running because the operation feeding it experiences downtime. This differs somewhat from starvation, in that any upstream location could be the cause of starvation. Once again, record the reason and the length of time the constraint was blocked.
There are other factors that affect the stability of the constraint, such as late releases, absenteeism, etc., but the four I listed above are the most important. Now that you’ve collected the causes and times associated with this stability metric, it should be easy for you to develop an action plan to improve the stability of the constraint. Simply create a Pareto chart of the causes and times and attack the top 20% of causes that account for 80% of the stability problem. Pretty simple, as long as you put the tracking mechanism in place.
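The Pareto step above can be sketched in a few lines: rank the causes by lost time and keep adding them until they cover the chosen share of the total. The cause data and the 80% cutoff below are illustrative assumptions:

```python
def pareto_top_causes(cause_minutes, cutoff=0.8):
    """Return causes, largest first, until they cover `cutoff` of total lost time."""
    total = sum(cause_minutes.values())
    ranked = sorted(cause_minutes.items(), key=lambda kv: kv[1], reverse=True)
    top, running = [], 0
    for cause, minutes in ranked:
        top.append(cause)
        running += minutes
        if running / total >= cutoff:
            break
    return top

# Illustrative weekly lost-time log (minutes) by cause
lost_time = {"starvation": 220, "downtime": 180, "blockage": 60, "over_production": 40}
print(pareto_top_causes(lost_time))
```

In this invented example, starvation and downtime together account for 80% of the lost time, so they would be the first targets of the action plan.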
In my next post, we’ll complete our series on performance metrics by discussing why selecting the right performance metrics is necessary for your company’s long term survival. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time.
Cox, James F. and John G. Schleier (eds.), Theory of Constraints Handbook, Chapter 14: “Resolving Measurement/Performance Dilemmas” by Deborah Smith and Jeff Herman, The McGraw-Hill Companies, 2010.