Manufacturing Breakthrough Blog
Friday, April 15, 2016
In my last post, I provided a definition of variability and a way to calculate and quantify it. We learned that sigma (σ) is a measure of absolute variability, but also that relative variability, the coefficient of variation (CV), is often the better measure when comparing variation across processes. In today’s post, I will present five of the most prevalent sources of variation.
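To make the distinction concrete, here is a small sketch (the process times below are invented purely for illustration) showing two tasks with identical sigma but very different CVs:

```python
import statistics

# Two hypothetical process-time samples (hours) -- illustrative numbers only.
drill_times = [9.8, 10.1, 10.3, 9.9, 10.4]   # long task, mean near 10 hours
trim_times  = [0.8, 1.1, 1.3, 0.9, 1.4]      # short task, mean near 1 hour

sigma_drill = statistics.stdev(drill_times)  # absolute variability
sigma_trim  = statistics.stdev(trim_times)

cv_drill = sigma_drill / statistics.mean(drill_times)  # relative variability
cv_trim  = sigma_trim / statistics.mean(trim_times)

# Both samples have the same sigma, but the short task is roughly ten times
# more variable *relative* to its mean -- which is what the CV captures.
print(f"sigma: drill={sigma_drill:.3f}, trim={sigma_trim:.3f}")
print(f"CV:    drill={cv_drill:.3f}, trim={cv_trim:.3f}")
```

The two samples were chosen so that sigma alone cannot tell them apart; only the CV reveals that the short task is far noisier in relative terms.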
Sources of Variation
Hopp and Spearman write about five different sources of variation found in manufacturing environments. It is my belief that they apply to virtually any environment where a process produces either products or services. These five sources are as follows:
- Natural variability: Includes minor fluctuations in process time due to differences in operators, machines, and material. In a sense it is a catch-all category, since it accounts for variability from sources that have not been explicitly called out (e.g., a piece of dust in the operator’s eye). Because many of these unidentified sources of variability are operator-related, there is typically more natural variability in a manual process than in an automated one. Even in a fully automated machining operation, the composition of the material might differ, causing processing speed to vary slightly. In most systems, natural process times are low variability, meaning the CV is less than 0.75.
- Random outages: Unscheduled downtime can greatly inflate both the mean and the coefficient of variation of process times. In fact, in many systems, this represents the single largest cause of variability. Hopp and Spearman refer to breakdowns as preemptive outages because they occur whether we want them to or not (e.g. they can occur right in the middle of a job). Power outages, operators being called away on emergencies, and running out of consumables are other possible sources of preemptive outages.
Hopp and Spearman define non-preemptive outages as stoppages that occur between jobs rather than during them: downtime that must occur, but over whose timing we have some control. For example, when a tool begins to wear and needs to be replaced, we can wait until the current job is finished before we stop production. Other common examples of non-preemptive outages include changeovers, preventive maintenance, breaks, meetings, and shift changes. So how can we use these ideas?
Suppose we are deciding whether to replace a relatively fast machine requiring periodic setups with a slower, flexible machine that requires none. The fast machine can produce an average of one part per hour, but requires a two-hour setup after every four parts on average. The more flexible machine takes 1.5 hours to produce a part, but requires no setup. The effective capacity (EC) of the fast machine is:
EC = 4 parts/6 hours = 2/3 parts/hour
The effective process time is simply the reciprocal of the effective capacity, or 1.5 hours. Thus, both machines have an effective process time of 1.5 hours, or equivalently an effective capacity of 2/3 parts per hour. Traditional capacity analysis considers only the mean and would conclude that the machines are equivalent, recommending neither over the other. But if we consider the impact on variability, then the flexible machine, requiring no setup, would be my choice (and that of Hopp and Spearman). Replacing the faster machine with the more flexible one reduces the process time CV and therefore makes the line more efficient and effective. This, of course, assumes that both machines have equivalent natural variability.
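The comparison can be checked with a short script. As a simplifying assumption for illustration, I treat the setup as occurring independently with probability 1/4 after any given part (one setup per four parts on average) and ignore natural variability on both machines:

```python
import math

# Fast machine: 1 hour per part, plus a 2-hour setup after 1 in 4 parts on
# average (modeled here as probability 1/4 per part -- an assumption).
p_setup = 1 / 4
t_run, t_setup = 1.0, 2.0

# Per-part effective time X: 1 hour with prob. 3/4, 3 hours with prob. 1/4.
mean_fast = t_run + p_setup * t_setup                       # 1.5 hours
ex2 = (1 - p_setup) * t_run**2 + p_setup * (t_run + t_setup)**2
sigma_fast = math.sqrt(ex2 - mean_fast**2)
cv_fast = sigma_fast / mean_fast

# Flexible machine: a constant 1.5 hours per part, no setups.
mean_flex, cv_flex = 1.5, 0.0

print(f"effective capacity (both): {1 / mean_fast:.3f} parts/hour")
print(f"CV fast machine:     {cv_fast:.3f}")   # the setup inflates variability
print(f"CV flexible machine: {cv_flex:.3f}")
```

Under these assumptions the fast machine’s per-part CV comes out near 0.58 while the flexible machine’s is zero: same effective capacity, very different variability.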
- Setups: The amount of time a job spends waiting for the station to be set up for production. Setups, like changeovers, contain internal and external activities. Internal activities must be done while the equipment is shut down, while external activities can be completed while the equipment is still running. The key to reducing setup time is to convert as many internal activities as possible into external activities, thus reducing waiting time.
- Operator availability: The amount of time a job spends waiting for an operator to be available to occupy the work station and begin to produce product. The best way to reduce this type of time delay is to create a flexible work force. Having to wait for a specialist operator is no longer acceptable. Companies today must cross-train operators so that if one is called away or is absent, another can step in and perform his or her tasks. This is especially critical in the constraint operation.
- Recycle: Just like breakdowns and setups, rework is a major source of variability in manufacturing processes. If we think of the additional processing time spent “getting the job right” as an outage, it’s easy to see that rework is completely analogous to setups because both rob the process of capacity and contribute greatly to the variability associated with processing times. Rework implies variability which in turn causes more congestion, WIP and cycle time.
One of the keys to understanding the impact of variability is that variability at one station can affect the behavior of other stations in the process through another type of variability, referred to as flow variability. Hopp and Spearman explain that flow refers to the transfer of jobs or parts from one station to another: if an upstream workstation has highly variable process times, the flow it feeds to downstream workstations will also be highly variable. In other words, variability propagates!
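Hopp and Spearman quantify this propagation with a linking equation relating a station’s departure variability to its arrival and process variability. A sketch, with invented utilizations and squared CVs (SCVs) for a three-station line:

```python
# Linking equation from Factory Physics: for a single-machine station, the
# squared CV of departures is c_d^2 = u^2 * c_e^2 + (1 - u^2) * c_a^2,
# where u is utilization, c_e^2 the process-time SCV, c_a^2 the arrival SCV.
def departure_scv(u, ce2, ca2):
    return u**2 * ce2 + (1 - u**2) * ca2

# Illustrative line: a noisy first station feeding two smooth ones.
ca2 = 0.25  # fairly smooth arrivals into station 1 (assumed)
for station, (u, ce2) in enumerate([(0.9, 2.0), (0.9, 0.25), (0.9, 0.25)], 1):
    ca2 = departure_scv(u, ce2, ca2)  # departures become next station's arrivals
    print(f"station {station}: departure SCV = {ca2:.3f}")
```

Even though stations 2 and 3 have smooth process times (SCV of 0.25), the noisy first station pushes their arrival variability well above that level, and the disturbance damps out only gradually down the line.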
The concepts of process time variability and flow variability are important considerations as we attempt to characterize the effects of variability in production lines, but it’s important to understand that actual processing time (including setups, downtime, etc.) typically accounts for only about 5 to 10 percent of total cycle time in a manufacturing plant (Hopp and Spearman). The vast majority of the remaining time is spent waiting for various resources (e.g., workstations, transport, storage, operators, incoming parts, materials, and supplies). Hopp and Spearman refer to the science of waiting as queuing theory: the theory of waiting in lines. Since jobs effectively “stand in line” waiting to be processed, moved, and so on, it is important to understand and analyze why queuing exists in your process. Doesn’t it make sense that if waiting accounts for the vast majority of the time a product spends in the system, then one of the keys to throughput improvement is to identify and understand why that waiting exists?
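As a small preview of that discussion, a standard queuing approximation (the VUT equation that Hopp and Spearman develop) shows why waiting comes to dominate: queue time scales with a variability term times a utilization term that explodes as utilization nears 100 percent. The numbers below are illustrative only:

```python
# VUT approximation for queue time at a single station:
#   CT_q ≈ V * U * T = ((c_a^2 + c_e^2) / 2) * (u / (1 - u)) * t_e
# where c_a^2, c_e^2 are arrival and process SCVs, u is utilization,
# and t_e is the effective process time.
def queue_time(ca2, ce2, u, te):
    return ((ca2 + ce2) / 2) * (u / (1 - u)) * te

te = 1.0  # effective process time of 1 hour (assumed)
for u in (0.5, 0.8, 0.9, 0.95):
    # With moderate variability (SCVs of 1), waiting dwarfs the 1 h of work.
    print(f"u={u:.2f}: wait ≈ {queue_time(1.0, 1.0, u, te):.1f} hours")
```

At 95 percent utilization the approximation predicts roughly 19 hours of waiting for each hour of actual processing, which is exactly the kind of ratio the 5-to-10-percent figure above describes.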
In my next post, we’ll discuss the concept of queuing and some basic laws of variability. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond.
Until next time.
Wallace J. Hopp and Mark L. Spearman, Factory Physics: Foundations of Manufacturing Management, 2nd ed., Irwin/McGraw-Hill, 2001.