How Do You Get the Most Out of Any Process?

Now we come to the sixth way to use a process behavior chart. Here we are going to look at how one group of workers used their average and range chart to improve their process. Their part had only one critical dimension, and this dimension had a standard deviation of only 15 microns. What kind of high-tech product might this be? Read on.

A group of executives from the Body and Assembly Division of Ford Motor Co. were visiting the Tokai Rika plant in Japan when they noticed eight production workers gathered around a process behavior chart "engaged in active discussion." To the group from Ford it seemed that there must be a problem, which they expected to be an internal production problem, or an assembly plant problem, or a problem of too many rejects. The group's host inquired about the discussion, and they learned it was simply a routine review of a process that for the past five months had operated predictably, on target, and well within specifications. To substantiate this, the Tokai Rika personnel translated the chart, photocopied it, and presented it to the group before they left the plant. Thus we have a process behavior chart that provides a 20-month window on this production process and reveals how the Tokai Rika personnel used this chart as the locomotive for continual improvement.

The part being produced at the Tokai Rika plant was the outer shell of a cigar lighter. Fabrication involves a die stamping operation to form a cylinder with a flange on one end followed by a rolling operation that forms a groove in the cylinder known as the detent. A schematic of the rolling operation is shown on the left side of figure 1. The dimension tracked is that of the distance from the flange to the detent as shown on the right side of figure 1.


Figure 1: Tokai Rika's cigar lighter detent dimension and the rolling operation

They collected four pieces each day (at 10 a.m., 11 a.m., 2 p.m., and 4 p.m.) and measured the detent dimension for each piece. The data from the first 24 days of production are given in figure 2. These data were used to create the average and range chart in figure 3. (The values shown in figure 2 are the detent dimension expressed as the number of hundredths of a millimeter in excess of 15.00 mm. Thus, the target value of 15.90 mm for the dimension shown in figure 1 corresponds to a recorded value of 90. By writing only the two decimal places of each measurement, the job of data entry is simplified.)
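
As a minimal illustration of this coding convention, here is a short Python sketch. The helper names are mine, not anything used at Tokai Rika.

```python
# A sketch of the recording convention described above: the recorded value
# is the number of hundredths of a millimeter in excess of 15.00 mm.

def to_recorded(measurement_mm: float) -> int:
    """Code a detent dimension in mm as hundredths of a mm above 15.00 mm."""
    return round((measurement_mm - 15.00) * 100)

def to_mm(recorded: int) -> float:
    """Decode a recorded value back into millimeters."""
    return 15.00 + recorded / 100

print(to_recorded(15.90))    # 90  -- the target value of 15.90 mm
print(to_mm(90))             # 15.9
print(to_mm(86), to_mm(94))  # 15.86 15.94
```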

Figure 2: Tokai Rika's cigar lighter detent dimension data for days 1 to 24


Figure 3: Tokai Rika's cigar lighter detent dimension data in an average and range chart for days 1 to 24

At Tokai Rika they computed limits using the first 24 subgroups. For these 24 subgroups the grand average is 89.80 and the average range is 3.04. These values result in the limits shown in figure 3. Interpreting the average and range chart for this baseline period, we would conclude that this process shows a reasonable degree of predictability. In addition, with specifications of 80 to 100, this process has a capability ratio of 2.25 and a centered capability ratio of 2.21.
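
For readers who want to follow the arithmetic, here is a minimal Python sketch of these computations. It uses the standard chart constants for subgroups of size four (A2 = 0.729, D4 = 2.282, d2 = 2.059); the function names are mine, not part of the original charts.

```python
# Sketch of the average-and-range chart limits and the capability ratios,
# using the standard constants for subgroups of size n = 4.

A2, D4, D2 = 0.729, 2.282, 2.059

def xbar_r_limits(grand_average, average_range):
    """Three-sigma limits for the average chart and the range chart."""
    return {
        "average chart": (grand_average - A2 * average_range,
                          grand_average + A2 * average_range),
        "range chart upper": D4 * average_range,  # lower range limit is 0 when n = 4
    }

def capability(grand_average, average_range, lsl, usl):
    """Capability ratio Cp and centered capability ratio Cpk."""
    sigma_x = average_range / D2
    cp = (usl - lsl) / (6 * sigma_x)
    cpk = min(usl - grand_average, grand_average - lsl) / (3 * sigma_x)
    return cp, cpk

print(xbar_r_limits(89.80, 3.04))        # averages: ~87.58 to 92.02, ranges: up to ~6.94
print(capability(89.80, 3.04, 80, 100))  # Cp ~2.26, Cpk ~2.21
```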

The homogeneity of these data means that the parts we have measured may be taken as representative of the parts we have not measured, and we may reasonably and realistically characterize the whole process stream using these 96 data. We do this by computing limits for individual values. Dividing the average range of 3.04 by the bias correction factor of 2.059 gives a Sigma(X) value of 1.48 hundredths of a millimeter, and thus we find:

Natural Process Limits = 89.80 ± 3(1.48) = 89.80 ± 4.4 = 85.4 to 94.2

With a predictable process such as this, we would expect to find virtually all of the individual values within these natural process limits. Since the values are recorded as integers, this means values in the interval of 86 to 94 inclusive. While we do not have access to the parts not measured during the baseline period, we can compare these limits against the observed values obtained from this process after day 24. We do this with the histogram in figure 4, which shows the 244 data from the 61-day period of day 25 to day 85.
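
A brief sketch of the natural-process-limit calculation, with the same constants as above:

```python
# Sketch of the natural process limits for individual values, using the
# bias correction factor d2 = 2.059 for subgroups of size four.

grand_average, average_range = 89.80, 3.04

sigma_x = average_range / 2.059          # ~1.48 hundredths of a millimeter
npl = (grand_average - 3 * sigma_x,
       grand_average + 3 * sigma_x)

print(tuple(round(x, 2) for x in npl))   # (85.37, 94.23)
# Since the recorded values are integers, virtually all individual values
# should fall between 86 and 94 inclusive.
```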


Figure 4: Histogram of 244 detent dimension values from days 25 to 85

As expected, we find 243 out of 244 values in the interval of 86 to 94. It is important to note that the limits shown were not obtained from the data shown. We have extrapolated from the product produced in the baseline period (days 1 to 24) to characterize the product produced and measured after that baseline period (days 25 to 85). Moreover, if we can successfully extrapolate to characterize the product not yet produced, we can also extrapolate to characterize the product not measured as well.

Thus, with a production rate of 17,000 parts per day, we can say that on days 1 to 85 Tokai Rika produced 1.44 million parts. Virtually all of these parts fell between 15.86 mm and 15.94 mm. The whole histogram in figure 4 is only one-tenth of a millimeter wide. That's four thousandths of an inch, which is the thickness of a typical piece of paper!

How can we justify using 96 data to characterize 1.44 million parts? The only justification for such an extrapolation is the predictability displayed by the underlying process in figure 3. When a process has been operated predictably in the past, it is reasonable to assume that, barring any deliberate changes, it will continue to be operated predictably in the foreseeable future. In figure 4 we see that this extrapolation makes sense. On the other hand, when a process has been operated unpredictably in the past, it is unreasonable to assume that it will spontaneously do better in the future.

Past behavior is always the best predictor of future behavior. Thus, the characterization of a process as either predictable or unpredictable is a fundamental dichotomy for data analysis. However, even the best process will still be subject to the effects of entropy. While we may expect our process to continue as before, we need to be aware of changes that may occur without warning. As this example continues we will see how the operators at Tokai Rika used the average and range chart not only to track this process, but also to improve it.

But first we need to address two questions that commonly occur at this point: "Are four pieces out of 17,000 enough?" and "Is one subgroup per day adequate?" Tokai Rika let the process behavior chart answer both of these questions. As we will see, four pieces per day is sufficient to detect changes that occur in this process. As long as you are detecting changes in the process when they occur, you have enough data.

Moreover, since changes in this process tend to develop slowly over a period of several days, one subgroup per day is adequate. You only need enough subgroups to detect changes in your process. So the key question regarding subgroup frequency is: "How quickly can your process change?"

The limits from figure 3 were extended forward and used to track and evaluate the production process. As may be seen in figure 5, they found evidence of a process change on days 35 and 36.


Figure 5: Tokai Rika's cigar lighter detent dimension data for days 1 to 60

They knew they had a problem with days 35 and 36. Looking back to the last time the process crossed the central line, Tokai Rika's production workers decided that this problem could have begun as early as day 29. Upon investigation, they found that the positioning collar had worn down and needed to be replaced (figure 1 shows that a worn collar would probably cause the measurements to increase). Recognizing this as a problem of tool wear, they did two things: They ordered a new positioning collar, and they turned the old collar over to get back on target while waiting for the new collar to arrive. This is indicative of a desire to operate right at the target value whenever possible. The new collar was installed on day 39, and they wrote Intervention Report No. 1 detailing what was found and what was done.

Following this intervention they decided to compute new limits for the process. They ran without limits for days 39 to 49 and used this period as their new baseline. With a grand average of 90.18 and an average range of 0.91, the new limits were considerably tighter than the previous limits. As they used these limits to track the process, they soon found evidence of another process change.
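
A quick sketch of those new limits, computed the same way as before (my arithmetic, using the values quoted above):

```python
# Sketch of the new limits from the day 39-49 baseline, using the same
# constants as before (A2 = 0.729, D4 = 2.282 for subgroups of size four).

grand_average, average_range = 90.18, 0.91

average_chart_limits = (grand_average - 0.729 * average_range,
                        grand_average + 0.729 * average_range)
range_chart_upper = 2.282 * average_range

print(average_chart_limits)  # ~(89.52, 90.84) -- much tighter than 87.58 to 92.02
print(range_chart_upper)     # ~2.08
```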

The averages for days 57, 58, and 59 are all below the lower limit, and it's fairly clear there was a shift in the process. (They did not make any notes about the ranges falling above their upper limit on days 53 and 54.) Moreover, unlike the excursion on days 29 to 36, in this case, there is no gradual change leading up to the first point outside the limit. Hence they noted that this was a sudden change and began to look for something broken. They began their investigation with the rolling operation. When no problems were found at rolling, they turned to the blanking operation.
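
The detection rule at work here is simply "a subgroup average outside the three-sigma limits." Here is a minimal sketch of that rule; the daily averages below are illustrative placeholders, not the actual values from the Tokai Rika chart.

```python
# Sketch of the basic detection rule: flag any daily average that falls
# outside the three-sigma limits of the average chart. The averages below
# are hypothetical stand-ins, not the actual Tokai Rika data.

def points_outside(averages, lcl, ucl, first_day):
    """Return (day, average) pairs that fall outside the limits."""
    return [(day, x) for day, x in enumerate(averages, start=first_day)
            if x < lcl or x > ucl]

daily_averages = [90.2, 90.1, 89.9, 89.8, 90.0, 89.7, 89.6, 89.3, 89.2, 89.4]
print(points_outside(daily_averages, lcl=89.52, ucl=90.84, first_day=50))
# -> [(57, 89.3), (58, 89.2), (59, 89.4)]  -- three averages below the lower limit
```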

As shown in figure 6, after the better part of two weeks had passed, they finally discovered what was making the detent dimensions smaller: There was a very small wrinkle on the flange due to a defect in the die.


Figure 6: Tokai Rika's cigar lighter detent dimension data for days 39 to 85

Thus, they scheduled a repair for the die on the weekend between days 70 and 71. At the same time they modified the bolt holding the pressure pad since they had found that it was coming loose. Following these changes they wrote up Intervention Report No. 2 and proceeded to collect data for a new set of limits. The process average went up to 90.88, which is probably due to the fact that this positioning collar already had 32 days of wear prior to this new baseline period.

To understand how quickly and efficiently they were tracking and responding to the upsets that occur in this process, it might be helpful to look back at the histogram in figure 4. That histogram contains the data from days 25 to 85. During this period this process displayed two excursions and was subject to one tool change and one process upgrade and repair. In spite of the fact that the histogram contains the data from these excursions, we find all the values but one within the natural process limits computed from days 1 to 24! The signals of change were clear and unequivocal on the average chart, which allowed those who ran this process to respond in a timely manner. In the end, their interventions were sufficient to keep the product stream essentially homogeneous in spite of the process changes.

Note also that the purpose of the limits is to tell the story of what is happening in the process. It is not a matter of computing a single set of limits and using them forever. It is also not a matter of waiting until we have some minimum amount of data before we compute our limits. The baseline period of days 39 to 49 was essentially chosen based on the fact that day 49 was the last point on that sheet of paper. When a process is operated predictably, any baseline will tell the same story as another baseline. When a process is operated unpredictably, different baselines may tell different stories, but the point will not be one of "How good are the limits?" but rather "What can we do about the assignable causes?"

Figure 7 shows the limits from days 71 to 85 extended forward.

The averages for days 102 to 105 are above the upper limit, and the points for days 95 to 105 are circled as another signal of a process change. This signal was interpreted as tool wear on the positioning collar. The first positioning collar lasted less than 40 days. This collar lasted about 55 days. Based on this experience, and possibly on their experience with previous cigar lighter shells, they decided to upgrade to a tungsten-coated positioning collar.


Figure 7: Tokai Rika's cigar lighter detent dimension data for days 71 to 125

Since a tungsten-coated collar was a special-order part, they had to wait a month to get the replacement collar. Since this was too long a time for the trick of turning the collar over as they did on days 37 and 38, and since the specifications of 80 to 100 provided them with a lot of elbow room, they decided that they could continue to operate with the worn positioning collar until the replacement arrived. While they frequently did whatever was needed to operate this process right at the target value, here they made a conscious and deliberate decision to allow this process to operate off target for a limited period of time. Based on their experience with this process, they were confident that as long as the process average did not go above 95 they would not be in danger of making any nonconforming parts. The highest daily average shown is for day 123, which is 95.00.

Then, on day 126, the tungsten-coated positioning collar was installed and Intervention Report No. 3 was written. Following this they used days 126 to 140 to compute new limits. With a grand average of 90.57 and an average range of 2.13, the process was back on target, and the limits are as shown in figure 8.


Figure 8: Tokai Rika's cigar lighter detent dimension data for days 126 to 185

Between day 1 and day 140 they replaced the positioning collar twice, using an upgraded collar the second time. They repaired the die and also improved the bolt for the pressure pad. While these sound like fairly routine types of maintenance operations, notice that they are improving and upgrading the process as they maintain it. The cumulative effect of these improvements was a reduction in the process variation. The average range of 2.13 in figure 8 corresponds to a Sigma(X) value of 1.03 hundredths of a millimeter. Using the grand average of 90.57, we obtain natural process limits of 87.5 to 93.7, which are 30 percent narrower than those of figure 4. Thus, they have actually reduced the variance of the product stream by 52 percent, from 0.000219 mm² to 0.000106 mm².
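
The arithmetic behind that comparison, as a short sketch using the rounded Sigma(X) values quoted above:

```python
# Sketch of the variance-reduction arithmetic, using the rounded Sigma(X)
# values of 1.48 and 1.03 hundredths of a millimeter quoted above.

sigma_old = 1.48    # days 1-24 baseline
sigma_new = 1.03    # days 126-140 baseline

width_ratio = sigma_new / sigma_old       # ~0.70: new limits ~30 percent narrower
variance_reduction = 1 - (sigma_new / sigma_old) ** 2

variance_old_mm2 = (sigma_old / 100) ** 2  # ~0.000219 mm^2
variance_new_mm2 = (sigma_new / 100) ** 2  # ~0.000106 mm^2

print(round(width_ratio, 2), round(variance_reduction, 2))  # 0.7 0.52
```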

Figure 9 shows the histogram of the data from days 126 to 185 along with the natural process limits based on the limits from days 126 to 140. Since this histogram includes the data from days 172 to 180, where the process suffered another excursion, we do find five values just outside the natural process limits. Nevertheless, the limits in figure 9 are only 70 percent as wide as those in figure 4.


Figure 9: Histogram of 240 detent dimension values from days 126 to 185

Over the course of the next year Tokai Rika maintained this reduced level of variation. Since 17,000 parts per day would meet the combined needs of all automotive manufacturers in Japan at the time of this chart, it would seem that Tokai Rika was the sole supplier for this part.

This process is about as predictable as any process will ever be. Yet in the nine-month period shown, it was subject to four upsets, with excursions of three and four Sigma(X) above and below the target value. These excursions tell the tale. You simply cannot fix a process and then expect it to stay fixed. The process behavior chart called their attention to these unplanned process changes and allowed them to evaluate their deliberate process interventions. They improved this process and then they maintained it so that it would continue to operate up to its full potential.

This can be seen in figure 10, which shows the histogram of the 780 measurements for days 186 to 380. During this 195-day period they produced more than 3.3 million parts and had two more process excursions (one of which was a seven sigma excursion). Only 10 of these 780 values fall outside the natural process limits computed from the baseline of days 126 to 140! Sometime around day 140 the specifications were relaxed to become 75 to 105 as shown in figure 10. As a result of both the process improvements and this relaxation of the specifications, this process, which started off with a capability ratio of 2.25, ended up with a capability ratio of 5.0.
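
As a rough check, the final capability ratio follows from the relaxed specifications and the Sigma(X) of the day 126 to 140 baseline (my arithmetic; the article rounds the result to 5.0):

```python
# Rough check of the final capability ratio, using the relaxed specifications
# of 75 to 105 and the day 126-140 Sigma(X). Quoted above as roughly 5.0.

sigma_x = 2.13 / 2.059             # ~1.03 hundredths of a millimeter
cp_final = (105 - 75) / (6 * sigma_x)
print(round(cp_final, 2))          # ~4.83
```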


Figure 10: World-class quality: Tokai Rika's histogram for days 186 to 380

Time after time I find that world-class quality has capability numbers in the 4.0, 5.0, and 6.0 range, yet here we are, all concerned with getting capabilities of 1.5 to 2.0. Does that sound like a formula for being competitive?

For the rest of this story, and for the original charts, see Chapter Seven of Understanding Statistical Process Control, Third Edition, by Donald J. Wheeler and David S. Chambers (SPC Press, 2010). For the data for days 25 to 185 see Reducing Production Costs, by Donald J. Wheeler (SPC Press, 2010).

Continual improvement

When shown this example, one vice president at GM asked, "Didn't it cost more to make the cigarette lighters that good?" The answer is that it did not. This is simply what the Tokai Rika process was capable of producing—no more and no less. They were simply operating at full potential.

Moreover, since variation always creates costs, and since you can only achieve minimum variation when you operate your process predictably, they were actually reducing the excess costs of production and use by operating this process up to its full potential.

Operating a process predictably is not an impossible task. Neither does it cost more to do so. Tokai Rika owned this market. They had no competition. There was no economic incentive to improve this process. Yet improve it they did by simply attending to the exceptional variation in a timely manner, by finding the assignable causes, and by fixing what needed fixing.

To clarify how process behavior charts help us to operate a process at or near its point of economic equilibrium, we need to consider what the various elements of a chart represent. In his second book, Statistical Method from the Viewpoint of Quality Control (The Graduate School, The Department of Agriculture, Washington, 1939), Walter A. Shewhart showed how a process behavior chart (1) defines an ideal, (2) provides a methodology for moving toward that ideal, and (3) allows us to make a judgment about how close to the ideal we have come.


Figure 11: The process potential

Specifically, in figure 11 the limits on a chart define the process potential. They define the range of values that you are likely to see from a predictable process, and they approximate the range of values that you could expect to see when you learn how to operate your unpredictable process closer to its full potential. To use Shewhart's word, the three-sigma limits define the ideal of what the process can do when it is operated up to its full potential. They define what your process can accomplish.


Figure 12: The process performance

The running record defines the actual process performance. As seen in figure 12, whenever a point goes outside the three-sigma limits it identifies a departure from the routine, a change in the process, and the presence of an assignable cause of exceptional variation. By identifying these points, the process behavior chart tells us when to look for assignable causes. When we can identify an assignable cause and move it from the set of uncontrolled factors to the set of control factors, we will be removing a significant source of variation from the product stream. By removing sources of variation from the product stream we are not merely maintaining the status quo, but are rather tightening up on the process variation and improving both the predictability of the process and the consistency of the process outcomes. So the process behavior chart gives us a methodology for actually moving our process toward the ideal.


Figure 13: The comparison of performance with potential

Finally, in figure 13, by combining both the process potential and the process performance on the same graph, the process behavior chart allows us to make a judgment about how close to full potential our process is operating. The absence of points outside the limits tells of a reasonable degree of predictability. For an unpredictable process, the number of points outside the limits, and the extent to which they fall outside the limits, will quantify the degree of unpredictability.

Hence, a process behavior chart is in itself an operational definition of how to get the most out of any process. It defines the ideal, it provides a method for moving toward that ideal, and it allows a judgment about how close the process is to its point of economic equilibrium. It is this complete package that makes the process behavior chart the locomotive of continual improvement.

No special project teams are needed. No outside experts are required. The workers at Tokai Rika simply listened to the voice of their process and attended to the problems as they arose. As a consequence, the process variation was reduced. Time after time my clients tell me that as they learn how to operate their processes predictably they find that the process variation has also been reduced to one-half, one-third, one-fourth, or one-fifth of what it was before. I have clients that own their markets because they have the highest quality at the lowest price. This is the legacy of continual improvement. You cannot operate your process up to its full potential until you learn how to operate it predictably. And the only way to operate your process predictably is by means of a process behavior chart.

Summary

We started this series by considering report-card charts. Report-card measures tend to be highly aggregated and full of noise. We saw that while report-card charts can be used to tell the story, they are seldom specific enough to facilitate process improvement.

Next we looked at using a process behavior chart as a process monitor. There we discovered that it is one of the best process-monitoring techniques ever invented. It routinely outperforms PID controllers and other adjustment rules. However, we also discovered that merely adjusting the process aim after an upset does not accomplish what can be done when we learn how to operate a process predictably.

Next we saw how process behavior charts can help with process trials by focusing attention on those things that need to be fixed and verifying suspected cause-and-effect relationships.

We looked at the use of process behavior charts as a tool for examining a sequence of values for homogeneity as happens with incoming inspection.

We considered using both input and outcome charts as a way to learn about an existing process when experimentation is not an option. While every input may not be tracked on a chart, the outcome chart will still make it clear when the process is changing. In the absence of signals on the input charts, it becomes clear that there are important but unknown inputs that remain to be found. Since this is equivalent to studying wildlife in the field, I labeled it as an "every factor at the same time" (EFAST) approach.

However, in spite of these varied and important uses, the real purpose for which Shewhart invented the process behavior chart is continual improvement. (This is essentially the same as the EFAST approach without worrying about the input charts.) The control variables are held constant as always, but the uncontrolled variables are all free to change. When the process behavior chart signals a change in the process, we look for the assignable cause of that change. It may be as simple as tool wear or some other change in one of the known control variables, or it may be some overlooked, and uncontrolled, variable that has a dominant effect upon our process. As you fix the things that need fixing, when they need fixing, you will learn how to operate your process up to its full potential. You will gain new technical knowledge, and you will also discover some dumb things you have been doing. As you cease to do the dumb things and use the new technical knowledge, you will be using the process behavior charts for continual improvement.

After all, isn't it time we quit arguing about how good the parts have to be and instead start learning how to make them all exactly the same?


