Why do we use Statistical Process Control (SPC)?

In a typical Lean Six Sigma course every student is taught traditional SPC, as are most engineers and workers in a manufacturing environment.  I took the standard Shewhart SPC courses as a manufacturing process engineer and again when I went through Six Sigma training.

Statistical process control was developed by Walter Shewhart as a method for observing a manufacturing process so that the operators could use the chart to manually control it.  An out-of-control condition was said to have an “assignable cause” because the operator would adjust the process back to nominal after verifying the assignable-cause event.

In short, the SPC chart is meant to drive actions by a process operator that restore the process to its original performance.

The problem for me was that SPC always seemed to work perfectly in class, yet it constantly caused problems when applied in the business.

The problems could be grouped into a few categories:

  • Out-of-control conditions were indicated, but no real problems were ever found.
  • The process appeared extraordinarily stable on the SPC chart, but the process output did not seem stable at all.

In our manufacturing environment, we simply learned to accept that certain processes always indicated out-of-control conditions and that there was nothing we could do about it; we kept operating with OOC conditions and stopped controlling those processes from the SPC chart.  Other processes looked great, with no values near the control limits, yet their output appeared very unstable, so we ignored those SPC charts as well and controlled the processes by other means.

In both cases the business had decided to spend resources collecting data to plot on an SPC chart, and then chose not to act on the chart’s signals.  You might see the same behavior in your organization.

This “BAD” behavior occurred, in my opinion, because we were taught a tool without being told about the limitations and assumptions required to use it properly.  Here is a list of the limitations and assumptions that were never taught to me:

1. Individuals charts require the data to be at least approximately normally distributed before the control limits truly bound 99.73% of the data and act like true 3σ limits.  If the data are not normally distributed, you must either transform the data or subgroup and use averages.
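
To make this concrete, here is a minimal Python sketch (all values are made up) of how individuals-chart limits are built from the average moving range, with a normality check before the limits are trusted as 3σ bounds:

```python
# Minimal I-chart sketch; `data` stands in for real consecutive measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=0.5, size=50)  # hypothetical process data

# Limits come from the average moving range; d2 = 1.128 for spans of 2.
sigma_hat = np.abs(np.diff(data)).mean() / 1.128
center = data.mean()
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

# The 99.73% coverage claim only holds for roughly normal data, so test
# normality before trusting the limits as true 3-sigma bounds.
stat, p_value = stats.shapiro(data)
print(f"UCL={ucl:.2f}  LCL={lcl:.2f}  Shapiro-Wilk p={p_value:.3f}")
if p_value < 0.05:
    print("Data look non-normal: transform or subgroup-and-average first.")
```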

2. All of the variation sources used to compute the control limits are treated as common-cause variation; any variation source not included in the control-limit calculation will be treated as a special cause.  This is a key point because it is the most common error in SPC.  If a single data stream includes multiple raw-material batches, lots, machines, or operators, and they do not all change with every data point, the chart will signal out-of-control with every change, even though these changes really are part of the common-cause variation of the business.
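
A short simulation, with entirely hypothetical numbers, shows the failure mode: routine batch changes that are excluded from the common-cause estimate trigger out-of-control signals on an individuals chart:

```python
# Hypothetical simulation: five raw-material batches, ten readings each.
# The batch offsets are routine for the business, but only four of the
# 49 moving ranges span a batch change, so the limits are dominated by
# the tight within-batch noise and the batch shifts get flagged.
import numpy as np

rng = np.random.default_rng(7)
batch_means = [10.0, 10.8, 9.4, 10.5, 9.7]          # assumed batch offsets
data = np.concatenate([rng.normal(m, 0.1, 10) for m in batch_means])

sigma_hat = np.abs(np.diff(data)).mean() / 1.128     # I-chart sigma estimate
ucl = data.mean() + 3 * sigma_hat
lcl = data.mean() - 3 * sigma_hat
flagged = np.sum((data > ucl) | (data < lcl))
print(f"{flagged} of {data.size} points flagged as special cause")
```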

3. X-bar-R charts are not the best chart to use because the control limits include only the within-subgroup process variation; any variation that happens between subgroups will be seen as a special cause.  Nearly every SPC course presents the X-bar-R chart as the workhorse chart because averaging subgroups eliminates the need to consider the distribution of the original data.  That is true, but it is immaterial to the choice of the X-bar-R chart.  After years of SPC application, I now believe the applicability of the X-bar-R chart is very narrow: it should be used only for a single high-volume continuous process with no expected between-sample variation.
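
The sketch below, again with hypothetical data, shows why: the X-bar-R limits are computed from R-bar alone, so ordinary drift between subgroups lands outside them:

```python
# X-bar-R sketch for 20 subgroups of n = 5 (hypothetical data).  The limits
# are built entirely from R-bar, the average *within*-subgroup range, so the
# drift *between* subgroups never enters the calculation and gets flagged.
import numpy as np

rng = np.random.default_rng(1)
subgroup_means = 10.0 + rng.normal(0, 0.4, 20)   # assumed between-subgroup drift
subgroups = np.array([rng.normal(m, 0.1, 5) for m in subgroup_means])

xbar = subgroups.mean(axis=1)
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()

A2 = 0.577                                       # tabled constant for n = 5
ucl, lcl = xbar.mean() + A2 * rbar, xbar.mean() - A2 * rbar
flagged = np.sum((xbar > ucl) | (xbar < lcl))
print(f"{flagged} of {xbar.size} subgroup averages flagged as special cause")
```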

4. The entire set of attribute charts (p, np, u, and c) should never be used in a real business.  I have never seen a case where the use of these charts met the assumptions required for the control limits to be actual 3σ limits.  Each of these charts is effectively a test for a perfectly random data distribution: p and np charts are only in-control if the data follow a random binomial distribution, and u and c charts require a random Poisson distribution to be stable.  To meet these conditions, the probability of a defect or defective part MUST be the same for every part or assembly processed.  In all four charts, the average defect count or rate is used to calculate the control limits, which means the chart will signal out-of-control whenever the true average rate or count fluctuates because of the operators, the raw materials, the type of defect, the machine used, or anything else that changes routinely in a business.  The only reasonable use for a p-chart would be examining a single defect type on a single machine for a single product or component.  A p-chart will not work to monitor a process that runs multiple products (which may have different complexities and defect expectations) or multiple equipment combinations (which have different capability expectations), and it is never acceptable for judging overall site or process performance!
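
A final hypothetical sketch shows the same effect for a p-chart: binomial-based limits assume one defect probability for every unit, so a true rate that wanders day to day is flagged constantly:

```python
# Hypothetical p-chart: 30 days, 1000 units inspected per day.  The limits
# assume one fixed defect probability for every unit; when the true daily
# rate wanders (mixed products, machines, operators), routine variation is
# flagged as special cause.
import numpy as np

rng = np.random.default_rng(3)
n = 1000                                      # units inspected per day
true_rates = rng.uniform(0.02, 0.07, 30)      # assumed day-to-day rate drift
p = rng.binomial(n, true_rates) / n           # observed daily fraction defective

pbar = p.mean()
sigma_p = np.sqrt(pbar * (1 - pbar) / n)      # binomial-based sigma
ucl, lcl = pbar + 3 * sigma_p, max(pbar - 3 * sigma_p, 0.0)
flagged = np.sum((p > ucl) | (p < lcl))
print(f"{flagged} of {p.size} daily points flagged as special cause")
```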

Why are these concepts not taught?  Probably because most instructors are teaching just what they were told and are not truly experienced in SPC usage.  It is a perfect example of Deming’s rule 4 of the funnel, since I am sure Shewhart understood these concepts in the 1930s.

The only answer to this conundrum is to adopt 30,000-foot-level control charting for the cases where the assumptions behind the standard SPC tools are not met.  Search this blog to read about these methods.