Different views of Common Cause variation – An MSA analog

If you follow Forrest Breyfogle and the Smarter Solutions publications, you will have heard of the 30,000-foot-level methods.  When I teach the concepts, students struggle a bit at first, but the light comes on brightly once the ideas are understood.

The most difficult type of student to teach these methods to is a Statistical Process Control (SPC) expert, because they have been taught the Shewhart SPC methods and probably “love” them, as I did when I was working as a process engineer in a manufacturing plant.  SPC-trained people struggle with the idea that there is more than one way to apply control charts, since the 30,000-foot-level methods are based on a few concepts that say the Shewhart SPC methods are not always the correct choice.

The academic answer is that Shewhart held a process control view, in which the purpose of the control chart is to identify every assignable change in a process so that the operator can adjust it back to its original performance.  Shewhart used the term chance cause for the natural variation of a process that is generated by random variation; he did not label it common cause.

The 30,000-foot-level methods are based on a Deming view of process management.  In the Deming view, the process is addressed with more of a long-term perspective.  Any known or assignable change that is part of managing the process is considered a common cause.  If the business must change shifts or change raw material batches to produce its product or service, then that change is deemed common cause variation because it is part of the managed process variation.

Both views are correct in certain situations.  Shewhart defined common cause as the random variation in a process that is independent of any change.  Deming defined common cause as the random variation plus any known and managed changes in a process.

A change in raw material to produce a batch of product would be considered an assignable cause (not common) in the SPC or Shewhart model.

A change in raw material to produce a batch of product that is a managed change, expected to occur during the production of a customer order, would be considered a common cause event in the 30,000-foot-level methods or the Deming view.

Consider the difference between the two views as analogous to the segmentation of variation performed in a continuous-data Measurement System Analysis (MSA).  In a continuous MSA we work to estimate the precision of a measurement system, which includes two components: repeatability and reproducibility.

Repeatability is the variation described by multiple measurements under identical (repeated) conditions.  This variation is considered the minimum variation that is natural to the system.

Reproducibility is the variation described by multiple measurements under different conditions, such as a different operator, day, part, and so on.  This is the additional variation you should expect when the measurement system is executed throughout your organization over time.

Precision is the total variation assignable to the measurement system.

In this model the repeatability and reproducibility are independent sources, and they add together to estimate the precision.  The formal analysis of these data is a nested ANOVA.
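One detail worth making explicit: because the two components are independent, it is their variances that add, not their standard deviations:

\[
\sigma^{2}_{\text{precision}} = \sigma^{2}_{\text{repeatability}} + \sigma^{2}_{\text{reproducibility}},
\qquad
\sigma_{\text{precision}} = \sqrt{\sigma^{2}_{\text{repeatability}} + \sigma^{2}_{\text{reproducibility}}}
\]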

Considering a process output in the same paradigm, Shewhart was looking at the process performance under identical and repeated conditions.  It is as if he were considering only repeatability and wanted to detect any reproducibility-like variation.

Deming would want us to deal with the precision aspect of a process, because this is the system response and what a customer would view as our process performance.  It is this view that the 30,000-foot-level concepts follow.

The difference between the two methods is the component of reproducibility.  In the Shewhart view, the reproducibility component is treated as an assignable cause because it consists of known changes that are needed to reproduce the process at different times.  In the Deming view, the reproducibility component is considered a common cause variation source because it is a common and known change that is required to execute the process.  Both are right in their specific contexts.

Given the premise that a variation source analogous to reproducibility is what separates the two methods, the choice of a control chart is affected.  In the Shewhart view, all of the variation is purely random, and any chart that uses control limits derived from a random data distribution is acceptable.  The distribution-specific control charts are the p, np, c, and u charts, which produce an out-of-control signal if the distribution requirements are not met.
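To make that distribution dependence concrete, here is a minimal sketch of a p-chart limit calculation in Python (the function name and data layout are mine for illustration, not from any particular SPC package).  The 3-sigma limits come straight from the binomial model, so any extra between-sample variation shows up as out-of-control points:

```python
import math

def p_chart_limits(defectives, subgroup_sizes):
    """p-chart limits: assumes defective counts follow a binomial distribution."""
    p_bar = sum(defectives) / sum(subgroup_sizes)  # overall defective proportion
    limits = []
    for n in subgroup_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar, min(1.0, p_bar + 3 * sigma)))
    return limits

# Example: five subgroups of 100 parts each, with the observed defective counts
print(p_chart_limits([4, 6, 3, 7, 5], [100] * 5))
```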

The x-bar-R chart is reasonable for a Shewhart view because the control limits are derived only from the repeatability component (the within-subgroup range).  The x-bar-R chart will show out-of-control signals if there is any variation source between sample periods that does not exist within the subgroup.  These between-subgroup variation sources are equivalent to the concept of reproducibility.
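A similar sketch of the x-bar limit calculation makes the repeatability-only construction visible: nothing but the within-subgroup ranges feeds the limits (the A2 values are the standard Shewhart constants; the function name is again illustrative):

```python
# Standard Shewhart A2 constants for x-bar limits from the average range (n = 2..5)
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_r_limits(subgroups):
    """x-bar limits built solely from within-subgroup (repeatability) variation."""
    n = len(subgroups[0])
    xbar_bar = sum(sum(g) / n for g in subgroups) / len(subgroups)    # grand mean
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)  # average range
    return xbar_bar - A2[n] * r_bar, xbar_bar, xbar_bar + A2[n] * r_bar

# Example: four subgroups of five measurements taken close together in time
print(xbar_r_limits([[10, 11, 9, 10, 10], [12, 11, 13, 12, 12],
                     [10, 10, 11, 9, 10], [12, 13, 12, 11, 12]]))
```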

The only chart that does not consider only the short-term, repeatability type of variation in its control limit calculations is the individuals and moving range (ImR) chart.  The control limits for an ImR chart are also based on range values, but the range here is the difference between two sequential data points.  Sequential points include both the short-term and the between-sample sources of variation, or, in MSA terms, the repeatability and reproducibility components of the process variation.
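A matching sketch for the individuals chart shows why (names again illustrative): the average moving range of successive points carries both components of variation into the limits:

```python
def imr_limits(x):
    """Individuals chart limits from the average moving range of successive points."""
    mr_bar = sum(abs(b - a) for a, b in zip(x, x[1:])) / (len(x) - 1)
    x_bar = sum(x) / len(x)
    # 2.66 = 3 / d2, where d2 = 1.128 for moving ranges of two points
    return x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar

# Example: one reading per day, spanning shift and raw material batch changes
print(imr_limits([10.2, 10.5, 9.8, 12.1, 11.9, 12.4, 10.1, 10.3]))
```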

This is the reason the individuals chart is the preferred chart in the 30,000-foot-level charting methods, and the one Don Wheeler favors when he talks about process behavior charts for business data.  When looking at the overall business or process performance, we want to look at the consistency of the process, including the known and managed process variation.

I hope this provides another coherent explanation of why SPC charting is not the same as 30,000-foot-level reporting.