30,000-foot-level Full of Problems OR Paradigm Shift?


I am writing this response as the author of the November 2006 "3.4 per Million" article, which generated QP Mailbag feedback in the January and February issues of Quality Progress.

My response to Tim Folkerts' comments in the January 2007 issue was given on the discussion board: http://www.asq.org/discussionBoards/thread.jspa?forumID=2&threadID=3965. In that response I stated: "The point relative to the data in Figure 2 not being a representative sample of the population plot shown in Figure 1 is valid. I hope that this inconsistency does not impact the overall message that you receive from the article. Readers of this article should simply view the general shape of the distribution shown in Figure 1 as the population and Figure 2 as a sample of data collected over time."

The 30,000-foot-level reporting concept that I described in the November issues of Quality Progress for 2003, 2004, 2005, and 2006 should be considered a paradigm shift from traditional control charting and process capability/performance metric reporting.

The implication of this charting technique is that it can serve as an operations reporting methodology, or process, for business-wide systems, which reduces organizational firefighting. This approach has many advantages over traditional red-yellow-green business scorecard systems. Readers who are interested in the benefits of this approach should take a look at all of the above-noted articles on this topic.

To address John Flaig's QP Mailbag comments in the February 2007 issue of Quality Progress, I will take a tack that avoids any semantic differences of opinion about the terms common-cause and special-cause variability.

Consider that a quality practitioner wants to create a control chart and make a process capability/performance metric statement about wait time in the checkout line at a grocery store.

One of the first things to consider is selecting a subgrouping frequency. Possibilities for this subgrouping include minute, hour, day, or week time periods. Using traditional control-charting techniques, most practitioners would probably select an X̄ and R control chart as the appropriate process tracking tool with a short subgrouping period; e.g., minute or hour.
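To make the traditional approach concrete, the following sketch computes X̄ and R chart limits from simulated hourly wait-time subgroups. The data, subgroup size, and scale are all hypothetical; the constants A2, D3, and D4 are the standard Shewhart values for a subgroup size of five.

```python
import numpy as np

# Hypothetical data: 40 hourly subgroups of 5 sampled checkout wait times (minutes).
rng = np.random.default_rng(0)
subgroups = rng.exponential(scale=3.0, size=(40, 5))

xbar = subgroups.mean(axis=1)                        # subgroup means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges

# Standard X-bar/R chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114
xbar_bar, r_bar = xbar.mean(), ranges.mean()

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar  # X-bar chart limits
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                        # R chart limits

print(f"X-bar chart: LCL={lcl_x:.2f}, center={xbar_bar:.2f}, UCL={ucl_x:.2f}")
print(f"R chart:     LCL={lcl_r:.2f}, center={r_bar:.2f}, UCL={ucl_r:.2f}")
```

Note that these limits reflect only within-subgroup variability, which is the root of the difficulty described next.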

I think that we would all agree that customer volume can vary by time of day and day of the week in popular grocery stores. I think that we would all agree that this difference in customer demand could affect wait time in the checkout line. For this illustration, let’s consider that this varying demand does affect checkout wait time.

In all likelihood, the described X̄ and R chart will be out of control (see the November 2003 issue of Quality Progress). Because of this out-of-control condition, we should not report any process capability/performance metric. Since the process is reported as unstable, a general statement about customer-experienced checkout wait times could be erroneous.

Consider now that you are the manager of the store. You asked your quality practitioner about checkout wait time. The quality practitioner then states that he cannot report this because the process is out of control. As a store manager, how would you feel? You wanted something that seemed reasonable and your quality guy told you that it cannot be done.

The reason checkout wait time would have out-of-control signals is that the volume of customers changes with the time of day and day of the week. When examined across an entire year, the hourly patterns repeat each day and the day-to-day pattern repeats each week, but there is no repeatable week-to-week pattern.

In the 30,000-foot-level reporting methodology, weekly wait-time means and standard deviations, or log standard deviations (see the November 2003 article), are tracked over time in individuals charts. If these charts are in control, the process can be considered predictable. For predictable processes, individual checkout times would then be used to determine an estimate for the process capability/performance metric. Wait-time improvements in the process capability/performance metric would focus on what could be done differently in the process to reduce the weekly reported means and standard deviations.
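A minimal sketch of this weekly tracking, using simulated data: weekly wait-time means are placed on an individuals chart, with control limits computed from the average moving range in the usual way (center ± 2.66 × average moving range). The weekly values and their scale are assumptions for illustration.

```python
import numpy as np

# Hypothetical data: 30 weekly mean wait times (minutes), one value per subgroup.
rng = np.random.default_rng(1)
weekly_means = rng.normal(loc=4.0, scale=0.5, size=30)

mr = np.abs(np.diff(weekly_means))  # moving ranges between consecutive weeks
center = weekly_means.mean()
mr_bar = mr.mean()

# Individuals-chart limits: center +/- 2.66 * average moving range.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

signals = (weekly_means > ucl) | (weekly_means < lcl)
print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, signals={signals.sum()}")
```

Because the subgroups are a week apart, the week-to-week (between-subgroup) variability sets the limits, which is the point of the infrequent subgrouping.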

For this situation, frequent sampling yields autocorrelated data. Traditional control charting of this autocorrelated data would lead to time of day being considered an assignable cause. Traditional control-charting directives state that assignable causes should be fixed; however, it makes no sense to fix time of day. An alternative would be to use control charts to do a better job of managing the number of people in the checkout line. However, you still probably could not react quickly enough to eliminate these assignable causes so that the control chart would be in control; i.e., one could still probably not report a process capability/performance metric to the store manager.
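The autocorrelation point can be illustrated numerically. In this sketch, a simulated hourly wait-time series inherits a repeating daily demand cycle, so adjacent observations are strongly correlated, while weekly subgroup means largely average the cycle out. The demand model and noise level are assumptions, not data from the article.

```python
import numpy as np

# Hypothetical model: wait time follows a repeating 24-hour demand cycle plus noise.
rng = np.random.default_rng(2)
hours = np.arange(10_000)
demand = 2.0 + np.sin(2 * np.pi * (hours % 24) / 24)
wait = demand + rng.normal(scale=0.3, size=hours.size)

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# Hour-to-hour samples are autocorrelated because of the demand cycle...
print(f"hourly lag-1 autocorrelation: {lag1_autocorr(wait):.2f}")

# ...while weekly means (168 hours = 7 full daily cycles) break that autocorrelation.
weekly = wait[: (wait.size // 168) * 168].reshape(-1, 168).mean(axis=1)
print(f"weekly-mean lag-1 autocorrelation: {lag1_autocorr(weekly):.2f}")
```

This is the numerical face of the subgrouping argument: subgroup infrequently enough that the repeating demand pattern becomes noise within, not signal between, subgroups.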

With the alternative 30,000-foot-level reporting methodology, the store manager’s question is addressed head-on. This approach provides a high-level view of all customers’ experience. It is reasonable to assume that most customers shop in the grocery store on different days of the week, and at different times of the day. For the customer, time of day and day of week are potential sources for wait time variability. The 30,000-foot-level approach considers the time of day and demand variations as noise to the overall system. The 30,000-foot-level reporting intent is not to provide timely feedback for process adjustment.

Because of this, it is very important to select the most appropriate subgrouping frequency and control-charting technique for process capability/performance metric reporting.

For the described situation, an initial 30,000-foot-level control-charting question is the selection of a subgrouping frequency. We want an infrequent subgrouping/sampling period so that typical process variation occurs between subgroups; i.e., we are breaking the autocorrelation. In addition, we want between-subgroup noise variability to impact the control limits.

For the described situation, we probably would expect that there could be differences in wait time between days of the week; hence, a weekly subgrouping would seem most appropriate. Second, since we want the variability between subgroups to impact our control limits, a general rule is that X̄ and R charting would not be appropriate (see the November 2003 Quality Progress article). As noted earlier, assuming that there are multiple random samples within subgroups, individuals charts tracking within-subgroup means and log standard deviations, or standard deviations, would be the most appropriate tool to assess whether, over time, there is a process shift or unusual event.

If the process is in control, it is considered to be stable. It is then reasonable to assume that the process should maintain a similar level of performance in the future, unless something either good or bad occurs in the process. Since people outside the quality community typically have difficulty understanding the terminology "in control," I think it is better to state that the process is stable and predictable. We are then able to consider past performance as a random sample of the future. When making this assessment, we are assuming that nothing new occurs in the system to either positively or negatively impact the system's output.

When a process is predictable, the next question one should be asking is: What is predicted? I have found probability plotting of the individual data point values from a stable, predictable process to be an excellent means to make this statement. My November 2005 Quality Progress article describes this in more detail.
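As a sketch of the kind of prediction statement this yields, the following assumes (hypothetically) that individual wait times are lognormally distributed, computes normal probability-plot coordinates on the log scale, and reports predicted median and 95th-percentile wait times from the fit. The distribution choice and all numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: 500 individual checkout wait times (minutes), assumed lognormal.
rng = np.random.default_rng(3)
wait_times = rng.lognormal(mean=1.2, sigma=0.4, size=500)

# Normal probability-plot coordinates on the log scale (lognormal assumption):
log_w = np.sort(np.log(wait_times))
n = log_w.size
plot_pos = (np.arange(1, n + 1) - 0.5) / n  # plotting positions for the y-axis

# Prediction statement from the fitted lognormal distribution:
mu, sigma = log_w.mean(), log_w.std(ddof=1)
median = np.exp(mu)                 # predicted median wait
pct95 = np.exp(mu + 1.645 * sigma)  # predicted 95th percentile; z(0.95) = 1.645

print(f"predicted median wait: {median:.1f} min; 95th percentile: {pct95:.1f} min")
```

A statement such as "the median wait is predicted to be about X minutes, and 5% of customers are predicted to wait longer than Y minutes" is the kind of answer the store manager was asking for.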

If there is interest, I could conduct a web meeting about my November 2006 and the other related 30,000-foot-level articles. Let me know if you are interested. My contact information is noted below.

Forrest W. Breyfogle III
[email protected]
512-918-0280
