Attribute Sample Size Determination and Answering the Right Question for Scorecards
Aug 13th, 2009 by Forrest Breyfogle.
ORIGINAL QUESTION: Suppose a process is producing 98% good material as measured by a pass/fail (non-parametric) attribute. You are asked to pull a sample from stock and screen it for this defect. How would you calculate the number of samples to pull if you wanted to be 95% confident of detecting the 2% defect rate? Assume no gage error.
MY RESPONSE: A random sample of 149 with no failures provides 95% confidence that the population's non-conformance rate is 2% or less. This statement assumes that the sample size is small relative to the lot size; e.g., one tenth or less.
However, if one failure occurred in the test, the population could still have a failure rate of less than 2%. Indeed, with one failure the sample failure rate itself would be below the criterion; i.e., 1/149 is less than 2%. This form of testing is valid; however, we must remember that it addresses only consumer risk, not the producer's risk of being in error.
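The two calculations above can be sketched in Python. This is the standard zero-failure (success-run) sample-size formula, plus a check of how much confidence a single failure in 149 actually demonstrates; the function names are mine, not from any particular library.

```python
from math import ceil, comb, log

def zero_failure_n(p, confidence):
    """Smallest n such that (1 - p)**n <= 1 - confidence, i.e. observing
    zero failures in n samples demonstrates, at the stated confidence,
    a failure rate of at most p."""
    return ceil(log(1 - confidence) / log(1 - p))

def confidence_given_failures(failures, n, p):
    """Confidence that the rate is at most p given the observed failure
    count: 1 - P(X <= failures) for X ~ Binomial(n, p)."""
    cdf = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(failures + 1))
    return 1 - cdf

print(zero_failure_n(0.02, 0.95))               # 149 samples, zero failures allowed
print(confidence_given_failures(1, 149, 0.02))  # ~0.80: one failure drops us below 95%
```

Note that one failure in 149 leaves only about 80% confidence that the rate is at most 2%, which is why a pass/fail plan must state the allowed failure count up front.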
For a test plan where both alpha (producer) and beta (consumer) risks are considered, the required sample size would be much larger.
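To illustrate how much larger, here is a rough sketch of an exact-binomial acceptance-sampling search. The "acceptable" quality level of 0.5% is my assumption for illustration only; the original question specifies no producer-side quality level.

```python
from math import comb

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def sampling_plan(p_good, p_bad, alpha, beta, n_max=2000):
    """Smallest (n, c) plan: accept the lot if at most c failures appear
    in n samples, such that
      producer risk P(reject | p = p_good) <= alpha, and
      consumer risk P(accept | p = p_bad)  <= beta."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if binom_cdf(c, n, p_bad) > beta:
                break  # consumer risk violated; larger c only makes it worse
            if 1 - binom_cdf(c, n, p_good) <= alpha:
                return n, c
    return None

# assumed values: 0.5% is an acceptable process, 2% is rejectable
print(sampling_plan(p_good=0.005, p_bad=0.02, alpha=0.05, beta=0.05))
```

The resulting n is several times the 149 of the zero-failure plan, which is the point of the sentence above: protecting both parties is expensive in samples.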
For situations like this, it is important to step back to the big picture to consider whether there is a better approach/question to answer. With lot testing like this, we are forced to consider that the population is the batch. However, in most situations the batch is taken from a process, which often has an overall common cause percentage non-conformance.
If we can obtain time-series data from the process, we can assess via a control chart whether there are noticeable differences between lots. If no differences can be detected between these lots/subgroups, we can combine subgroup data from regions of stability to make a best estimate of the non-conformance rate for this region, with a confidence interval if desired. We might consider that this approach gives us a somewhat free increase in sample size.
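A minimal sketch of that pooled estimate, assuming a control chart has already shown the lots to be stable; the lot counts below are hypothetical, and the Wilson score interval is one common choice for a proportion confidence interval (the original does not name a specific interval method).

```python
from math import sqrt

def wilson_interval(failures, n, z=1.96):
    """Wilson score confidence interval for a proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# hypothetical data: (failures, sample size) for five stable lots
lots = [(2, 149), (3, 149), (1, 149), (4, 149), (2, 149)]
fails = sum(f for f, _ in lots)
total = sum(n for _, n in lots)

lo, hi = wilson_interval(fails, total)
print(f"pooled rate {fails/total:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
```

Pooling five stable lots gives 745 samples instead of 149, which is the "somewhat free increase in sample size" described above, and a correspondingly tighter interval around the estimated non-conformance rate.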
An additional benefit of this approach is that, if the stability is recent, we can treat these data as a random sample of the future. The value of viewing the situation this way is that, if we don't like what we are predicting, we know we need to change the process inputs or process steps with the intent of reducing the future failure rate.
This 30,000-foot-level metric reporting methodology, along with the reasons individuals control charts are in general preferable to p-charts, is described in the article "30,000-foot-level Attribute Control Charting".
Lean Six Sigma Black Belt training and the balanced scorecard methodology can gain much by incorporating these techniques.