Risk Analysis

The problem with a stable process: will it soon fail?

I was talking with a friend this week about a problem at a company we both know.  This manufacturing company has sustained a very capable and stable process for a number of years.  The process is well documented, with good in-process metrics.  They even have strict rules on making changes to the equipment, which require qualification runs to demonstrate validity prior to introduction into general use.

No problems for years, and then the quality fell off a cliff.  Nearly 80% of the product from a long production campaign failed to meet requirements.  When it was investigated, the following was found…

Developing a new theory at work – Causation or Correlation

I am involved in statistical work to understand a process failure.  We know there was a process breakdown somewhere.  It is easy to recognize the artifact that developed because of the problem.  But can the problem be fixed without understanding the generating cause?  Probably not.

Lean Six Sigma training and Root Cause Analysis provide great tools for narrowing a problem down to a single step, or to a short period of time in which something happened to create the problem.  OK, that is the easy part.  How do we remove the true causes?

The risk for a lightly trained improvement leader is jumping on the first cause-like factor that shows a correlation to the period when the problem occurred.  Take this moment to remember what your mentors told you…

Statistical Test Equivalencies: t-test & ANOVA

As I was updating our Green Belt course material to incorporate V17 screen shots, I saw examples of data that could have been tested with two distinctly different methods.

Both conditions could exist at the end of an improvement project when you are demonstrating your improvement with a hypothesis test.

These examples are quite important to me as a statistician and Lean Six Sigma instructor because they show the internal consistency of the inferential statistical methods.

With every available test applied to the data, you will arrive at the correct business decision.
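The equivalence is easy to check numerically: for two groups, the pooled-variance two-sample t statistic and the one-way ANOVA F statistic satisfy t² = F, so both tests return the same p-value. A minimal pure-Python sketch with made-up sample data (the numbers below are illustrative, not from the course material):

```python
import math

def pooled_t_sq(a, b):
    """Square of the pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    sp2 = ss / (na + nb - 2)  # pooled variance estimate
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t * t

def anova_f(a, b):
    """One-way ANOVA F statistic for two groups."""
    n = len(a) + len(b)
    grand = (sum(a) + sum(b)) / n
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ss_between = len(a) * (ma - grand) ** 2 + len(b) * (mb - grand) ** 2
    ss_within = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    return (ss_between / 1) / (ss_within / (n - 2))  # df_between = 1

before = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]  # hypothetical pre-improvement data
after = [5.6, 5.4, 5.8, 5.5, 5.3, 5.7]   # hypothetical post-improvement data

# The two statistics agree exactly: t^2 == F
print(abs(pooled_t_sq(before, after) - anova_f(before, after)) < 1e-9)  # True
```

Since t² = F and the reference distributions match (t with n-2 df squared is F with 1 and n-2 df), either test leads to the same conclusion.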

Server and PC backups

As I have posted before, “Life is a poor teacher; it provides the consequences before the lesson.”  This week I have spent a lot of time with a client restoring scorecard software that they had not put on their IT server backup program.  We spent 8+ hours using screen shots to rebuild the system.  It was tedious, and all of us had other things that should have been getting done.

In the past 30 days, I personally have managed to create havoc on our internal systems, which were fixed through two restorations from backup files.  These two backups saved me a few days of work.

So what is the lesson?

We should fall back on our Lean Six Sigma control plan training and the FMEA worksheet.  Every new improvement, process, or IT system should be examined to make sure it is ready for production use.  The control plan will lead you to create a support system for the process/product that includes troubleshooting personnel and backup hardware.  The FMEA worksheet will lead you through an analysis of all the potential failures of the system so that you will have a plan in place to either eliminate the risk or make recovery easy.

An FMEA by my client would have recognized the risk and triggered inclusion in the IT backup program.  Luckily, the folks here at Smarter Solutions took care of the risk.  They may have even put “Rick” as one of the causes that needed to be managed.  Who knows.

 


Reporting customer experience impact of a project in a prioritization matrix

I have been working with a company whose primary income is generated through customer transactions.  The industry is full of competitors that provide the same basic services, so one of the primary business differentiators is the customer experience and the subsequent loyalty customers show when they continue doing business.

As you can imagine, customer experience impact is a big factor in nearly every business decision.  To support this effort, the company has adopted a variant of the Net Promoter Score to measure the customer experience, along with supporting many of the consumer survey businesses that evaluate its customer service.  By the way, they score quite well on these surveys because of their long-term focus on the customer.

I was acting as a facilitator at a workshop that was evaluating the business’s methods for selecting improvement initiatives and projects.  It went well beyond the concepts used in Lean Six Sigma projects, because the goal was to create a work prioritization for all initiatives and projects to be deployed into the retail part of the business.  These could be IT upgrades driven by technology, changes in building layouts or designs, policy deployment, compliance initiatives, and, of course, improvement projects.  A limited workforce was tasked with taking these initiatives and projects from the business and building a plan, not only potentially to improve each task but also to plan the deployment and manage the change in the retail locations.  On the day of the workshop, they had identified close to 300 potential projects, more projects than they had people.

As a Lean Six Sigma believer, I recommended a weighted prioritization matrix as the tool to use in this effort.  Since there was no prior work in this area, we also used the Analytic Hierarchy Process (AHP) to develop the weights.  Sounds good, right?
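For readers unfamiliar with AHP, the weights come from a matrix of pairwise comparisons between the criteria. A minimal Python sketch, using a hypothetical 3x3 comparison matrix and the common geometric-mean-of-rows approximation of the principal eigenvector (the criteria names and judgments here are mine for illustration, not the client’s):

```python
import math

# Hypothetical pairwise comparison matrix for three criteria
# (rows/columns: financial impact, customer experience, deployment effort).
# M[i][j] expresses how much more important criterion i is than criterion j.
M = [
    [1,     2,   4],
    [1 / 2, 1,   3],
    [1 / 4, 1 / 3, 1],
]

# Geometric mean of each row approximates the principal eigenvector
geo_means = [math.prod(row) ** (1 / len(row)) for row in M]

# Normalize so the weights sum to 1
weights = [g / sum(geo_means) for g in geo_means]
print([round(w, 3) for w in weights])
```

With consistent judgments the geometric-mean approximation matches the exact eigenvector method; for a facilitated workshop it has the advantage of being easy to compute and explain on the spot.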

We worked through the AHP relatively quickly and then went to the prioritization matrix.  The AHP used scoring from 1 to 10, and the prioritization matrix used scoring from -5 to +5.  I know that negative values are not common in a prioritization matrix, but we recognized that the company had to make trade-offs in one area for a gain in another.  These could be as simple as a decision that would reduce operating expenses (a good thing) but might not be taken well by the customers (a bad thing).  The goal was to create a tool by which the company could adequately consider all the benefits and trade-offs in the decision to schedule each initiative and project for execution.
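A weighted prioritization matrix with negative scores reduces to a simple weighted sum per project. A hedged sketch, with hypothetical weights and criteria (not the client’s actual numbers):

```python
# Hypothetical AHP-derived weights for three criteria
weights = {"financial": 0.40, "customer": 0.35, "effort": 0.25}

def priority_score(scores):
    """Weighted sum of -5..+5 criterion scores; can go negative on trade-offs."""
    return sum(weights[c] * scores[c] for c in weights)

# A cost-cutting project: saves money, annoys customers, easy to deploy
cost_cut = {"financial": 4, "customer": -3, "effort": 2}
print(round(priority_score(cost_cut), 2))  # 0.40*4 - 0.35*3 + 0.25*2 = 1.05
```

Projects are then ranked by this score, which is exactly where the trouble described below began.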

The process seemed to be running well until we actually used the prioritization matrix to score a few of the actual projects.  No one really liked the prioritization order that the tool recommended, because it did not match the directives from above.  Specifically, a project to introduce an across-the-board price increase scored very low, but corporate leadership placed it as the number one priority.

Price increases were expected to generate a bit of customer angst and complaints, but the business needed the increase to cover operating expenses, and its competitors were also introducing price increases, so the customer impact was considered acceptable.

The prioritization tool ranked the pricing change nearly at the bottom of the priorities.  This is a problem.  At the executive report-out, we were told that the weightings were the problem and that we needed to weight the financial gains more heavily than the other factors.  But the executives also understood that a high customer experience was in their mission and vision statements and ingrained in the corporate culture, so they were not comfortable with de-emphasizing the customer very much.

After the workshop, the Lean Six Sigma team and I sat down and talked about this problem.  What we recognized is that the typical use of the prioritization matrix was not adequate for considering the customer impact in this case.  We had scored the customer experience from +5 (very positive impact) to -5 (strong negative impact).  It seemed OK at the time, but we realized the weakness was in how the impact was considered over time.  Most prioritization matrix factors are one-time events, such as a cost, a process lead-time change, or a staffing change.  But the customer experience is not such a factor, because it measures a feeling or an emotion that is expected to change over time and may not impact the sales of the business at all.

After a long post-workshop discussion, we arrived at a better operational definition for the customer experience scoring.

-5 = A catastrophic impact to the customer experience for the entire customer base that causes a long-term loss in sales of 1% or greater.

-3 = A measurable drop in the immediate customer experience scores for the entire customer base (or a catastrophic change for a small portion of the customer base) but no measurable change in sales.  CI scores are expected to return to past levels after a short time.

-1 = A minor but measurable drop in the customer experience scores, but it is the same for competitors.

0 = No significant change in the customer experience score or in sales.

+1 = A minor but measurable increase in customer experience scores, but it is the same for competitors.

+3 = A measurable immediate increase in customer experience scores for the entire customer base (or a spectacular change for a small portion of the customer base), but no clear change in sales.  CI scores are expected to return to past levels after a short time.

+5 = A spectacular impact that causes customers to recommend our business, leading to a greater than 1% increase in sales.

What is the big difference in this scoring?  We are accounting for the dynamics of a process output that is not deterministic.  This guide scores the risk to the business rather than the absolute change in the score reported by individuals: a small change to the experience of all customers is probably equal to a big change for a few customers.  In other words, it treats the factor as a risk analysis rather than an absolute measurement.

Customer experience score changes do not always directly change the business financials.  Customer experience scores may spike on emotions but return to historical values in a relatively short time as the customers understand the changes or just get used to them.

We could not really change all the prioritization scores after the workshop, because we did not have the subject matter experts, but when we played with the scorings a bit, the new rankings seemed more appropriate.

I have seen this same issue in companies that have a safety mantra in the culture.  Anything that degrades safety is given an extremely low score, because no one wants to say they are for less safety.  But in reality, most of the negative safety issues were truly just changing the assumed probability of something that had never occurred.

Lessons:

Not all prioritization factors can be treated equally.

Human factors, such as customer experience or employee impact, must be treated differently because their impact changes over time and may not truly affect the business.

The customer experience score can be treated like a risk analysis, where it is not the severity of the event alone that matters: you always consider Severity * Occurrence to get the full measure of the risk.
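That Severity * Occurrence idea can be sketched in a few lines of Python (the scores below are illustrative, not numbers from the workshop), and it makes the earlier point concrete: a small change felt by every customer and a big change felt by a few can carry the same risk.

```python
# FMEA-style sketch: risk = severity * occurrence, each on a 1-10 scale.
def risk_score(severity, occurrence):
    """Combine how bad an event is with how widely/often it occurs."""
    return severity * occurrence

# A minor annoyance affecting nearly the whole customer base...
small_change_everyone = risk_score(severity=2, occurrence=9)

# ...versus a severe problem affecting only a small segment.
big_change_few = risk_score(severity=9, occurrence=2)

print(small_change_everyone == big_change_few)  # True: both score 18
```

Scoring the customer experience this way keeps a high-severity, low-occurrence item from automatically sinking (or inflating) a project’s priority.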

 
