Reporting customer experience impact of a project in a prioritization matrix
Aug 26th, 2012 by Rick Haynes
I have been working with a company whose primary income is generated by customer transactions. The industry is crowded with competitors providing the same basic services, so one of the primary business differentiators is the customer experience and the loyalty customers show by continuing to do business.
As you can imagine, the customer experience impact is a big factor in nearly every business decision. To support this effort, the company has adopted a variant of the Net Promoter Score to measure the customer experience, and it also engages many of the consumer survey firms to evaluate its customer service. By the way, it scores quite well on these surveys because of its long-term focus on the customer.
I was facilitating a workshop that was evaluating the business's methods for selecting improvement initiatives and projects. It went well beyond the concepts used in Lean Six Sigma type projects because the goal was to create a work prioritization for all initiatives and projects that would be deployed into the retail part of the business. These could be IT upgrades driven by technology, changes in building layouts or designs, policy deployment, compliance initiatives, and, of course, improvement projects. A limited workforce was tasked with taking these initiatives and projects from the business and building a plan not only to improve the underlying task but also to plan the deployment and manage the change in the retail locations. By the day of the workshop, they had identified close to 300 potential projects, more projects than they had people.
As a Lean Six Sigma believer, I recommended a weighted prioritization matrix as the tool for this effort. Since there was no prior work in this area, we also used the Analytic Hierarchy Process (AHP) to develop the weights. Sounds good, right?
We worked through the AHP relatively quickly and then moved to the prioritization matrix. The AHP used scores from 1 to 10, and the prioritization matrix used scores from -5 to +5. I know that negative values are not common in a prioritization matrix, but we recognized that the company had to accept trade-offs in one area for a gain in another. A decision might reduce operating expenses (a good thing) but not be taken well by the customers (a bad thing). The goal was a tool by which the company could adequately weigh all the benefits and trade-offs when deciding whether to schedule each initiative and project for execution.
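The mechanics of combining AHP weights with a -5 to +5 matrix can be sketched in a few lines. The criteria, pairwise comparisons, and project scores below are made-up illustrations, not the company's actual data; the weights use the common geometric-mean approximation of AHP rather than the full eigenvector method.

```python
from math import prod

# Hypothetical criteria and pairwise comparison matrix (Saaty-style):
# A[i][j] = how much more important criterion i is than criterion j.
criteria = ["financial", "customer_experience", "compliance", "effort"]
A = [
    [1,     2,     3,     5],
    [1/2,   1,     2,     4],
    [1/3,   1/2,   1,     3],
    [1/5,   1/4,   1/3,   1],
]

# Approximate the AHP weights with the geometric mean of each row,
# normalized so the weights sum to 1.
geo = [prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(geo) for g in geo]

# Score each project on every criterion from -5 (strong trade-off)
# to +5 (strong benefit), then rank by the weighted sum.
projects = {
    "price_increase": {"financial": 5, "customer_experience": -3,
                       "compliance": 0, "effort": -1},
    "store_redesign": {"financial": 1, "customer_experience": 4,
                       "compliance": 0, "effort": -2},
}

def weighted_score(scores):
    return sum(w * scores[c] for c, w in zip(criteria, weights))

ranked = sorted(projects, key=lambda p: weighted_score(projects[p]),
                reverse=True)
for name in ranked:
    print(name, round(weighted_score(projects[name]), 2))
```

With these illustrative numbers, a project with a moderate negative customer score can still land near the top if its financial weight dominates, which is exactly the tension the workshop ran into.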
The process seemed to be running well until we actually used the prioritization matrix to score a few real projects. No one liked the prioritization order the tool recommended, because it did not match the directives from above. Specifically, a project to introduce an across-the-board price increase scored very low, but corporate leadership had placed it as the number one priority.
Price increases were expected to generate some customer angst and complaints, but the business needed the increase to cover operating expenses, and its competitors were also raising prices, so the customer impact was considered acceptable.
The prioritization tool ranked the pricing change near the bottom of the priorities. This was a problem. At the executive report-out, we were told that the weightings were the issue and that we needed to weight the financial gains more heavily than the other factors. But the executives also understood that a high customer experience was in their mission and vision statements and ingrained in the corporate culture, so they were not comfortable with de-emphasizing the customer very much.
After the workshop, the Lean Six Sigma team and I sat down and talked through this problem. What we recognized was that the typical use of the prioritization matrix was not adequate for considering the customer impact. We had scored the customer experience from +5 (very positive impact) to -5 (strong negative impact). It seemed fine at the time, but we realized the weakness was in how the impact over time was considered. Most prioritization matrix factors are one-time events, such as a cost, a process lead-time change, or a staffing change. The customer experience is not such a factor: it measures a feeling or an emotion that is expected to change over time and may not impact the sales of the business at all.
After a long post-workshop discussion, we arrived at a better operational definition for the customer experience scoring:
-5 = A catastrophic impact to the customer experience for the entire customer base that causes a long-term loss in sales of 1% or greater.
-3 = A measurable drop in the immediate customer experience scores for the entire customer base (or a catastrophic change for a small portion of the customer base), but no measurable change in sales. Customer experience scores are expected to return to past levels after a short time.
-1 = A minor but measurable drop in the customer experience scores, but it is the same for competitors.
0 = No significant change in the customer experience score or in sales.
+1 = A minor but measurable increase in customer experience scores, but it is the same for competitors.
+3 = A measurable immediate increase in customer experience scores for the entire customer base (or a spectacular change for a small portion of the customer base), but no clear change in sales. Customer experience scores are expected to return to past levels after a short time.
+5 = A spectacular impact that causes customers to recommend our business, leading to a greater than 1% increase in sales.
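The revised scale can be sketched as a small scoring helper. The inputs and thresholds below are illustrative assumptions on my part, not the team's actual decision procedure; the point is that breadth, persistence, and sales impact drive the score, not the raw emotional reaction.

```python
def cx_score(direction, lasting_sales_delta_pct, matched_by_competitors):
    """Map a project's expected customer-experience effect to the
    revised -5..+5 scale (a sketch with illustrative thresholds).

    direction: -1 negative reaction, 0 none, +1 positive reaction
    lasting_sales_delta_pct: expected persistent change in sales (%)
    matched_by_competitors: True if competitors make the same change
    """
    if direction == 0:
        return 0
    # A lasting sales impact of 1% or more dominates everything else.
    if abs(lasting_sales_delta_pct) >= 1:
        return 5 * direction
    # Competitors making the same change blunt any loyalty effect.
    if matched_by_competitors:
        return 1 * direction
    # Broad but transient reaction (or an extreme one for a small
    # segment): scores are expected to recover, sales do not move.
    return 3 * direction

# An across-the-board price increase that competitors match, with no
# lasting sales change, lands at -1 instead of a panic-driven -5.
print(cx_score(-1, 0, True))
```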
What is the big difference in this scoring? We are accounting for the dynamics of a process output that is not deterministic. This guide scores the risk to the business rather than the absolute change reported by individuals: a small change across the entire customer experience is probably equal to a big change for a few customers. In effect, it treats the factor as a risk analysis rather than an absolute measurement.
Customer experience score changes do not always translate directly into the business financials. Scores may spike on emotion but return to historical values in a relatively short time as customers understand the changes or simply get used to them.
We could not realistically rescore all the projects after the workshop because we no longer had the subject matter experts in the room, but when we played with the scorings a bit, the new rankings seemed more appropriate.
I have seen this same issue in companies that have a safety mantra in the culture. Anything that degrades safety is given an extremely low score because no one wants to say they are for less safety. But in reality, most of the negative safety issues were really just changes to the assumed probability of something that had never occurred.
Not all prioritization factors can be treated equally.
Human factors, such as customer experience or employee impact, must be treated differently because their impact changes over time and may never truly affect the business.
The customer experience score is better treated like a risk analysis, where the severity of the event alone is not what matters: you consider Severity * Occurrence to get the full measure of the risk.
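The Severity * Occurrence idea can be shown with an FMEA-style calculation. The 1-to-10 scales and the example values below are illustrative assumptions, not measurements from the company.

```python
# FMEA-style framing: treat the customer-experience factor as a risk,
# Risk = Severity x Occurrence, rather than severity alone.
# Both scales are illustrative here: 1 = low, 10 = high, where
# "occurrence" is the likelihood the reaction persists long enough
# to actually affect the business.
def cx_risk(severity, occurrence):
    return severity * occurrence

# Broad price increase: severe initial reaction (8), but unlikely to
# persist because competitors raised prices too (2).
print(cx_risk(8, 2))   # 16

# Policy change that alienates a loyal niche: milder reaction (4),
# but very likely to turn into lost business (8).
print(cx_risk(4, 8))   # 32
```

Under this framing the quieter, more persistent problem outranks the loud but transient one, which matches the post-workshop conclusion.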