The summer 2011 issue of AMA’s Marketing Research magazine includes two articles that discuss the ever-popular whipping boy of marketing research, the Net Promoter® Score (NPS). Randy Hanson (“Life After NPS”) as well as Patrick Barwise and Seán Meehan (“Exploiting Customer Dissatisfaction”) evaluate both the positive and not-so-positive attributes of NPS, along with their ideas for enhancing or actually circumventing the NPS model.
As most researchers know, NPS delivers a metric derived from responses to a single survey question –
How likely is it that you would recommend this company to a friend or colleague?
The score is calculated by subtracting the percentage of “detractors” (i.e., respondents who answer this question anywhere from 0 to 6 on the 0-to-10 scale) from the percentage of “promoters” (i.e., respondents who answer either ‘9’ or ‘10’). Fred Reichheld, the developer of NPS, writes about the virtues of the metric in his book The Ultimate Question. Reichheld asserts that the value of his ‘recommend question’ is that it focuses on behavior (“what customers would actually do”) and separates drivers of “good profit” from “bad profit,” thereby leading companies to future growth. As Reichheld puts it, the NPS metric produced from this one question is the “one number you need to grow.”
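To make the arithmetic concrete, here is a minimal sketch in Python of how the score could be computed from raw responses; the function name and the sample ratings are hypothetical, purely for illustration, and are not part of Reichheld’s or Bain’s materials.

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 'likelihood to recommend' ratings.

    Promoters rate 9 or 10, detractors rate 0 through 6; passives (7-8)
    count toward the base but are otherwise ignored. The score is the
    percentage of promoters minus the percentage of detractors, so it
    ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("No ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical sample of ten survey responses
sample = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(net_promoter_score(sample))  # 40% promoters - 30% detractors = 10.0
```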
Hanson and Barwise/Meehan discuss many of the usual benefits associated with NPS – e.g., it is “intuitive” and easy to understand, and the built-in simplicity of the model (the single question, the simple calculation, the output of a single number) serves to gain the attention of top management who might otherwise ignore survey data – along with the oft-mentioned drawbacks – e.g., it is overly simplistic, reducing complex behavior and attitudes to a single question and number, and it is not reliably correlated with its chief raison d’être, predicting growth.
These discussions leave out another all-important downside to the NPS: the recommend question is frequently not a single question. While it may appear to be a single, simple request, it is in reality embedded with multiple questions, each of which tugs at the respondent, who must weigh its relevance before answering. Entrenched in the recommend question are the questions of:
Who – Would I recommend this company to my best friend or to people [such as those at the office] who are friends but not close friends? Should I include my mother, whom I often think of as my best friend?
What – Under what circumstances would I recommend this company? If my “friend” needed one type of service or product from this company, I would give a high recommend rating; but if my “friend” needed something else, I would respond with a lower rating. For instance, my bank offers great in-branch service as well as above-par rates on certificates of deposit, but its online banking system is cumbersome and the standard checking account is laden with fees.
When – On what point in time should I base this response? Am I basing this recommendation on just one specific instance and not the other times I have purchased from this company? How can I honestly answer if I’m asked to base my answer on my most recent purchase, which is not indicative of my overall experience with this company?
Like Donkey in Shrek, each of these sub-questions shouts “pick me, pick me,” tormenting the respondent into either: a) opting for one scenario while ignoring all other possible situations (e.g., highly recommending my bank because my “friend” only cares about getting a good rate on a CD), or b) giving up and abandoning the survey.
My choice is typically to give up. Rather than muddy the researcher’s results with what amounts to a half-answer, I opt to drop out when confronted with this question as a survey taker. If you ask me whether I would recommend Starbucks to a friend or colleague, I am thinking about: whom to consider a “friend or colleague”; whether this person actually drinks coffee or tea; how I really like Starbucks’ Caffè Mocha but am not a fan of their cappuccino; whether this person likes Caffè Mocha, cappuccino, or neither; and how I received great service along with a great Caffè Mocha the last time I was in Starbucks, but two earlier visits were disappointing, with slow, unfriendly service and a mediocre Caffè Mocha.
So, should I answer ‘9’ or ‘10’ and be categorized as a promoter, give a rating somewhere between ‘0’ and ‘6’ and be labeled a detractor, or respond with a ‘7’ or ‘8’ and be branded “passive” – a “satisfied but unenthusiastic” customer? Or should I just not answer?
Reichheld states that “this single [recommendation] question allows companies to track promoters and detractors, producing a clear measure of an organization’s performance through its customers’ eyes.” It would be one thing if the NPS were actually asking just a single question. But the invisible questions that lie beneath Reichheld’s “ultimate question” present a real design issue. Requiring survey participants to consider various scenarios (undetectable by the researcher) is both confusing and frustrating for the respondent and, of course, impossible to analyze.
Hi Margaret,
Good post – and you make some good points about NPS.
But isn’t the problem you are describing in fact part of every question we ask? If I ask a respondent to tell me whether Starbucks is “A brand I trust,” don’t I have to sort through the context and make the appropriate “translations”? What is Starbucks – is it a coffee, an in-store experience, or some variable combination of those? What is trust? I am sure that we have a shared meaning for it, but there are differences in what each of us thinks “trust” is.
I have been using an NPS-type question in pretty much all of my work for the last 5 years or so and have found it very useful in:
1. Getting senior management’s attention – often they feel things are going well, or are as good as they can be; then they see that they have an NPS in the 20s while Apple has one in the 80s. This tends to get them to understand that there is room for improvement.
2. Tying the score to drivers of both satisfaction and dissatisfaction. This runs counter to the Bain approach of asking just the NPS question and little else, so that the results are uncluttered or unburdened by noise from other questions.
One thing I have learned is that NPS needs to be asked along with awareness – in every study I have done over the last five years, the highest NPS scores are found among those who are most aware.
Bruce
Hi Bruce,
Good to see you here, and thank you for a great comment.
I think you make my point very well. For instance, I have also worked with the idea of “trust,” and it is, indeed, not a one-size-fits-all concept – not unlike most of the other terminology and phrasing we use in our research. That is what makes the ‘recommend question’ problematic.
I think you and others have pretty much confirmed that gaining the attention of top management is a key driver for using the NPS. It is a reason; I just don’t think it is a sufficient one.
And, yes, I would guess that awareness and the NPS score are closely linked. After all, it is difficult to recommend something that I am not aware of :).
Thanks again, Bruce. Your comments are much appreciated.
Margaret