The summer 2011 issue of AMA’s Marketing Research magazine includes two articles that discuss the ever-popular whipping boy of marketing research, the Net Promoter® Score (NPS). Randy Hanson (“Life After NPS”) as well as Patrick Barwise and Seán Meehan (“Exploiting Customer Dissatisfaction”) evaluate both the positive and not-so-positive attributes of NPS, along with their ideas for enhancing or actually circumventing the NPS model.
As most researchers know, NPS delivers a metric derived from responses to a single survey question –
How likely is it that you would recommend this company to a friend or colleague?
The score is computed by subtracting the percentage of “detractors” (i.e., respondents who answer this question anywhere from 0 to 6 on the 0-to-10 scale) from the percentage of “promoters” (i.e., respondents who answer either ‘9’ or ‘10’). Fred Reichheld, the developer of NPS, writes about its virtues in his book The Ultimate Question. Reichheld asserts that the value of his ‘recommend question’ is that it focuses on behavior (“what customers would actually do”) and separates drivers of “good profit” from “bad profit,” thereby leading companies to future growth. As Reichheld puts it, the NPS metric produced from this one question is the “one number you need to grow.”
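For readers who want the arithmetic spelled out, here is a minimal sketch of the calculation described above. The function name and the sample scores are mine, not part of any NPS specification; the sketch simply assumes each response arrives as an integer from 0 to 10.

```python
def net_promoter_score(scores):
    """Compute an NPS from a list of 0-10 ratings.

    Promoters rate 9 or 10; detractors rate 0 through 6.
    Passives (7 or 8) count toward the total number of
    responses but cancel out of the subtraction.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Percentage of promoters minus percentage of detractors
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors
# out of 10 responses yields (50% - 20%) = an NPS of 30.0
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))
```

Note that the result ranges from -100 (all detractors) to +100 (all promoters), and that very different distributions of responses can produce the same score.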
Hanson and Barwise/Meehan discuss many of the usual benefits associated with NPS: it is “intuitive” and easy to understand, and the built-in simplicity of the model (the single question, the simple calculation, the output of a single number) gains the attention of top management who might otherwise ignore survey data. They also cover the oft-mentioned drawbacks: the model is overly simplistic, reducing complex behavior and attitudes to a single question and number, and it has not been shown to correlate reliably with its chief raison d’être, predicting growth.
These discussions leave out another all-important downside to NPS. Namely, the recommend question is frequently not a single question. While it may appear to be a single, simple request, the recommend question is in reality embedded with multiple questions, each of which tugs at the respondent, who must weigh which one to answer. Buried in the recommend question are the questions of:
Who – Would I recommend this company to my best friend or to people [such as those at the office] who are friends but not close friends? Should I include my mother, whom I often think of as my best friend?
What – Under what circumstances would I recommend this company? If my “friend” needed one type of service or product from this company, I would give a high recommend rating; but if my “friend” needed something else, I would respond with a lower recommend rating. For instance, my bank offers great in-branch service as well as above-par rates on certificates of deposit but its online banking system is cumbersome and the standard checking account is laden with fees.
When – On what point in time should I base this response? Am I basing this recommendation on just one specific instance rather than all the other times I have purchased from this company? How can I honestly answer if I’m asked to base my answer on my most recent purchase, which is not indicative of my overall experience with this company?
Like the donkey in Shrek, each of these sub-questions is shouting “pick me, pick me,” tormenting the respondent into either: a) opting for one scenario while ignoring all other possible situations (e.g., highly recommending my bank because my “friend” only cares about getting a good rate on a CD), or b) giving up and abandoning the survey.
My choice is typically to give up. Rather than muddy the researcher’s results with what amounts to a half-answer, I opt to drop out of the survey when confronted with this question. Because if you ask me whether I would recommend Starbucks to a friend or colleague, I am thinking about: who counts as a “friend or colleague”; whether this person actually drinks coffee or tea; how I really like Starbucks’ Caffè Mocha but am not a fan of their cappuccino; whether this person likes Caffè Mocha, cappuccino, or neither; and how I received great service along with a great Caffè Mocha the last time I was in Starbucks, while two earlier visits were disappointing, with slow, unfriendly service and a mediocre Caffè Mocha.
So, should I answer ‘9’ or ‘10’ and be categorized as a promoter, give a rating somewhere between ‘0’ and ‘6’ and be labeled a detractor, or respond with a ‘7’ or ‘8’ and be branded “passive” – a “satisfied but unenthusiastic” customer? Or should I just not answer?
Reichheld states that “this single [recommendation] question allows companies to track promoters and detractors, producing a clear measure of an organization’s performance through its customers’ eyes.” It would be one thing if NPS were actually asking just a single question. But the invisible questions that lie beneath Reichheld’s “ultimate question” present a real design problem. Requiring survey participants to weigh various scenarios (undetectable by the researcher) is both confusing and frustrating for the respondent and, of course, impossible to analyze.