Greg Allenby, marketing chair at Ohio State’s business school, published an article in the May/June 2014 issue of Marketing Insights on heterogeneity – more specifically, on the idea that 1) accounting for individual differences is essential to understanding the “why” and “how” that lurks within research data, and 2) research designs often mask these differences by neglecting the relative nature of the constructs under investigation. For instance, research concerning preference or satisfaction is useful to the extent it helps explain why and how people think differently about their preferences or levels of satisfaction, yet these are inherently relative constructs that hold meaning only if the researcher understands the standard (the “point of reference”) against which the current question of preference or satisfaction is being weighed – i.e., my preference (or satisfaction) compared to…what? Since the survey researcher is rarely, if ever, clued in to respondents’ points of reference, it would be inaccurate to make direct comparisons such as stating that one person’s product preference is twice as great as another’s.
The embedded “relativeness” associated with responding to constructs such as preference and satisfaction is just one of the pesky problems inherent in designing this type of research. A related but different problem revolves around the personal interpretation given to the construct itself. This is particularly troublesome with respect to satisfaction. It frequently happens that, in the research design phase, both client and researcher readily agree to include something along the lines of “Please rate your overall satisfaction with…,” comfortable in using a ubiquitous question that everyone understands. After all, they are asking about “satisfaction” – what more is there to know? Answer: lots.
What makes satisfaction research – specifically, the satisfaction question – so puzzling is the frequent failure to recognize that the typical satisfaction question can be interpreted in many divergent ways, yet researchers rarely explore the meanings associated with “satisfaction” in the context being asked. Left on its own, the satisfaction question presents the researcher with ambiguous data based on confounding and multifaceted interpretations of “satisfaction.” “Please rate your overall satisfaction with your new car purchase.” Are you asking me about:
- Happiness – How happy I am to have a new car?
- Happiness – How happy I am to finally have this particular car, the one I’ve always dreamed of owning?
- Expectations – Does the new car meet my preconceived needs or expectations?
- Expectations – Did the car-purchase experience meet my preconceived expectations?
- Loyalty – Has the purchase established or solidified my loyalty to the car dealer?
- Emotional gratification – Does the new car give me peace of mind?
- Quality of life – Has the quality of my life improved because commuting is now more pleasurable in my new car?
- Customer service – Was I kept well-informed during the purchase process?
- Customer service – Were the people I dealt with pleasant and enjoyable to work with?
- Customer service – Did the person I worked with understand my needs?
And so on – you get the idea. The point is that researchers never know exactly what they are measuring in satisfaction research unless, of course, they make a specific effort to delve into respondents’ interpretations of the all-important satisfaction question. By not doing so, the researcher is left with a conundrum. On the one hand, the researcher might be able to report, for instance, that 90% of the customers sampled are “very satisfied” with their most recent purchase experience – and bask in the glow of smiles emanating from clients’ faces – but, on the other hand, have nothing to say about the meanings or associations these customers gave to “satisfaction” when they went about answering the question. This leaves a gaping hole that renders the research of limited value.
As discussed here and elsewhere in this blog, the goal of all researchers, in some shape or form, is to learn how people think. This presents researchers with their #1 challenge: to heighten their awareness of the myriad assumptions they harbor about the constructs they hope to measure, and then to build a remedy into their research designs that addresses these assumptions by clarifying the thinking – the interpretations and meanings – people use to personally define the researchers’ constructs and formulate a response to their questions. Validity at its best.
Image captured from: http://gygrazok.deviantart.com/art/Entwined-Conundrum-56054549
The satisfaction question does indeed present a conundrum. On the one hand, simply asking customers how satisfied they are with a given purchase does, as discussed above, produce rather vague results; the response depends on the respondent’s point of reference. On the other hand, there is the practical difficulty of getting consumers to respond to better, more in-depth surveys. Personally, I’m far more likely to respond to a short, one- or two-question multiple-choice survey about my satisfaction than to a lengthy, 20-question survey, especially one that requires typed answers. Unfortunately, striking the balance between better surveys and surveys customers will actually answer is a knotty problem, and it may involve distributing larger numbers of short surveys that ask more pointed questions.