Confusion & Misinterpretation Associated with Scale-point Terminology
April 27, 2012
Is there really any difference between “extremely” and “very”? What about “moderately” and “slightly”? And if something is “poor,” does it really matter that someone else rated it “very poor”? Rating scales using this type of terminology have been around for a long time, yet it is curious that they continue to show up in survey design.
In February 2012, Gallup posted the results of a survey question that asked registered voters about the importance of various issues to their decision to vote for one presidential candidate over another. The interviewer stated that he/she would be reading a list of issues and then instructed the respondent to
“…please tell me how important the candidates’ positions on that issue will be in influencing your vote for president – extremely important, very important, somewhat important, or not important.”
Similarly, the Kaiser Family Foundation’s February 2012 Kaiser Health Tracking Poll asked about the importance of health-related issues to the voting decision:
“…tell me how important [each of the following issues] will be to your vote. Would you say this issue will be extremely important to your vote for president, very important, somewhat important, or not too important to your vote?”
SurveyMonkey and others on the Web confuse things further by promoting a double-whammy scale effect with potential duplication at both the top and middle levels. For instance, the SurveyMonkey “Market Research Template” recommends the following question:
5. If our new product were available today, how likely would you be to use it instead of competing products currently available from other companies?
○ Extremely likely
○ Very likely
○ Moderately likely
○ Slightly likely
○ Not at all likely
M/A/R/C Research, in its “Best Practices for Constructing Quantitative Rating Scales that Minimize Scale Use Bias,” states that “anchors should be distinctly clear in meaning and create inter-scale-point ‘mentally interpreted distance’ that are as equal as possible.” It uses the example of “somewhat” and “moderately,” asserting that these are not distinctly different scale points and that their use will ultimately lead to “scale use bias.”
Much of this “bias” can be attributed to the burden that questions such as those above pose to respondents as well as to researchers. Asking respondents to ascertain the difference between “extremely” and “very” or “moderately” and “slightly” becomes a mental challenge as survey takers work their way through the four-step cognitive process: 1) interpreting the question to deduce the intended distinction between terms; 2) searching the mind for relevant information; 3) integrating that information into a judgment; and 4) translating that judgment into a response. This cognitive burden is exacerbated by its potentially damaging effect of adding to the perceived length of the survey, which in turn increases the likelihood of breakoffs.
Likewise, the researcher is left with the burden of interpreting survey responses, which, of course, are a product of respondents’ interpretations of confusing terminology. Given the evidence that people generally don’t agree on what “very” anything – “very likely,” “very satisfied,” “very important” – means, it seems unreasonable to assume that the analyst can reach a realistic conclusion from extremely-very and moderately-slightly data. Unless, of course, the analyst relies on his/her own interpretation of the terms, which is meaningless since the responses belong to respondents, not researchers.
It is easy to argue that interpretation (and the potential for misinterpretation) is an issue in any question design, but the unnecessary use of duplicative terms presents an added layer of burden – to both respondents and researchers – that should be avoided in good research design.