Question Design

Satisfaction Research & Other Conundrums

Greg Allenby, marketing chair at Ohio State’s business school, published an article in the May/June 2014 issue of Marketing Insights on heterogeneity or, more specifically, on the idea that 1) accounting for individual differences is essential to understanding the “why” and “how” that lurk within research data and 2) research designs often mask these differences by neglecting the relative nature of the constructs under investigation. For instance, research concerning preference or satisfaction is useful to the extent it helps explain why and how people think differently about their preferences or levels of satisfaction. Yet these are inherently relative constructs that hold meaning only if the researcher understands the standard (the “point of reference”) against which the current question of preference or satisfaction is being weighed – i.e., my preference (or satisfaction) compared to…what? Since the survey researcher is rarely if ever clued in on respondents’ points of reference, it would be inaccurate to make direct comparisons, such as stating that one person’s product preference is twice as great as another’s.

The embedded “relativeness” involved in responding to constructs such as preference and satisfaction is just one of the pesky problems inherent in designing this type of research. A related but different problem revolves around the personal interpretation given… Read Full Text

Looking Under the Hood: What Survey Researchers Can Learn from Deceptive Product Reviews

Eric Anderson and Duncan Simester published a paper in May 2013 titled “Deceptive Reviews: The Influential Tail.” The paper analyzes many thousands of reviews for a major apparel “private label retailer,” focusing on a comparison of reviews made by customers who had made a prior transaction (i.e., customers who actually purchased the item they were reviewing) and customers who had not (i.e., customers who reviewed items they had not purchased). Their comparisons largely revolved around four key measures or indicators that characterize deception in online reviews and messaging: 1) a greater number of words (compared to reviews from customers who had bought the item); 2) the use of simpler, shorter words; 3) inappropriate references to family (i.e., referring to a family event unrelated to the product being reviewed, such as “I remember when my mother took me shopping for school clothes…”); and 4) the extraordinary use of exclamation points (i.e., “!!” or “!!!”). Apparently, deceivers tend to overcompensate for their lack of true knowledge and wax eloquent about something they know nothing about. This wouldn’t matter except that deceptive reviews (i.e., reviews from customers who have not purchased the item reviewed) are more likely to be… Read Full Text
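For readers who want to experiment with these ideas, the four indicators above can be approximated as simple text features. The sketch below is illustrative only – the function name, the family-term list, and the feature definitions are assumptions of ours, not the measures Anderson and Simester actually used:

```python
import re

def deception_signals(review_text):
    """Compute four rough, heuristic indicators associated with
    deceptive reviews. Hypothetical implementation for illustration;
    not the authors' actual methodology."""
    words = review_text.split()
    # Assumed, non-exhaustive list of family-related terms
    family_terms = {"mother", "father", "husband", "wife",
                    "son", "daughter", "family"}
    clean = [w.strip(".,!?\u201c\u201d\"'").lower() for w in words]
    return {
        # 1) Longer reviews: raw word count
        "word_count": len(words),
        # 2) Simpler, shorter words: mean word length (lower = simpler)
        "avg_word_length": sum(len(w) for w in clean) / max(len(words), 1),
        # 3) References to family members
        "family_mentions": sum(1 for w in clean if w in family_terms),
        # 4) Runs of multiple exclamation points ("!!", "!!!", ...)
        "exclamation_runs": len(re.findall(r"!{2,}", review_text)),
    }

example = ("My mother took me shopping for school clothes "
           "and this reminds me of that!!! Love it!!")
print(deception_signals(example))
```

In practice one would compare these feature distributions between verified-purchase and non-purchase reviews rather than score any single review in isolation.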

“I Wonder About God” & Other Poorly-Designed Questions

Question design is difficult. Anyone who has run cognitive interviews or simply conducted a focus group has discovered that even the most carefully designed question may be interpreted far afield from its intended meaning. While qualitative methods give researchers insight into how interpretations of a question vary (and how to redesign the question to come closer to the researcher’s objective), the reality is that question design is rarely put to the test and given the scrutiny it deserves. Time and budget limitations, as well as researchers’ overconfidence in their question-design skills, typically lead to a hastily crafted and executed questionnaire.

This is a critical problem not only because it transcends mode – question design is an issue in both offline and online modes, and across quantitative and qualitative methods – but, more importantly, because it has a direct, potentially negative impact on analysis, which in turn leads to wrong conclusions, which in turn leads end users along a path of misguided next steps.

Of course, some poorly-designed questions are intentional, particularly in an election season when partisan politics triumph over sound research design. A recent highly-public… Read Full Text