Looking Under the Hood: What Survey Researchers Can Learn from Deceptive Product Reviews
November 26, 2013
Eric Anderson and Duncan Simester published a paper in May 2013 titled “Deceptive Reviews: The Influential Tail.” The paper analyzes many thousands of reviews for a major apparel “private label retailer,” comparing reviews written by customers with a prior transaction (i.e., customers who actually purchased the item they were reviewing) against reviews written by customers with no prior transaction (i.e., customers who reviewed items they had not purchased). The comparisons largely revolve around four key measures or indicators that characterize deception in online reviews and messaging: 1) a greater number of words (compared to reviews from customers who had bought the item); 2) the use of simpler, shorter words; 3) inappropriate references to family (i.e., mentioning a family event unrelated to the product being reviewed, such as “I remember when my mother took me shopping for school clothes…”); and 4) the extraordinary use of exclamation points (i.e., “!!” or “!!!”). Apparently, deceivers overcompensate for their lack of firsthand knowledge by waxing eloquent about something they know nothing about. This would matter little except that deceptive reviews (i.e., reviews from customers who have not purchased the item reviewed) are more likely to be negative (e.g., giving a lower product rating) than reviews from actual purchasers, which in turn has the unfortunate, documented effect of damaging merchants’ sales.
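For illustration only, the four indicators could be operationalized as simple text features. The sketch below is my own toy implementation, not the authors’ actual measures; the family word list and thresholds are hypothetical assumptions.

```python
import re

# Hypothetical word list for indicator 3; not from the paper.
FAMILY_WORDS = {"mother", "father", "sister", "brother", "family",
                "mom", "dad", "grandmother", "grandfather"}

def deception_indicators(review: str) -> dict:
    """Return crude features for the four deception indicators."""
    words = re.findall(r"[a-zA-Z']+", review.lower())
    word_count = len(words)
    avg_word_length = (sum(len(w) for w in words) / word_count
                       if word_count else 0.0)
    return {
        "word_count": word_count,                            # 1) more words
        "avg_word_length": avg_word_length,                  # 2) shorter, simpler words
        "mentions_family": any(w in FAMILY_WORDS for w in words),  # 3) family references
        "multi_exclamations": len(re.findall(r"!{2,}", review)),   # 4) "!!" or "!!!"
    }

review = "I remember when my mother took me shopping!! Great stuff!!!"
features = deception_indicators(review)
```

A real analysis would compare these features statistically between verified and unverified reviewers rather than flag individual reviews.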
The Anderson and Simester paper harks back to the 2011 Research Design Review post concerning the vagueness of survey scale terms such as “very,” “most,” and “somewhat.” That post discusses research showing, for example, that a response of “somewhat likely” can be understood by the respondent to mean that the true likelihood of an event occurring is anywhere from certain (100%) to nonexistent (0%). Yet this is not how “somewhat likely” data are typically interpreted; indeed, they are often combined with “very likely” data to form an umbrella category of “likely” respondents.
Like deceptive reviews, quantitative research designs that allow a wide range of subjectivity and individual interpretation are prone to creating false impressions that lead to erroneous conclusions. Just as visitors to a website may think they are reading a legitimate product review from an actual purchaser or user, what researchers think they see in their data may be nowhere near the reality respondents hoped to express in their responses.
As survey researchers, we are well-advised to take a lesson from researchers such as Anderson and Simester by exploring the indicators – in our research designs as well as our data – that may lead us to deceive ourselves. By routinely “looking under the hood” of our quantitative research with qualitative methods that examine the reality of how and what respondents think, we come closer to the true meaning of the constructs our survey data purport to measure.