Quantitative Analysis

Looking Under the Hood: What Survey Researchers Can Learn from Deceptive Product Reviews

Eric Anderson and Duncan Simester published a paper in May 2013 titled “Deceptive Reviews: The Influential Tail.”  It reports their analysis of many thousands of reviews for a major apparel “private label retailer,” focusing on a comparison of reviews made by customers who had made a prior transaction (i.e., customers who actually purchased the item they were reviewing) and customers who had not made a prior transaction (i.e., customers who reviewed items they had not actually purchased).  Their comparisons largely revolved around four key measures or indicators that characterize deception in online reviews and messaging: 1) a greater number of words (compared to reviews from customers who had bought the item); 2) the use of simpler, shorter words; 3) inappropriate references to family (i.e., referring to a family event unrelated to the product being reviewed, such as “I remember when my mother took me shopping for school clothes…”); and 4) the extraordinary use of exclamation points (e.g., “!!” or “!!!”).  Apparently, deceivers tend to overcompensate for their lack of true knowledge and wax eloquent about something they know nothing about.  This wouldn’t matter except that deceivers’ reviews (i.e., reviews from customers who have not purchased the item reviewed) are more likely to be… Read Full Text
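
Those four indicators are concrete enough to compute directly from review text.  Below is a minimal Python sketch of that computation; the function name, the family word list, and the scoring choices are illustrative assumptions on my part, not anything specified in Anderson and Simester’s paper.

    import re

    def deception_indicators(review):
        """Score one review on the four indicators described above.
        The family word list is an illustrative assumption, not
        taken from Anderson and Simester's paper."""
        words = re.findall(r"[A-Za-z']+", review)
        family_terms = {"mother", "father", "sister", "brother",
                        "wife", "husband", "daughter", "son", "family"}
        return {
            # 1) verbosity: total number of words
            "word_count": len(words),
            # 2) simpler, shorter words: average word length
            "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
            # 3) references to family members
            "family_references": sum(w.lower() in family_terms for w in words),
            # 4) runs of two or more exclamation points ("!!", "!!!")
            "multi_exclamations": len(re.findall(r"!{2,}", review)),
        }

    print(deception_indicators(
        "I remember when my mother took me shopping for school clothes!!!"))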

Content Analysis & Navigating the Stream of Consciousness

An article posted on Research Design Review back in 2010 discussed the work of William James and, specifically, his concept that consciousness “flows” like a river or stream.  The article goes on to say that James’ “stream of consciousness” is relevant to researchers of every stripe because we all share in the goal of designing research “to understand the subjective links within each individual.”  Yet these subjective links come at a price, not the least of which is the “messiness” of the analysis as we work towards identifying these links and finding meaning that addresses our objectives.

Whether it is the verbatim comments from survey respondents to open-ended questions or the transcripts from focus group discussions or ethnographic interviews, the researcher is faced with the daunting job of conducting a content analysis that reveals how people think while at the same time answering the research… Read Full Text
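
To make one small slice of that job concrete, here is a minimal Python sketch of a first-pass, keyword-based coding of open-ended verbatims.  The codebook and keywords are invented for illustration; an actual content analysis would derive its codes from the data itself and contend with far messier language.

    # Invented codebook for illustration only.
    codebook = {
        "price":   ["price", "cost", "expensive", "cheap"],
        "quality": ["quality", "durable", "flimsy", "broke"],
        "service": ["service", "staff", "support", "rude"],
    }

    def code_verbatim(text):
        """Return every code whose keywords appear in the response."""
        lowered = text.lower()
        return [code for code, terms in codebook.items()
                if any(term in lowered for term in terms)]

    for response in ["The staff were rude and the fabric felt flimsy.",
                     "Great quality for the price."]:
        print(code_verbatim(response), "<-", response)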

The Vagueness of Our Terms: Are Positive Responses Really That Positive?

John Tarnai, Danna Moore, and Marion Schultz from Washington State University presented a poster at the 2011 AAPOR conference in Phoenix titled, “Evaluating the Meaning of Vague Quantifier Terms in Questionnaires.”  Their research began with the premise that “many questionnaires use vague response terms, such as ‘most’, ‘some’, ‘a few’ and survey results are analyzed as if these terms have the same meaning for most people.”  John and his team have it absolutely right.  Quantitative researchers routinely design their scales while casting only a casual eye on the obvious subjectivity – varying among respondents, analytical researchers, and users of the research – built into their structured measurements.

One piece of the Tarnai et al. research asked residents of Washington State about the likelihood that they will face “financial difficulties in the year ahead.”  The question was asked using a four-point scale – very likely, somewhat likely, somewhat unlikely, and very unlikely – followed by a companion question that asked for a “percent from 0% to 100% that estimates the likelihood that you will have financial difficulties in the year ahead.”  While the results show medians that “make sense” – e.g., the median percent associated with “very likely” is 80%, the median for “very unlikely” is 0% – it is the spread of percent associations that is interesting.  For instance, some people who answered “very likely” also said… Read Full Text
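
The kind of comparison described here is simple to reproduce in miniature.  The Python sketch below groups respondents by scale term and reports the median and range of their companion percent estimates; the data are invented for illustration and are not Tarnai et al.’s results.

    from statistics import median

    # Invented data pairing each respondent's scale answer with the
    # 0%-100% estimate from the companion question. These numbers are
    # illustrative only; they are not Tarnai et al.'s results.
    responses = [
        ("very likely", 80), ("very likely", 95), ("very likely", 40),
        ("somewhat likely", 60), ("somewhat likely", 30),
        ("somewhat unlikely", 25), ("somewhat unlikely", 10),
        ("very unlikely", 0), ("very unlikely", 20),
    ]

    by_term = {}
    for term, pct in responses:
        by_term.setdefault(term, []).append(pct)

    # The median suggests each term's "typical" meaning; the range
    # shows how widely that meaning varies across respondents.
    for term, pcts in by_term.items():
        print("%-18s median=%3d%%  range=%d-%d%%"
              % (term, median(pcts), min(pcts), max(pcts)))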