Nonresponse, and the inaccuracies it introduces into the data, is more than a quantitative issue. While qualitative researchers may shudder at the thought, the typically ignored impact of nonresponse is just as important in the qualitative realm. Why is nonresponse in qualitative research important? Because we are conducting qualitative research. Not qualitative as in let's-get-a-few-warm-bodies-around-the-table for our face-to-face focus group, but research methods that, like all research, demand certain protocols to address potential biasing effects. One of these is nonresponse. The warm bodies in our group discussion may make the moderator and client observers feel great – Thank goodness, someone showed up! – but the uncomfortable reality is that the people who chose not to participate – or were never contacted by a recruiter and asked to participate in the first place – greatly affect our research outcomes. Indeed, the trajectory of a group discussion …
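To see why those absent voices matter, consider a minimal back-of-the-envelope sketch in Python. All numbers here are invented for illustration, not taken from the post or from any study; the point is only the arithmetic of how an observed result drifts from the truth when the people who decline to participate differ from the people who show up.

```python
# Hypothetical illustration: all numbers below are invented for this
# sketch, not drawn from the post or from any study.

def observed_vs_true(response_rate, rate_respondents, rate_nonrespondents):
    """Compare what a study observes with the true population rate.

    A study only sees the people who participate, so its estimate is
    simply rate_respondents; the true rate blends both groups.
    """
    true_rate = (response_rate * rate_respondents
                 + (1 - response_rate) * rate_nonrespondents)
    return rate_respondents, true_rate

# Suppose 30% of recruits agree to join a focus group, 60% of joiners
# view the client's store favorably, but only 35% of decliners do.
observed, true = observed_vs_true(0.30, 0.60, 0.35)
print(f"observed: {observed:.0%}, true: {true:.0%}")  # observed: 60%, true: 42%
```

The lower the response rate and the larger the gap between participants and nonparticipants, the further the figure reported from the warm bodies in the room strays from reality.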
Selection Bias & Mobile Qualitative Research
When I conduct a face-to-face qualitative study – whether it is a group discussion, in-depth interview, or in-situ ethnography – I am taking in much more than the behavior and attitudes of the research participants. Like most researchers, my scope goes well beyond the most vocal responses to my questions or the behavior of store shoppers to incorporate much more detail, including nuanced comments, facial and body gestures, and the surrounding environment that may be shaping a participant's thoughts or movements. So, while one of my face-to-face participants may tell me that he "just prefers" shopping at a competitor's store for his hardware, I know from the entirety of clues throughout the interview that there is more to uncover, which ultimately lands me on the real reason he avoids my client's store – the unavailability of store credit. Likewise, the mobile research participant shopping at Walmart for coffeemakers may share her shopping experience via video and/or text but unintentionally omit certain components – e.g., the impact of competitive displays, product packaging, store lighting, surrounding shoppers – that would have been discovered in an in-person ethnography and would have contributed important insights.
Selection bias is inherent in nearly all research designs. At some level, research participants are deciding what is important to communicate to the researcher and what is worthy of being ignored. From deciding whether to participate in a study to the granularity of detail they are willing to share, the participant – not the researcher – controls some measure of the research input. It is no wonder that many discussions of research design center on this issue, with survey researchers debating at length the best methods for sampling and selecting respondents (e.g., the next-birthday method in telephone studies), converting initial refusals, and effective probing techniques.
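To make one of those protocols concrete, here is a minimal sketch of the next-birthday method in Python, with a hypothetical household roster; the function names are my own invention, not part of any survey toolkit.

```python
import datetime

def days_until_birthday(birthday, today):
    """Days from today until the next occurrence of this month/day.

    (A real implementation would also handle Feb 29 birthdays.)
    """
    next_one = birthday.replace(year=today.year)
    if next_one < today:
        next_one = next_one.replace(year=today.year + 1)
    return (next_one - today).days

def next_birthday_selection(household, today):
    """Select the household member whose birthday comes soonest."""
    return min(household, key=lambda m: days_until_birthday(m["birthday"], today))

# Hypothetical household reached by a telephone interviewer.
household = [
    {"name": "Alice", "birthday": datetime.date(1970, 3, 14)},
    {"name": "Bart",  "birthday": datetime.date(1985, 11, 2)},
]
pick = next_birthday_selection(household, today=datetime.date(2024, 10, 1))
print(pick["name"])  # Bart: his Nov 2 birthday comes before Alice's Mar 14
```

The appeal of the rule is that it takes the choice of respondent away from both the interviewer and whoever happens to answer the phone, approximating a random pick within the household.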
There is not much discussion on selection bias in qualitative research. One exception is an article by David Collier and James Mahoney* that addresses how selection bias undermines the validity of qualitative research. More focus on the issue of selection bias in qualitative research is warranted, particularly given the speed with which research designs today are evolving to keep up with new communication technology.
Mobile research is just one example of an increasingly popular qualitative research method. It provides for the first time a viable way to …
The Vagueness of Our Terms: Are Positive Responses Really That Positive?
John Tarnai, Danna Moore, and Marion Schultz from Washington State University presented a poster at the 2011 AAPOR conference in Phoenix titled, “Evaluating the Meaning of Vague Quantifier Terms in Questionnaires.” Their research began with the premise that “many questionnaires use vague response terms, such as ‘most’, ‘some’, ‘a few’ and survey results are analyzed as if these terms have the same meaning for most people.” John and his team have it absolutely right. Quantitative researchers routinely design their scales while casting only a casual eye on the obvious subjectivity – varying among respondents, analytical researchers, and users of the research – built into their structured measurements.
One piece of the Tarnai et al. research asked residents of Washington State about the likelihood that they will face "financial difficulties in the year ahead." The question was asked on a four-point scale – very likely, somewhat likely, somewhat unlikely, and very unlikely – followed by a companion question asking for a "percent from 0% to 100% that estimates the likelihood that you will have financial difficulties in the year ahead." While the results show medians that "make sense" – e.g., the median percent associated with "very likely" is 80%, while the median for "very unlikely" is 0% – it is the spread of the percent associations that is interesting. For instance, some people who answered "very likely" also said …
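To see what that spread looks like in practice, here is a minimal sketch in Python with invented responses (not the Washington State data): for each verbal scale point, it tabulates the median, interquartile range, and full range of the 0%-100% estimates respondents paired with that term.

```python
from statistics import median, quantiles

# Invented data for illustration; these are not the Washington State
# results, though the pattern mimics the overlap the poster reports.
responses = {
    "very likely":       [100, 90, 80, 80, 75, 60, 50],
    "somewhat likely":   [70, 60, 50, 50, 40, 30],
    "somewhat unlikely": [40, 30, 25, 20, 10],
    "very unlikely":     [20, 10, 5, 0, 0, 0],
}

for term, pcts in responses.items():
    q1, _, q3 = quantiles(pcts, n=4)  # quartiles, so q3 - q1 is the IQR
    print(f"{term:>17}: median {median(pcts):3.0f}%, "
          f"IQR {q1:.0f}-{q3:.0f}%, range {min(pcts)}-{max(pcts)}%")
```

Overlapping ranges across adjacent terms are exactly the problem the authors flag: two respondents can choose different verbal labels while holding the same numeric belief, yet the analysis treats the labels as if they meant the same thing to everyone.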