It is not unusual for an in-depth interview (IDI) or focus group participant to wonder at some point in an interview or discussion whether they “did okay” – that is, whether they responded to the researcher’s questions in the manner the researcher intended. For instance, an interviewer investigating parents’ healthy food purchases for their children might ask a mother to describe a typical shopping trip to the grocery store. In response, the mother might talk about the day of the week, the time of day, where she shops, and whether she is alone or with her children or someone else. She might then ask the interviewer, “Is that the kind of thing you were looking for? Is that what you mean? Did I do okay in answering your question?” The interviewer’s follow-up might be, “Tell me something about the in-store experience, such as the sections of the store you visit and the kinds of food items you typically buy.”
It is one thing to misinterpret the intent of a researcher’s question – e.g., detailing the logistics of food purchasing rather than the actual food purchase experience – but quite another to adjust responses based on any number of factors arising from the researcher–participant interaction. These interaction effects stem, in part, from the participant’s attempt to “do okay” in their role in the research process. Dr. Kathryn Roulston at the University of Georgia has written extensively about interaction in research interviews, including the recently published edited volume Interactional Studies of Qualitative Research Interviews.
The dynamics that come into play in an IDI or focus group study – and, in varying degrees, ethnographic research – are of great interest to qualitative researchers and are important considerations in the overall quality of the research. This is why so much has been written about the researcher’s reflexive journal and its importance.
The Total Quality Framework (TQF) offers researchers a way to think about basic research principles at each stage of the qualitative research process – data collection, analysis, reporting – with the goal of doing something of value with the outcomes (i.e., the usefulness of the research). The first of the four components of the TQF is Credibility, which pertains to the data collection phase of a qualitative study. A detailed discussion of Credibility can be found in this 2017 Research Design Review article.
This article – like the companion articles associated with the other three TQF components – explains the chief elements that define Credibility, stating that “credible qualitative research is the result of effectively managing data collection, paying particular attention to the two specific areas of Scope and Data Gathering.” Although much of the discussion thus far has centered on traditional qualitative methods, the increasingly important role of technological solutions in qualitative research makes it imperative that the discussion of Credibility (and the other TQF components) expand to the digital world.
The online asynchronous focus group (“bulletin board”) method has been around for a long time. It is clearly an approach that offers qualitative researchers many advantages over the face-to-face mode while also presenting challenges to the integrity of research design. The following presents a snapshot of the online bulletin board focus group method through the lens of the two main ingredients of the TQF Credibility component – Scope and Data Gathering. This snapshot is not an attempt to name all the strengths and limitations associated with the Credibility of the online asynchronous focus group method but rather to highlight a few key considerations.
Greg Allenby, marketing chair at Ohio State’s business school, published an article in the May/June 2014 issue of Marketing Insights on heterogeneity or, more specifically, on the idea that 1) accounting for individual differences is essential to understanding the “why” and “how” that lurks within research data, and 2) research designs often mask these differences by neglecting the relative nature of the constructs under investigation. For instance, research concerning preference or satisfaction is useful to the extent that it helps explain why and how people think differently about their preferences or levels of satisfaction. Yet these are inherently relative constructs that hold meaning only if the researcher understands the standard (the “point of reference”) against which the current question of preference or satisfaction is being weighed – i.e., my preference (or satisfaction) compared to…what? Since the survey researcher is rarely if ever clued in to respondents’ points of reference, it would be inaccurate to make direct comparisons, such as stating that one person’s product preference is twice as strong as someone else’s.
The embedded “relativeness” associated with responding to constructs such as preference and satisfaction is just one of the pesky problems inherent in designing this type of research. A related but different problem revolves around the personal interpretation given to these constructs.