In 2012, Research Design Review published 10 articles pertaining to qualitative research design.  These 10 posts have been compiled into one volume titled, “Qualitative Research Design: Selected articles from Research Design Review published in 2012.”  The most popular of these articles among RDR readers are “Designing a Quality In-depth Interview Study: How Many Interviews Are Enough?” published in September and “Insights vs. Metrics: Finding Meaning in Online Qualitative Research” published in June of 2012.

The first of these (i.e., regarding the optimal number of interviews) discusses the “two key moments” when a researcher needs to consider how many interviews to complete – once at the initial design phase and again while in the field.  Consideration at the initial stage of research design centers on very practical matters, such as the nature of the research topic and the heterogeneity of the target population.  However, weighing whether “enough” IDIs have been completed while in the field – in the throes of actually conducting interviews – is a more delicate and difficult matter.  While the idea of “saturation” – the point at which responses no longer reveal ‘fresh insights’ – is well accepted, particularly among researchers dedicated to grounded theory, it is not “good enough” from a quality design perspective.  Rather than relying on saturation, this article advises the qualitative researcher to review the IDI completions in the field and answer eight questions concerning their quality – questions such as, “Did every IDI cover every question or issue important to the research?” and “Can the researcher identify the sources of variations and contradictions in the data?”

The second most-popular article – concerning online qualitative research – focuses on the distinction between actually gaining new ideas or insights from online qualitative research versus simply capturing metrics.  The article promotes the belief that offline techniques (such as projective techniques) have their place online and that “the increasingly-loud buzz of social media metrics” or tracking shouldn’t distract qualitative researchers from the business of gaining true, meaningful insights.  The article concludes by saying, “All of this tracking has the potential to provide marketers with some idea of what some portion of their target audience is saying or doing at a particular moment in time – insight with a small ‘i’.  But let’s not confuse that with the ever-present need to understand how people think – Insight with a big ‘I’.”

These and eight other articles specific to qualitative research design can be found here.

Here is a topic you don’t read much about, particularly in the marketing research community: What is the optimal number of in-depth interviews to complete in an IDI study?  The appropriate number of interviews to conduct for a face-to-face IDI study needs to be considered at two key moments in the research process – the initial research design phase and the field execution phase.  At the initial design stage, the number of IDIs is dictated by four considerations: 1) the breadth, depth, and nature of the research topic or issue; 2) the heterogeneity or homogeneity of the population of interest; 3) the level of analysis and interpretation required to meet research objectives; and 4) practical parameters such as the availability of and access to interviewees, travel and other logistics associated with conducting face-to-face interviews, and the budget or financial resources.  These four factors present the researcher with the difficult task of balancing the specific realities of the research components while estimating the optimal number of interviews to conduct.  Although the number of required interviews tends to move in direct step with the level of diversity and … Read Full Text

The Darshan Mehta (iResearch) and Lynda Maddox article “Focus Groups: Traditional vs. Online” in the March issue of Survey Magazine reminded me of the “visual biases” moderators, clients, and participants bring to the face-to-face research discussion.  While there are downsides to opting for Internet-based qualitative research, the ability to actually control for potential error stemming from visual cues – ranging from demographic characteristics (e.g., age, race, ethnicity, gender) to “clothing and facial expressions” – is a clear advantage of the online (non-webcam) environment.  Anyone who has conducted, viewed, or participated in a face-to-face focus group can tell you that judgments are easily made without a word being spoken.

An understanding of, or at least an appreciation for, this inherent bias in our in-person qualitative designs is important to the quality of the interviewing and subsequent analysis, as well as to the research environment itself.  How does the interviewer change the type and format of his/her questioning from one interviewee to another based on nothing more than the differences or contrasts the interviewer perceives between the two of them?  How do the visual aspects of one or more group participants elicit more or less participation among the other members of the group?  How do group discussants and interviewees respond and comment differently depending on their perceptions of the moderator, other participants, and the research environment?

The potential negative effect of the unwitting bias moderators/interviewers absorb during the research experience has been addressed to some degree.  Mel Prince (along with others) has discussed the idea of “moderator teams” as well as the “serial moderating technique.”  And Sean Jordan states that “moderator bias” simply needs to be “controlled for by careful behavior.”

There is clearly much more work to be done on this issue.  Creating teams of interviewers may mitigate the bias effect but may also exacerbate it (e.g., How do we sort out the confounding impact of multiple prejudices from the team?), and instilling “careful behavior” can actually result in an unproductive research session (e.g., Does the controlled, unemotional, sterile behavior of the moderator/interviewer elicit unemotional, sterile, unreal responses from research participants?).

How we conduct and interpret our qualitative research – whether we (consciously or unconsciously) choose to impose barriers to our questioning and analysis, proceed with caution through the intersection of not knowing and insight, or go full steam ahead – rests in great measure on our ability to confront the potential prejudice in the researcher, the client, and our research participants.

The researcher’s key to the executive suite is hanging in the spot where it has always been.  Our entry into the consumer and B2B worlds may have strayed toward mobile and online methods – bulletin boards, surveys, communities, and social-media lurking – but successful research with the corporate executive still lies in the warm, personal connections we make in the face-to-face mode.  We can try to defend other approaches as more efficient (in time and cost), innovative, and sexy, but the reality is that nothing reaps the richness of one person (the professional interviewer) sitting with another (the executive interviewee) for the sole purpose of exploring topic-specific attitudes and behavior.

If success is measured by the depth of input and insight, then there are at least six necessary components to the face-to-face executive interviewing design model: Read Full Text
