November 15, 2013
A focus group moderator’s guide will often include group exercises or facilitation techniques as alternatives to direct questioning. While many of these tactics are not unique to the group discussion method, and are also used in in-depth interview research, they have become a popular device in focus groups, especially in the marketing research field. These alternative approaches can be broadly categorized as either enabling or projective techniques, the difference being whether the moderator’s intent is simply to modify a direct question to make it easier for group participants to express their opinions (enabling techniques) or to delve into participants’ less conscious, less rational, less socially acceptable feelings by way of indirect exercises (projective techniques). Examples of enabling techniques include: sentence completion – e.g., “When I think of my favorite foods, I think of _____” or “The best thing about the new city transit system is _____”; and word association – e.g., asking prospective college students, “What is the first word you think of when … [Read Full Text]
September 30, 2012
The idea of conducting qualitative research interviews by way of asynchronous email messaging seems almost quaint by marketing research standards. The non-stop evolution of online platforms – increasingly loaded with snazzy features that equip the researcher with many of the advantages of face-to-face interviews (e.g., presenting storyboards or new product ideas, and interactivity between interviewer and interviewee) – has made a Web-based solution an important mode option in qualitative research.
The email interview, however, has been taken up by qualitative researchers in other disciplines – most notably, social work, health sciences, and education – with great success. For example, Judith McCoyd and Toba Kerson report on a study that was ‘serendipitously’ conducted primarily by way of email (although face-to-face and telephone were other mode possibilities). These researchers found that not only did participants in the study – women who had terminated pregnancy after diagnosis of a fetal anomaly – prefer the email mode (they actually requested to be interviewed via email) but they were also inclined to give the researchers long, emotional yet thoughtful responses to interview questions. McCoyd and Kerson state that email responses were typically 3-8 pages longer than what they obtained from similar face-to-face interviews and 6-12 pages longer than comparable telephone interviews. The sensitivity of the subject matter and the sense of privacy afforded by the communication channel contributed to an outpouring of rich details relevant to the research objectives. Cheryl Tatano Beck in nursing, Kaye Stacey and Jill Vincent, who studied professors of mathematics, and others have reported similar results.
Marketing researchers may feel far afield from the alternative world of research professionals in sociology, medicine, and education, but there are clearly lessons here of … [Read Full Text]
Here is a topic you don’t read much about, particularly in the marketing research community: What is the optimal number of in-depth interviews (IDIs) to complete in an IDI study? The appropriate number of interviews for a face-to-face IDI study needs to be considered at two key moments in the research process – the initial research design phase and the field execution phase. At the initial design stage, the number of IDIs is dictated by four considerations: 1) the breadth, depth, and nature of the research topic or issue; 2) the heterogeneity or homogeneity of the population of interest; 3) the level of analysis and interpretation required to meet research objectives; and 4) practical parameters such as the availability of and access to interviewees, travel and other logistics associated with conducting face-to-face interviews, and the budget or financial resources. These four factors present the researcher with the difficult task of balancing the specific realities of the research components while estimating the optimal number of interviews to conduct. Although the number of required interviews tends to move in direct step with the level of diversity and … [Read Full Text]
April 17, 2011
The Darshan Mehta (iResearch) and Lynda Maddox article “Focus Groups: Traditional vs. Online” in the March issue of Survey Magazine reminded me of the “visual biases” that moderators, clients, and participants bring to the face-to-face research discussion. While there are downsides to opting for Internet-based qualitative research, the ability to actually control for potential error stemming from visual cues – ranging from demographic characteristics (e.g., age, race, ethnicity, gender) to “clothing and facial expressions” – is a clear advantage of the online (non-Webcam) environment. Anyone who has conducted, viewed, or participated in a face-to-face focus group can tell you that judgments are easily made without a word being spoken.
An understanding of, or at least an appreciation for, this inherent bias in our in-person qualitative designs is important to the quality of the interviewing and subsequent analysis as well as the research environment itself. How does the interviewer change the type and format of questioning from one interviewee to another based on nothing more than the differences or contrasts the interviewer perceives between the two of them? How do the visual aspects of one or more group participants elicit more or less participation among the other members of the group? How do group discussants and interviewees respond and comment differently depending on their vision of the moderator, other participants, and the research environment?
The potential negative effect from the unwitting bias moderators/interviewers absorb in the research experience has been addressed to some degree. Mel Prince (along with others) has discussed the idea of “moderator teams” as well as the “serial moderating technique.” And Sean Jordan states that “moderator bias” simply needs to be “controlled for by careful behavior.”
There is clearly much more effort that needs to be made on this issue. Creating teams of interviewers may mitigate but may also exacerbate the bias effect (e.g., How do we sort out the confounding impact of multiple prejudices from the team?), and instilling “careful behavior” can actually result in an unproductive research session (e.g., Does the controlled, unemotional, sterile behavior of the moderator/interviewer elicit unemotional, sterile, unreal responses from research participants?).
How we conduct and interpret our qualitative research – whether we (consciously or unconsciously) choose to impose barriers to our questioning and analysis, proceed with caution through the intersection of not knowing and insight, or go full steam ahead – rests in great measure with our ability to confront the potential prejudice in the researcher, the client, and our research participants.