One of the most meaningful concepts in qualitative research is that of “Othering”; that is, the concept of “us” versus “them” that presents itself (knowingly or not) in the researcher-participant interaction. Othering is an important idea across all qualitative methods, but it is in the in-depth interview – where the intensity of the interviewer-interviewee relationship is pivotal to the quality of outcomes – that the notion of Othering takes on particular relevance. As discussed elsewhere in Research Design Review, the interviewer-interviewee relationship in IDI research fosters an “asymmetrical power” environment, one in which the researcher (the interviewer) is in a position to make certain assumptions – and possibly misperceptions – about the interviewee that ultimately play a role in the final interpretation and reporting of the data. It is this potentially uneven power relationship that is central to the reflexive journal (which is discussed repeatedly in this blog).

In 2002, Qualitative Social Work published an article by Michal Krumer-Nevo titled, “The Arena of Othering: A Life-Story with Women Living in Poverty and Social Marginality.”1 This is a very …

A focus group moderator’s guide will often include group exercises or facilitation techniques as alternative approaches to direct questioning. While many of these alternative tactics are not unique to the group discussion method, and are also used in in-depth interview research, they have become a popular device in focus groups, especially in the marketing research field. These alternative approaches can be broadly categorized as either enabling or projective techniques, the difference being whether the moderator’s intent is to simply modify a direct question to make it easier for group participants to express their opinions (enabling techniques) or to delve into participants’ less conscious, less rational, less socially acceptable feelings by way of indirect exercises (projective techniques). Examples of enabling techniques are: sentence completion – e.g., “When I think of my favorite foods, I think of _____.” or “The best thing about the new city transit system is _____.”; and word association – e.g., asking prospective college students, “What is the first word you think of when …

In 2012, Research Design Review published 10 articles pertaining to qualitative research design. These 10 posts have been compiled into one volume titled, “Qualitative Research Design: Selected articles from Research Design Review published in 2012.” The most popular of these articles among RDR readers are “Designing a Quality In-depth Interview Study: How Many Interviews Are Enough?” published in September and “Insights vs. Metrics: Finding Meaning in Online Qualitative Research” published in June of 2012.

The first of these (i.e., regarding the optimal number of interviews) talks about the “two key moments” when a researcher needs to consider how many interviews to complete – once at the initial design phase and again while in the field. Consideration at the initial stage of research design centers on very practical matters such as the nature of the research topic and the heterogeneity of the target population. However, weighing whether “enough” IDIs have been completed while in the field – in the throes of actually completing interviews – is a more delicate and difficult matter. While the idea of “saturation,” or the point in time when responses no longer reveal fresh insights, is well accepted, particularly among researchers dedicated to grounded theory, it is not “good enough” from a quality design perspective. Rather than saturation, this article advises the qualitative researcher to review the IDI completions in the field and answer eight questions concerning their quality – questions such as, “Did every IDI cover every question or issue important to the research?” and “Can the researcher identify the sources of variations and contradictions in the data?”

The second most-popular article – concerning online qualitative research – focused on the distinction between actually gaining new ideas or insights from online qualitative versus simply capturing metrics.  The article promotes the belief that offline techniques (such as projective techniques) have their place online and that “the increasingly-loud buzz of social media metrics” or tracking shouldn’t distract qualitative researchers from the business of gaining true, meaningful insights.  The article concludes by saying, “All of this tracking has the potential to provide marketers with some idea of what some portion of their target audience is saying or doing at a particular moment in time – insight with a small ‘i’.  But let’s not confuse that with the ever-present need to understand how people think – Insight with a big ‘I’.”

These and eight other articles specific to qualitative research design can be found here.

Consider the Email Interview

September 30, 2012

The idea of conducting qualitative research interviews by way of asynchronous email messaging seems almost quaint by marketing research standards. The non-stop evolution of online platforms – increasingly loaded with snazzy features that give the researcher many of the advantages of face-to-face interviews (e.g., presenting storyboards or new product ideas, and interactivity between interviewer and interviewee) – has made the Web-based solution an important mode option in qualitative research.

The email interview, however, has been taken up by qualitative researchers in other disciplines – most notably, social work, health sciences, and education – with great success.  For example, Judith McCoyd and Toba Kerson report on a study that was ‘serendipitously’ conducted primarily by way of email (although face-to-face and telephone were other mode possibilities).  These researchers found that not only did participants in the study – women who had terminated pregnancy after diagnosis of a fetal anomaly – prefer the email mode (they actually requested to be interviewed via email) but they were prone to give the researchers long, emotional yet thoughtful responses to interview questions.  McCoyd and Kerson state that email responses were typically 3-8 pages longer than what they obtained from similar face-to-face interviews and 6-12 pages longer than a comparable telephone interview.  The sensitivity of the subject matter and the sense of privacy afforded by the communication channel contributed to an outpouring of rich details relevant to the research objectives.  Cheryl Tatano Beck in nursing, Kaye Stacey and Jill Vincent who researched professors of mathematics, and others have reported similar results.

Marketing researchers may feel far afield from the alternative world of research professionals in sociology, medicine, and education but there are clearly lessons here of …

Here is a topic you don’t read much about, particularly in the marketing research community: What is the optimal number of in-depth interviews to complete in an IDI study? The appropriate number of interviews to conduct for a face-to-face IDI study needs to be considered at two key moments in the research process – the initial research design phase and the phase of field execution. At the initial design stage, the number of IDIs is dictated by four considerations: 1) the breadth, depth, and nature of the research topic or issue; 2) the hetero- or homogeneity of the population of interest; 3) the level of analysis and interpretation required to meet research objectives; and 4) practical parameters such as availability of and access to interviewees, travel and other logistics associated with conducting face-to-face interviews, as well as the budget or financial resources. These four factors present the researcher with the difficult task of balancing the specific realities of the research components while estimating the optimal number of interviews to conduct. Although the number of required interviews tends to move in direct step with the level of diversity and …

The Darshan Mehta (iResearch) and Lynda Maddox article “Focus Groups: Traditional vs. Online” in the March issue of Survey Magazine reminded me of the “visual biases” moderators, clients, and participants bring to the face-to-face research discussion. While there are downsides to opting for Internet-based qualitative research, the ability to actually control for potential error stemming from visual cues – ranging from demographic characteristics (e.g., age, race, ethnicity, gender) to “clothing and facial expressions” – is a clear advantage of the online (non-Webcam) environment. Anyone who has conducted, viewed, or participated in a face-to-face focus group can tell you that judgments are easily made without a word being spoken.

An understanding or at least an appreciation for this inherent bias in our in-person qualitative designs is important to the quality of the interviewing and subsequent analysis as well as the research environment itself.  How does the interviewer change his/her type and format of questioning from one interviewee to another based on nothing more than the differences or contrasts the interviewer perceives between the two of them?  How do the visual aspects of one or more group participants elicit more or less participation among the other members of the group?  How do group discussants and interviewees respond and comment differently depending on their vision of the moderator, other participants, and the research environment?

The potential negative effect from the unwitting bias moderators/interviewers absorb in the research experience has been addressed to some degree.  Mel Prince (along with others) has discussed the idea of “moderator teams” as well as the “serial moderating technique.”  And Sean Jordan states that “moderator bias” simply needs to be “controlled for by careful behavior.”

There is clearly much more effort that needs to be made on this issue. Creating teams of interviewers may mitigate but may also exacerbate the bias effect (e.g., how do we sort out the confounding impact of multiple prejudices from the team?), and instilling “careful behavior” can actually result in an unproductive research session (e.g., does the controlled, unemotional, sterile behavior of the moderator/interviewer elicit unemotional, sterile, unreal responses from research participants?).

How we conduct and interpret our qualitative research – whether we (consciously or unconsciously) choose to impose barriers to our questioning and analysis, proceed with caution through the intersection of not knowing and insight, or go full steam ahead – rests in great measure with our ability to confront the potential prejudice in the researcher, the client, and our research participants.
