November 14, 2012
Research design of any sort has to grapple with the pesky issue of bias, or the potential distortion of research outcomes due to unintended influences from the researcher as well as research participants. This is a particularly critical issue in qualitative research, where interviewers (and moderators) make extraordinary efforts to establish strong relationships with their interviewees (and group participants) in order to delve deeply into the subject matter. The importance of considering the implications of undue prejudices in qualitative research was discussed in the April 2011 Research Design Review post, “Visual Cues & Bias in Qualitative Research,” which emphasized that “there is clearly much more effort that needs to be made on this issue.” Reflexivity and, specifically, the reflexive journal is one such effort that addresses the distortions or preconceptions researchers unwittingly introduce in their qualitative designs.
Reflexivity is an important concept because it is directed at the greatest underlying threat to the accuracy of our qualitative research outcomes – that is, the social interaction component of the interviewer-interviewee relationship, or what Steinar Kvale calls “the asymmetrical power relations of the research interviewer and the interviewed subject” (see “Dialogue as Oppression and Interview Research,” 2002). The act of reflection Read Full Text
October 14, 2012
The most recent issue of the American Psychological Association’s Monitor on Psychology includes an interview with developmental psychologist Jerome Kagan. In this interview he talks about psychology’s research “ghosts,” referring to the dubious generalizations psychologists make from their often-limited research. Kagan’s primary point is that “it’s absolutely necessary to gather more than one source of data, no matter what you’re studying,” and that these multiple sources of data should come from verbal and behavioral as well as physiological measures. Only by combining these various perspectives on an issue or situation – that is, utilizing data taken in different contexts and by way of alternative methods and modes – can the researcher come to a legitimate conclusion.
This is not unlike triangulation, especially in the social and health sciences, where it is used to gauge the trustworthiness of research outcomes. Triangulation is the technique of examining a specific research topic by comparing data obtained from: two or more methods, two or more segments of the sample population, and/or two or more investigators. In this way, the researcher is looking for patterns of convergence and divergence in the data. Triangulation is a particularly important design feature in qualitative research – where measures of validity and reliability can be elusive – because it furthers the researcher’s ability to gain a comprehensive view of the research question and come closer to a plausible interpretation of final results.
Where is this multifaceted process in the commercial world of qualitative marketing research? Academics talk about the importance of including some form of triangulation in research design, yet there is not a lot of evidence that this occurs in marketing research. While there is an increasing number of ways to gather qualitative feedback – particularly via social media and mobile – that provide researchers with convenient sources of data, there needs to be more discussion of case studies that have utilized multiple data sources and methods to find reliable themes in the outcomes. It is further hoped that marketing researchers will use this contrast-and-compare approach to scrutinize the research issue from both traditional (e.g., face-to-face group discussions, in-depth interviews, in-home ethnography) and new (e.g., online-based, smartphone) information-gathering strategies.
The triangulation concept is just one way that marketing researchers can begin to bring rigor to their research designs and manage the “ghosts” of groundless assumptions and misguided interpretations.
September 30, 2012
The idea of conducting qualitative research interviews by way of asynchronous email messaging seems almost quaint by marketing research standards. The non-stop evolution of online platforms that are increasingly loaded with snazzy features equipping the researcher with many of the advantages of face-to-face interviews (e.g., presenting storyboards or new product ideas, and interactivity between interviewer and interviewee) has made a Web-based solution an important mode option in qualitative research.
The email interview, however, has been taken up by qualitative researchers in other disciplines – most notably, social work, health sciences, and education – with great success. For example, Judith McCoyd and Toba Kerson report on a study that was ‘serendipitously’ conducted primarily by way of email (although face-to-face and telephone were other mode possibilities). These researchers found that not only did participants in the study – women who had terminated pregnancy after diagnosis of a fetal anomaly – prefer the email mode (they actually requested to be interviewed via email) but they were prone to give the researchers long, emotional yet thoughtful responses to interview questions. McCoyd and Kerson state that email responses were typically 3-8 pages longer than what they obtained from similar face-to-face interviews and 6-12 pages longer than a comparable telephone interview. The sensitivity of the subject matter and the sense of privacy afforded by the communication channel contributed to an outpouring of rich details relevant to the research objectives. Cheryl Tatano Beck in nursing, Kaye Stacey and Jill Vincent who researched professors of mathematics, and others have reported similar results.
Marketing researchers may feel far afield from the alternative world of research professionals in sociology, medicine, and education but there are clearly lessons here of Read Full Text
Here is a topic you don’t read much about, particularly in the marketing research community: What is the optimal number of in-depth interviews to complete in an IDI study? The appropriate number of interviews to conduct for a face-to-face IDI study needs to be considered at two key moments in the research process – the initial research design phase and the phase of field execution. At the initial design stage, the number of IDIs is dictated by four considerations: 1) the breadth, depth, and nature of the research topic or issue; 2) the hetero- or homogeneity of the population of interest; 3) the level of analysis and interpretation required to meet research objectives; and 4) practical parameters such as the availability of and access to interviewees, travel and other logistics associated with conducting face-to-face interviews, and the budget or financial resources. These four factors present the researcher with the difficult task of balancing the specific realities of the research components while estimating the optimal number of interviews to conduct. Although the number of required interviews tends to move in direct step with the level of diversity and Read Full Text
Joel Rubinson posted an interesting commentary on the GreenBook blog back in July titled, “When marketing research is like a sunset on Pluto.” In it he discusses behavioral economics, the “shortcuts” respondents take to find answers to our research questions, and how people tend “to access their memory in a faulty way.” Is it any wonder that “50% of respondents” who re-take an attitudinal survey express an opinion different from their earlier response?
Daniel Kahneman, a renowned psychologist long associated with the beginnings of behavioral economics, discusses “faulty” thinking in his February 2010 TED talk “The riddle of experience vs. memory.” Kahneman makes the point that the “experiencing self” is something different from the “reflective” or “remembering self.” To illustrate, he talks about happiness. Happiness, Kahneman states, belongs only to the moment when we are actually experiencing the feeling of happiness. When that moment has passed it is “lost forever.” When we reflect on that moment we can tell stories about the experience but we can never regain the experience of happiness itself. In this way, Kahneman says, we can’t think of any circumstance that affects well-being without distorting its importance. This inability to “attend to the same things when we think about life [versus when we] actually live” is, according to Kahneman, a “real cognitive trap.”
In his blog post, Joel Rubinson delineates “seven tips” for marketing researchers to circumvent the cognitive trap and the distorted survey responses that come with it. A few of these tips are great design ideas – using “natural vocabulary” and answer choices, and at-the-moment methods – while others, Rubinson acknowledges, will be “frowned on by many purists,” such as adding a warm-up survey component to get the respondent’s “head and heart into the moment” (which, by the way, is not necessarily a bad idea).
How to marry quality design practices with the reality of the cognitive trap and its ensuing distortions is an ongoing dilemma. The bigger question, however, is how to find a solution to this dilemma without shortchanging the rigor of your research design. Like the whole notion of change, you can ignore it (as in: pretend that quality standards don’t exist), find an easy workaround (as in: apply the latest technology), or come face-to-face with the problem at hand by establishing researcher-end-user partnerships founded on making the painful commitment of the time and money required of a quality research product.
July 25, 2012
Question design is difficult. Anyone who has run cognitive interviews or simply conducted a focus group has discovered that even the most carefully designed question may be interpreted far afield from its intended meaning. While qualitative methods give researchers insight into how interpretations of a question vary (and how to better design the question to come closer to the researcher’s objective), the reality is that question design is rarely put to the test and given the scrutiny it deserves. Time and budget limitations, as well as researchers’ overconfidence in their question-design skills, typically lead to a hastily crafted and executed questionnaire.
This is a critical problem not only because it transcends mode – question design is an issue in off- and online modes as well as across quantitative and qualitative methods – but, more importantly, because it has a direct, potentially negative impact on analysis, which in turn leads to wrong conclusions, which in turn lead end users along a path of misguided next steps.
Of course some poorly-designed questions are intentional, particularly in an election season when partisan politics triumph over sound research design. A recent highly-public Read Full Text
There are some who argue that idea generation among consumers is a frustrating task. After all, who knows a particular product category better than the manufacturer, its advertising agency, and the other groups committed to the survival of the product (and the product line)? And it doesn’t help that facilitators of all kinds are guilty of asking consumers to be experts where they are not and to play a greater role in marketing decisions than is justified. Asking consumers to step outside of their worlds – to pretend to be someone (something) else – may seem foolhardy.
Consumer ideation, however, can be a useful approach, particularly when it is constructed with two key ingredients: 1) people who are product-involved; and 2) individuals who can provide fresh, new insights. Finding consumers who are product-involved is not difficult, but not all consumers are “creative” thinkers who can produce new perspectives or have the ability to look at something inside out and make sense of it, or take the familiar and make it strange. This takes a very special recruiting effort, which is one of the many differences between idea generation and focus group research.
Idea generation sessions or workshops are not focus group research discussions. Here are a few key ways in which consumer ideation – defined as a balance between loosely-structured brainstorming and the more structured, solution-oriented Synectic method – is differentiated from traditional focus group research:
- Research objectives. Focus groups attempt to understand the underlying beliefs and motivations for consumer behavior, whereas the goal of consumer ideation is to make the familiar strange and generate as many ideas or solutions as possible without asking consumers to justify or defend them. Read Full Text