Every week new email invitations arrive asking me to participate in an online survey concerning some product or service I recently used.  And each time, as I read the stated reasons why I should comply with the request, I find myself taking a mental inventory of what I know or don’t know about the subject matter, what I can or cannot recall about my user experience, how positive (or not) the user experience was, and whether this product or service is important enough in my life to be worthy of my time answering their survey questions.

Last week I was asked by one of my trade organizations to participate in an online survey about their quarterly magazine.  Or is it a monthly magazine?  Maybe every two months?  I am not sure, but I do know that I receive it and I read it.  I stared at the email invitation taking the usual inventory, sifting through my usual battery of qualifying questions, pondering whether I should complete this survey or not.  Yes, I told myself, I remember receiving this magazine, I know that I read it when it arrives, but do I really have anything to say about this magazine?  My opinion of this magazine falls in some neutral territory …

In 2013, Research Design Review posted five articles that directly speak to common design considerations in quantitative and qualitative research that address the basic goal of understanding how people think.  These common concerns, and the articles where they are discussed, include: using effective content analysis procedures to reveal underlying subjective connections for each respondent/participant (“Content Analysis & Navigating the Stream of Consciousness”); the importance of design approaches that target people’s stories (“‘Tell Me What Happened’ & Other Stories”); research designs that incorporate good listening techniques with appropriate, well-constructed questions (“Listening: A Lesson from New Coke”); utilizing qualitative research to examine the thinking that helps explain quantitative data (“Looking Under the Hood: What Survey Researchers Can Learn from Deceptive Product Reviews”); and the role of Daniel Kahneman’s System 1 (intuitive) and System 2 (cognitive) thinking framework in considering behavior in the marketplace (“Fast & Slow Thinking in Research Design”).

These five articles have been compiled into one pdf document that can be accessed here.  Anyone who has read this blog since its inception in 2009 knows that a recurring theme revolves around research design issues that impact how well (or not) researchers gain an understanding of how people think.  There is no reason to believe that the tradition won’t continue in 2014.

Research Design Review published 13 articles in 2013 that dealt explicitly with qualitative research design.  These range from general topics – such as the “10 Distinctive Qualities of Qualitative Research,” Unilever’s accreditation program (and the absence of design considerations in its priorities), and the “messiness” (even scariness) of qualitative research design to quantitative-leaning researchers and/or clients with a discussion on ways to overcome their trepidation – to method-specific issues – such as the implications of group composition and interactions in focus group research, the roles of “analytical sensibilities” and deception in ethnography, and the unique design factors associated with multi-method, case-centered research.  And, while all of the articles discuss quality design measures in some fashion, several posts extend quality considerations to particular facets of the research process, such as participant cooperation, the use of projective techniques, data validation, proposal writing, and the reporting of design in the final research document.

The 13 articles on qualitative research design from RDR in 2013 have been compiled into one pdf document that can be accessed here.  As always, it is hoped that greater awareness of the issues impinging on quality in qualitative research design will foster a greater appreciation for and discussion of these issues, leading to more credible, analyzable, transparent, and ultimately more useful qualitative research studies.

The November/December issue of ESOMAR’s Research World is largely devoted to behavioral economics (BE), an increasingly-popular topic in marketing circles.  In it, various researchers discuss the virtues of embracing a BE model, with repeated reference specifically to Daniel Kahneman and his System 1-System 2 theory, which is the foundation of his 2011 book Thinking, Fast and Slow.

The overall takeaway is the idea that marketing researchers would do well to focus their efforts on research that gets at System 1 thinking – intuitive, instinctive, automatic, fast thinking – rather than System 2 thinking – deliberative, “effortful,” attentive, slow thinking – because of its predominance in many of the decisions people make.  Indeed, Kahneman emphasizes in his book that, unbeknownst to many of us, System 1 (automatic, effortless) thinking exerts significant influence on our experiences and is “the secret author of many of the choices and judgments you make” (p. 13).  And this, of course, can be a very …

Eric Anderson and Duncan Simester published a paper in May 2013 titled “Deceptive Reviews: The Influential Tail.”  It talks about their analysis of many thousands of reviews for a major apparel “private label retailer” with the focus on a comparison of reviews made by customers who actually made a prior transaction (i.e., customers who actually purchased the item they were reviewing) and customers who had not made a prior transaction (i.e., customers who reviewed items they had not actually purchased).  Their comparisons largely revolved around four key measures or indicators that characterize deception in online reviews and messaging: 1) a greater number of words (compared to reviews from customers who had bought the item); 2) the use of simpler, shorter words; 3) the inappropriate reference to family (i.e., referring to a family event unrelated to the product being reviewed such as “I remember when my mother took me shopping for school clothes…”); and 4) the extraordinary use of exclamation points (i.e., “!!” or “!!!”).  Apparently, deceivers tend to overcompensate for their lack of true knowledge and wax eloquent about something they know nothing about.  This wouldn’t matter except that deceivers’ deceptive reviews (i.e., reviews from customers who have not purchased the item reviewed) are more likely to be …
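The four indicators above could, in principle, be computed from review text itself.  The sketch below is purely illustrative – it is not Anderson and Simester’s actual methodology, and the keyword list, regular expressions, and function name are assumptions made for this example:

```python
import re

# Illustrative keyword list for indicator 3 (family references); any real
# analysis would need a far more careful lexicon than this assumption.
FAMILY_WORDS = {"mother", "father", "sister", "brother", "family", "mom", "dad"}

def deception_features(review: str) -> dict:
    """Toy extractor for the four deception indicators described above."""
    words = re.findall(r"[A-Za-z']+", review.lower())
    return {
        # 1) deceptive reviews tend to use a greater number of words
        "word_count": len(words),
        # 2) ...but simpler, shorter words
        "avg_word_length": sum(len(w) for w in words) / len(words) if words else 0.0,
        # 3) inappropriate references to family
        "family_reference": any(w in FAMILY_WORDS for w in words),
        # 4) extraordinary use of exclamation points ("!!" or "!!!")
        "multi_exclamations": len(re.findall(r"!{2,}", review)),
    }

feats = deception_features(
    "I remember when my mother took me shopping!!! Great fit!!"
)
# feats["family_reference"] → True; feats["multi_exclamations"] → 2
```

A review scoring high on several of these features would, under the paper’s logic, warrant closer scrutiny – though the thresholds for “high” are exactly what the authors’ comparison of purchasers versus non-purchasers was designed to establish.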

A focus group moderator’s guide will often include group exercises or facilitation techniques as alternative approaches to direct questioning.  While many of these alternative tactics are not unique to the group discussion method, and are also used in in-depth interview research, they have become a popular device in focus groups, especially in the marketing research field.  These alternative approaches can be broadly categorized as either enabling or projective techniques, the difference being whether the moderator’s intent is to simply modify a direct question to make it easier for group participants to express their opinions (enabling techniques) or delve into participants’ less conscious, less rational, less socially-acceptable feelings by way of indirect exercises (projective techniques).  Examples of enabling techniques are: sentence completion – e.g., “When I think of my favorite foods, I think of _____.” or “The best thing about the new city transit system is _____.”; word association – e.g., asking prospective college students, “What is the first word you think of when …

Last week, Susan Eliot posted a terrific piece on listening (a common theme on her blog The Listening Resource) titled “Listening For Versus Collecting Data.”  In it, she talks about the power imbalance – and, I would add, the insensitive mindset – implied by the idea that researchers are “collecting data from subjects” compared to the more useful notion that we are listening “one human to another.”  Eliot goes on to cite Martin Buber and his distinction of I-Thou and I-It interactions or relationships between people, with Eliot stating “When we look upon the other person as a ‘thou’ (a unique, sentient human being) rather than an ‘it’ (a data repository), we approach the research with a humanistic perspective, one that is likely to net us rich and meaningful data.”

Extolling the virtues of listening seems almost trite (we all claim to “listen” in some shape or form), yet why is it so very difficult?  It is difficult not only among researchers, for whom listening is (or should be) a required skill, but among all of us, for whom listening is a fundamental component of human interaction.

The October 18, 2013 NPR TED Radio Hour program “Haves and Have-Nots” presents two telling examples of the importance of listening and, more particularly, of the negative effects of not listening well.  The first is a TED talk given by Ernesto Sirolli titled “Want to help someone? Shut up and listen!” where he tells the story of an ill-fated attempt to teach people in Zambia …

Throughout history, researchers have discussed and debated the virtues and fallibilities of quantitative versus qualitative research.  “Versus” because there is typically a ‘one or the other’ mentality in thinking and talking about quantitative and qualitative research that may ultimately pit one against the other.  This dichotomy makes obvious sense from the standpoint of the very different purposes and approaches prescribed by these two research genres, fostering as it often does two very different types of researchers with sometimes radically different mind and skill sets.

There are situations – we can all probably think of some – when a survey or focus group (or IDI or observation) research design is opted for simply because it is the type of research that falls within someone’s comfort zone.  We go with what we know.  This is true of researchers; it is also true of corporate clients and other research funders.

Many qualitative researchers, for instance, are loath to venture into survey territory where the stark realities of black and white numbers, percentages, and correlations are as confining as they are mind-blowing.  And it is usually this qualitative-fear-of-quantitative that we hear so much about.  But what about survey researchers and the clients who find a safe haven in quantitative methods?  Do they share a similar dread of qualitative research and, if so, why? …

Reliability, in the sense of being able to obtain identical findings from repeated executions of a qualitative research design, is debatable.  Validity, however, is another matter.  Validity, in the sense of whether the qualitative researcher is collecting the information (data) he or she claims to be gathering (i.e., the accuracy of the data), is a topic worthy of much more discussion in the research community, or at the least a greater emphasis in our qualitative research designs.  While qualitative researchers may not be able to replicate their studies, they surely have the means to consider the authenticity of the data.

There was a Research Design Review post back in 2010 that discussed the importance and appropriateness of validity in qualitative research, including the idea that there are ready-made techniques for looking at validity in qualitative research and that, in some ways, validity is already built into our research methods.  To illustrate how qualitative researchers typically incorporate validity …

Approximately two years ago, a post in Research Design Review described a quality framework that is recommended as a guide to researchers in their qualitative research designs.  This post – “Four Components of the Quality Framework for Qualitative Research Design” – talks about the benefits of grounding qualitative design in a framework by which the researcher can “judge the efficacy” as well as “examine the sources of variability and establish critical thinking in the process of qualitative research design.”  The four components of the quality framework (QF) revolve around the idea that all qualitative research must be: credible, analyzable, transparent, and ultimately useful.

In the current post, qualitative researchers are encouraged to put the QF to work in a very important applied arena – i.e., the crafting and evaluating of research proposals.  For instance, a QF approach to qualitative research deserves prominence in: (a) the proposals written by graduate students working towards their theses and dissertations; (b) proposals written by researchers in the academic, government, not-for-profit, and commercial sectors responding to clients’ requests for proposals (RFPs); and (c) proposals written for grants.  Taking a quality perspective in the research proposal raises the bar on the …

