
Fast & Slow Thinking in Research Design

The November/December 2013 issue of ESOMAR’s Research World was largely devoted to behavioral economics (BE), an increasingly popular topic in marketing circles.  In it, various researchers discuss the virtues of embracing a BE model, with repeated reference specifically to Daniel Kahneman and his System 1-System 2 theory, which is the foundation of his 2011 book Thinking, Fast and Slow.

The overall takeaway is the idea that marketing researchers would do well to focus their efforts on research that gets at System 1 thinking – intuitive, instinctive, automatic, fast thinking – rather than System 2 thinking – deliberative, “effortful,” attentive, slow thinking – because of System 1’s predominance in many of the decisions people make.  Indeed, Kahneman emphasizes in his book that, unbeknownst to many of us, System 1 (automatic, effortless) thinking exerts significant influence on our experiences and is “the secret author of many of the choices and judgments you make” (p. 13)[1].  And this, of course, can be a very …

Managing Ghosts & the Case for Triangulation in Qualitative Research

The October 2012 issue of the American Psychological Association’s Monitor on Psychology includes an interview with developmental psychologist Jerome Kagan.  In this interview he talks about psychology’s research “ghosts,” referring to the dubious generalizations psychologists make from their often-limited research.  Kagan’s primary point is that “it’s absolutely necessary to gather more than one source of data, no matter what you’re studying,” and that these multiple sources of data should come from verbal and behavioral as well as physiological measures.  Only by combining these various perspectives on an issue or situation – that is, utilizing data taken in different contexts and by way of alternative methods and modes – can the researcher come to a legitimate conclusion.

This is not unlike triangulation which, especially in the social and health sciences, is used to gauge the trustworthiness of research outcomes.  Triangulation is the technique of examining a specific research topic by comparing data obtained from two or more methods, two or more segments of the sample population, and/or two or more investigators.  In this way, the researcher looks for patterns of convergence and divergence in the data.  Triangulation is a particularly important design feature in qualitative research – where measures of validity and transferability can be elusive – because it furthers the researcher’s ability to gain a comprehensive view of the research question and come closer to a plausible interpretation of the final results.

Scholars teach the importance of including some form of triangulation in research designs, yet there is little evidence that this occurs in the real world of applied qualitative research.  While there are an increasing number of ways to gather qualitative feedback – particularly via social media and mobile – that provide researchers with convenient sources of data, applied researchers would benefit from more discussion of case studies that have utilized multiple data sources and methods to find reliable themes in the outcomes.  Importantly, it is further hoped that applied researchers use this contrast-and-compare approach to scrutinize the research issue from both traditional (e.g., in-person group discussions, in-depth interviews, in-home ethnography) and newer (e.g., online-based, mobile-device) information-gathering strategies.

The triangulation concept is just one way that researchers can add rigor to their research designs and manage the potential “ghosts” of groundless assumptions and misguided interpretations.

A Best Practices Approach to Social Media Research

Last month’s post – “Insights vs. Metrics: Finding Meaning in Online Qualitative Research” – talked about “social media metric mania” and the value of off- and online qualitative research tools “that dig behind the obvious and attempt to reveal how people truly think.”  In light of these remarks, it is good to find researchers who are exploring social media research design and attempting to determine the necessary parameters to maximize quality output.  The researchers at J.D. Power and Associates are doing just that.  In particular, Gina Pingitore, Chief Research Officer, and others at J.D. Power have written a couple of white papers discussing design issues such as validity, reliability, and best practices in social media research.  The research-on-research work they have conducted on these issues is to be applauded for its focus on establishing quality standards and for its overarching goal “to create more rigor around the processes that create social insights.”

The February 2012 paper – “The Validity of Social Media Data within the Wireless Industry” – looks at the volume and sentiment of social media content in relation to results from their “traditional” syndicated survey.  They learned that there is a direct relationship between the volume of posts in social media and …