Unilever’s Qualitative Accreditation Program & Misdirected Quest for “Fresh Ideas”

Many researchers have discussed Unilever’s accreditation program for qualitative research.  Among others, the Market Research Society, ESOMAR’s Research World, and Kathryn Korostoff (Research Rockstar) have all outlined what led up to the program, its objectives, and the accreditation process.  In a nutshell, Unilever assessed the outcomes of its many qualitative studies around the globe and determined that the qualitative researchers it has employed to conduct those studies have generally failed to provide management with new ideas and insights of sufficient caliber to move the company forward.

Manish Makhijani, a consumer insights director at Unilever, stated in an interview discussing the program that one of his top concerns with their qualitative research is the inconsistency in “the quality of insights and debriefs” among their qualitative researchers, emphasizing that “what matters in qual more than anything else is the quality of thinking that you put on the table.”  Indeed, Makhijani brought home this point at the November 2012 ESOMAR conference, where he presented the notion that “good” qualitative research is derived from “good thinkers,” i.e., qualitative researchers who possess these attributes:

  • Are strategic thinkers
  • Have deep foundation of skills
  • Have empathy with the wider Unilever context
  • Are conscientious
  • Have fresh ideas and thoughts

Who can argue with “good thinkers”?  Being one is something we all aspire to and think a lot about (hmm, a pun).  But why is the emphasis here on thinking associated with “fresh” and “strategic” interpretations of qualitative findings, and not on how we arrived at those findings in the first place?  Not unlike quantitative researchers who ‘lie with statistics’, how are we to believe – i.e., what use is – the information delivered by qualitative researchers if all their “fresh ideas” are gleaned from a qualitative research design that is not credible, analyzable, or transparent?  The ultimate usefulness of our research outcomes does not hinge on the researcher’s “empathy” with the corporate context, or even on a “deep foundation of skills” (the meaning of which is not clear), but on the researcher’s professional capability to design a qualitative study that minimizes coverage and measurement error (credibility), fully processes and verifies the data, e.g., by way of triangulation and deviant case analysis (analyzability), and presents deliverables that are ‘thick’ with descriptions and explanations accounting for the research steps and the nuances experienced along the way (transparency).

So, thinking is good, but only to the extent that it begins at the beginning: with the research objectives in conjunction with a sound, quality-constructed research design.  By emphasizing design, Makhijani might actually gain a new understanding of his issue of researcher inconsistency; specifically, why there is a “lack of consistency even when you are using the same agency or similar researchers over a period of time.”  Is this inconsistency a function of where the researcher falls on the scale of good thinkers, or of the integrity of the design that produced the outcomes they are thinking about?  Good question.
