Reflections on “Qualitative Literacy”

In March 2018, Mario Luis Small gave a public lecture at Columbia University on “Rhetoric and Evidence in a Polarized Society.” In this terrific must-read speech, Small asserts that today’s public discourse concerning society’s most pressing issues – poverty, inequality, and economic opportunity – has been seriously weakened by the absence of “qualitative literacy,” that is, “the ability to understand, handle, and properly interpret qualitative evidence” such as ethnographic and in-depth interview (IDI) data. Small contrasts this general lack of qualitative literacy with the “remarkable improvement” in “quantitative literacy,” particularly among those in the media, where data-driven journalism is on the rise, published stories are written with greater knowledge of quantitative data and its terminology (e.g., the inclusion of means and medians), and more care is given to the quantitative evidence cited in media commentary (e.g., op-eds).

Small explains that the extent to which a researcher (or journalist, or anyone else who uses research) possesses qualitative literacy can be determined by looking at that person’s ability to “assess whether the ethnographer has collected and evaluated fieldnote data properly, or the interviewer has conducted interviews effectively and analyzed the transcripts properly.” This determination forms the backbone of “basic qualitative literacy,” which enables the research user to distinguish a rigorous qualitative study from one conducted with weaker, less rigorous standards. And it is this basic literacy – the kind that has advanced public discourse around quantitative data – that is now needed in the qualitative realm.

One of the ways users of qualitative research can effectively assess the quality of a reported study, according to Small, is by looking for “cognitive empathy.” Small’s definition of cognitive empathy is not unlike the message of many articles in Research Design Review that discuss a central objective of all qualitative research: understanding how people think*. Essentially, cognitive empathy boils down to the researcher’s ability to record the participant’s lived experience from the participant’s, not the researcher’s, point of view – that is, to understand how the participant, not the researcher, thinks about a particular experience or situation.

Small does not discuss reflexive journals and the important role they can play in helping the qualitative researcher gain the cognitive empathy the researcher seeks. Yet reflexivity and the reflexive journal play an important part in rigorous qualitative research designs. The reflexive journal has been discussed many times in RDR as one component (of many) of a quality approach to qualitative design. One such article is “Interviewer Bias & Reflexivity in Qualitative Research,” which discusses the concept of reflexivity and how a heightened awareness of it “enables the interviewer to design specific questions for the interviewee that help inform and clarify the interviewer’s understanding of the outcomes” from the interviewee’s perspective. A subsequent article on the reflexive journal – “Reflections from the Field: Questions to Stimulate Reflexivity Among Qualitative Researchers” – offers specific questions and issues that encourage qualitative researchers to consider how they may be unintentionally influencing (biasing) their data and how they might modify their approach.

Without this reflection – without this true grasp of cognitive empathy – researchers weaken their studies by failing to internalize their participants’ lived experiences. With respect to public discourse, this failure in cognitive empathy can cripple our ability to comprehend, as Small says, “why people at the opposite end [of the political spectrum] think, vote, or otherwise act the way they do.”

*A few of these articles can be accessed in this 2014 post.

Image captured from: https://scholar.harvard.edu/mariosmall/about

4 comments

  1. Another problem with creating greater credibility for qualitative research in the media seems to be how evaluators evaluate the resulting information. With quantitative research, sample size, appropriate methods, a good questionnaire, and margin of error are used to evaluate how good a quant study might be. The reputation, education, and accomplishments of the researcher are generally not known, and the firm serves as the proxy for a study’s credibility.

    Qualitative research is often NOT conducted by recognizable firms, at least outside the space. And, lacking knowledge about how to evaluate a qual methodology, evaluators naturally fall back on their experience with quant research as a guide.

    The style guides produced by the AP and Chicago associations could use some thought leadership to help editors, as evaluators, do a better job with this.

    Continued presence at AAPOR and Journalism conferences would also help with this much-needed dialogue.


    1. Thank you, Eric. All excellent points. As far as a “continued presence at AAPOR,” that is definitely in the works! Paul Lavrakas and I are organizing qualitative sessions for 2019, and I have just submitted an application to AAPOR Council for QUALPOR—a qualitative affinity group. Let me know if you are interested in either.

