The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp.15-17).
The field of qualitative research has paid considerable attention in the past half century to the issue of research “quality.” Despite these efforts, there remains a lack of agreement among qualitative researchers about how quality should be defined and how it should be evaluated (cf. Lincoln & Guba, 1985, 1986; Lincoln, 1995; Morse et al., 2002; Reynolds et al., 2011; Rolfe, 2006; Schwandt, Lincoln, & Guba, 2007). Some who question whether quality can be defined and evaluated appear to hold the view that each qualitative study is so singularly unique – in terms of how the data are created and how sense is made of these data – that striving to assess quality is a wasted effort that never leads to a satisfying outcome about which agreement can be reached. Among other things, this suggests that validity – meaning, “the correctness or credibility of a description, conclusion, explanation, interpretation, or other sort of account” (Maxwell, 2013, p. 122) – is solely in the eye of the beholder, and that convincing someone else that a qualitative study has generated valid and actionable findings is more an effort of subjective persuasion than one of applying dispassionate logic to whether the methods used to gather and analyze the data led to “valid enough” conclusions for the purpose(s) they were meant to serve.
Controversy also exists about how to determine the quality of a qualitative study. Some argue that the quality of a qualitative study is determined solely by the methods and processes the researchers have used to conduct their studies. Others argue …
Many articles discuss question design, focusing on such things as how to mitigate various forms of bias, clearly communicate the intended meaning of the question, and facilitate response. Survey question wording is discussed in this “tip sheet” from Harvard University as well as in “Questionnaire Design” from Pew Research Center, and a recent article in Research Design Review discussed the not-so-simple “why” question in qualitative research (see “Re-considering the Question of ‘Why’ in Qualitative Research”).
Getting the question “right” is a concern of all researchers, but qualitative researchers have to be particularly mindful of the responses they get in return. It is not good enough to use an interview guide to ask a question, get an answer, and move on to the next question. And it is often not good enough to ask a question, get an answer, interject one or two probing questions, and move on to the next question. Indeed, one of the toughest skills a qualitative interviewer has to learn is how to evaluate a participant’s answer to any given question. This goes well beyond evaluating whether the participant responded in line with the intention of the question, or weighing potential sources of bias. Rather, this broader, much-needed evaluation of a response requires a reflexive, introspective consideration on the part of the interviewer.
Reflexivity is central to a qualitative approach in research methods. It is a topic that is discussed often in RDR – see “Interviewer Bias & Reflexivity in Qualitative Research,” “Reflections from the Field: Questions to Stimulate Reflexivity Among Qualitative Researchers,” and “Facilitating Reflexivity in Observational Research: The Observation Guide & Grid” – because of its role …
A February 2017 article posted in Research Design Review discusses qualitative data transcripts and, specifically, the potential pitfalls when depending only on transcripts in the qualitative analysis process. As stated in the article,
Although serving a utilitarian purpose, transcripts effectively convert the all-too-human research experience that defines qualitative inquiry to the relatively emotionless drab confines of black-on-white text. Gone is the profound mood swing that descended over the participant when the interviewer asked about his elderly mother. Yes, there is text in the transcript that conveys some aspect of this mood but only to the extent that the participant is able to articulate it. Gone is the tone of voice that fluctuated depending on what aspect of the participant’s hospital visit was being discussed. Yes, the transcriptionist noted a change in voice but it is the significance and predictability of these voice changes that the interviewer grew to know over time that is missing from the transcript. Gone is an understanding of the lopsided interaction in the focus group discussion among teenagers. Yes, the analyst can ascertain from the transcript that a few in the group talked more than others but what is missing is the near-indescribable sounds dominant participants made to stifle other participants and the choked atmosphere that pervaded the discussion along with the entire group environment.
Missing from this article is an explicit discussion of the central role audio and/or video recordings – which accompany verbal qualitative research modes, e.g., face-to-face and telephone group discussions and in-depth interviews (IDIs) – play in the analysis of qualitative data. Researchers who routinely utilize recordings during analysis are more likely to derive valid interpretations of the data while also staying connected to …