Qualitative Research

Analyzability & a Qualitative Content Analysis Case Study

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 284-285).

Kuperberg and Stone (2008) present a case study, published in Gender & Society, in which content analysis was used as the primary research method. It is an example of how many of the Total Quality Framework (TQF) concepts can be applied not only to the in-depth interview, focus group, observation, and case-centered methods discussed elsewhere in Research Design Review but also to qualitative content analysis. The discussion below spotlights aspects of this study relevant to one of the four TQF components, Analyzability.

Purpose & Scope
The primary purpose of this content analysis study was to extend the existing literature on the portrayal of women’s roles in print media by examining the imagery and themes in depictions of heterosexual college-educated women who leave the workforce to devote themselves to being stay-at-home mothers (a phenomenon referred to as “opting out”) across a wide, diverse range of print publications. More specifically, this research set out to investigate two areas of media coverage: the content (e.g., the women who are portrayed in the media and how they are described) and the context (e.g., the types of media and articles).

This study examined a 16-year period, 1988 to 2003. This period was chosen because 1988 was the earliest year for which the researchers had access to a searchable database for sampling, and 2003 was the year that the term “opting out” (referring to women leaving the workforce to become full-time mothers) became popular. The researchers identified 51 articles from 30 publications that represented a wide diversity of large-circulation print media. The researchers acknowledged that the sample “underrepresents articles appearing in small-town outlets” (p. 502).

Analyzability
There are two aspects of the TQF Analyzability component — processing and verification. In terms of processing, the content data obtained by Kuperberg and Stone from coding revealed three primary patterns or themes in the depiction of women who opt out: “family first, child-centric”; “the mommy elite”; and “making choices.” The researchers discuss these themes at some length and support their findings by way of research literature and other references. In some instances, they report that their findings were in contrast to the literature (which presented an opportunity for future research in this area). Their final interpretation of the data includes their overall assertion that print media depict “traditional images of heterosexual women” (p. 510).

Important to the integrity of the analysis process, the researchers immersed themselves in the sampled articles and, in doing so, identified inconsistencies in the research outcomes. For example, a careful reading of the articles revealed that many of the women depicted as stay-at-home mothers were actually employed in some form of paid work from home. The researchers also enriched the discussion of their findings by giving the reader some context relevant to the publications and articles. For example, they revealed that 45 of the 51 articles were from general interest newspapers or magazines, a fact that supports their research objective of analyzing print media that reach large, diverse audiences.

In terms of verification, the researchers performed a version of deviant case analysis in which they investigated contrary evidence to the assertion made by many articles that there is a growing trend in the proportion of women opting out. Citing research studies from the literature as well as actual trend data, the researchers stated that the articles’ claim that women were increasingly opting out had weak support.

Kuperberg, A., & Stone, P. (2008). The media depiction of women who opt out. Gender & Society, 22(4), 497–517.

Shared Constructs in Research Design: Part 3 — Validity

Not unlike Part 1 (concerning sampling) and Part 2 (concerning bias) of the discussion that began earlier, the shared construct of validity in research design has also been an area of focus in several articles posted in Research Design Review. Most notable is “Quality Frameworks in Qualitative Research,” posted in February 2021, in which validity is discussed within the context of the parameters or strategies various researchers use to define and think about the dimensions of rigor in qualitative research design. That article uses the Total Quality Framework (Roller & Lavrakas, 2015) and the criteria of Lincoln and Guba (1985) to underscore the idea that a quality approach to design cuts across paradigm orientation, leading to robust and valid interpretations of the data.

Many other qualitative researchers, across disciplines, believe in the critical role that the shared construct of validity plays in research design. Joseph Maxwell, for example, discusses validity in association with his realism approach to causal explanation in qualitative research (Maxwell, 2004) and details five distinct dimensions of validity, including descriptive, interpretive, and theoretical validity (Maxwell, 1992). And of course, Miles & Huberman were promoting greater rigor by way of validity more than three decades ago (Miles & Huberman, 1984).

More recently, Koro-Ljungberg (2010) takes an in-depth look at validity in qualitative research and, with extensive literature as the backdrop, makes the case that “validity is in doing, as well as its (un)making, and it exhibits itself in the present paradox of knowing and unknowing, indecision, and border crossing” (p. 609). Matteson & Lincoln (2008) remind educational researchers that validity does not solely concern the analysis phase of research design but that “the data collection method must also address validity” (p. 672). Creswell & Miller (2000) discuss different approaches to determining validity across three paradigm orientations (postpositivist, constructivist, and critical) and through the “lens” of the researcher, the participants, and researchers external to the study.

Among qualitative health researchers, Morse (2020) emphasizes the potential weakness in validity when confusing the analysis of interpretative inquiry with that associated with “hard, descriptive data” (p. 4), and Morse et al. (2002) present five verification strategies and argue that validity (as well as reliability) is an “overarching” construct that “can be appropriately used in all scientific paradigms” (p. 19).

These researchers, and those discussed in Part 1 – Sampling and Part 2 – Bias, are admittedly only a small share of those who have devoted a great deal of thought and writing to these shared constructs. The reader is encouraged to use these references to build on their understanding of these constructs in qualitative research and to grow their own library of knowledge.


Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124–130.

Koro-Ljungberg, M. (2010). Validity, responsibility, and aporia. Qualitative Inquiry, 16(8), 603–610. https://doi.org/10.1177/1077800410374034

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.

Matteson, S. M., & Lincoln, Y. S. (2008). Using multiple interviewers in qualitative research studies: The influence of ethic of care behaviors in research interview settings. Qualitative Inquiry, 15(4), 659–674. https://doi.org/10.1177/1077800408330233

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279–300.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11.

Miles, M. B., & Huberman, A. M. (1984). Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher, 13(5), 20–30. https://doi.org/10.3102/0013189X013005020

Morse, J. (2020). The changing face of qualitative inquiry. International Journal of Qualitative Methods, 19, 1–7. https://doi.org/10.1177/1609406920909938

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13–22.

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.

Shared Constructs in Research Design: Part 2 — Bias

Part 1 of the discussion of shared constructs — “Shared Constructs in Research Design: Part 1 – Sampling” — acknowledges the distinctions between quantitative and qualitative research while highlighting the notion that there are fundamental constructs common to a quality approach to research design regardless of method or, in the case of qualitative research, paradigm orientation. Three such constructs are sampling, bias, and validity. Part 1 of this discussion focused on sampling (prefaced by a consideration of paradigms in qualitative research and the importance of quality research design regardless of orientation). This article (Part 2) discusses bias.

Bias in qualitative research design has been the topic of a number of articles in Research Design Review over the years. One of these articles is a broad discussion on paying attention to bias in qualitative research, and another explores social desirability bias in online research. An article written in 2014 examines the role of empathy in qualitative research and its potential for enhancing clarity while reducing bias in qualitative data, and another article in RDR discusses the importance of visual cues in mitigating sources of bias in qualitative research. Other articles concerning bias in RDR are specific to methods. For example, a couple of articles discuss mitigating interviewer bias in the in-depth interview method — “In-depth Interviewer Effects: Mitigating Interviewer Bias” and “Interviewer Bias & Reflexivity in Qualitative Research” — while another article focuses on ethnography and mitigating observer bias, and a fourth article considers the potential bias in mobile (smartphone) qualitative research.

Others in the field of psychology have discussed various aspects of bias in qualitative research. For example, Linda Finlay (2002) discusses the value of reflexivity as a tool to, among other things, “open up unconscious motivations and implicit biases in the researcher’s approach” (p. 225). Ponterotto (2005) looks at the varying role and understanding of bias across paradigm orientations in qualitative research among postpositivist, constructivist–interpretivist, and critical–ideological researchers. In psychiatry, Whitley & Crawford (2005) suggest ways to mitigate investigator bias and thereby increase the rigor of qualitative studies. Morrow (2005) asserts that “all research is subject to researcher bias,” highlights the subjectivity inherent in qualitative research, and explores bracketing and reflexivity as a means of “making one’s implicit assumptions and biases overt to self and others” (p. 254). And researcher bias is central to the Credibility component of the Total Quality Framework (Roller & Lavrakas, 2015).

Social scientists such as Williams & Heikes (1993) examine the impact of interviewer gender on social desirability bias in qualitative research, while Armour, Rivaux, and Bell (2009) discuss researcher bias within the context of the analysis and interpretation of two phenomenological studies. In a recent paper, Howlett (2021) reflects on the transition to online research technologies and the associated methodological considerations, such as the negative impact of selection bias due to weak recruitment and engagement strategies.

Among healthcare researchers, Arcury & Quandt (1999) discuss recruitment with a focus on sampling and the use of gatekeepers, with an emphasis on the potential for selection bias which they monitored by way of reviewing “the type of clients being referred to us, relative to the composition of the site clientele” (p. 131). Whittemore, Chase, & Mandle (2001) define quality in qualitative research by way of validity standards, including investigator bias — “…a phenomenological investigation will need to address investigator bias (explicitness) and an emic perspective (vividness) as well as explicate a very specific phenomenon in depth (thoroughness)” (p. 529). And Morse (2015), who is a pioneer in qualitative health research and has written extensively on issues of quality in qualitative research design, highlights the mitigation of researcher bias as central to the validity of qualitative design, offering “the correction of researcher bias” as one recommended strategy for “establishing rigor in qualitative inquiry” (p. 33).

Another shared and much discussed construct among qualitative researchers — validity — is the focus of Part 3 in this discussion.

Arcury, T. A., & Quandt, S. A. (1999). Participant recruitment for qualitative research: A site-based approach to community research in complex societies. Human Organization, 58(2), 128–133.

Armour, M., Rivaux, S. L., & Bell, H. (2009). Using context to build rigor: Application to two hermeneutic phenomenological studies. Qualitative Social Work, 8(1), 101–122. https://doi.org/10.1177/1473325008100424

Finlay, L. (2002). Negotiating the swamp: The opportunity and challenge of reflexivity in research practice. Qualitative Research, 2(2), 209–230. https://doi.org/10.1177/146879410200200205

Howlett, M. (2021). Looking at the ‘field’ through a Zoom lens: Methodological reflections on conducting online research during a global pandemic. Qualitative Research. Advance online publication. https://doi.org/10.1177/1468794120985691

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52(2), 250–260. https://doi.org/10.1037/0022-0167.52.2.250

Morse, J. M. (2015). Critical analysis of strategies for determining rigor in qualitative inquiry. Qualitative Health Research, 25(9), 1212–1222. https://doi.org/10.1177/1049732315588501

Ponterotto, J. G. (2005). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology, 52(2), 126–136. https://doi.org/10.1037/0022-0167.52.2.126

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.

Whitley, R., & Crawford, M. (2005). Qualitative research in psychiatry. Canadian Journal of Psychiatry, 50(2), 108–114.

Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research. Qualitative Health Research, 11(4), 522–537.

Williams, C. L., & Heikes, E. J. (1993). The importance of researcher’s gender in the in-depth interview: Evidence from two case studies of male nurses. Gender and Society, 7(2), 280–291.