
Generalizability in Case Study Research

Portions of the following articles are modified excerpts from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 307-326).

Case study research has been the focus of several articles in Research Design Review. These articles range from discussions on case-centered research (i.e., case study and narrative research) generally — “Multi-method & Case-centered Research: When the Whole is Greater Than the Sum of its Parts,” “Lighting a Path to Guide Case-Centered Research Design: A Six-Step Approach,” and “Ethical Considerations in Case-Centered Qualitative Research” — to articles where the subject matter is specific to case study research — “Case Study Research: An Internal-External Classification.”

One of the controversies associated with case study research designs centers on “generalization” and the extent to which the data can explain phenomena or situations outside and beyond the specific scope of a particular study. On the one hand, there are researchers such as Yin (2014) who espouse “analytical generalization” whereby the researcher compares (or “generalizes”) case study data to existing theory. From Yin’s perspective, case study research is driven by the need to develop or test theory, giving single- as well as multiple-case study research explanatory powers — “Some of the best and most famous case studies have been explanatory case studies” (Yin, 2014, p. 7).

Diane Vaughan’s research is a case study referenced by Yin (2014) as an example of a single-case research design that resulted in outcomes that provided broader implications (i.e., “generalized”) to similar contexts outside the case. In both The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (1996) and “The Trickle-Down Effect: Policy Decisions, Risky Work, and the Challenger Tragedy” (1997), Vaughan describes the findings and conclusions from her study of the circumstances that led to the Challenger disaster in 1986. By way of scrutinizing archival documents and conducting interviews, Vaughan “reconstructed the history of decision making” and ultimately discovered “an incremental descent into poor judgment” (1996, p. xiii). More broadly, Vaughan used this study to …

Read Full Text

Qualitative Analysis: ‘Thick Meaning’ by Preserving Each Lived Experience

My approach to qualitative data analysis has nothing to do with Post-it Notes, clipping excerpts from transcripts (digitally or with scissors), or otherwise breaking participants’ input (“data”) into bite-size pieces. My approach is the opposite of that. My goal is to gain an enriched understanding of each participant’s lived experience associated with the research questions and objectives and, from there, develop an informed, contextually nuanced interpretation across participants. By way of deriving “thick meaning” within and across participants, I hope to provide the sponsor of the research with consequential and actionable outcomes.

I begin the analysis process immediately after completing the first in-depth interview (IDI) or focus group discussion by writing down (typically, in a spreadsheet) what I think I learned from each participant or group discussion pertaining to the key research questions and objectives, as well as any new, unexpected yet relevant topic areas. I do this by referring to my in-session notes (for IDIs) and the IDI or group discussion audio recording. I then give thoughtful study to, and internalize, each participant’s lived experience associated with the research questions and objectives, which enables me to gain an understanding of the complexities of any one thought or idea while also respectfully preserving the integrity of the individual or group of individuals. “Preserving the integrity of the individual or group of individuals” is an important component of this approach, grounded in the belief that researchers have a moral obligation to make a concerted effort to uphold each participant’s individuality, to the extent possible, in the analytical process.

At the completion of the final IDI or focus group discussion, I begin reflecting more heavily on what I learned from each participant …

Read Full Text

Sample Size in Qualitative Research & the Risk of Relying on Saturation

Qualitative and quantitative research designs require the researcher to think carefully about how and how many to sample within the population segment(s) of interest related to the research objectives. In doing so, the researcher considers demographic and cultural diversity, as well as other distinguishing characteristics (e.g., usage of a particular service or product) and pragmatic issues (e.g., access and resources). In qualitative research, the number of events (i.e., the number of in-depth interviews, focus group discussions, or observations) and participants is often considered at the early design stage of the research and then again during the field stage (i.e., when the interviews, discussions, or observations are being conducted). This two-stage approach, however, can be problematic. One reason is that stating an accurate sample size prior to data collection can be difficult, particularly when the researcher expects the number to change as the result of in-the-field decisions.

Another potential problem arises when researchers rely solely on the concept of saturation to assess sample size when in the field. In grounded theory, theoretical saturation

“refers to the point at which gathering more data about a theoretical category reveals no new properties nor yields any further theoretical insights about the emerging grounded theory.” (Charmaz, 2014, p. 345)

In the broader sense, Morse (1995) defines saturation as “‘data adequacy’ [or] collecting data until no new information is obtained” (p. 147).
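Morse’s definition lends itself to a simple operational illustration. The sketch below is a hypothetical operationalization, not any published instrument: it tracks the codes (themes) identified in each successive interview and flags the point after which a chosen number of consecutive interviews contribute nothing new. The function name, the `run_length` parameter, and the example codes are all illustrative assumptions.

```python
def saturation_point(codes_per_interview, run_length=2):
    """Return the 1-based index of the last interview that yielded a new
    code before `run_length` consecutive interviews added nothing new,
    or None if that stopping rule is never met.

    `codes_per_interview` is a list of sets, one per interview, each
    holding the codes (topics/themes) identified in that interview.
    """
    seen = set()      # all codes observed so far
    no_new_run = 0    # length of the current run of uninformative interviews
    for i, codes in enumerate(codes_per_interview, start=1):
        new_codes = codes - seen
        seen |= codes
        if new_codes:
            no_new_run = 0
        else:
            no_new_run += 1
            if no_new_run >= run_length:
                return i - run_length  # last interview that added information
    return None

# Hypothetical coding of six interviews
interviews = [
    {"cost", "trust"},
    {"trust", "access"},
    {"cost", "habit"},
    {"trust"},           # nothing new
    {"cost", "access"},  # nothing new
    {"habit"},           # nothing new
]
print(saturation_point(interviews))  # 3
```

Note that this kind of tally captures only the presence or absence of “new information,” which is precisely its limitation: it says nothing about the quality or depth of the data, the very concerns raised below.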

Reliance on the concept of saturation presents two overarching concerns: 1) As discussed in two earlier articles in Research Design Review – “Beyond Saturation: Using Data Quality Indicators to Determine the Number of Focus Groups to Conduct” and “Designing a Quality In-depth Interview Study: How Many Interviews Are Enough?” – the emphasis on saturation has the potential to obscure other important considerations in qualitative research design, such as data quality; and 2) Saturation as an assessment tool potentially leads the researcher to focus on the obvious “new information” obtained by each interview, group discussion, or observation rather than gaining a deeper sense of participants’ contextual meaning and a more profound understanding of the research question. As Morse (1995) states,

“Richness of data is derived from detailed description, not the number of times something is stated…It is often the infrequent gem that puts other data into perspective, that becomes the central key to understanding the data and for developing the model. It is the implicit that is interesting.” (p. 148)

With this as a backdrop, a couple of recent articles on saturation come to mind. In “A Simple Method to Assess and Report Thematic Saturation in Qualitative Research” (Guest, Namey, & Chen, 2020), the authors present a novel approach to assessing sample size in the in-depth interview method that can be applied during or after data collection. This approach is born from …

Read Full Text