Actively Conducting an Analysis to Construct an Interpretation

It is not uncommon for researchers who are reporting the results of their quantitative studies to go beyond describing their numerical data and attempt to interpret the meaning associated with this data. For example, in a survey concerning services at a healthcare facility, the portion of respondents who selected the midpoint on a five-point scale to rate the improvement of these services from the year before might be interpreted as having a neutral opinion, i.e., these respondents believe the caliber of services has remained the same, neither better nor worse than a year earlier. And yet there are other interpretations of the midpoint response that may be equally viable. These respondents may not know whether the services have improved or not (e.g., they were not qualified to answer the question). Or, these respondents may believe that the services have gotten worse but are reluctant to give a negative opinion.

Survey researchers fall into this gray area of interpretation because they often lack the tools to build a knowledgeable understanding of vague data types, such as scale midpoints. Unless the study is a hybrid research design (i.e., a quantitative study that incorporates qualitative components), the researcher is left to guess respondents’ meaning.

In contrast, the unique attributes of qualitative research methods offer researchers the tools they need to construct informed interpretations of their data. Through attention to context, latent (as well as manifest) meanings, the participant-researcher relationship, and other fundamentals of qualitative research, the trained researcher collects thick data from which to build an interpretation that addresses the research objectives in a way that is profound and valuable to the users of the research.

Qualitative data analysis is a process by which the researcher is actively involved in the creation of themes from the data and the interpretation within and across themes to construct results that move the topic of investigation forward in some meaningful way. This active involvement is central to what it means to conduct qualitative research. Faithful to the principles that define qualitative research, researchers do not rest on manifest content, such as words alone, or on automated tools that exploit the obvious, such as word clouds.

This is another way of saying — as stated in this article on sample size and saturation — that “themes do not simply pop up…but rather are the result of actively conducting an analysis to construct an interpretation.” As Staller (2015) states, “In lieu of the language of ‘discovering’ things with its positivistic roots, the researcher is actually interpreting the evidence” (p. 147).

Braun and Clarke (2006, 2016, 2019, 2021) have written extensively about the idea that “themes do not passively emerge” (2019, p. 594, italics in original) from thematic analysis and that meaning

is not inherent or self-evident in data, that meaning resides at the intersection of the data and the researcher’s contextual and theoretically embedded interpretative practices – in short, that meaning requires interpretation. (2021, p. 210)

An article posted in 2018 in Research Design Review — “The Important Role of ‘Buckets’ in Qualitative Data Analysis” — illustrates this point. The article discusses the analytical step of creating categories (or “buckets”) of codes representing shared constructs prior to building themes. As an example, the discussion focuses on three categories that were developed from an in-depth interview study with financial managers — Technology, Partner, Communication. The researcher constructed themes by looking within and across categories, considering the meaning and context associated with each code. One such theme was “strong partnership,” as illustrated below.

[Figure: Themes from buckets]

The theme “strong partnership” did not simply emerge from the data, it was not lying in the data waiting to be discovered. Rather, the researcher utilized their analytical skills, in conjunction with their constructed understanding of each participant’s contribution to the data, to create contextually sound, meaningful themes such as “strong partnership.” Then, with the depth of definition associated with each theme, the researcher looked within and across themes to build an interpretation of the research data targeted at the research objectives, and provided the users of the research with a meaningful path forward.
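The codes-to-buckets-to-themes progression can be sketched as a simple data structure. This is only an illustration: the three category names come from the article, but the individual codes and the way the “strong partnership” theme draws on them are hypothetical, standing in for the researcher’s interpretive work.

```python
# Illustrative sketch of codes grouped into categories ("buckets").
# Category names (Technology, Partner, Communication) are from the
# article; the codes themselves are hypothetical examples.
codes_by_category = {
    "Technology": ["shared platforms", "data access"],
    "Partner": ["trusted advisor", "joint decision making"],
    "Communication": ["frequent check-ins", "transparent reporting"],
}

# A theme is constructed by the researcher looking within and across
# categories; the structure below only records which codes support it.
theme = {
    "label": "strong partnership",
    "supporting_codes": [
        code
        for category in ("Partner", "Communication", "Technology")
        for code in codes_by_category[category]
    ],
}

print(theme["label"])                   # strong partnership
print(len(theme["supporting_codes"]))   # 6
```

The structure captures only the bookkeeping; the analytical judgment — why these codes, in context, amount to “strong partnership” — remains with the researcher, which is the article’s point.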

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Braun, V., & Clarke, V. (2016). (Mis)conceptualising themes, thematic analysis, and other problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis. International Journal of Social Research Methodology, 19(6), 739–743. https://doi.org/10.1080/13645579.2016.1195588

Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. https://doi.org/10.1080/2159676X.2019.1628806

Braun, V., & Clarke, V. (2021). To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Research in Sport, Exercise and Health, 13(2), 201–216. https://doi.org/10.1080/2159676X.2019.1704846

Staller, K. M. (2015). Qualitative analysis: The art of building bridging relationships. Qualitative Social Work, 14(2), 145–153. https://doi.org/10.1177/1473325015571210

Sample Size in Qualitative Research & the Risk of Relying on Saturation

Qualitative and quantitative research designs require the researcher to think carefully about how and how many to sample within the population segment(s) of interest related to the research objectives. In doing so, the researcher considers demographic and cultural diversity, as well as other distinguishing characteristics (e.g., usage of a particular service or product) and pragmatic issues (e.g., access and resources). In qualitative research, the number of events (i.e., the number of in-depth interviews, focus group discussions, or observations) and participants is often considered at the early design stage of the research and then again during the field stage (i.e., when the interviews, discussions, or observations are being conducted). This two-stage approach, however, can be problematic. One reason is that specifying an accurate sample size prior to data collection can be difficult, particularly when the researcher expects the number to change as the result of in-the-field decisions.

Another potential problem arises when researchers rely solely on the concept of saturation to assess sample size when in the field. In grounded theory, theoretical saturation

“refers to the point at which gathering more data about a theoretical category reveals no new properties nor yields any further theoretical insights about the emerging grounded theory.” (Charmaz, 2014, p. 345)

In the broader sense, Morse (1995) defines saturation as “‘data adequacy’ [or] collecting data until no new information is obtained” (p. 147).

Reliance on the concept of saturation presents two overarching concerns: 1) As discussed in two earlier articles in Research Design Review – “Beyond Saturation: Using Data Quality Indicators to Determine the Number of Focus Groups to Conduct” and “Designing a Quality In-depth Interview Study: How Many Interviews Are Enough?” – the emphasis on saturation has the potential to obscure other important considerations in qualitative research design, such as data quality; and 2) Saturation as an assessment tool potentially leads the researcher to focus on the obvious “new information” obtained by each interview, group discussion, or observation rather than on gaining a deeper sense of participants’ contextual meaning and a more profound understanding of the research question. As Morse (1995) states,

“Richness of data is derived from detailed description, not the number of times something is stated…It is often the infrequent gem that puts other data into perspective, that becomes the central key to understanding the data and for developing the model. It is the implicit that is interesting.” (p. 148)

With this as a backdrop, a couple of recent articles on saturation come to mind. In “A Simple Method to Assess and Report Thematic Saturation in Qualitative Research” (Guest, Namey, & Chen, 2020), the authors present a novel approach to assessing sample size in the in-depth interview method that can be applied during or after data collection. This approach is born from …
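The general logic of an assessment like this — comparing the new information contributed by successive interviews to an initial base of data — can be sketched in a few lines. Note that this is not Guest, Namey, and Chen’s published procedure: the function, the base size and run length values, and the interview themes below are all illustrative assumptions in the spirit of such a check.

```python
# Hedged sketch of a thematic-saturation check: after an initial "base"
# set of interviews, each subsequent "run" of interviews is scored by
# how many previously unseen themes it contributes, relative to the
# number of themes in the base. All parameters and data are illustrative.

def new_information_ratio(themes_per_interview, base_size, run_length):
    """Yield (interviews_completed, ratio_of_new_themes) per run."""
    base = set()
    for themes in themes_per_interview[:base_size]:
        base.update(themes)
    seen = set(base)
    for start in range(base_size, len(themes_per_interview), run_length):
        run = themes_per_interview[start:start + run_length]
        new = set()
        for themes in run:
            new.update(t for t in themes if t not in seen)
        seen.update(new)
        yield start + len(run), len(new) / len(base)

# Hypothetical themes identified in each of 8 interviews
interviews = [
    {"access", "cost"}, {"cost", "trust"}, {"trust", "wait times"},
    {"access"}, {"staff turnover"}, {"cost"}, {"trust"}, {"access"},
]

for n, ratio in new_information_ratio(interviews, base_size=4, run_length=2):
    print(n, round(ratio, 2))   # prints: 6 0.25, then 8 0.0
```

A falling ratio suggests diminishing new information — but, as the Morse quote above cautions, a count like this says nothing about the “infrequent gem” that may matter most, which is precisely the limitation the article raises.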

Cognitive Interviewing: A Few Best Practices

Cognitive interviewing is a method used by survey researchers to investigate the integrity of their questionnaire designs prior to launching the field portion of the study. In the edited volume Cognitive Interviewing Methodology, Kristen Miller (2014) describes cognitive interviewing as “a qualitative method that examines the question-response process, specifically the processes and considerations used by respondents as they form answers to survey questions,” further explaining that “through the interviewing process, various types of question-response problems that would not normally be identified in a traditional survey interview, such as interpretive errors and recall accuracy, are uncovered” (p. 2). In this way, survey researchers identify the users’ (i.e., survey respondents’) possible meaning and interpretation of survey questions – having to do with question structure or format and terminology – that may or may not deviate from the researcher’s intent. Importantly, the objective of the cognitive interview is not simply to determine whether a questionnaire item “makes sense” to an individual but to go beyond that to explore the individual’s lived experience (personal context, attitudes, perceptions, behavior) in relation to their interpretation of and/or ability to answer a particular question.

Although not typically included under the “qualitative research” umbrella (with in-depth interviewing, focus group discussions, and observation), four of the 10 unique attributes associated with qualitative research are notably relevant to the cognitive interviewing method. They are: the importance of meaning, flexibility of design, the participant-researcher relationship, and the researcher skill set. These distinctive qualities of the cognitive interviewing method, and qualitative methods generally, define why researchers opt for …