Research Analysis

The Important Role of “Buckets” in Qualitative Data Analysis

An earlier article in Research Design Review – “Finding Connections & Making Sense of Qualitative Data” – discusses the idea that a quality approach to qualitative research design incorporates a carefully considered plan for analyzing, and making sense of, the data in order to produce outcomes that are ultimately useful to the users of the research. Specifically, that article touches on the six recommended steps in the analysis process.* These steps might be thought of as a variation of the classic Braun & Clarke (2006) thematic analysis scheme in that the researcher begins by selecting a unit of analysis (and thus becoming familiar with the data), which is then followed by a coding process.

Unique to the six-step process outlined in the earlier RDR article is the step that comes after coding. Rather than immediately digging into the codes in search of themes, it is recommended that the researcher look through the codes to identify categories. These categories essentially represent buckets of codes that are deemed to share an underlying construct or meaning. In the end, the researcher is left with any number of buckets, each filled with a few or many codes, from which patterns or themes in the data overall can be identified. Importantly, any of the codes within a category, or bucket, can (and probably will) be used to define more than one theme.
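For readers who think in data structures, the codes-to-buckets-to-themes relationship described above can be sketched as a simple set of mappings. The code labels, bucket names, and themes below are entirely hypothetical, invented only to illustrate the point that a single code can inform more than one theme:

```python
# Illustrative sketch only: hypothetical descriptive codes from a coding step
codes = ["fee transparency", "hidden charges", "responsive support",
         "dedicated account rep", "online dashboard"]

# Categorization: each bucket groups codes sharing an underlying construct
buckets = {
    "cost clarity": ["fee transparency", "hidden charges"],
    "service relationship": ["responsive support", "dedicated account rep"],
    "self-service tools": ["online dashboard"],
}

# Themes draw on codes across buckets; the same code may help define
# more than one theme
themes = {
    "trust in the provider": ["fee transparency", "responsive support"],
    "ease of doing business": ["responsive support", "online dashboard"],
}

# Find codes that contribute to multiple themes
shared = [c for c in codes
          if sum(c in members for members in themes.values()) > 1]
print(shared)  # ['responsive support']
```

This is not software the researcher needs to write; it simply makes concrete the point that buckets and themes are overlapping groupings of the same underlying codes, not a strict hierarchy.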

As an example, consider an in-depth interview study with financial managers of a large non-profit organization concerning their key considerations when selecting financial service providers. After the completion of 35 interviews, the researcher absorbs the content, selects the unit of analysis (the entire interview), and develops 75–100 descriptive codes. In the next phase of the process, the researcher combs through the codes looking for participants’ thoughts and comments that convey similar broad meaning related to the research question(s). In doing so, …

The Virtue of Recordings in Qualitative Analysis

A February 2017 article posted in Research Design Review discusses qualitative data transcripts and, specifically, the potential pitfalls of depending only on transcripts in the qualitative analysis process. As stated in the article,

Although serving a utilitarian purpose, transcripts effectively convert the all-too-human research experience that defines qualitative inquiry to the relatively emotionless drab confines of black-on-white text. Gone is the profound mood swing that descended over the participant when the interviewer asked about his elderly mother. Yes, there is text in the transcript that conveys some aspect of this mood but only to the extent that the participant is able to articulate it. Gone is the tone of voice that fluctuated depending on what aspect of the participant’s hospital visit was being discussed. Yes, the transcriptionist noted a change in voice but it is the significance and predictability of these voice changes that the interviewer grew to know over time that is missing from the transcript. Gone is an understanding of the lopsided interaction in the focus group discussion among teenagers. Yes, the analyst can ascertain from the transcript that a few in the group talked more than others but what is missing is the near-indescribable sounds dominant participants made to stifle other participants and the choked atmosphere that pervaded the discussion along with the entire group environment.

Missing from this article is an explicit discussion of the central role that audio and/or video recordings – which accompany verbal qualitative research modes, e.g., face-to-face and telephone group discussions and in-depth interviews (IDIs) – play in the analysis of qualitative data. Researchers who routinely utilize recordings during analysis are more likely to derive valid interpretations of the data while also staying connected to …

Analyzable Qualitative Research: The Total Quality Framework Analyzability Component

A March 2017 article in Research Design Review discussed the Credibility component of the Total Quality Framework (TQF). As stated in the March article, the TQF “offers qualitative researchers a way to think about the quality of their research designs across qualitative methods and irrespective of any particular paradigm or theoretical orientation” and revolves around the four phases of the qualitative research process – data collection, analysis, reporting, and doing something of value with the outcomes (i.e., usefulness). The Credibility piece of the TQF has to do with data collection. The main elements of Credibility are Scope and Data Gathering – i.e., how well the study is inclusive of the population of interest (Scope) and how well the data collected accurately represent the constructs the study set out to investigate (Data Gathering).

The present article briefly describes the second TQF component – Analyzability. Analyzability is concerned with the “completeness and accuracy of the analysis and interpretations” of the qualitative data derived in data collection and consists of two key parts – Processing and Verification. Processing involves the careful consideration of …