Analyzability

The Qualitative Analysis Trap (or, Coding Until Blue in the Face)

There is a trap that is easy to fall into when conducting a thematic-style analysis of qualitative data. The trap revolves around coding and, specifically, the idea that, after a general familiarization with the in-depth interview or focus group discussion content, the researcher pores over the data scrupulously, looking for anything deemed worthy of a code. If you think this process is daunting for the seasoned analyst who has categorized and themed many qualitative data sets, consider the newly initiated graduate student who is learning the process for the first time.

Recent dialog on social media suggests that graduate students, in particular, are susceptible to falling into the qualitative analysis trap, i.e., the belief that a well-done analysis hinges on developing lots of codes and coding, coding, coding until…well, until the analyst is blue in the face. This is evident in overheard comments such as “I thought I finished coding but every day I am finding new content to code” and “My head is buzzing with all the possible directions for themes.”

All of this coding, of course, misses the point. The point of qualitative analysis is not to deconstruct the interview or discussion data into bits and pieces, i.e., codes, but rather to define the research question from participants’ perspectives. Read Full Text

Qualitative Data Processing: Minding the Knowledge Gaps

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 34-37).

Once all the data for a qualitative study have been created and gathered, they are rarely ready to be analyzed without further analytic work of some nature being done. At this stage the researcher is working with preliminary data from a collective dataset that most often must be processed in any number of ways before “sense making” can begin.

For example, it may happen that after the data collection stage has been completed in a qualitative research study, the researcher finds that some of the information that was to be gathered from one or more participants is missing. In a focus group study, for instance, the moderator may have forgotten to ask participants in one group discussion to address a particular construct of importance—such as the feeling of isolation among newly diagnosed cancer patients. Or, in a content analysis, a coder may have failed to code an attribute in an element of the content that should have been coded.

In these cases, and following from a Total Quality Framework (TQF) perspective, the researcher has the responsibility to actively decide whether or not to go back and fill in the gap in the data when that is possible. Whatever the researcher decides about problems discovered during the data processing stage, a researcher working from the TQF perspective should keep these issues in mind when conducting the analyses and interpretations of the findings and when disseminating the findings and recommendations.

It should also be noted that the researcher has the opportunity to mind these gaps during the data collection process itself by continually monitoring interviews or group discussions. As discussed in this Research Design Review article, the researcher should continually review the quality of completions by addressing such questions as “Did every interview cover every question or issue important to the research?” and “Did all interviewees provide clear, unambiguous answers to key questions or issues?” In doing so, the researcher mitigates the potential problem of knowledge gaps in the final data.
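
To make this kind of monitoring concrete, the sketch below is a hypothetical illustration only (it is not part of the TQF or of the original article): it screens each interview transcript for keywords associated with the constructs of interest and flags interviews in which a construct appears never to have come up, so the researcher can revisit those interviews while data collection is still under way. The transcript identifiers, constructs, and keywords are invented for the example, and keyword matching is only a rough first pass—flagged transcripts still need to be read and judged by the researcher.

```python
# Hypothetical sketch: flag interviews that may have missed a key construct.
# Keyword matching is only a rough screen; the researcher still reviews the
# flagged transcripts to confirm whether the topic was genuinely addressed.

transcripts = {
    "interview_01": "…full transcript text…",
    "interview_02": "…full transcript text…",
}

# Constructs of interest mapped to keywords suggesting they were discussed.
constructs = {
    "isolation": ["isolation", "isolated", "alone", "lonely"],
    "diagnosis_experience": ["diagnosis", "diagnosed", "finding out"],
}


def find_gaps(transcripts, constructs):
    """Return {interview_id: [constructs with no keyword match]}."""
    gaps = {}
    for interview_id, text in transcripts.items():
        text_lower = text.lower()
        missing = [
            construct
            for construct, keywords in constructs.items()
            if not any(kw in text_lower for kw in keywords)
        ]
        if missing:
            gaps[interview_id] = missing
    return gaps


for interview_id, missing in find_gaps(transcripts, constructs).items():
    print(f"{interview_id}: possible gap on {', '.join(missing)}")
```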

Image captured from: https://modernpumpingtoday.com/bridging-the-knowledge-gap-part-1-of-2/

Analyzable Qualitative Research: The Total Quality Framework Analyzability Component

A March 2017 article in Research Design Review discussed the Credibility component of the Total Quality Framework (TQF). As stated in the March article, the TQF “offers qualitative researchers a way to think about the quality of their research designs across qualitative methods and irrespective of any particular paradigm or theoretical orientation” and revolves around the four phases of the qualitative research process – data collection, analysis, reporting, and doing something of value with the outcomes (i.e., usefulness). The Credibility component of the TQF concerns data collection. The main elements of Credibility are Scope and Data Gathering – i.e., how well the study includes the population of interest (Scope) and how well the data collected accurately represent the constructs the study set out to investigate (Data Gathering).

The present article briefly describes the second TQF component – Analyzability. Analyzability is concerned with the “completeness and accuracy of the analysis and interpretations” of the qualitative data derived from data collection and consists of two key parts – Processing and Verification. Processing involves the careful consideration of: Read Full Text