Analyzability

Qualitative Data Processing: Minding the Knowledge Gaps

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 34-37).

Once all the data for a qualitative study have been created and gathered, they are rarely ready to be analyzed without further work of some kind. At this stage the researcher is working with preliminary data from a collective dataset that most often must be processed in any number of ways before “sense making” can begin.

For example, it may happen that, after the data collection stage of a qualitative study has been completed, the researcher finds that some of the information that was to be gathered from one or more participants is missing. In a focus group study, for instance, the moderator may have forgotten to ask participants in one group discussion to address a particular construct of importance, such as the feeling of isolation among newly diagnosed cancer patients. Or, in a content analysis, a coder may have failed to code an attribute of the content that should have been coded.

In these cases, and following from a Total Quality Framework (TQF) perspective, the researcher has the responsibility to actively decide whether or not to go back and fill in the gap in the data when that is possible. Whatever decision is made about problems discovered during the data processing stage, the researcher working from the TQF perspective should keep these issues in mind when conducting the analyses and interpretations of the findings and when disseminating the findings and recommendations.

It should also be noted that the researcher has the opportunity to mind these gaps during the data collection process itself by continually monitoring interviews or group discussions. As discussed in this Research Design Review article, the researcher should continually review the quality of completions by addressing such questions as “Did every interview cover every question or issue important to the research?” and “Did all interviewees provide clear, unambiguous answers to key questions or issues?” In doing so, the researcher mitigates the potential problem of knowledge gaps in the final data.
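To make this kind of coverage review concrete, the following is a minimal sketch (not from the excerpted article) of how a researcher comfortable with basic Python scripting might flag interviews that never touch on a required construct. The construct names, keyword lists, and sample transcripts are hypothetical placeholders, and simple keyword matching is only a crude first pass – it cannot replace a careful human reading of each transcript.

```python
# Minimal sketch of a coverage check: for each interview, flag any
# required construct that is never mentioned. Constructs, keywords,
# and transcripts below are hypothetical placeholders.

REQUIRED_CONSTRUCTS = {
    "isolation": ["isolat", "alone", "lonel"],
    "support_network": ["family", "friend", "support group"],
}

def find_gaps(transcripts: dict[str, str]) -> dict[str, list[str]]:
    """Return, for each interview ID, the constructs never touched on."""
    gaps = {}
    for interview_id, text in transcripts.items():
        lowered = text.lower()
        missing = [
            construct
            for construct, keywords in REQUIRED_CONSTRUCTS.items()
            if not any(kw in lowered for kw in keywords)
        ]
        if missing:
            gaps[interview_id] = missing
    return gaps

if __name__ == "__main__":
    sample = {
        "interview_01": "I felt so alone after the diagnosis...",
        "interview_02": "My family was there, and the support group helped.",
    }
    for interview_id, missing in find_gaps(sample).items():
        print(f"{interview_id}: no mention of {', '.join(missing)}")
```

A script like this is best treated as a prompt for follow-up – each flagged interview still needs to be read in full before deciding whether a genuine knowledge gap exists.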


Analyzable Qualitative Research: The Total Quality Framework Analyzability Component

A March 2017 article in Research Design Review discussed the Credibility component of the Total Quality Framework (TQF). As stated in the March article, the TQF “offers qualitative researchers a way to think about the quality of their research designs across qualitative methods and irrespective of any particular paradigm or theoretical orientation” and revolves around the four phases of the qualitative research process – data collection, analysis, reporting, and doing something of value with the outcomes (i.e., usefulness). The Credibility component of the TQF has to do with data collection. Its two main elements are Scope and Data Gathering – that is, how well the study covers and represents the population of interest (Scope) and how well the data collected accurately represent the constructs the study set out to investigate (Data Gathering).

The present article briefly describes the second TQF component – Analyzability. Analyzability is concerned with the “completeness and accuracy of the analysis and interpretations” of the qualitative data derived in data collection and consists of two key parts – Processing and Verification. Processing involves the careful consideration of… Read Full Text

The Limitations of Transcripts: It is Time to Talk About the Elephant in the Room

Transcripts of qualitative in-depth interviews and focus group discussions (as well as ethnographers’ field notes and recordings) are typically an important component in the data analysis process. It is by way of these transcribed accounts of the researcher-participant exchange that analysts hope to re-live each research event and draw meaningful interpretations from the data. Because of the critical role transcripts often play in the analytical process, researchers routinely take steps to ensure the quality of their transcripts. One such step is the selection of a transcriptionist – specifically, employing a transcriptionist whose top priorities are accuracy and thoroughness, and who is knowledgeable about the subject category, sensitive to how people speak in conversation, comfortable with cultural and regional variations in the language, and so on.*

Transcripts take a prominent role, of course, in the utilization of any text analytic or computer-assisted qualitative data analysis software (CAQDAS) program. These software solutions revolve around “data as text,” with any number of built-in features to help sort, count, search, diagram, connect, quote, give context to, and collaborate on the data. Analysts are often instructed to begin the analysis process by absorbing the content of each transcript (by way of multiple readings), followed by a line-by-line inspection of the transcript for relevant code-worthy text. From there, the analyst can work with the codes, taking advantage of the various program features.
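For readers curious about what that line-by-line pass looks like in its most stripped-down form, here is a minimal Python sketch. It is offered purely as an assumption for illustration – it does not represent any particular CAQDAS product, and the codebook, trigger phrases, and transcript lines are hypothetical. Real coding rests on analyst judgment rather than keyword matching; the point is only to show how coded lines can subsequently be sorted, counted, and quoted in context.

```python
# Minimal sketch of line-by-line coding: each transcript line is checked
# against a simple keyword codebook (hypothetical), and matching lines
# are tagged so the codes can later be sorted, counted, and quoted.

from collections import Counter

CODEBOOK = {
    "ISOLATION": ["alone", "isolated", "no one"],
    "COPING": ["coping", "deal with", "manage"],
}

def code_transcript(lines: list[str]) -> list[tuple[int, str, str]]:
    """Return (line number, code, line text) for every coded line."""
    coded = []
    for lineno, line in enumerate(lines, start=1):
        lowered = line.lower()
        for code, triggers in CODEBOOK.items():
            if any(t in lowered for t in triggers):
                coded.append((lineno, code, line.strip()))
    return coded

transcript = [
    "I felt completely alone at first.",
    "Talking to other patients helped me deal with it.",
]
coded = code_transcript(transcript)
for lineno, code, text in coded:
    print(f"line {lineno} [{code}]: {text}")
print(Counter(code for _, code, _ in coded))  # code frequencies
```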

An important yet rarely discussed impediment to deriving meaningful interpretations from… Read Full Text