Research Analysis

The Virtue of Recordings in Qualitative Analysis

A February 2017 article posted in Research Design Review discusses qualitative data transcripts and, specifically, the potential pitfalls of depending solely on transcripts in the qualitative analysis process. As stated in the article,

Although serving a utilitarian purpose, transcripts effectively convert the all-too-human research experience that defines qualitative inquiry to the relatively emotionless drab confines of black-on-white text. Gone is the profound mood swing that descended over the participant when the interviewer asked about his elderly mother. Yes, there is text in the transcript that conveys some aspect of this mood but only to the extent that the participant is able to articulate it. Gone is the tone of voice that fluctuated depending on what aspect of the participant’s hospital visit was being discussed. Yes, the transcriptionist noted a change in voice but it is the significance and predictability of these voice changes that the interviewer grew to know over time that is missing from the transcript. Gone is an understanding of the lopsided interaction in the focus group discussion among teenagers. Yes, the analyst can ascertain from the transcript that a few in the group talked more than others but what is missing is the near-indescribable sounds dominant participants made to stifle other participants and the choked atmosphere that pervaded the discussion along with the entire group environment.

Missing from this article is an explicit discussion of the central role that audio and/or video recordings – which accompany verbal qualitative research modes, e.g., face-to-face and telephone group discussions and in-depth interviews (IDIs) – play in the analysis of qualitative data. Researchers who routinely utilize recordings during analysis are more likely to derive valid interpretations of the data while also staying connected to Read Full Text

Analyzable Qualitative Research: The Total Quality Framework Analyzability Component

A March 2017 article in Research Design Review discussed the Credibility component of the Total Quality Framework (TQF). As stated in the March article, the TQF “offers qualitative researchers a way to think about the quality of their research designs across qualitative methods and irrespective of any particular paradigm or theoretical orientation” and revolves around the four phases of the qualitative research process – data collection, analysis, reporting, and doing something of value with the outcomes (i.e., usefulness). The Credibility piece of the TQF has to do with data collection. The main elements of Credibility are Scope and Data Gathering – i.e., how well the study is inclusive of the population of interest (Scope) and how well the data collected accurately represent the constructs the study set out to investigate (Data Gathering).

The present article briefly describes the second TQF component – Analyzability. Analyzability is concerned with the “completeness and accuracy of the analysis and interpretations” of the qualitative data derived from data collection and consists of two key parts – Processing and Verification. Processing involves the careful consideration of: Read Full Text

Words Versus Meanings

There is a significant hurdle that researchers face when considering the addition of qualitative methods to their research designs. This has to do with the analysis – making sense – of the qualitative data. One could argue that there are certainly other hurdles that lie ahead, such as those related to a quality approach to data collection, but the greatest perceived obstacle seems to reside in how to efficiently analyze qualitative outcomes. This means that researchers working in large organizations that hope to conduct many qualitative studies over the course of a year are looking for a relatively fast and inexpensive analysis solution compared to the traditionally more laborious, thought-intensive efforts utilized by qualitative researchers.

Among these researchers, efficiency is defined in terms of speed and cost, and for these reasons they gravitate toward text analytics programs and models powered by underlying algorithms. The core of modeling solutions – such as word2vec and topic modeling – rests on “training” text corpora to produce vectors or clusters of co-occurring words or topics. Any number of programs support these types of analytics, including those that incorporate data visualization functions enabling the researcher to see how words or topics congregate (or not), producing images such as these Read Full Text
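
To make the preceding paragraph concrete, below is a minimal sketch – not taken from the article – of the kind of topic-modeling workflow such programs automate, written with Python’s scikit-learn library. The toy interview excerpts, the choice of two topics, and the top-word display are all illustrative assumptions; a word2vec workflow (e.g., with gensim) would follow a similar pattern of fitting a model to a tokenized corpus, returning word vectors rather than topics.

```python
# A minimal, illustrative sketch of a topic-modeling workflow using
# scikit-learn. The documents and parameter values are toy assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical interview excerpts standing in for a real text corpus.
docs = [
    "the nurse explained the hospital visit and the long wait",
    "waiting at the hospital made the visit stressful for my mother",
    "the focus group talked about school, friends, and social media",
    "teenagers in the group discussed friends and social media constantly",
]

# Convert the corpus to a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit a two-topic LDA model; each topic is a distribution over words.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic -- the "clusters of co-occurring words"
# the analyst is then left to interpret.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")
```

Even in this tiny example, the output is only a set of co-occurring words; deciding what, if anything, those clusters mean for the research questions remains an interpretive, human task.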