A March 2017 article in Research Design Review discussed the Credibility component of the Total Quality Framework (TQF). As stated in the March article, the TQF “offers qualitative researchers a way to think about the quality of their research designs across qualitative methods and irrespective of any particular paradigm or theoretical orientation” and revolves around the four phases of the qualitative research process – data collection, analysis, reporting, and doing something of value with the outcomes (i.e., usefulness). The Credibility piece of the TQF has to do with data collection. The main elements of Credibility are Scope and Data Gathering – i.e., how well the study is inclusive of the population of interest (Scope) and how well the data collected accurately represent the constructs the study set out to investigate (Data Gathering).
The present article briefly describes the second TQF component – Analyzability. Analyzability is concerned with the “completeness and accuracy of the analysis and interpretations” of the qualitative data derived in data collection and consists of two key parts – Processing and Verification. Processing involves the careful consideration of:
There is a significant hurdle that researchers face when considering the addition of qualitative methods to their research designs. This has to do with the analysis – the making sense – of the qualitative data. One could argue that other hurdles certainly lie ahead, such as those related to a quality approach to data collection, but the greatest perceived obstacle seems to reside in how to efficiently analyze qualitative outcomes. This means that researchers working in large organizations that hope to conduct many qualitative studies over the course of a year are looking for a relatively fast and inexpensive analysis solution compared to the traditionally more laborious, thought-intensive efforts utilized by qualitative researchers.
Among these researchers, efficiency is defined in terms of speed and cost. And for these reasons they gravitate to text analytic programs and models powered by underlying algorithms. The core of modeling solutions – such as word2vec and topic modeling – rests on “training” text corpora to produce vectors or clusters of co-occurring words or topics. There are any number of programs that support these types of analytics, including those that incorporate data visualization functions that enable the researcher to see how words or topics congregate (or not), producing images such as these.
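The raw signal these modeling tools build on – how often words appear together – can be illustrated with a minimal sketch. The toy “corpus” and the document-level co-occurrence window below are invented for illustration; actual word2vec or topic-modeling implementations involve far more machinery (embeddings, sampling, inference) than this tally.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for a "training" text collection (invented for illustration).
docs = [
    "battery life is short but battery charge is fast",
    "screen quality is great and screen size is large",
    "battery drains fast when screen brightness is high",
]

# Count how often word pairs co-occur within the same document --
# the kind of raw co-occurrence signal that vector and topic models start from.
cooccurrence = Counter()
for doc in docs:
    words = set(doc.split())
    for pair in combinations(sorted(words), 2):
        cooccurrence[pair] += 1

# Pairs seen in more than one document are the ones that "congregate."
frequent = [pair for pair, n in cooccurrence.items() if n > 1]
```

A visualization layer then renders clusters of such frequently co-occurring pairs as the word maps the article describes; the tally itself carries no sense of context or meaning.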
A recent webinar on the ins and outs of qualitative research stated that qualitative data could be quantified by simply counting the codes associated with some aspect of the data content, such as the number of times a particular brand name is mentioned or a specific sentiment is expressed towards a topic of interest. The presenter asserted that, by counting these codes, the researcher has in effect “converted” qualitative to quantitative data.
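The “conversion” the presenter described is, mechanically, nothing more than a frequency tally. As a sketch – the coded transcript segments and code labels below are hypothetical:

```python
from collections import Counter

# Hypothetical coded transcript segments: each segment was tagged by an
# analyst with one or more codes (brand mentions, sentiments, etc.).
coded_segments = [
    {"codes": ["BrandA", "positive"]},
    {"codes": ["BrandB"]},
    {"codes": ["BrandA", "negative"]},
    {"codes": ["BrandA", "positive"]},
]

# The claimed qualitative-to-quantitative "conversion" is a simple count.
tally = Counter(code for seg in coded_segments for code in seg["codes"])
```

Whether such a tally actually constitutes quantitative evidence is, of course, the question this article takes up.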
This way of thinking is not unlike that of researchers who contend that useful quantitative data can be derived from qualitative findings by counting the number of “votes” for a particular concept or some aspect of the research subject matter. Let’s say a moderator asks group participants to rate a new product idea on a modest four-point scale from “like very much” to “do not like at all.” Or, an interviewer conducting qualitative in-depth interviews (IDIs) asks each of the 30 participants to rate their agreement with statements pertaining to the advantages of digital technology on a scale from “strongly agree” to “strongly disagree.” It is the responses to these types of questions that some researchers gather up as votes and report as quantitative evidence.
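What such vote counting looks like in practice can be sketched in a few lines. The response counts below are invented for illustration; only the 30-participant IDI scenario and the agreement scale come from the example above.

```python
from collections import Counter

# Hypothetical ratings from the 30 IDI participants on the four-point
# agreement scale described above (counts invented for illustration).
responses = (
    ["strongly agree"] * 9 + ["agree"] * 12 +
    ["disagree"] * 6 + ["strongly disagree"] * 3
)

votes = Counter(responses)
# The percentages that "vote counting" researchers would report as evidence.
percentages = {level: 100 * n / len(responses) for level, n in votes.items()}
```

The arithmetic is trivial; the contested point is what, if anything, these percentages mean when drawn from a small, non-probability qualitative sample.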
By asserting that codes and votes can be counted and hence transform a portion of qualitative findings