A March 2017 article in Research Design Review discussed the Credibility component of the Total Quality Framework (TQF). As stated in the March article, the TQF “offers qualitative researchers a way to think about the quality of their research designs across qualitative methods and irrespective of any particular paradigm or theoretical orientation” and revolves around the four phases of the qualitative research process – data collection, analysis, reporting, and doing something of value with the outcomes (i.e., usefulness). The Credibility piece of the TQF has to do with data collection. The main elements of Credibility are Scope and Data Gathering – i.e., how well the study is inclusive of the population of interest (Scope) and how well the data collected accurately represent the constructs the study set out to investigate (Data Gathering).
The present article briefly describes the second TQF component – Analyzability. Analyzability is concerned with the “completeness and accuracy of the analysis and interpretations” of the qualitative data derived in data collection and consists of two key parts – Processing and Verification. Processing involves the careful consideration of: [Read Full Text]
The February 2017 issue of Qualitative Psychology, the journal of the Society for Qualitative Inquiry in Psychology (SQIP, a section of Division 5 of the American Psychological Association), opens with an article titled “Recommendations for Designing and Reviewing Qualitative Research in Psychology: Promoting Methodological Integrity” (Levitt, Motulsky, Wertz, Morrow, & Ponterotto, 2017). This paper is a report from the SQIP Task Force on Resources for the Publication of Qualitative Research, whose purpose is “to provide resources to support the design and evaluation of qualitative research” and which, by way of this paper, offers “a systematic methodological framework that can be useful for reviewers and authors as they design and evaluate research projects” (p. 7).
Importantly, the “methodological framework” recommended by the authors is decidedly not a procedural playbook, checklist, or how-to guide. Giving researchers “rules” to follow … [Read Full Text]
Transcripts of qualitative in-depth interviews and focus group discussions (as well as ethnographers’ field notes and recordings) are typically an important component in the data analysis process. It is by way of these transcribed accounts of the researcher-participant exchange that analysts hope to re-live each research event and draw meaningful interpretations from the data. Because of the critical role transcripts often play in the analytical process, researchers routinely take steps to ensure the quality of their transcripts. One such step is the selection of a transcriptionist – specifically, employing a transcriptionist whose top priorities are accuracy and thoroughness and who is knowledgeable about the subject category, sensitive to how people speak in conversation, comfortable with cultural and regional variations in the language, etc.*
Transcripts take a prominent role, of course, in the utilization of any text analytic or computer-assisted qualitative data analysis software (CAQDAS) program. These software solutions revolve around “data as text,” with any number of built-in features to help sort, count, search, diagram, connect, quote, give context to, and collaborate on the data. Analysts are often instructed to begin the analysis process by absorbing the content of each transcript (by way of multiple readings), followed by a line-by-line inspection of the transcript for relevant code-worthy text. From there, the analyst can work with the codes, taking advantage of the various program features.
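To make the line-by-line coding step concrete, the sketch below shows – in plain Python, not any particular CAQDAS product – how transcript lines might be tagged against a codebook of keywords. The codebook, the code labels, and the transcript lines are all hypothetical, and real CAQDAS tools rely on the analyst's judgment rather than simple keyword matching; this is only a minimal illustration of the mechanics.

```python
# Hypothetical codebook: each code maps to keywords an analyst
# might flag while reading line by line. Illustrative only.
codebook = {
    "trust": ["trust", "confide", "rely"],
    "access": ["afford", "access", "available"],
}

def code_transcript(lines):
    """Return (line_number, line, matched_codes) for each coded line."""
    coded = []
    for n, line in enumerate(lines, start=1):
        text = line.lower()
        codes = [c for c, kws in codebook.items()
                 if any(kw in text for kw in kws)]
        if codes:
            coded.append((n, line, codes))
    return coded

# Hypothetical transcript excerpt.
transcript = [
    "I just don't trust the clinic staff anymore.",
    "Getting an appointment was easy.",
    "We couldn't afford the follow-up visits.",
]

for n, line, codes in code_transcript(transcript):
    print(n, codes, "-", line)
```

In practice, of course, the analyst – not a keyword list – decides what counts as code-worthy text, which is why multiple close readings of each transcript precede any coding pass.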
An important yet rarely discussed impediment to deriving meaningful interpretations from … [Read Full Text]