Qualitative Analysis

Qualitative Data Analysis: The Unit of Analysis

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 262-263).

As discussed in two earlier articles in Research Design Review (see “The Important Role of ‘Buckets’ in Qualitative Data Analysis” and “Finding Connections & Making Sense of Qualitative Data”), the selection of the unit of analysis is one of the first steps in the qualitative data analysis process. The “unit of analysis” refers to the portion of content that will be the basis for decisions made during the development of codes. For example, in textual content analyses, the unit of analysis may be at the level of a word, a sentence (Milne & Adler, 1999), a paragraph, an article or chapter, an entire edition or volume, a complete response to an interview question, entire diaries from research participants, or some other level of text. The unit of analysis may not be defined by the content per se but rather by a characteristic of the content originator (e.g., the person’s age), or the unit of analysis might be at the individual level with, for example, each participant in an in-depth interview (IDI) study treated as a case. Whatever the unit of analysis, the researcher will make coding decisions based on various elements of the content, including length, complexity, manifest meanings, and latent meanings based on such nebulous variables as the person’s tone or manner.

Deciding on the unit of analysis is very important because it guides the development of codes as well as the coding process. If a weak unit of analysis is chosen, one of two outcomes may result: 1) If the unit chosen is too precise (i.e., at a more micro level than is actually needed), the researcher will set in motion … Read Full Text

The Qualitative Analysis Trap (or, Coding Until Blue in the Face)

There is a trap that is easy to fall into when conducting a thematic-style analysis of qualitative data. The trap revolves around coding and, specifically, the idea that, after a general familiarization with the in-depth interview or focus group discussion content, the researcher pores over the data scrupulously, looking for anything deemed worthy of a code. If you think this process is daunting for the seasoned analyst who has categorized and themed many qualitative data sets, consider the newly initiated graduate student who is learning the process for the first time.

Recent dialog on social media suggests that graduate students, in particular, are susceptible to falling into the qualitative analysis trap, i.e., the belief that a well-done analysis hinges on developing lots of codes and coding, coding, coding until…well, until the analyst is blue in the face. This is evident in overheard comments such as “I thought I finished coding but every day I am finding new content to code” and “My head is buzzing with all the possible directions for themes.”

This emphasis on coding, of course, misses the point. The point of qualitative analysis is not to deconstruct the interview or discussion data into bits and pieces, i.e., codes, but rather to define the research question from participants’ perspectives … Read Full Text

Qualitative Data Processing: Minding the Knowledge Gaps

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 34-37).

Once all the data for a qualitative study have been created and gathered, they are rarely ready to be analyzed without further analytic work of some nature being done. At this stage the researcher is working with preliminary data from a collective dataset that most often must be processed in any number of ways before “sense making” can begin.

For example, it may happen that after the data collection stage has been completed in a qualitative research study, the researcher finds that some of the information that was to be gathered from one or more participants is missing. In a focus group study, for instance, the moderator may have forgotten to ask participants in one group discussion to address a particular construct of importance—such as the feeling of isolation among newly diagnosed cancer patients. Or, in a content analysis, a coder may have failed to code an attribute in an element of the content that should have been coded.

In these cases, and following from a Total Quality Framework (TQF) perspective, the researcher has the responsibility to actively decide whether or not to go back and fill in the gap in the data when that is possible. Regardless of what the researcher decides about these potential problems discovered during the data processing stage, the researcher working from the TQF perspective should keep these issues in mind when the analyses and interpretations of the findings are conducted and when the findings and recommendations are disseminated.

It should also be noted that the researcher has the opportunity to mind these gaps during the data collection process itself by continually monitoring interviews or group discussions. As discussed in this Research Design Review article, the researcher should continually review the quality of completions by addressing such questions as “Did every interview cover every question or issue important to the research?” and “Did all interviewees provide clear, unambiguous answers to key questions or issues?” In doing so, the researcher mitigates the potential problem of knowledge gaps in the final data.

Image captured from: https://modernpumpingtoday.com/bridging-the-knowledge-gap-part-1-of-2/