Qualitative Analysis

Supporting Observational Research

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 217-219), a qualitative methods text covering in-depth interviews, focus group discussions, ethnography, qualitative content analysis, case study, and narrative research.

An important element in the Total Quality Framework Analyzability component is Verification, i.e., taking steps to establish some level of support for the data gathered in order to move the researcher closer to achieving high-quality outcomes. The verification tools at the ethnographer’s disposal go beyond those identified for the in-depth interview (IDI) and group discussion methods in that they include the technique of expanded observation. For example, Lincoln and Guba (1985) stated that it is “more likely that credible findings and interpretations” will come from ethnographic data with “prolonged engagement” in the field and “persistent observation” (p. 301). The former refers to spending adequate time at an observation site to experience the breadth of stimuli and activities relevant to the research, and the purpose of the latter (i.e., persistent observation) is “to identify those characteristics and elements in the situation that are most relevant to the problem or issue” (p. 304)—that is, to provide a depth of understanding of the “salient factors.” Both prolonged engagement and persistent observation speak to the idea of expanding observation in terms of time as well as diligence in exploring variables as they emerge in the observation. Although expanding observations in this way may be unrealistic due to the realities of deadlines and research funding, it is an important verification approach unique to ethnography. When practicable, it is recommended that researchers maximize the time allotted for observation and train observers to look for the unexpected, or to examine more closely seemingly minor occurrences or variables that may ultimately support (or contradict) the observer’s dominant understanding.

The ultimate usefulness of expanded observation is not unlike deviant or negative case analysis. In both instances, the goal is to identify and investigate observational events (or particular variables in these events) that defy explanation or otherwise contradict the general patterns or themes that appear to be emerging from the data. For example, a researcher conducting in-home nonparticipant observations of young mothers …

Qualitative Data Analysis: The Unit of Analysis

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 262-263).

As discussed in two earlier articles in Research Design Review (see “The Important Role of ‘Buckets’ in Qualitative Data Analysis” and “Finding Connections & Making Sense of Qualitative Data”), the selection of the unit of analysis is one of the first steps in the qualitative data analysis process. The “unit of analysis” refers to the portion of content that will be the basis for decisions made during the development of codes. For example, in textual content analyses, the unit of analysis may be at the level of a word, a sentence (Milne & Adler, 1999), a paragraph, an article or chapter, an entire edition or volume, a complete response to an interview question, entire diaries from research participants, or some other level of text. The unit of analysis may not be defined by the content per se but rather by a characteristic of the content originator (e.g., a person’s age), or the unit of analysis might be at the individual level with, for example, each participant in an in-depth interview (IDI) study treated as a case. Whatever the unit of analysis, the researcher will make coding decisions based on various elements of the content, including length, complexity, manifest meanings, and latent meanings based on such nebulous variables as the person’s tone or manner.
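For readers who work with transcripts computationally, the difference between these units can be sketched in a few lines of code. The snippet below is purely illustrative — the transcript text and the splitting rules are invented for this example, not drawn from Roller & Lavrakas — but it shows how the choice of unit changes what segments the coder actually confronts:

```python
# Illustrative only: segmenting the same (invented) transcript excerpt
# at three different units of analysis.
import re

transcript = (
    "I shop online most weeks. It saves time.\n\n"
    "But I still visit the local market on Saturdays. "
    "The sellers know me by name."
)

units = {
    # word-level: each word becomes a potential codable segment
    "word": re.findall(r"\w+", transcript),
    # sentence-level: split after sentence-ending punctuation
    "sentence": [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()],
    # paragraph-level: blank lines mark paragraph boundaries
    "paragraph": [p.strip() for p in transcript.split("\n\n") if p.strip()],
}

for level, segments in units.items():
    print(f"{level}: {len(segments)} segments")
# word: 23 segments / sentence: 4 segments / paragraph: 2 segments
```

The same two short paragraphs yield 23 word-level segments, 4 sentence-level segments, or 2 paragraph-level segments — a concrete reminder that the unit chosen determines how fine- or coarse-grained the subsequent coding decisions will be.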

Deciding on the unit of analysis is a very important decision because it guides the development of codes as well as the coding process. If a weak unit of analysis is chosen, one of two outcomes may result: 1) if the unit chosen is too precise (i.e., at a more micro level than is actually needed), the researcher will set in motion …

The Qualitative Analysis Trap (or, Coding Until Blue in the Face)

There is a trap that is easy to fall into when conducting a thematic-style analysis of qualitative data. The trap revolves around coding and, specifically, the idea that after a general familiarization with the in-depth interview or focus group discussion content the researcher pores over the data scrupulously looking for anything deemed worthy of a code. If you think this process is daunting for the seasoned analyst who has categorized and themed many qualitative data sets, consider the newly initiated graduate student who is learning the process for the first time.

Recent dialogue on social media suggests that graduate students, in particular, are susceptible to falling into the qualitative analysis trap, i.e., the belief that a well-done analysis hinges on developing lots of codes and coding, coding, coding until…well, until the analyst is blue in the face. This is evidenced by overheard comments such as “I thought I finished coding but every day I am finding new content to code” and “My head is buzzing with all the possible directions for themes.”

All of this coding, of course, misses the point. The point of qualitative analysis is not to deconstruct the interview or discussion data into bits and pieces, i.e., codes, but rather to examine the research question from participants’ perspectives …