Qualitative Data Analysis: The Unit of Analysis

The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 262-263).

As discussed in two earlier articles in Research Design Review (see “The Important Role of ‘Buckets’ in Qualitative Data Analysis” and “Finding Connections & Making Sense of Qualitative Data”), the selection of the unit of analysis is one of the first steps in the qualitative data analysis process. The “unit of analysis” refers to the portion of content that will be the basis for decisions made during the development of codes. For example, in textual content analyses, the unit of analysis may be at the level of a word, a sentence (Milne & Adler, 1999), a paragraph, an article or chapter, an entire edition or volume, a complete response to an interview question, entire diaries from research participants, or some other level of text. The unit of analysis may not be defined by the content per se but rather by a characteristic of the content originator (e.g., the person’s age), or the unit of analysis might be at the individual level with, for example, each participant in an in-depth interview (IDI) study treated as a case. Whatever the unit of analysis, the researcher will make coding decisions based on various elements of the content, including length, complexity, manifest meanings, and latent meanings based on such nebulous variables as the person’s tone or manner.

Deciding on the unit of analysis is an important decision because it guides the development of codes as well as the coding process. If a weak unit of analysis is chosen, one of two outcomes may result: 1) If the unit chosen is too precise (i.e., at a more micro level than is actually needed), the researcher will set in motion… [Read Full Text]

The “Real Ethnography” of Michael Agar

Several years ago, while working on Applied Qualitative Research Design, I began reading the works of Michael Agar. To say simply that Agar was an anthropologist would be selling him short; and, indeed, Anthropology News, in an article published shortly after Agar’s death in May 2017, described him as

“a linguistic anthropologist, a cultural anthropologist, almost a South Asianist, a drug expert, a medical anthropologist, an applied anthropologist, a practicing anthropologist, a public anthropologist, a professional anthropologist, a professional stranger, a theoretical anthropologist, an academic anthropologist, an independent consultant, a cross cultural consultant, a computer modeler, an agent-based modeler, a complexity theorist, an environmentalist, a water expert, a teacher…”

One doesn’t need to look far to be enlightened as well as entertained by Mike Agar. On the “Scribblings” page of his Ethknoworks website, he lightheartedly rants about the little money most authors make in royalties, stating, “If you divide money earned by time invested in writing and publishing, you’ll see that you’d do better with a paper route in Antarctica.” It may be this combined ability to enlighten and entertain that drew me to Agar and keeps me ever mindful of the words he wrote and the ideas he instilled.

For some reason I keep coming back to his 2006 article “An Ethnography By Any Other Name…”. In it, Agar explores the question “What is a real ethnography?” through discussions of the debates (“tension”) between anthropologists and sociologists, and of various nuances such as whether applied anthropology is actually “real” given that “ethnography no longer meant a year or more by yourself in a village far… [Read Full Text]

Pigeonholing Qualitative Data: Why Qualitative Responses Cannot Be Quantified

A recent webinar on the ins and outs of qualitative research stated that qualitative data can be quantified simply by counting the codes associated with some aspect of the data content, such as the number of times a particular brand name is mentioned or a specific sentiment is expressed toward a topic of interest. The presenter asserted that, by counting these codes, the researcher has in effect “converted” qualitative data to quantitative data.

This way of thinking is not unlike that of researchers who contend that useful quantitative data can be derived from qualitative findings by counting the number of “votes” for a particular concept or some aspect of the research subject matter. Let’s say a moderator asks group participants to rate a new product idea on a modest four-point scale from “like very much” to “do not like at all.” Or an interviewer conducting qualitative in-depth interviews (IDIs) asks each of 30 participants to rate their agreement with statements pertaining to the advantages of digital technology on a scale from “strongly agree” to “strongly disagree.” It is the responses to these types of questions that some researchers gather up as votes and report as quantitative evidence.

By asserting that codes and votes can be counted and hence transform a portion of qualitative findings… [Read Full Text]