At the 2015 AAPOR conference in Florida, Paul Lavrakas and I taught a short course on qualitative research design. The bulk of the class was spent on applying the unique constructs and techniques associated with the Total Quality Framework (TQF) to five qualitative research methods – in-depth interviews, focus group discussions, ethnography, qualitative content analysis, and case-centered research (i.e., case study and narrative research). But before jumping into the application of the TQF, we began by talking about the distinctive attributes of qualitative research, particularly the emphasis on context and interconnectedness that is inherent in qualitative data. Indeed, we stressed the complexity – the “messiness” – of qualitative data collection and analysis, along with the unparalleled researcher skills (such as flexibility) needed to perform high-quality and ultimately useful qualitative research.
This course was one of only a handful of discussions pertaining to qualitative research at a conference that is heavily weighted toward survey methods. As both a qualitative and quantitative researcher, I find it interesting to sit in session after session, wearing both hats, learning of the latest work in survey research. Most striking in these presentations are survey researchers’ perennial uncertainties and frustrations with the constructs they are trying to measure. This is not new. Survey researchers have always struggled to make sense of their data, with the goal of producing data that aligns as closely as possible with respondents’ thinking (i.e., construct validity). One presenter described her attempts to achieve construct validity as “trying to get it all to line up.”
Philip Brenner – whose work has been discussed elsewhere in this blog – continues to look for “the perfect series of questions” that will account for the many ways people interpret “church attendance.” Kristen Miller is using various techniques to explore the “very subjective” construct of pain – that is, the fact that there are varying interpretations of questions pertaining to “pain.” Erica Yu is concerned with relieving survey respondent burden but worries about the subjective nature of “burden” and how to define “perceived burden” – or what is “burdensome” – in a way that would enable her to modify the questionnaire design to reduce this “burden.” And Josh Pasek, Michael Schober, and others are exploring ways to link Twitter messages with survey data, forcing these researchers to make various assumptions in order to address uncertainties about how individuals use Twitter, tweeters’ true identities, and the “real” (subjective) meaning of their messages.
Which brings us back to qualitative research. As much as survey research serves many essential roles in our society, and “we” are better for it, there are times when the obsession to “get it all to line up” – to neatly account for all interpretations of church attendance, pain, burden, and even our tweets – becomes a fool’s errand without the help of qualitative inquiry. It would be useful, for instance, to add a qualitative component to quantitative studies that enables respondents to explain their meaning throughout the survey, so that each respondent could be routed to the appropriate sections of the questionnaire.
Otherwise, a purely quantitative, data-driven approach – one that excludes a qualitative measure of how people think about the constructs of interest – will continue to leave survey researchers uncertain and frustrated as they go about the business of “trying to get it all to line up.”