It is easy to fall into the trap of relying on the “why” question when conducting qualitative research. After all, the use of qualitative research is often supported with the claim that qualitative methods enable the researcher to reach beyond numerical data to grasp the meaning and motivations – that is, the why – associated with particular attitudes and behavior. And it is in this spirit that researchers frequently find themselves with interview and discussion guides full of “why” questions – Why do you say you are happy? Why do you prefer one political candidate over another? Why do you diet? Why do you believe in God? Why do you use a tablet rather than a laptop computer?
Yet “why” is rarely the question worth asking. In fact, asking “why” questions can have a negative effect on data collection (i.e., Credibility) and introduce bias into qualitative data. This happens for many reasons; here are just four:
The “why” question potentially
• Evokes rationality. By asking the “why” question, researchers are in essence asking participants to justify their attitudes and behavior. In contemplating a justification, it is not unusual for participants to seek …
A recent webinar on the ins and outs of qualitative research stated that qualitative data could be quantified by simply counting the codes associated with some aspect of the data content, such as the number of times a particular brand name is mentioned or a specific sentiment is expressed toward a topic of interest. The presenter asserted that, by counting these codes, the researcher has in effect “converted” qualitative to quantitative data.
This way of thinking is not unlike that of researchers who contend that useful quantitative data can be derived from qualitative findings by counting the number of “votes” for a particular concept or some aspect of the research subject matter. Let’s say a moderator asks group participants to rate a new product idea on a modest four-point scale from “like very much” to “do not like at all.” Or, an interviewer conducting qualitative in-depth interviews (IDIs) asks each of the 30 participants to rate their agreement with statements pertaining to the advantages of digital technology on a scale from “strongly agree” to “strongly disagree.” It is the responses to these types of questions that some researchers gather up as votes and report as quantitative evidence.
By asserting that codes and votes can be counted and hence transform a portion of qualitative findings …
Samantha Heintzelman and Laura King, at the University of Missouri, published an article in American Psychologist in 2014 titled “Life is Pretty Meaningful.” In this article the authors discuss their work exploring the “lofty” question “How meaningful is life, in general?” To do this, Heintzelman and King examined two broad categories of data sources: 1) large-scale surveys – six representative surveys conducted in the U.S. and a worldwide poll; and 2) articles published in the literature that explicitly report on research studies utilizing one of two established measures of meaning in life – the Purpose in Life Test (PIL) and the Meaning in Life Questionnaire (MLQ). The large-scale surveys asked yes-or-no questions such as “Did you feel that your life has meaning [in the past 12 months]?” as well as agree-disagree rating scale items such as “My life has a real purpose.” Their analysis of these surveys concluded that “for most people, life is meaningful [and] comparatively few …