Mobile research – specifically, research by way of smartphone technology – has become a widely used and accepted design option for conducting qualitative and survey research. The advantages of the mobile mode are many, not the least of which are: the high incidence of smartphone ownership in the U.S. (more than 60% in 2015), the ubiquitous influence smartphones have on our lives, the dependence people have on their smartphones as their go-to channel for communicating and socializing, and the features of the smartphone that offer a variety of response formats (e.g., text, video, image) and location-specific (e.g., geo-targeting, geo-fencing) capabilities.
From a research design perspective, there are also several limitations to the mobile mode, including: the small screen of the smartphone (making the design of standard scale and matrix questionnaire items – as well as the user experience overall – problematic), the relatively short attention span of the respondent or participant caused by frequent interruptions, the potential for input errors due to touch-screen technology, and connectivity issues.
Another important yet often overlooked concern with mobile research is the potential for bias associated with the smartphone response format and location features mentioned earlier. Researchers have been quick to embrace the ability to capture video and photographs as well as location information, yet they have not universally exercised caution when integrating these features into their research designs. For example, a recent webinar in which a qualitative researcher presented the virtues of mobile qualitative research – especially for documenting in-the-moment experiences – espoused the advantages of …
A recent webinar on the ins-and-outs of qualitative research stated that qualitative data could be quantified by simply counting the codes associated with some aspect of the data content, such as the number of times a particular brand name is mentioned or a specific sentiment is expressed towards a topic of interest. The presenter asserted that, by counting these codes, the researcher has in effect “converted” qualitative to quantitative data.
This way of thinking is not unlike that of those who contend that useful quantitative data can be derived from qualitative findings by counting the number of “votes” for a particular concept or some aspect of the research subject matter. Let’s say a moderator asks group participants to rate a new product idea on a modest four-point scale from “like very much” to “do not like at all.” Or, an interviewer conducting qualitative in-depth interviews (IDIs) asks each of the 30 participants to rate their agreement with statements pertaining to the advantages of digital technology on a scale from “strongly agree” to “strongly disagree.” It is the responses to these types of questions that some researchers gather up as votes and report as quantitative evidence.
By asserting that codes and votes can be counted – and hence transform a portion of qualitative findings – …
Analysis is probably the biggest obstacle to the broader utilization of qualitative research methods. Other aspects of qualitative research – such as data collection (which is discussed at length throughout Research Design Review as it relates to applying quality standards) – may require a certain degree of resources and deliberation but are not difficult to achieve. Obtaining a representative list of potential participants, for example, or honing the skills necessary to mitigate interviewer bias and gain cooperation from participants demands concentrated effort on the part of the qualitative researcher, but there are fairly straightforward, well-documented procedures for accomplishing these goals.
Analysis, however, is difficult, and it is the reason why many survey researchers are loath to incorporate a qualitative component – open-ended questions in a survey questionnaire or a …