Last month’s post – “Insights vs. Metrics: Finding Meaning in Online Qualitative Research” – talked about “social media metric mania” and the value of off- and online qualitative research tools “that dig behind the obvious and attempt to reveal how people truly think.” In light of these remarks, it is good to find researchers who are exploring social media research design and attempting to determine the parameters needed to maximize quality output. The researchers at J.D. Power and Associates are doing just that. In particular, Gina Pingitore, Chief Research Officer, and others at J.D. Power have written a couple of white papers discussing design issues such as validity, reliability, and best practices in social media research. Their research-on-research work on these issues deserves applause for its focus on establishing quality standards and for its overarching goal “to create more rigor around the processes that create social insights.”
The February 2012 paper – “The Validity of Social Media Data within the Wireless Industry” – looks at the volume and sentiment of social media content in relation to results from their “traditional” syndicated survey. They learned that:
- there is a direct relationship between the volume of posts in social media and market share (face validity);
- correlation with their survey results is higher when “high precision sound bites” are used in the query rather than overall social media data; and
- correlations between customers’ survey product/service ratings and social media sentiment are less than strong; however, social media data tends to be more closely related to advocacy than to experience measures.
The March 2012 paper – “The Dividends of Improving Best Practices for Social Media Research” – goes much further in testing the accuracy of social media data and highlighting the importance of query details, along with internal quality-control measures, to the ultimate usefulness of results. The authors hinge much of their discussion on six previously derived “best practices” (also discussed in ESOMAR’s November/December 2011 Research World):
- Be specific in defining your topic
- Establish the right balance between precision and recall
- Avoid sentiment expressions in queries
- Employ well-trained analysts
- Utilize separate QA teams
- Ensure proper feedback
By comparing the outcomes from two analysts – one guided by the aforementioned best practices (as well as the J.D. Power quality-control team) and another who developed queries without the prescribed best practices or quality-control oversight – clear differences emerged in both the volume and the quality (reliability, validity) of extracted posts. The analyst following the best-practices approach delivered a fairly consistent number of negative-sentiment posts (i.e., the absence of false negatives), a significantly greater number of posts specific to product quality, and greater (realistic) variation in volume (i.e., a couple of months when volume spiked). The superior outcomes from the analyst guided by best practices are attributed to the:
- successful exclusion of non-consumer-generated posts (e.g., news stories);
- specificity of the query terms (e.g., “mobile service” vs. “mobile”); and,
- complexity of query phrases and use of Boolean expressions (e.g., “can’t get” AND [email OR message OR voicemail OR text]) versus simple nouns and adjectives (e.g., “coverage”).
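The contrast between a precise Boolean query and a simple keyword query can be illustrated with a minimal sketch. The function names and sample posts below are hypothetical illustrations of the idea, not J.D. Power’s actual tooling or data:

```python
def matches_precise_query(post: str) -> bool:
    """Boolean query in the spirit of the paper's example:
    "can't get" AND (email OR message OR voicemail OR text)."""
    text = post.lower()
    return "can't get" in text and any(
        term in text for term in ("email", "message", "voicemail", "text")
    )

def matches_simple_query(post: str) -> bool:
    """Simple single-noun query: "coverage"."""
    return "coverage" in post.lower()

# Hypothetical posts: two consumer complaints and one news-style post.
posts = [
    "Ugh, I can't get voicemail to work on this network.",
    "Great coverage of the keynote on the news site today.",
    "Can't get a text to send all morning.",
]

precise_hits = [p for p in posts if matches_precise_query(p)]
simple_hits = [p for p in posts if matches_simple_query(p)]
```

The precise query captures both consumer complaints while passing over the news-style post; the simple keyword query matches only the news-style post – the kind of non-consumer content the best-practices analyst was able to exclude.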
J.D. Power is just one among many devoted to adding rigor to our social media research methods. Given the increasing use of social media research data by businesses large and small, it is the responsibility of all social media researchers to consider the implications of their research designs and adopt an approach that maximizes the reliability and validity – that is, the accuracy and usefulness – of the outcomes.
Very timely article, as I am working toward re-launching an external social collaboration platform geared in part toward facilitating ongoing dialogue with communities of practice, and I am looking for best-practice guidelines for capturing and reporting metrics focused more on “how” different facilitation changed the group dynamics and the quality of the collaboration, versus just “what” – how many posts there were. Thanks for sharing.