A recent online survey questionnaire asked me to compare my salary in 2009 with that in 2010. Because the question was impossible to answer (I won’t know my 2010 salary for another eight months), I didn’t respond and attempted to move forward to the next question. Unfortunately, the salary question was mandatory, so I was left with the choice of either abandoning the study or falsifying a response in order to qualify for the next question. I opted for the latter (I was keen to learn where the battery of questions would take me), then immediately after submitting the completed questionnaire sent a message to the survey’s sponsor alerting him to “a real problem” with the salary question and its potential impact on the final data. Online research is replete with questionnaire designs that force respondents into a response when a response is either impossible (given inadequate answer options) or extremely difficult (given an ambiguous or confusing question). Whatever the reason, these must-answer questions add unnecessary error to the design and, ultimately, to the study findings.
However, as much as I am annoyed by online programmers’ insistence on a response to unanswerable questions – along with the irritating error messages that attempt to coerce a response – I admit that forcing respondents to re-examine a question and think it through a second time is not a bad thing. While I would argue that a second look still leaves the respondent with a dilemma (having to choose among inappropriate answer options or respond to a question that is not understood), at least the survey taker is given the opportunity to reflect more closely and possibly give a response that approximates reality. Having said that, I contend it is more likely that the respondent will simply drop out of the survey or do exactly what I did – that is, pick an answer option with no basis in truth (in my case, a neutral response) and move on.
The incentives and other motivations we build into our designs make it likely that many respondents will falsify their responses rather than drop out of the study altogether. But regardless of what drives people to respond one way or another, the fact remains that if you ask someone a question, you will get an answer – any answer. This is a tenet that I and other researchers live by. It is not good enough to just ask a question, because you will certainly get an answer, and it may not be what you bargained for. Getting responses to our questions can be pretty exciting; but once you realize that people will answer questions no matter how they are structured – confusing, misleading, nonsensical – it should give the researcher serious pause and spur careful attention to question construction.
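To make the design point concrete, here is a minimal sketch of how a questionnaire engine might require engagement without forcing a fabricated answer. This is my own illustration, not anything from the survey in question; the type names and the `substantive` flag are hypothetical.

```typescript
// Hypothetical sketch: a mandatory question is only well-formed if it
// offers at least one non-substantive escape option, so respondents
// are never cornered into falsifying an answer or abandoning the survey.

type AnswerOption = { value: string; label: string; substantive: boolean };

type Question = {
  id: string;
  text: string;
  required: boolean;
  options: AnswerOption[];
};

// A required question must include an escape hatch such as
// "Not applicable" or "Prefer not to answer".
function isWellFormed(q: Question): boolean {
  return !q.required || q.options.some((o) => !o.substantive);
}

const salaryQuestion: Question = {
  id: "salary-change",
  text: "How does your 2010 salary compare with your 2009 salary?",
  required: true,
  options: [
    { value: "higher", label: "Higher", substantive: true },
    { value: "same", label: "About the same", substantive: true },
    { value: "lower", label: "Lower", substantive: true },
    // Without the line below, the only choices are falsifying or dropping out.
    { value: "na", label: "Not applicable / cannot answer yet", substantive: false },
  ],
};

console.log(isWellFormed(salaryQuestion)); // true
```

The check itself is trivial; the point is that a required question should never leave a respondent with only substantive options.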
Even if the answer options are adequate and the question itself is clear, an added burden on the question designer is to keep the question neutral in tone, or at least non-biasing. I recently received via USPS a “2010 Congressional District Census” from the Republican Party (note: I do not want to talk politics here, but let it be said that I don’t recall ever getting anything from the Republican Party in the past). There is a lot I could complain about in this direct-mail piece – not the least of which is frugging – but the “survey” questions are particularly interesting. One question reads, “Do you think the record trillion dollar federal deficit the Democrats are creating with their out-of-control spending is going to have disastrous consequences for our nation?” How neutral is that? And yet I am fairly sure that the Republican Party will get the answer it is looking for, not only from Republicans but from members of other parties as well who, with good intentions, simply answered the question they were asked.
So, be careful out there – because if you ask someone a question, you may very well get an answer.
A Matter of Definition
When I first read your blog about must-answer questions, I laughed and thought, “Well, this gives new meaning to the term ‘demand characteristic’.”
“Demand characteristic” is a term used in psychology experiments to describe a cue that makes participants aware of what the experimenter expects to find or how participants are expected to behave. You see, if I were you, I would have assumed the survey designers wanted me to enter an anticipated annual salary as if I were a salaried worker – a guesstimate demanded of, or forced on, a non-salaried consultant. And just as demand characteristics can change the outcome of an experiment because participants will often change their behavior to conform to expectations, so too did you change your response – though not in the way I would have responded in your shoes. Hmmm.
Then I read your sentence: “There is a lot I could complain about in this direct-mail piece – not the least of which is frugging – but the ‘survey’ questions are particularly interesting.”
OK, so another definition. I’m not a marketer, so I was not familiar with the term “frugging,” and, I admit, I was afraid to ask because I was not sure I wanted an answer! So I went looking.
First I found the following definition: to do the “frug,” a dance derived from the twist in which the feet do not move but the hips and arms are moved energetically in time to the music (as referred to in the song “Rock Lobster”). Hmmm… “crazy dancing” doesn’t quite seem right in your sentence.
I then found the obscure definition of “fruggin”: an oven fork or pole. Funny.
Finally, I came upon the marketing definition, which read, “Frugging: fundraising under the guise of research.” This definition goes on to say that frugging is one of the reasons potential participants in market research projects are reluctant to take part. Bingo!
Ahhh. Participant reluctance. This chain of definitions (circuitous though it might be) led me to an observation about questions, answers, and participation. Not only should researchers be aware that “if you ask, answers will come,” but also that “if I answer, an action is expected.”
In some organizational research I have conducted, the caution was not just about asking questions and getting valid answers, but also about asking stakeholders only those questions the client anticipated addressing or fixing in some way. For example, if you raise a question about a need, challenge, or opportunity, there is a strong implication that you will do something about it.
Too often, survey questions have done more damage than good, selling the idea of change and promise under the guise of authentic research (a sort of “social sugging,” I guess), only to disappoint the participant, employee, or constituent when no action is taken on the survey answers. The result? People begin to think that organizational surveys are just a waste of time and decline to participate in future surveys. So I’ve redefined my understanding this way: it’s not just how you ask the question, but whether you ask the question at all.