A recent online survey questionnaire asked me to compare my salary in 2009 with that in 2010. Because the question was impossible to answer (I won’t know my 2010 salary for another eight months), I didn’t respond and attempted to move forward to the next question. Unfortunately, the salary question was mandatory, so I was left with the choice of either abandoning the study or falsifying a response in order to qualify for the next question. I opted for the latter (I was keen to learn where the battery of questions would take me), then, immediately upon submitting my completion, sent a message to the survey’s sponsor alerting him to “a real problem” with the salary question and its potential impact on the final data. Online research is replete with examples of questionnaire designs that force respondents into a response when a response is either not possible (given the inadequacy of the answer options) or extremely difficult (given the ambiguity or confusion of the question being asked). Whatever the reason, these types of must-answer questions introduce unnecessary error into the design and, ultimately, the study findings.
However, as much as I am annoyed by online programmers’ insistence on a response to unanswerable questions – along with the irritating error messages that attempt to coerce a response – I admit that forcing respondents to re-examine a question and think it through for a second time is not a bad thing. While I would argue that a second consideration still leaves the respondent with a dilemma (having to choose among inappropriate answer options or having to respond to a question that is not understood), at least the survey-taker is given the opportunity to reflect more closely and possibly make a response that approximates reality. Having said that, I would contend that it is more likely that the respondent will simply drop out of the survey or do exactly what I did – that is, pick an answer option based on no truth (in my case, I chose a neutral response) and move on.
The incentives and other motivations that we build into our designs make it likely that many respondents will choose to falsify their responses rather than drop out of the study altogether. But regardless of what drives people to respond one way or another, the fact remains that if you ask someone a question, you will get an answer – any answer. This is a tenet that I and other researchers live by. It is not good enough simply to ask a question, because you will surely get an answer, and it may not be what you bargained for. Getting responses to our questions can be exciting; but once you realize that people will answer questions no matter how they are structured – confusing, misleading, nonsensical – it should give the researcher serious pause and spur careful attention to question construction.
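The simplest remedy implied by this tenet is to pair any mandatory question with an explicit opt-out, so that “required” never means “must falsify.” Here is a minimal sketch in Python (the option labels and function name are illustrative, not taken from any survey platform):

```python
# Illustrative sketch: validating a mandatory survey question without
# coercing a fabricated answer. An explicit opt-out is always offered,
# so the respondent never has to invent a response just to proceed.

SALARY_CHANGE_OPTIONS = [
    "decreased",
    "stayed the same",
    "increased",
    "not applicable / cannot answer yet",  # the escape option
]

def validate_response(answer: str) -> bool:
    """A response is valid if it is any listed option, including the opt-out."""
    return answer in SALARY_CHANGE_OPTIONS

# A respondent whose 2010 salary is not yet known can answer honestly:
print(validate_response("not applicable / cannot answer yet"))  # True
print(validate_response(""))  # False: a blank still prompts a re-read
```

The design choice is the point: the question remains mandatory, but the answer set is made exhaustive, so every truthful situation maps to a real option.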
Even if the answer options are adequate and the question itself is clear, an added burden on the question designer is to keep the question neutral in tone, or at least non-biasing. I recently received via USPS a “2010 Congressional District Census” from the Republican Party (note: I do not want to talk politics here, but let it be said that I don’t recall ever receiving anything from the Republican Party in the past). There is a lot I could complain about in this direct-mail piece – not the least of which is frugging, that is, fund-raising under the guise of survey research – but the “survey” questions are particularly interesting. One question reads, “Do you think the record trillion dollar federal deficit the Democrats are creating with their out-of-control spending is going to have disastrous consequences for our nation?” How neutral is that? And yet I am fairly sure that the Republican Party will get the answer it is looking for, not only from Republicans but also from members of other parties who, with good intentions, simply answered the question they were asked.
So, be careful out there – because if you ask someone a question, you may very well get an answer.