This segment is on "don't know" options.

Always a tricky question: should you offer respondents a "don't know" answer category in the questionnaire, or should you not? Howard Schuman and Stanley Presser have a nice example in their 1981 book, "Questions and Answers in Attitude Surveys". The question asked respondents whether they were in favor of or opposed to the Agricultural Act of 1978. The interesting thing about this question is that the act did not actually exist. So most people, as you can see in this graph, should say "don't know", because you cannot favor or oppose something that does not exist. In the first condition, the question was asked just like that, "Are you in favor or opposed?", with no filtering out of people who say they have never heard of the Agricultural Act of 1978 or don't know what it is. In that condition, about 70% volunteered the answer "don't know". In the second condition, people were first asked whether they knew anything about the Agricultural Act of 1978; almost 90% said they did not, and only the remaining respondents were asked whether they favored or opposed this particular act.
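To make the arithmetic of the two conditions concrete, here is a minimal sketch in Python, with hypothetical counts chosen only to match the approximate percentages just mentioned; these are not Schuman and Presser's actual data:

    # Hypothetical respondent counts, chosen to roughly match the
    # percentages reported above; not the actual study data.
    conditions = {
        "no filter (favor/oppose asked directly)": {
            "dont_know": 700,    # volunteered "don't know"
            "substantive": 300,  # favored or opposed a nonexistent act
        },
        "full filter (knowledge question first)": {
            "dont_know": 900,    # screened out as "never heard of it"
            "substantive": 100,  # passed the filter, then answered
        },
    }

    for name, counts in conditions.items():
        total = sum(counts.values())
        dk_share = counts["dont_know"] / total
        print(f"{name}: {dk_share:.0%} don't know")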
Now there are various methods for offering "don't know" options. You can have a full filter up front: "Do you have an opinion on that?" or "Do you know about that?" Or you can have a quasi-filter, which is: "Do you agree, disagree, or do you not have an opinion on that?" or "Do you favor, oppose, or do you not have an opinion?" That would be the formulation for a telephone survey; in a self-administered mode, obviously, you would have to print the "no opinion" answer category, even if it is otherwise not read out as part of the questionnaire.
Now the question is: should you do this or not? The issue here is whether the respondent is provided with the option "don't know", or whether they have to volunteer it themselves, saying "No, I don't know anything about this" or "I don't have an opinion". The view of Converse and Presser is that you should offer a filter to screen out respondents who don't know much and thus can't have an attitude. That was, roughly, the conclusion in the 1980s from the studies they did. You can execute this kind of recommendation with filters of increasing strength. Besides the quasi-filter we talked about and the blunt full filter, you could also use justified full filters, for example, "Have you been interested enough in this to favor one side or the other?", "Have you thought about this issue?", or "Have you heard or read about this issue?", to soften the filtering in case you're worried about that.
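As a rough illustration of how these filter strengths could be implemented as skip logic in a survey instrument, here is a hedged sketch in Python; the wordings follow the examples above, but the data structure and field names are invented for illustration, not taken from any survey package:

    # A full filter is a separate screening question with a skip;
    # a quasi-filter folds "no opinion" into the answer categories.
    full_filter = [
        {"id": "q1", "text": "Do you have an opinion on that?",
         "options": ["yes", "no"],
         "skip": {"no": "end"}},  # "no" skips the attitude item
        {"id": "q2", "text": "Do you favor or oppose it?",
         "options": ["favor", "oppose"]},
    ]

    quasi_filter = [
        {"id": "q1",
         "text": "Do you favor, oppose, or do you not have an opinion?",
         "options": ["favor", "oppose", "no opinion"]},
    ]

    justified_filter = [
        {"id": "q1",
         "text": ("Have you been interested enough in this "
                  "to favor one side or the other?"),
         "options": ["yes", "no"],
         "skip": {"no": "end"}},
        {"id": "q2", "text": "Do you favor or oppose it?",
         "options": ["favor", "oppose"]},
    ]

    for step in full_filter:
        print(step["id"], "->", step["text"])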
Now, all these methods aside: Jon Krosnick, later working together with Stanley Presser, revisited this topic, and more recently the views on using a "don't know" option have changed a bit. They are grounded in two cognitive models of the "don't know" response. The first is the optimizing respondent, with the notion that there are four situations in which a respondent might want to say "don't know", along the lines of the response process model. First, they interpret the question and realize they don't know, which means that the meaning of the question is not clear. Second, the memory search can lead to "don't know", which means that no information on this topic is found in their head at all. Third, "don't know" can arise when they integrate the information into a judgment: there might be ambivalence and conflict among the pieces of information in their head, or there is insufficient information to justify an opinion. And the last one arises in formulating the answer, that is, in translating the judgment into a response category: here the meaning of the response alternatives is not clear, so they can't really match their thinking to an answer category. It is only the second case, the failed memory search, that really means that if you force people into an answer, meaning you don't provide a "don't know" option, they will give you a non-meaningful answer. For all the other reasons, pushing people to offer opinions might still yield meaningful responses.
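One way to see the logic of this model is to write it down directly. The sketch below is my own framing, not Krosnick and Presser's: it maps each stage at which "don't know" can arise to whether a forced answer would still be meaningful:

    from enum import Enum

    class Stage(Enum):
        COMPREHENSION = "meaning of the question unclear"
        RETRIEVAL = "no information found in memory"
        JUDGMENT = "ambivalence or insufficient information"
        RESPONSE = "answer categories don't match the judgment"

    # Only a retrieval failure means there is truly nothing to report;
    # in the other three cases a forced answer can still be meaningful.
    forced_answer_meaningful = {
        Stage.COMPREHENSION: True,
        Stage.RETRIEVAL: False,
        Stage.JUDGMENT: True,
        Stage.RESPONSE: True,
    }

    for stage, meaningful in forced_answer_meaningful.items():
        print(f"{stage.name}: forced answer meaningful? {meaningful}")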
If you take another cognitive model as a baseline to evaluate these "don't know" responses, it is that of a satisficing respondent. We will mention this again later, but to point it out here: the term satisficing is from Herbert Simon, who used it in the 1950s to describe economic behavior; it was adopted by Krosnick and Duane Alwin and applied to respondent behavior and questionnaire design. Anyway, if we have the satisficer in mind, who does just enough to get through the questionnaire, then some respondents will look for cues in a question that allow them to skip the interpretation, retrieval, and judgment steps altogether. They say "I don't know" because they don't want to go through these four steps and think, "Do I understand the question, can I retrieve what is in my head, can I formulate an answer?" and the like; they just pick an easy answer category. This behavior is most likely when the respondent's ability to answer the question is low, when the respondent's motivation is low, or when the cognitive demands of the question are high. In this situation, if pushed to offer opinions, these people would offer meaningful opinions, and that, again, is an argument for not offering "don't know" options.
There is some experimental evidence comparing offering versus omitting "don't know" options. McClendon and Duane Alwin in the 1990s, as well as Krosnick, Berent, Poe, and others, and surely many more studies of that type, ran years of experiments. They found responses to unfiltered questions to be no less reliable than responses to filtered ones. Another set of experiments found that filtering does not strengthen attitude constraint, that is, the correlations between attitudes on different issues. McClendon, in an earlier study from 1991, found that filtering did not reduce acquiescence or response order effects. And Krosnick did a series of studies supporting the notion that, by and large, one should probably omit "don't know" options rather than offer them.
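To illustrate operationally what "no less reliable" means, here is a hedged sketch of the kind of test-retest reliability check such experiments rely on; the data are simulated, not from the cited studies:

    import numpy as np

    rng = np.random.default_rng(0)

    def test_retest_reliability(n=500, noise=0.5):
        # Simulate a latent attitude measured twice with error;
        # reliability is the correlation of the two measurements.
        attitude = rng.normal(size=n)
        wave1 = attitude + rng.normal(scale=noise, size=n)
        wave2 = attitude + rng.normal(scale=noise, size=n)
        return np.corrcoef(wave1, wave2)[0, 1]

    # If filtering does not improve reliability, the two forms should
    # yield similar correlations (here both are simulated with the
    # same error variance, so they do by construction).
    print("unfiltered form r =", round(test_retest_reliability(), 2))
    print("filtered form   r =", round(test_retest_reliability(), 2))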
So, the Krosnick verdict, as we call it here, summarized in those two papers, is that "don't know"s are mostly not due to a complete lack of information. They are mostly due to ambivalence, unclear questions, intimidation, self-image protection, and satisficing. The best questionnaire design strategy therefore appears to be to omit "don't know" filters and to tell respondents, "I'll note that, but if you had to choose, would you say...?" That is a way for the interviewer to get a substantive response after all, nudging the respondent toward an answer, and as a result you will collect informative data from a larger portion of your sample.

Okay, and in the next segment we will talk about scales and response order effects in answer categories and the like.
