
Cross-Cultural Cognitive Interviewing: Seeking Comparability and Enhancing Understanding

Gordon B. Willis1 and Kristen Miller2

Field Methods 23(4): 331–341
© The Author(s) 2011
DOI: 10.1177/1525822X11416092

Abstract
Cognitive interviewing (CI) has emerged as a key qualitative method for the
pretesting and evaluation of self-report survey questionnaires. This article
defines CI, describes its key features, and outlines the data analysis techniques
that are commonly used. The authors then consider recent extensions of
cognitive testing to the cross-cultural survey research realm, where the
major practical objectives are: (1) to facilitate inclusion of a range of
cultural and linguistic groups and (2) for purposes of comparative analysis,
to produce survey questionnaire items that exhibit comparability of
measurement across groups. Challenges presented by this extension to
the cross-cultural and multilingual areas are discussed. Finally, the authors
introduce the articles contained within the current special issue of Field
Methods (2011), which endeavor to apply cognitive testing in specific
cross-cultural survey projects, and to both identify and suggest solutions
to the unique problems that face questionnaire designers and researchers
more generally, in the practice of survey pretesting and evaluation methods
as these endeavor to cover the sociocultural spectrum.

1 National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
2 National Center for Health Statistics, Centers for Disease Control and Prevention, Hyattsville, MD, USA

Corresponding Author:
Gordon B. Willis, National Cancer Institute, National Institutes of Health, 6130 Executive Blvd, MSC 7344, EPN 4005, Bethesda, MD 20892, USA
Email: willisg@[Link]

Keywords
cognitive interviewing, cross-cultural research, qualitative research,
questionnaire design, survey pretesting

To collect information relating to human endeavors, researchers often use a
method adapted from everyday life: asking for it, in the form of survey ques-
tions that, in turn, result in respondent self-reports. This deceptively simple
approach—mimicking the question–answer sequence of informal conversa-
tion—has spawned an industry in survey data collection as well as in the devel-
opment of clinical and organizational scales. In a departure from data-collection
approaches that make use of flexibility, or adherence to the rules of normal
conversation, the dominant survey self-report methodology has featured stan-
dardization of questions and scale items. Items are standardized because they
are scripted in written form and presented to survey respondents for self-
completion or because they are read exactly as written by interviewers, who are
expected to serve as conduits between researcher and the subject (Beatty 1995).
The recognition that this standardized approach sometimes leads to gross
failures in communication, or in other requisite social and psychological
processes, has led to efforts to develop methods for evaluating and
improving the functioning of self-report instruments (Presser et al. 2004).
In particular, since the mid-1980s, cognitive interviewing (CI) has emerged
as a general set of methods for the development and evaluation of survey
questions and other self-report measures and is regularly applied across a
wide range of topic areas. CI has developed to the point that established
cognitive laboratories—at the National Center for Health Statistics,
Centers for Disease Control and Prevention; U.S. Census Bureau; and
Bureau of Labor Statistics—regularly
submit draft questionnaires to cognitive testing and base decisions concern-
ing modification of these instruments on the results.

Definition of the Cognitive Interview


Cognitive testing consists of a range of techniques that are applied in a
variety of ways, and there is no single protocol for the conduct of cognitive
interviews, nor even a universally accepted definition (Beatty and Willis
2007; DeMaio and Rothgeb 1996). An encyclopedic definition by Willis
(2009:106) states that:
Cognitive Interviewing is a psychologically-oriented method for empirically
studying the way in which individuals mentally process and respond to survey
questionnaires. Cognitive interviews can be conducted for the general
purpose of enhancing our understanding of how respondents carry out the
task of answering survey questions. However, the technique is more com-
monly conducted in an applied sense, for the purpose of pretesting questions
and determining how they should be modified, prior to survey fielding, to
make them more understandable or otherwise easier to answer.

Traditionally, cognitive testing is performed by conducting in-depth,
semistructured interviews with a small sample of approximately 10–30
respondents. The typical interview structure consists of respondents first
answering the evaluated question and then answering a series of follow-up
probe questions that reveal what respondents were thinking and their ratio-
nale for that specific response. As a "cognitive" endeavor, the cognitive
interview has traditionally been viewed as encompassing four basic cogni-
tive processes or stages purported to be invoked when a respondent answers
a survey question: (1) comprehension of the question; (2) retrieval
from memory of information used to prepare an answer to the question;
(3) decision/estimation processes that may influence the respondent’s pro-
cessing or reporting of a response; and (4) the response process, in which
the respondent produces an answer to the survey question (Tourangeau
1984). As such, the seemingly straightforward survey question: "Would
you say that in the past 30 days your health has been excellent, very good,
good, fair, or poor?" may pose a number of cognitive challenges to the
respondent, who might consider, in turn, what varieties of "health" to include
(physical, mental, emotional, spiritual, financial, and so on); whether and how
to use the "30-day" reference period to retrieve relevant memories; whether
they are comfortable reporting anything less than "good" health to the
interviewer; and which of the given response category options best fits
their self-assessment of health status.
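To restate the model compactly, the sketch below pairs each of the four stages with the corresponding challenge raised by the general health question. The stage names follow Tourangeau (1984); the mapping itself is purely our illustrative device, not something the article specifies.

```python
# Illustrative sketch only: Tourangeau's (1984) four-stage response model,
# paired with the challenges that the general health item can raise at each
# stage. The dictionary structure is a hypothetical device, not from the article.
FOUR_STAGES = {
    "comprehension": "Which varieties of 'health' to include (physical, "
                     "mental, emotional, spiritual, financial, and so on)?",
    "retrieval": "Whether and how to use the '30-day' reference period to "
                 "retrieve relevant memories.",
    "decision/estimation": "Is the respondent comfortable reporting anything "
                           "less than 'good' health to the interviewer?",
    "response": "Which response category option (excellent through poor) "
                "best fits the self-assessment?",
}

for stage, challenge in FOUR_STAGES.items():
    print(f"{stage}: {challenge}")
```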
As an alternative viewpoint, several authors (Gerber 1999; Gerber and
Wellens 1997; Miller 2003, 2011) have emphasized an anthropological or
sociocultural orientation. This viewpoint focuses heavily on the respon-
dent’s life experiences and cultural context and posits the importance of
interpretive processes that encompass a wider range than the somewhat
narrow concept of "encoding" as depicted in the standard cognitive model.

In particular, this view espouses the subtle but powerful influence of social
location and cultural context through all phases of the survey–response
process.

Comprehension probe:  What does the term "dental sealant" mean to you?
Paraphrase:           Can you repeat the question in your own words?
Confidence judgment:  How sure are you that your health insurance covers
                      mental health services?
Recall probe:         How do you know that you went to the doctor three
                      times in the past 12 months?
Specific probe:       Why do you think that breast cancer is the most
                      serious health problem?
Process probe:        How did you arrive at that answer?
Elaborative probe:    Can you tell me more about that?

Figure 1. Types of verbal probe questions.

CI Procedures
Procedurally, a trained cognitive interviewer administers the questionnaire
or scale to be evaluated to an individual recruited for purposes of testing
this instrument (often referred to as a subject, participant, or respondent).
The critical methods within the cognitive interview are the think-aloud
(Ericsson and Simon 1980) and verbal probing techniques (Forsyth and
Lessler 1991). Rather than simply requesting an unelaborated response to
each administered survey question, the cognitive interviewer requests that
the subject think-aloud as he or she ponders the question and provides an
answer. The cognitive interviewer also administers verbal probes, which are
questions designed to elicit additional relevant information about question
functioning. Optimally, a key product of verbal probing (or think-aloud) is
what Miller (2003) refers to as narrative information—textual data that
relays how and why respondents answered the question as they did, reveal-
ing the interpretive process used by respondents to relate survey questions
to their own life experiences and circumstances.
Verbal probing involves a range of probe types, summarized in Figure 1.
Probe questions are often designed to tap a specific cognitive process (e.g.,
comprehension, recall). However, a common probe, asked immediately
after the subject answers the tested question, is simply "Tell me more about
that," which leads the participant to elaborate in a way that produces a
narrative flow clarifying whether the answer to the evaluated question
remains consistent with the more complete picture produced by the elaborated
explanation. For example, if the subject reports that his health is "fair"—
and then is led to explain that this is because he is injured and unable to
run marathons but is otherwise in perfect health—the investigators may
decide that the individual's concept of fair health does not match their own
measurement objectives.
Normally, a set of anticipated verbal probes is designed to search for
problems; these are scripted prior to the conduct of the interviews and
placed into a cognitive protocol consisting of the to-be-tested questions,
probes, and an area for note-taking by the cognitive interviewer. On the
other hand, emergent probes may also be fashioned and administered in a
more informal manner, particularly when unanticipated problems arise during
the interview. The timing of probe administration also varies across
questionnaire-testing projects. Concurrent probes (whether anticipated,
emergent, etc.) are asked immediately after the subject has answered a tested
survey question, whereas retrospective probes are presented as a set during a
debriefing session, after all evaluated questions have been answered.
Interviewer-administered questions (telephone or in-person) typically involve
concurrent probing, as the normal back-and-forth interaction between
interviewer and respondent facilitates such probing. For testing of self-
administered instruments, practitioners often rely on retrospective probing,
in order not to disturb the subjects as they complete the questionnaire
(especially when one objective of the testing is to evaluate the answerer's
ability to navigate skip or sequencing instructions unaided).

Data Collection and Analysis of Interview Results


The CI is referred to in the literature as a qualitative technique because the
data consist largely of written comments concerning a subject’s narrative
report generally, and with regard to specific responses to verbal probe ques-
tions. Normally, interviewers provide summary notes concerning their
assessment of question functioning, in terms either of specific problems
identified or, more generally, of the totality of "what the question
captures" (Miller et al. 2010). Because the focus of CI is not primarily on
the responses to the tested questions but rather on the manner in which they
have been answered, these notes constitute the data that are further ana-
lyzed. Analysis sometimes involves a coding scheme, in which particular
types of responses are assigned to a descriptive category that itself repre-
sents a particular cognitive process or variety of question defect. For exam-
ple, the analyst may seek cases of "comprehension failure" or "item
vagueness." More frequently, analysis has involved the use of what
Willis (2005) refers to as successive aggregation, in which each cognitive
interviewer first summarizes, for each item, their own results across all
interviews they have carried out. Subsequently, a researcher, or the team,
further collapses and synthesizes these results across interviewers, produc-
ing a single set of observations for each survey item. Hence, with regard to
reporting outcomes concerning the general health example above, the sum-
mary might state that:
Seven of nine subjects were thinking only of physical health, whereas two
also mentioned mental/emotional health. Further, none of the subjects
reported that they attempted to count any events that had literally occurred
during the past 30 days, but rather tended to report thinking about key events
relevant to health that had occurred in the past several weeks or months.
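To illustrate how successive aggregation might be mechanized, the sketch below (our own construction with fabricated notes; Willis 2005 describes a procedure, not code) first produces one summary per interviewer per item and then collapses those summaries across interviewers into a single set of observations per item.

```python
# Illustrative sketch only: successive aggregation in two passes.
# Stage 1: each interviewer summarizes their own interviews for each item.
# Stage 2: the team collapses the per-interviewer summaries by item.
from collections import defaultdict

# Hypothetical raw notes, keyed by (interviewer, item), one entry per interview.
raw_notes = {
    ("int_A", "GH1"): ["thought only of physical health",
                       "ignored the 30-day reference period"],
    ("int_B", "GH1"): ["included mental health",
                       "thought only of physical health"],
}

# Stage 1: per-interviewer, per-item summaries.
per_interviewer = defaultdict(dict)
for (interviewer, item), notes in raw_notes.items():
    per_interviewer[interviewer][item] = (
        f"{len(notes)} interviews: " + "; ".join(notes)
    )

# Stage 2: collapse across interviewers, one set of observations per item.
by_item = defaultdict(list)
for interviewer, items in per_interviewer.items():
    for item, summary in items.items():
        by_item[item].append(f"{interviewer}: {summary}")

for item, summaries in by_item.items():
    print(item, "->", " | ".join(summaries))
```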

Alternative methods for analysis of CI emphasize the need to ensure that
different cognitive interviewers, or especially independent interviewing
teams, behave in ways that are complementary and integrated, and that they
make systematic use of the observations, as opposed to introducing
investigator bias or opinion concerning question function. Conrad and Blair
(2004) and Miller (2007) in particular have emphasized the need to employ
analytic techniques that lead analysts to make clear the nature of the evi-
dence, taken directly from the interviews, upon which any conclusions or
recommendations for question modification are based. Further, in a depar-
ture from the successive aggregation approach, Miller advocates the use of
relatively large sample sizes and group-based comparative analysis, in
which all investigators are given the opportunity to review all the evidence
gathered across sites. Finally, to further the goal of increasing transparency
of CI results generally, Miller and her colleagues (Miller 2005) have pro-
moted the development of a publicly accessible database of cognitive test-
ing results, called Q-BANK (see [Link]).
Once analysis is completed, the researchers make decisions concerning
the manner in which the question is functioning; whether it should be
revised in some way; and perhaps whether subsequent testing should be
conducted. Often, when an item is changed, it is desirable to submit the new
version to a further round of testing. Such rounds are typically small; sam-
ple sizes for CI studies often consist of only 20–30 individuals, such as those
produced by 2–3 rounds of 10 each. However, studies have also been more
modest in size (9–10 interviews), as well as much larger (Willis 2005). Studies
intended for research purposes, especially publication (such as those con-
tained in this special issue), are typically much larger in scope (Miller
et al. 2004). Typically, testing rounds are iterative in nature; that is, changes
to tested questions are made after each round, prior to subsequent testing.

Extensions to Cross-Cultural Research


Increasingly, survey researchers have extended CI to research that is multi-
cultural, multilingual, multinational, or multiregional (we follow Johnson
[2006] in referring to these as cross-cultural research). In part, this exten-
sion is a reaction to the identified need to find ways to attain cross-
cultural comparability of survey questions. It has been well established that
a major threat to the analysis of survey data collected across language or
cultural groups is that the items simply do not function similarly when
translated, or when administered to a range of cultural groups (Johnson
1998). Further, although statistical and psychometric techniques exist for
identifying such noncomparability in survey data (e.g., differential item
functioning, within item response theory [IRT] models; see DeVellis
2003; Embretson and Reise 2000), it is desirable to rely on a qualitative
technique to identify the source (rather than just the presence) of such mea-
surement disparities and remediate potential sources of noncomparability
before the questionnaire is administered.
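To make the quantitative side concrete, the sketch below illustrates one widely used screen for differential item functioning: logistic regression of item responses on a matching variable plus a group indicator, where a sizable group effect flags possible noncomparability. This is a standard technique in the psychometric literature, not a procedure from this article, and the data are fabricated solely so the example runs.

```python
# Illustrative sketch of a logistic-regression DIF screen (fabricated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                  # 0 = reference, 1 = focal group
ability = rng.normal(0, 1, n)
total_score = ability + rng.normal(0, 0.5, n)  # proxy matching variable

# Simulate uniform DIF: the item is harder for the focal group at equal ability.
p = 1 / (1 + np.exp(-(ability - 0.8 * group)))
item = rng.binomial(1, p)

# Model: P(endorse item) ~ matching variable + group indicator.
X = sm.add_constant(np.column_stack([total_score, group]))
fit = sm.Logit(item, X).fit(disp=False)
print(fit.params)    # a sizable, significant group coefficient suggests DIF
print(fit.pvalues)
```

Note that such a screen flags the presence of noncomparability only; as the paragraph above argues, a qualitative technique is still needed to identify its source.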
Cross-cultural CI may be a compelling means to satisfy the requirement of
assessing noncomparability. That is, the intensive focus of CI on sociocul-
tural factors that influence the survey–response process, through cognitive
probing and production of narrative information, seems to render this method
well suited to the cross-cultural domain. Over the past decade, cross-cultural
CI studies have been conducted with increased frequency (Agans et al. 2006;
Nápoles-Springer et al. 2006; Nápoles-Springer and Stewart 2006; Pasick
et al. 2001; Willis et al. 2008). Although these studies have tended to focus
on the key issue of comparability, we argue that this aim represents only one
goal of such research. More broadly, we endeavor to apply CI to ascertain
how sociocultural factors influence the survey–response process and how
members of varying language and cultural groups interpret not only individ-
ual questions but questionnaires as a whole, in the context of the unique view-
points associated with group membership.
As such, we emphasize not only the improvement in survey questions,
the remediation of response error, and the establishment of comparable
measures, but a true understanding of the manner in which survey questions
and self-report items are approached, consumed, and digested across a wide
range of human groups and subpopulations. Presumably, such an under-
standing will be of interest not only to questionnaire designers and survey
researchers but also to psychologists, anthropologists, sociologists, and
other researchers interested in sociocultural dynamics and in appreciating
how others view their world.
Cross-cultural cognitive testing poses several vexing methodological chal-
lenges. In a departure from monolingual, single-team CI, such projects often
involve a number of investigators who speak a range of languages. Even if
all of them speak a common language—normally English—interviews conducted
in other languages can be reviewed only by the team members who speak those
languages, and it is impossible for any one investigator to review all
interviews to ensure comparability of interviewing approaches. Further, CI
results must themselves be translated into a common language, if these are
to be discussed by the larger group. Finally, issues that arise in one language
or culture, but not another, must be reconciled, and decisions made concern-
ing whether these are due to errors of translation, cultural disparities, or arti-
facts of varied approaches to the conduct or analysis of the interviews.
The current issue is in some respects a response to these challenges. The
six articles represent the state of the art in the development of the procedures
used as CI is extended to the cross-cultural domain. Across these articles,
a number of issues are addressed concerning the manner in which CI can
best be adapted to cross-cultural testing. Specifically, they address the fol-
lowing: (1) From a procedural point of view, how do existing methods used
in cognitive interviews need to be modified to function effectively in a cross-
cultural context? (2) Concerning analysis of results, in what ways do cross-
cultural applications pose particular challenges that require rigorous
approaches to data analysis and interpretation? (3) How can CI be reconciled
with, and perhaps integrated with, other methods that are useful for assess-
ment of cross-cultural comparability?
Articles by Goerman and Clifton and by Chan and Pan are procedural in
focus. Goerman and Clifton examine the use of vignettes—short hypothe-
tical stories—for cross-cultural application and study both Spanish and
English speakers. Chan and Pan ask whether cognitive techniques can be
used for materials other than survey questionnaires and describe the testing
of brochures to be mailed to respondents in advance of the Census
Bureau’s American Community Survey, across Chinese, Korean, Russian,
Spanish, and Native American language speakers.
The articles by Miller et al. and Ridolfo and Schoua-Glusberg emphasize
analysis of narrative information produced in interviews. Miller et al. frame
the research objective in terms of reliability of results across testing labora-
tories: Given that cross-cultural research necessarily requires the use of
independent testing sites and staffs, can these be brought together through
the use of a data-analysis approach that emphasizes joint analysis, in which
all of the pertinent results are considered within a common format and by
the team as a whole? Ridolfo and Schoua-Glusberg tackle a similar problem
and evaluate the use of a procedure borrowed from the qualitative research
field—the constant comparison method (Glaser 1965)—to impose order on
the mechanisms used by cognitive interviewers to make sense of their
findings.
Finally, as a logical extension, Thrasher et al. and Reeve et al. make use
of pretesting and evaluation techniques other than CI in the cross-cultural
realm. The Thrasher et al. study applies behavior coding—a long-standing
method used to assess interviewer performance—to evaluate tobacco-use
question function through observation of respondent behavior, across five
locations (South Carolina/United States, Melbourne/Australia, Cuernavaca/
Mexico, Montevideo/Uruguay, and Penang/Malaysia). The authors compare
and contrast the results and implications of behavior coding with those of CI.
Reeve et al. similarly compare CI with psychometric testing results, particu-
larly IRT, for a survey of racial and ethnic discrimination. IRT, like CI, is
increasingly applied to cross-cultural studies, with the express purpose of
identifying items whose functioning varies across populations or across other
demographic divisions. As such, it is of interest to ascertain whether the
results obtained from IRT are supportive of, or depart from, those obtained
through cognitive testing.
In summary, these articles reflect the state of the science in the application
of an established technique within an area that is of increasing importance.
Based on the results of these studies, we hope that readers will be led to
promising approaches for their own work concerning the attainment of
cross-cultural comparability and understanding.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or
publication of this article.

References
Agans, R. P., N. Deeb-Sossa, and W. D. Kalsbeek. 2006. Mexican immigrants
and the use of cognitive assessment techniques in questionnaire development.
Hispanic Journal of Behavioral Sciences 28:209–30.
Beatty, P. C. 1995. Understanding the standardized/non-standardized interviewing
controversy. Journal of Official Statistics 11:147–60.
Beatty, P. C., and G. B. Willis. 2007. Research synthesis: The practice of cognitive
interviewing. Public Opinion Quarterly 71:287–311.
Conrad, F. G., and J. Blair. 2004. Data quality in cognitive interviews: The case of
verbal reports. In Methods for testing and evaluating survey questionnaires, eds.
S. Presser, J. Rothgeb, M. Couper, J. Lessler, E. Martin, J. Martin, and E. Singer,
67–87. Hoboken, NJ: John Wiley.
DeMaio, T. J., and J. M. Rothgeb. 1996. Cognitive interviewing techniques: In the
lab and in the field. In Answering questions: Methodology for determining cog-
nitive and communicative processes in survey research, eds. N. Schwarz and
S. Sudman, 177–95. San Francisco, CA: Jossey-Bass.
DeVellis, R. F. 2003. Scale development: Theory and applications. 2nd ed.
Thousand Oaks, CA: SAGE.
Embretson, S. E., and S. P. Reise. 2000. Item response theory for psychologists.
Mahwah, NJ: Lawrence Erlbaum.
Ericsson, K. A., and H. A. Simon. 1980. Verbal reports as data. Psychological
Review 87:215–51.
Forsyth, B. H., and J. T. Lessler. 1991. Cognitive laboratory methods: A taxonomy.
In Measurement errors in surveys, eds. P. P. Biemer, R. M. Groves, L. E. Lyberg,
N. A. Mathiowetz, and S. Sudman, 393–418. New York: Wiley.
Gerber, E. R. 1999. The view from anthropology: Ethnography and the cognitive
interview. In Cognition and survey research, eds. M. G. Sirken, D. J. Herrmann,
S. Schechter, N. Schwarz, J. M. Tanur, and R. Tourangeau, 217–34. New York:
Wiley.
Gerber, E. R., and T. R. Wellens. 1997. Perspectives on pretesting: ‘‘Cognition’’ in
the cognitive interview? Bulletin de Methodologie Sociologique 55:18–39.
Glaser, B. 1965. The constant comparative method of qualitative analysis. Social
Problems 12:436–45.
Johnson, T. P. 1998. Approaches to equivalence in cross-cultural and cross-national
survey research. In Cross-cultural survey equivalence, ed. J. A. Harkness, 1–40.
Mannheim, Germany: ZUMA.
Johnson, T. P. 2006. Methods and frameworks for crosscultural measurement.
Medical Care 44:S17–S20.
Miller, K. 2003. Conducting cognitive interviews to understand question-response
limitations. American Journal of Health Behavior 27:S264–S272.
Miller, K. 2005. Q-BANK: Development of a tested-question database. American
Statistical Association Annual Meetings, Minneapolis, August 2005.
http://[Link]/QBANK/Report/Miller_2005.pdf.
Miller, K. 2007. Design and analysis of cognitive interviews for cross-national
testing. Paper presented at the annual meeting of the European Survey Research
Association, Prague, June.
Miller, K. 2011. Cognitive interviewing. In Question evaluation methods: Contri-
buting to the science of data quality, eds. K. Miller, J. Madans, A. Maitland, and
G. Willis, 51–75. New York: Wiley.
Miller, K., D. Mont, A. Maitland, B. Altman, and J. Madans. 2010. Results of a
cross-national structured cognitive interviewing protocol to test measures of
disability. Quality & Quantity 44:801–15.
Miller, K., G. Willis, and C. T. Eason. 2004. Socio-cultural factors in the question
response process: A comparative coding analysis of Hispanic and non-Hispanic
interviews. Paper presented at the 6th International Conference on Social Sci-
ence Methodology, Amsterdam, August.
Nápoles-Springer, A. M., J. Santoyo-Olsson, H. O’Brien, and A. L. Stewart. 2006.
Using cognitive interviews to develop surveys in diverse populations. Medical
Care 44:S21–S30.
Nápoles-Springer, A. M., and A. L. Stewart. 2006. Overview of qualitative methods
in research with diverse populations: Making research reflect the population.
Medical Care 44:S5–S9.
Pasick, R., S. L. Stewart, J. Bird, and C. N. D’Onofrio. 2001. Quality of data in
multiethnic health surveys. Public Health Reports 116:223–43.
Presser, S., M. P. Couper, J. T. Lessler, E. Martin, J. Martin, J. M. Rothgeb, and
E. Singer. 2004. Methods for testing and evaluating survey questions. Public
Opinion Quarterly 68:109–30.
Tourangeau, R. 1984. Cognitive science and survey methods: A cognitive perspective.
In Cognitive aspects of survey design: Building a bridge between disciplines, eds.
T. Jabine, M. Straf, J. Tanur, and R. Tourangeau, 73–100. Washington, DC:
National Academy Press.
Willis, G. B. 2005. Cognitive interviewing: A tool for improving questionnaire
design. Thousand Oaks, CA: SAGE.
Willis, G. B. 2009. Cognitive interviewing. In Encyclopedia of survey research
methods, vol. 2, ed. P. Lavrakas, 106–9. Thousand Oaks, CA: SAGE.
Willis, G. B., D. Lawrence, A. Hartman, M. Kudela, K. Levin, and B. Forsyth. 2008.
Translation of a tobacco survey into Spanish and Asian languages: The Tobacco
Use Supplement to the Current Population Survey. Nicotine and Tobacco Research
10:1075–84.
