2015, Transportation Research Procedia
This paper summarizes the discussions held during an in-depth six-hour workshop on the challenges of combining data from different survey modes, with the aim of identifying current research needs. The main theme of the workshop was mixing survey modes as a way to meet the challenge of low response rates. However, the use of multi-mode surveys introduces new sources of bias: not all households have access to certain survey media (coverage bias); the response rate for one or another of the survey modes is correlated with social demographics (non-response bias); the sampling frame depends on the mode (sampling bias); or the instrument itself may affect the responses (measurement bias). The aim of this report is to present the workshop's discussion on the identification of research needs related to combining data from different survey modes.
2017
Technological advances and increasing access to telephones and the internet mean that it is now possible for people to complete surveys by phone or online rather than face-to-face with an interviewer present. As budgets tighten and the benefits of using other modes (such as speed and flexibility) become more apparent, there has been a debate about whether it is better to use alternatives to face-to-face surveys either exclusively or in combination with in-person interviews. This chapter summarises findings from research conducted under the auspices of the European Social Survey (ESS) over the past 12 years to evaluate the potential effects of mixed-mode data collection on its own cross-national survey estimates.
2016
Survey methodology now faces an unprecedented challenge: how to collect information from samples of people that will provide scientifically defensible estimates of the characteristics of the population they represent. Many decades of research have shown that in order for such estimates to be made with known precision, four major sources of error (coverage, sampling, measurement, and nonresponse) must be controlled (Groves, 1989). Subsequent research has produced a great amount of knowledge on how sample estimates are affected by different survey modes, sample sources, sample sizes, the failure of certain types of people to respond to survey requests, and how questions are structured and worded. Today's challenge stems from many considerations. Response rates for some survey modes, especially voice telephone in national surveys, have fallen precipitously and are not expected to recover (Dillman, Forthcoming). In addition, RDD landline surveys miss nearly half of all household...
Journal of Official Statistics, 2005
Field Methods, 2005
Changes in survey mode for conducting panel surveys may contribute significantly to survey error. This article explores the causes and consequences of such changes in survey mode. The authors describe how and why the choice of survey mode often causes changes to be made to the wording of questions, as well as the reasons that identically worded questions often produce different answers when administered through different modes. The authors provide evidence that answers may change as a result of different visual layouts for otherwise identical questions and suggest ways to keep measurement the same despite changes in survey mode.
2012
learn the procedures associated with multiple modes of collecting sample survey information and apply the method or combination of methods that fits their specific situation. This handbook provides expert guidance from acknowledged survey methodologists and statisticians around the world, who bring their experiences to bear on issues faced in their own and other countries. It serves as an excellent text for courses and seminars on survey methodology at the master's and graduate level. It is a key reference for survey researchers and practitioners around the world. The book is also very useful for everyone who regularly collects or uses survey data, such as researchers in psychology, sociology, economics, education, epidemiology, and health studies, and professionals in market and public opinion research. The book consists of five parts: foundations, design, implementation, data analysis, and quality issues. It begins by focusing on the foundations of all sample surveys, ranging from sources of survey error to ethical issues of design and implementation. It is followed by a design section, which gives building blocks for good survey design, from coverage and sampling to writing and testing questions for multiple survey modes. The third section focuses on five modes of data collection, from the oldest, face-to-face interviews, to the newest, interactive voice response, ending with the special challenges involved in mixing these modes within one survey. The fourth section turns to analyzing survey data, dealing with simple as well as complex surveys, and procedures for nonresponse adjustment through imputation and other means. The fifth and final section focuses on special issues of maintaining quality and of documenting the survey process for future reference. The first chapter of the book, The cornerstones of survey research, ends with a more detailed description of the structure and contents of this book. There is a companion website.
As we move further into the 21 st century, surveys will become inherently more international in scope and in practice. It is our hope that this book will prove helpful for those who are learning the craft of surveying, which like other life skills, will increasingly be applied beyond one's country of origin. We thank our colleagues across the world for many lively and stimulating discussions about survey methodology. We also thank our students who inspired us and especially the master class in survey methodology 2006 who enthusiastically and critically discussed the drafts. The final book has profited from close reading and copy-editing by Mallory McBride, Sophie van der Zee, Evert-Jan van Doorn, and Amaranta de Haan. We thank Allison O'Neill for her creative cover design. We also thank Emily Wilkinson and Debra Riegert of Lawrence Erlbaum Associates for their patience and careful prodding in getting this book done.
BMC Medical Research Methodology, 2010
Background: Health-related data at local level could be provided by supplementing national health surveys with local boosts. Self-completion surveys are less costly than interviews, enabling larger samples to be achieved for a given cost. However, even when the same questions are asked with the same wording, responses to survey questions may vary by mode of data collection. These measurement differences need to be investigated further. Methods: The Health Survey for England in London ('Core') and a London Boost survey ('Boost') used identical sampling strategies but different modes of data collection. Some data were collected by face-to-face interview in the Core and by self-completion in the Boost; other data were collected by self-completion questionnaire in both, but the context differed. Results were compared by mode of data collection using two approaches. The first examined differences in results that remained after adjusting the samples for differences in response. The second compared results after using propensity score matching to reduce any differences in sample composition. Results: There were no significant differences between the two samples for prevalence of some variables, including long-term illness, limiting long-term illness, current rates of smoking, whether participants drank alcohol, and how often they usually drank. However, there were a number of differences, some quite large, between some key measures, including: general health, GHQ12 score, portions of fruit and vegetables consumed, levels of physical activity, and, to a lesser extent, smoking consumption, the number of alcohol units reported consumed on the heaviest day of drinking in the last week, and perceived social support (among women only). Conclusion: Survey mode and context can both affect the responses given. The effect is largest for complex question modules but was also seen for identical self-completion questions. Some data collected by interview and self-completion can be safely combined.
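The propensity-score-matching comparison described in the abstract above can be illustrated with a minimal sketch: estimate each respondent's propensity of belonging to one mode from covariates, match respondents across modes on that score, and compare outcomes within matched pairs. The covariates, logistic coefficients, and toy records below are illustrative assumptions, not values from the Health Survey for England.

```python
# Hedged sketch: 1:1 nearest-neighbour propensity-score matching (with
# replacement) to compare an outcome across two survey modes.
# All data and model coefficients are illustrative assumptions.
import math

def propensity(x, coefs, intercept):
    """P(self-completion mode | covariates) from a fixed logistic model."""
    z = intercept + sum(c * xi for c, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def match_and_compare(core, boost, coefs, intercept):
    """For each Boost respondent, find the Core respondent with the closest
    propensity score, and return the mean outcome difference over pairs."""
    core_ps = [(propensity(r["x"], coefs, intercept), r["y"]) for r in core]
    diffs = []
    for r in boost:
        ps = propensity(r["x"], coefs, intercept)
        _, matched_y = min(core_ps, key=lambda t: abs(t[0] - ps))
        diffs.append(r["y"] - matched_y)
    return sum(diffs) / len(diffs)

# Toy records: x = (age in decades, 1 if female), y = an outcome score.
core = [{"x": (3, 0), "y": 2.0}, {"x": (5, 1), "y": 3.0}, {"x": (7, 1), "y": 4.0}]
boost = [{"x": (3, 0), "y": 2.5}, {"x": (6, 1), "y": 3.5}]
mode_gap = match_and_compare(core, boost, coefs=(0.2, 0.5), intercept=-1.0)
print(round(mode_gap, 2))
```

A residual gap on the matched pairs is then read as a mode/context effect rather than a sample-composition difference, which is the logic of the second approach in the study.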
Social Science …, 2009
The potential for improving response rates by changing from one mode of data collection to another is examined in this paper, along with the consequences for measurement and nonresponse errors. Using a data set of 8,999 households, data collection consisted of two phases. Phase 1 data collection was conducted by telephone interview, mail, interactive voice response, and the Internet, while Phase 2 followed up nonrespondents to Phase 1 by telephone or mail. In general, results from our study suggest that switching to a second mode is an effective means of improving response. We also find that for the satisfaction-dissatisfaction questions asked in these surveys, respondents to the aural modes (telephone and IVR) are significantly more likely than respondents to the visual modes (mail and web) to give extreme responses, a difference that cannot be accounted for by a tendency towards recency effects with telephone. In general, switching to a second mode of data collection was not an effective means of reducing nonresponse error based on demographics.
2013
Mixed-mode surveys combine different data collection modes to reduce nonobservational survey errors under certain cost constraints. In this survey design, usually there is no control over who responds by which mode. As a result, data are obtained by a nonrandom mixture of survey administration modes. Without adjusting for this nonrandom mixture of modes, the standard method of estimation that combines responses from different modes has a bias that depends on both mode effects and the mix of respondents that choose each mode. Unless mode effects are zero, data should be adjusted for both nonresponse and the nonrandom mixture of modes. We present alternative methods that account for both nonresponse and the nonrandom mixture of modes. Although in principle the separate mode effects are not estimable in a mixed-mode survey design, the alternative estimators do allow estimation of the difference in average mode effects. In addition, the bias properties of alternative methods can be better understood when compared to the standard estimation method. The alternative methods use models to impute each respondent's values for each counterfactual response mode, e.g., a telephone response value for in-person respondents. Combining the observed values with the imputations results in a "completed" data set for each mode. Alternative estimators are then used to combine these mode-specific "completed" data sets in an attempt to reduce bias associated with confounded and nonrandom influences of mode choice and mode effects. This paper presents some results for empirical comparisons of mean personal income and percent health insurance coverage based on the alternative methods and the standard method. The public-use 2012 Current Population Survey (CPS) March data are used for empirical evaluations.
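The "completed data" idea in the abstract above can be sketched in a few lines: fit an outcome model within each mode, impute every respondent's counterfactual value under the other mode, and contrast the means of the two completed datasets to estimate the difference in average mode effects. The linear-in-one-covariate imputation model and all numbers below are deliberately crude illustrative assumptions standing in for the model-based imputation the paper describes.

```python
# Hedged sketch of estimating the difference in average mode effects via
# counterfactual imputation and mode-specific "completed" datasets.
# The per-mode model is a simple OLS line y = a + b*x; purely illustrative.

def fit_line(xs, ys):
    """Ordinary least squares intercept/slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def mode_effect_difference(phone, inperson):
    """phone/inperson: lists of (covariate, outcome) pairs.
    Impute each respondent's counterfactual outcome under the other mode
    from that mode's fitted line, build the two completed datasets, and
    return the difference of their means."""
    ap, bp = fit_line(*zip(*phone))        # phone-mode outcome model
    ai, bi = fit_line(*zip(*inperson))     # in-person outcome model
    y_all_phone = [y for _, y in phone] + [ap + bp * x for x, _ in inperson]
    y_all_inperson = [y for _, y in inperson] + [ai + bi * x for x, _ in phone]
    n = len(phone) + len(inperson)
    return sum(y_all_phone) / n - sum(y_all_inperson) / n

# Toy data: (age, reported income in $1000s), values purely illustrative.
phone = [(30, 22), (40, 25), (50, 29)]
inperson = [(35, 20), (45, 24), (55, 27)]
print(round(mode_effect_difference(phone, inperson), 2))
```

Because both completed datasets cover the same people, composition differences between the mode subsamples drop out of the contrast; what remains is an estimate of the mode-effect difference, not of either mode effect separately, mirroring the identifiability point made in the abstract.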
Communications of the Association …, 2002
Web surveys potentially cost less to administer and are more convenient for participants than either telephone or paper-based surveys, but concern remains about the representativeness of the respondents to the population. This research investigates the effectiveness of using Web-based survey methods by comparing response rates and demographic characteristics of telephone-based and Web-based survey respondents. The paper first reviews survey research methodologies, including recent trends in Web-based surveys that take advantage of the power of modern computing and telecommunications technology. The issue of respondent characteristics in Web-based surveys is then explored by comparing respondent demographics of an ongoing telephone survey to those of a subsequent Web survey of the same population. Exploratory and confirmatory statistical analyses are used to triangulate the findings and test the hypotheses. The data suggest that demographics of the respondents of the two methods are similar across race and age, but differ significantly across income and education levels, implying a converging digital divide.
Journal of Official Statistics, 2014
This study assesses the effect of response-mode choices on response rates, and response-mode preferences of hard-to-survey populations: young adults, full-time workers, big city inhabitants, and non-Western immigrants. Using address-based sampling, a stratified sample of 3,496 households was selected. The first group of sample members was contacted face to face and could choose between a CAPI and web response mode. The second group, contacted by telephone, could choose between CATI and web. The third group, contacted by telephone, was randomly allocated to a response mode. Our address-based sampling technique was successful in reaching most of the hard-to-survey groups. Insufficient numbers of non-Western immigrants were reached; therefore this group was excluded from our analyses. In our mixed-effect models, no significant effects on the willingness to participate were found for mode choice. We found that full-time workers and young adults were significantly more likely to choose w...