Papers by Rebecca Maynard
who expertly produced and edited the report. Finally, we would like to extend a special thanks to the staff at each participating district. Without their strong support and participation, this study would not have been possible.

This report would not have been possible without sustained support and active engagement of many Year Up staff and stakeholders and is the result of an exemplary practitioner-researcher partnership. Above all, we thank Year Up's national research leads, Garrett Warfield and Jessica Britt, for their constructive, cheerful, and tireless efforts on behalf of the study and for thoughtful comments on an earlier draft of this report. We are grateful to Year Up's founder Gerald Chertavian for his unflagging commitment to evidence building ("to improve and to prove," in his words) across multiple projects involving our team. We also very much appreciate the contributions of many other Year Up national staff and board members who shared their keen insights with us in interviews and webinars, as well as the national staff who provided crucial logistical support for the study. Many local Professional Training Corps staff and local college and corporate partners made essential contributions to the study. We are especially grateful to staff in offices that hosted site visits or participated in one of the project's two randomized controlled trials. And we thank the many young adults who agreed to participate in the study and shared their experiences with us. At Abt Associates, we thank Doug Walton and Dave Judkins for expert support with analyses of earnings data, Bry Pollack for capable editorial assistance, and Wendy McRae for fine project budget stewardship. The Office of Planning, Research, and Evaluation at the federal Administration for Children and Families generously granted permission to access employment data in the National Directory of New Hires for this study. Finally, we thank our project officer Corinne Alfeld at the Institute of Education Sciences for her steady support and encouragement over the course of this project.
All interpretations and conclusions in this report are solely the responsibility of the authors and do not necessarily reflect the views of the Institute of Education Sciences, Year Up, or its sponsors.

Journal of Educational and Behavioral Statistics, Mar 27, 2020
Cost-effectiveness analysis is a widely used educational evaluation tool. Randomized controlled trials that aim to evaluate the cost-effectiveness of a treatment are commonly referred to as randomized cost-effectiveness trials (RCETs). This study provides methods of power analysis for two-level multisite RCETs. Power computations take account of sample sizes, the effect size, covariate effects, nesting effects for both cost and effectiveness measures, the ratio of the total variance of the cost measure to the total variance of the effectiveness measure, and correlations between cost and effectiveness measures at each level. Illustrative examples show how power is influenced by the sample sizes, nesting effects, covariate effects, and correlations between cost and effectiveness measures. We also demonstrate how the calculations can be applied in the design phase of two-level multisite RCETs using the software PowerUp!-CEA (Version 1.0).
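The flavor of the power computation the abstract describes can be illustrated with a simplified sketch for the effectiveness outcome alone. The function name, the default heterogeneity value `omega2`, and the normal approximation are illustrative assumptions, not the paper's actual PowerUp!-CEA computation, which also accounts for covariates and the cost-side variances and correlations.

```python
import math
from statistics import NormalDist

def approx_power_multisite(delta, n, J, p=0.5, omega2=0.01, alpha=0.05):
    """Rough power approximation for the effectiveness outcome of a
    two-level multisite trial: individuals randomized within each of J
    sites, n individuals per site, fraction p assigned to treatment.

    delta: standardized average treatment effect; omega2: variance of
    the standardized site-level effects (cross-site heterogeneity).
    Normal approximation; ignores covariates and the cost measure.
    """
    # Variance of the estimated average effect: heterogeneity across
    # sites plus within-site sampling variance, both shrinking in J.
    var = omega2 / J + 1.0 / (p * (1 - p) * n * J)
    z = NormalDist()
    return z.cdf(delta / math.sqrt(var) - z.inv_cdf(1 - alpha / 2))
```

For example, `approx_power_multisite(0.25, 20, 30)` yields power of roughly 0.85, and doubling the number of sites raises it further, mirroring the abstract's point that power depends on sample sizes at both levels and on the nesting structure.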

This chapter draws on a 40-year history of patchwork efforts to use data to inform the development of public policy and shape its implementation. I begin with a description of the evolution of the policy process, drawing largely on experiences within the U.S. Departments of Health and Human Services, Education, and Labor. All three agencies have been major supporters of and contributors to advances in the methods of policy analysis and the use of program evaluation to guide decision making. The chapter draws on the roles of these agencies in laying the groundwork for the current emphasis on evidence-based policymaking, in part because of their leadership roles and in part because of the author's first-hand experience working with these agencies. Of particular note is its attention to the lead up to the present context in which policy analysis and program evaluation are central to both the policy development and monitoring processes. The chapter ends with a discussion of the current movement to create and use credible evidence on the impacts and cost-effectiveness of programs, policies and practices as the foundation for more efficient and effective government and, where evidence is lacking, for integrating a knowledge-building agenda into the roll-out of strategies for change.
Evaluation Review, May 28, 2020
Background: This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) methods to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices. Objectives: This article demonstrates the feasibility and benefits of blending DBIR, IS, and RCE practices with embedded randomized controlled trials (RCTs) to improve the pace and efficiency of program improvement. Research design: This article describes a two-cycle experimental test of staff-designed strategies for improving a workforce development program. Youth enrolled in Year Up's Professional Training Corps
Journal of Research on Educational Effectiveness

This paper complements existing power analysis tools by offering tools to compute minimum detectable effect sizes (MDES) for existing studies and to estimate minimum required sample sizes (MRSS) for studies under design. The tools that accompany this paper support estimates of MDES or MRSS for 21 different study designs that include 14 random assignment designs (6 designs in which individuals are randomly assigned to treatment or control condition and 8 in which clusters of individuals are randomly assigned to condition, with models differing depending on whether the sample was blocked prior to random assignment and by whether the analytic models assume constant, fixed, or random effects across blocks or assignment clusters); and 7 quasi-experimental designs (an interrupted time series design and 6 regression discontinuity designs that vary depending on whether the sample was blocked prior to randomization, whether individuals or clusters of individuals are assigned to treatment or ...
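For the simplest of the 21 designs (individuals randomized to condition, no blocking), the MDES calculation reduces to a short formula. The sketch below is a minimal stdlib-only illustration; the function name is hypothetical, and the normal approximation to the t-based multiplier makes it slightly optimistic for small samples compared with the accompanying tools.

```python
import math
from statistics import NormalDist

def mdes_individual_rct(n, p=0.5, r2=0.0, alpha=0.05, power=0.80):
    """Approximate MDES for a two-arm individual random assignment design.

    n: total sample size; p: fraction assigned to treatment;
    r2: share of outcome variance explained by baseline covariates.
    Returns the MDES in standard deviation units.
    """
    z = NormalDist()
    # Two-tailed multiplier: critical value plus the power quantile.
    m = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return m * math.sqrt((1 - r2) / (p * (1 - p) * n))
```

For example, with n = 400 and no covariates the MDES is about 0.28 standard deviations, and a covariate set explaining half the outcome variance shrinks it by a factor of sqrt(0.5), which is why baseline covariates figure so prominently in these tools.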
Pre-Print Disclaimer: This pre-print is being shared to promote timely dissemination of research findings and to solicit feedback from the scholarly community. It does not represent the final version; the findings and manuscript will almost certainly change for publication.
Society For Research on Educational Effectiveness, 2009

The Structured Training and Employment Transitional Services (STETS) demonstration was designed to provide the first rigorous test of the effectiveness of transitional-employment programs in integrating mentally retarded young adults into the economic and social mainstream. Under the demonstration, which was funded by the Employment and Training Administration of the U.S. Department of Labor and directed by MDRC, programs were operated from the fall of 1981 through December 1983 in five cities throughout the country. This demonstration has greatly expanded our knowledge about the implementation and operation of transitional-employment programs for this target population and has documented the effectiveness of such programs in enhancing the economic and social independence of mentally retarded young adults. This report on the impact evaluation and the benefit-cost analysis of the demonstration program consists of the following components: (1) a brief description of the rationale for ...
Behavioral Science & Policy, 2017
Children from low-income families arrive at kindergarten already behind academically, do not overcome these gaps during the school years, and are much less likely to attend and graduate from college. Many programs aim to help these children before they enter formal schooling, as well as during their kindergarten through 12th grade years and on the road to and through college; too often, though, the services go underutilized. In recent years, behavioral scientists have designed interventions meant to increase participation in such programs. Rigorous experiments have shown that a number of these approaches work well, enabling students to perform better academically and reach higher levels of education. Here, we propose four more interventions that federal agencies should test.
Many people contributed in important ways to this report. First and foremost, we thank the many abstinence education program grantees who have generously allowed us to visit their programs, meet with staff, and observe their operations. We are especially grateful to those who created and/or are directing the 11 programs that are the focus of the study:

During the past decade, increasing emphasis has been placed on conducting social experiments and demonstrations to determine the economic and social effects of various public policies and programs. In such experiments, social science research has had to rely heavily on data collected through personal interviews; consequently, it has become imperative that researchers be concerned with the veracity of the data. Furthermore, researchers must determine not only the overall accuracy of the data, but also the factors that significantly influence data quality. In particular, knowledge of the determinants of response error can be critical in evaluations of experimental programs: if participation in a program is itself a significant influence on response error, and if adjustment for this fact is not made, then the conclusions based on comparisons between experimental and control group members may be incorrect.

Journal of Policy Analysis and Management, 2014
Fighting for Reliable Evidence is an incredibly powerful book, full of information that graduate schools do not teach and that would take a lifetime to learn on the job. Readers shadow the narrators: Judith Gueron, one of the founding employees and now President Emeritus of MDRC, and Howard Rolston, a long-time senior civil servant in the U.S. Department of Health and Human Services (HHS). The two share their intersecting professional journeys in service of improving the welfare of our nation and its people. Over a 40-year period that began in 1974, a small band of warriors from the philanthropic, policy, and research communities, including Gueron and Rolston, fought and won many battles in a war to improve the availability and use of reliable evidence about what does and does not work to address the needs of poor and vulnerable populations. By shadowing the authors, readers learn the distinction between reliable and unreliable evidence and between evidence that is and is not useful for improving the development, implementation, and management of public policies. Readers also will gain an appreciation of the challenges to and rewards from strategic investments in generating such evidence. The vision, commitment, and grit of a relatively few individuals, including the authors, were critical in moving us from a world in which empirical evidence was largely absent from the policy process to one in which the federal government is now experimenting with performance partnerships and increasingly often considering the extent and reliability of evidence in its policy deliberations (Haskins & Baron, 2014; Mervis, 2013). However, as the title of the book suggests, the war has not been won. There is still a dearth of reliable evidence to guide policy decisions.
Importantly, this book demonstrates well the feasibility of generating reliable evidence across a wide range of settings by using social experiments and the power of successful experiments for accelerating the accumulation of evidence. The story is rooted in the history of the use of social experiments to identify fixes for what, over the second half of the 20th century, was widely regarded as a seriously flawed welfare system. The system cost too much and did too little to incentivize and support poor or otherwise struggling individuals and families to improve their social and economic circumstances. As Gueron and Rolston explain, the beauty of experiments is that they largely sidestep big arguments about the appropriateness of the methodology used in and the credibility of findings from the "plenty of demonstrations, evaluations, and studies of welfare and employment and training programs..." (p. 11).

Prevention Science, 2021
The practice of prospectively registering the details of intervention studies in a public database or registry is gaining momentum across disciplines as a strategy for increasing the transparency, credibility, and accessibility of study findings. In this article, we consider five registries that may be relevant for registration of intervention studies in the field of prevention science: ClinicalTrials.gov, the American Economic Association Registry of Randomized Controlled Trials (AEA RCT Registry), the Open Science Framework Preregistration (OSF Preregistration), the Registry for International Development Impact Evaluations (RIDIE), and the Registry of Efficacy and Effectiveness Studies (REES). We examine the five registries in terms of substantive focus, study designs, and contents of registry entries. We consider two paths forward for prospective registration of intervention studies in the field of prevention science: Path A: register all studies in ClinicalTrials.gov and Path B: allow individual researchers to select the registry with the "best fit." Lastly, we consider how the field might begin to establish norms around registration.
The positions in this report are solely those of the authors and do not reflect those of the funder. Acknowledgements: We want to acknowledge the help of Stephen Bell,

This article summarizes the results from two studies that used experimental data to evaluate nonexperimental methods of program evaluation. These studies compared the experimental estimates—of an employment and training program that used random assignment to allocate its participants into training positions—to estimates from nonexperimental procedures. Both studies indicate that the nonexperimental methods may not accurately replicate the experimental estimates, and that recently developed methods for constructing comparison groups are no more likely to yield accurate estimates of the impact of training than the more conventional econometric procedures. For the last two decades, economists and policymakers have studied the impact of employment and training programs on the labor market earnings of their participants. These studies have estimated the difference between the participants' postprogram earnings and the earnings they would have received in the absence of the ...

In this paper, I share some important facts and findings from recent research on the causes and consequences of teenage childbearing, particularly for teen parents on welfare. I examine what we know about the effectiveness of various programs and policies aimed at delaying early childbearing and improving outcomes for those who do bear children. I conclude by drawing lessons for welfare reform. The Causes and Consequences of Teenage Childbearing: The odds that a teenager will engage in unprotected sex, become pregnant, and give birth increase in the face of multiple risk factors. These risk factors include coming from a single-parent family, living in poverty and/or a high-poverty neighborhood, low commitment to school, poor academic achievement, and having parents with limited education. For example, White teens living in single-parent households are twice as likely to become teenage parents as those in two-parent families; Black teens living in single-parent families are one a...