Chapter 4. The Program Manager's Guide to Evaluation (OPRE, March 2023)
What’s Inside?
What this chapter contains
An introduction to the importance of planning for an evaluation
A discussion about deciding what program, component, service, or activity to evaluate
A description of the basic questions an evaluation can answer
A guide for developing a logic model that will provide a structural framework for your evaluation
A plan for stating program objectives in measurable terms
A discussion of the common cost drivers and cost savers in an evaluation
Examples of ways to apply culturally responsive and equitable principles when preparing for an evaluation
[Chapter roadmap figure: the steps covered in this chapter include Develop Measurable Objectives and Build and Use a Logic Model.]
Introduction
Once you have assembled your evaluation team, the next step is to look closely at the purpose of your
evaluation to determine what evaluation questions can be asked and answered and how to get the best
return on your evaluation investment. A shared understanding of the purpose, use, and users of the
evaluation findings should drive the development of evaluation questions. This understanding should in
turn drive the evaluation design, data collection, analysis, and reporting. Beyond facilitating good evaluation
practice, the planning phase can—
The important decisions of what to evaluate and how should involve the outside evaluator or consultant
(if you decide to hire one), all program staff who are part of the evaluation team, and anyone else in the
agency who will be engaged. As noted in chapter 3, evaluation teams should engage potential users of the
evaluation and community members early and often. Their engagement during the initial decision-making
processes will improve the ultimate usefulness of the evaluation and help balance the power between
evaluators and evaluation participants. Ideally, the planning process should begin before implementing the
program, component, or service you wish to evaluate. When that is not possible (i.e., the program is already
operational), take time to understand and articulate program goals and strategies.
This chapter offers guidance on preparing for the evaluation, including defining its size and scope,
identifying the evaluation questions, building a logic model to provide a structural framework, stating
program objectives in measurable terms, and budgeting for the evaluation. It concludes with strategies to
support conducting a culturally responsive and equitable evaluation.
a. W.K. Kellogg Foundation, 2004, p. 2
b. Blase & Fixsen, 2013, p. 3
To a large extent, your decision about what to evaluate will depend on the priorities of program staff and leadership, the funder, and potentially the local community. The decision will also be subject to available financial
resources, staff and contractor availability, and the amount of time committed to the evaluation.
If your program is already operational, you may decide to evaluate a particular service or component
because you are unsure about its effectiveness for some participants. The introduction of a new service
or component may be another reason to focus your evaluation on that specific service or component.
Alternatively, you may choose to evaluate your entire program because you believe it is effective and you
want evidence of effectiveness to help you obtain additional funding to continue or expand it. Defining
what you will evaluate helps you determine at the outset whether your new efforts are being implemented
successfully and are effective at attaining expected participant outcomes.
As described in chapter 1, the two types of objectives are program implementation objectives and
participant outcome objectives. While implementation evaluations help you determine whether program
activities have been implemented as intended, outcome evaluations measure program effects (CDC
[Centers for Disease Control and Prevention], n.d.-b). Sometimes, evaluating program implementation
objectives is referred to as a process evaluation (OPRE [Office of Planning, Research, and Evaluation], 2010).
However, because many types of process evaluations are possible, this guide uses the term implementation
evaluation.
Implementation and outcome evaluations can be used to determine whether you have been successful in
attaining both types of objectives by answering the following questions:
Has the program been successful in attaining the anticipated implementation objectives? For
example, are you implementing the services or training you initially planned to implement? Are you
reaching the intended target population? Are you reaching the intended number of participants? Are you
developing the planned collaborative relationships?
Has the program been successful in attaining the anticipated participant outcome objectives?
For example, are participants exhibiting the expected changes in knowledge, attitudes, behaviors, or
awareness? Can these changes be attributed to the program?
A comprehensive evaluation must answer both questions. You may be successful in attaining your
implementation objectives, but if you do not have information about participant outcomes, you will not
know whether your program is having the intended outcome or effect. Similarly, you may be successful in
changing participants' knowledge, attitudes, or behaviors, but you will need information on implementation
to guide program adoption, replication, and scale-up.
Although this section focuses on implementation and outcome evaluations, other categories of questions
may be relevant to your program: questions regarding the need for services (needs assessment) and
questions regarding the program’s economic benefits (economic evaluation). These topics are beyond the
scope of this Guide, but a basic understanding of them may be helpful.
A needs assessment is a study of the problem a program intends to address and the need for the program,
such as determining the number of children who are chronically absent from school and the likely reasons
why they miss school (GAO [Government Accountability Office], 2021). An economic evaluation1 is a study
that measures program costs and compares them with either a monetary value of the program’s benefit
(cost-benefit analysis) or a measure of the program’s effectiveness in achieving its outcome objectives (cost-
effectiveness analysis). For more information on these types of assessments, see the resources in the
“To learn more” section at the end of the chapter.
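As a rough sketch of the arithmetic behind these two comparisons, the standard general definitions (not formulas prescribed by this Guide) are:

\[
\text{Benefit-cost ratio} = \frac{\text{monetized value of program benefits}}{\text{program costs}},
\qquad
\text{Cost-effectiveness ratio} = \frac{\text{program costs}}{\text{units of outcome achieved}}
\]

A benefit-cost ratio greater than 1 means that, in monetary terms, the program's benefits exceed its costs; a cost-effectiveness ratio expresses what it costs to produce one unit of the outcome (for example, one additional participant who attains the outcome objective).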
1. Economic evaluation is an effort to use analytic methods to identify, measure, value, or compare the costs and consequences of one or more alternative programs or interventions (CDC, n.d.-a).
2. A logic model is a picture of how your organization does its work—the theory and assumptions underlying the program. A program logic model links outcomes (both short and long term) with program activities and processes and the theoretical assumptions and principles of the program (W.K. Kellogg Foundation, 2004).
In general, all logic models represent a series of logically related assumptions about the program's
participant population and the changes you hope to bring about in that population as a result of your
program. Evaluators and program staff should work together to jointly build the logic model to ensure it
reflects how the program will work and how it will influence the target population. Figure 4.2 presents the
basic elements of a logic model.
Figure 4.2. Basic Elements of a Logic Model. Source: Adapted from the W.K. Kellogg Foundation Logic Model Development Guide (2004)
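Because the figure itself is not reproduced here, the sketch below lays out the conventional chain of logic model elements described in the Kellogg guide (resources/inputs, activities, outputs, outcomes, impact). The program content shown is hypothetical and serves only to make the structure concrete; it is not an example from this Guide.

```python
# Illustrative sketch of a logic model's basic elements.
# Element names follow the conventional resources -> activities -> outputs
# -> outcomes -> impact chain; the entries themselves are hypothetical.
logic_model = {
    "resources_inputs": ["trained facilitators", "curriculum", "meeting space"],
    "activities": ["weekly life-skills classes for youth"],
    "outputs": ["number of classes delivered", "number of youth attending"],
    "outcomes": ["increased knowledge of coping strategies",
                 "improved school attendance"],
    "impact": ["sustained improvements in youth well-being"],
}

# Reading the model from top to bottom traces the assumed causal pathway;
# each element should follow logically from the one before it.
for element, entries in logic_model.items():
    print(f"{element}: {'; '.join(entries)}")
```

Laying the elements out this way also shows where evaluation data will be needed: outputs correspond to implementation objectives, and outcomes correspond to participant outcome objectives.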
Logic models can inform program improvement and program evaluation. Regarding program improvement,
logic models can help advance strategic planning and program management by identifying the target
population (those the program is designed to serve), clarifying the program goals and any conceptual gaps,
tracking progress and changing needs, and describing the program to internal and external audiences.
Regarding program evaluation, logic models can provide a structural framework for your evaluation by
informing the development of a data collection plan and helping your evaluation team understand why
desired outcomes are or are not attained. For example, tracking program outputs can help evaluators determine whether ineffectiveness is the result of (1) insufficient resources or inputs or other implementation challenges or (2) other issues (i.e., the intervention is implemented with fidelity but did not have the intended effects).
Logic models are not difficult to construct, and they lay the foundation for your evaluation by clearly identifying your program implementation and participant outcome objectives. These models can then be stated in measurable terms for evaluation purposes. See “To learn more” and the appendices for resources and templates for building a logic model.
Falsifiable logic model
A logic model is a helpful tool for thinking through causal pathways by linking outcomes with program inputs and activities. Taking this idea one step further, falsifiable logic models expand the role of the logic model by including detailed, falsifiable goals for components of a conventional logic model. Falsifiable logic models can help evaluation teams determine whether a program is satisfying its own stated goals. To learn more about how falsifiable logic models can help a program strengthen its implementation and increase the likelihood of success in a rigorous impact evaluation, see Epstein and Klerman (2012).
Program managers often believe that stating objectives in measurable terms means establishing
performance standards or some arbitrary “measure” the program must attain. This is not true. Stating
objectives in measurable terms simply means you describe what you plan to do in your program and how
you expect the participants to change in a way you can measure. From this perspective, measurement can
involve anything from counting the number of services (or determining the duration of services) to using a
standardized test that will result in a quantifiable score. Some examples of stating objectives in measurable
terms appear below.
How will you know the planned activities occurred? For example, the number, duration, and
frequency of services or activities implemented
Who will do it? What the staffing arrangements will be; the characteristics and qualifications of the
program staff who will deliver the services, conduct the training, or develop the products; and how these
individuals will be recruited and hired
What population do you plan to reach? How many individuals? A description of the participant
population for the program; the number of participants to be reached during a specific timeframe; and
how you plan to recruit or reach the participants
To state these objectives in measurable terms, be specific about your program’s operations. The example
in table 4.1 demonstrates how general implementation objectives can be transformed into measurable
objectives. A blank worksheet for stating your implementation objectives in measurable terms is provided in
appendix B.
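As a minimal sketch of what "measurable" means in practice (the targets below are hypothetical, not values from table 4.1), an implementation objective can be reduced to planned values that actual program records are checked against:

```python
# Hypothetical implementation objective: deliver 12 weekly classes and enroll 25 youth.
# Stating the objective in measurable terms simply means it can be checked against records.
planned = {"classes_delivered": 12, "youth_enrolled": 25}
actual = {"classes_delivered": 10, "youth_enrolled": 18}

for objective, target in planned.items():
    met = actual[objective] >= target
    print(f"{objective}: planned {target}, actual {actual[objective]}, met: {met}")
```

The same comparison works for the duration or frequency of services, staffing levels, or any other count of planned program activity.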
From your description of the specific characteristics for each objective, the evaluation will be able to assess
in an ongoing way whether the objectives were attained, the types of problems encountered during program
implementation, and the areas where changes may be needed. Using the example above, you may discover
the first class session included only two youth from the crisis intervention services. Based on the findings
from the evaluation, you might examine your data to gain more insights into the recruitment process.
Based on what you learn, you may decide to revise your recruitment strategies, train crisis intervention
counselors to be more effective in recruiting youth, visit a youth’s family to encourage the youth’s
participation, or offer transportation to youth to make it easier for them to attend the classes.
Stating participant outcome objectives in measurable terms. Be specific about the changes in
knowledge, attitudes, awareness, or behavior you expect to occur as a result of participation in your
program. One way to be specific about these changes is to ask yourself the following questions:
To answer these questions, identify the evidence needed to demonstrate your participants have changed.
The example in table 4.2 demonstrates how participant outcome objectives may be stated in measurable
terms. A worksheet for defining measurable participant outcome objectives appears in appendix B.
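In the same spirit, a participant outcome objective stated in measurable terms can often be checked with a simple before-and-after comparison. The scores below are hypothetical and stand in for whatever instrument (for example, a standardized knowledge test) your evaluation actually uses:

```python
# Hypothetical pre- and post-program scores on a knowledge measure (0-100 scale).
pre_scores = [55, 60, 48, 70, 62]
post_scores = [68, 71, 60, 78, 70]

# The average change summarizes whether the expected gain in knowledge occurred.
average_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"Average change in knowledge score: {average_change:.1f} points")
```

Whether a change of this size can be attributed to the program rather than to other factors is a separate question that depends on your evaluation design.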
Unfortunately, there is no fixed formula for calculating evaluation costs. The amount of money needed depends on many factors.
Costs also vary according to economic differences in communities and geographic locations. Table 4.3
describes other common factors that influence the costs and resources needed to conduct a program
evaluation, such as the source and condition of data, how the data will be collected, the statistical complexity
of data analyses, and the program staff’s evaluation capacity.
Table 4.3. Common Cost Drivers and Cost Savers in Program Evaluation
In general, as you increase the budget for your evaluation, you gain a corresponding increase in knowledge
about your success in attaining your program objectives. In many situations, the lowest cost evaluations
may not be worth the expense, and realistically, the highest cost evaluations may be beyond the scope of
most agencies’ financial resources. When possible, consider dropping evaluation components rather than
reducing the quality of the evidence collected. For example, lowering your study recruitment budget may
reduce your survey response rates because your team does not have time to follow up with nonrespondents.
This would diminish the quality of your data and the conclusions you can draw about your program’s
effectiveness. Instead, maintain data quality and reduce the scope of your evaluation (e.g., focus on one
component rather than an entire program).
Depending on budgeting and planning processes in your organization, you may be asked to roughly
estimate evaluation costs before evaluation planning starts and develop a more detailed budget later.
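When a rough early estimate is all that is needed, a simple line-item tally may be enough. The categories and amounts below are hypothetical placeholders; the cost drivers and cost savers in table 4.3 will determine the real values for your evaluation:

```python
# Hypothetical back-of-the-envelope evaluation budget (all amounts are placeholders).
line_items = {
    "evaluator time (design, analysis, reporting)": 40_000,
    "data collection (instruments, incentives, follow-up)": 15_000,
    "community partner and participant time": 8_000,
    "travel, materials, and other direct costs": 3_000,
}

total = sum(line_items.values())
for item, cost in line_items.items():
    print(f"{item}: ${cost:,}")
print(f"Estimated total: ${total:,}")
```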
When learning about factors such as historical and current systemic sources of racism, do not assume that communities have identical experiences. Collect information from many sources offering a variety of
perspectives. Potential, current, or past participants all have valuable perspectives about why they would or
would not participate in the program and what they would expect from program participation.
When thinking through what to learn, focus on factors that could influence the program based on the
emerging or final logic model design. New information can help shape overarching objectives and ways
to measure specific implementation and outcome objectives. For example, if an implementation objective
relates to the number of program participants, understanding barriers to participation is important.
When thinking through how to apply what you learned, consider how development of evaluation questions
can reflect a focus on equity based on community members’ experiences of underlying systems of inequity
(e.g., examine how institutional practices or policies affect individuals differently based on race, gender,
income). In addition to shaping the logic model and development of objectives, your understanding will
likely influence the data you seek (e.g., anticipated and actual program access barriers, determination of
whether the program is culturally appropriate and meeting the expectations of participants, and participant
outcomes and feedback).
Following are general considerations when incorporating a culturally responsive and equitable evaluation (CREE) approach to program evaluations:
Allow time in your evaluation development process to learn about factors that could influence the
program’s implementation or outcomes. Time is needed to develop rapport with community members
and include many perspectives.
Include budget needs for evaluation team time and effort, including community members and other local
partners and any other necessary resources for the planning process.
Form an inclusive evaluation team as early as possible to gather more diverse perspectives on planning
aspects, such as the logic model and program objectives.
Develop a common understanding of how decisions will be made to ensure all members of the
evaluation team, including community members and study participants, can contribute in meaningful
and authentic ways.
To learn more …
} A Guide to Assessing Needs (Watkins et al., 2012)
} Budget Preparation Guidelines: Procurement and Grants Office (CDC, n.d.-c)
} Checklist for Developing and Evaluating Evaluation Budgets (Horn, 2001)
} Evaluability Assessment: Examining the Readiness of a Program for Evaluation (JRSA, 2003)
} Evaluation Questions Checklist (Wingate & Schroeter, 2016)
} Logic Model Tip Sheet (FYSB, n.d.)
} Needs Assessment Guide (WHO, n.d.)
} Refining Your Question (DeCarlo, 2018)
} Logic Model Development Guide (W.K. Kellogg Foundation, 2004)
} Tools and Methods for Evaluating the Efficiency of Development Interventions (Palenberg, 2011)
References
Blase, K., & Fixsen, D. (2013). Core intervention components: Identifying and operationalizing what makes
programs work. [Link]
operationalizing-what-makes-programs-work
Blocklin, M., Hyra, A., Kean, E., & Porowski, A. (2019). Community collaborations evaluation plan template and
quality indicators. Abt Associates. [Link]
CDC (Centers for Disease Control and Prevention). (n.d.-a). Program evaluation tip sheet: Economic evaluation. Evaluation and Program Effectiveness Team, Division for Heart Disease and Stroke Prevention. [Link]
evaluation_tip_sheet_economic_evaluation.pdf
CDC. (n.d.-b). Types of evaluations. [Link]
program/pupestd/types%20of%[Link]
CDC. (n.d.-c). Budget preparation guidelines: Procurement and Grants Office (PGO). [Link]
hiv/pdf/funding/announcements/ps17-1704/[Link]
DeCarlo, M. (2018). Refining your question. Scientific Inquiry in Social Work. [Link]/chapter/3-3-refining-your-question/
Epstein, D., & Klerman, J. A. (2012). When is a program ready for rigorous impact evaluation? The role of a
falsifiable logic model. Evaluation Review, 36(5), 375–401. [Link]
FYSB (Family and Youth Services Bureau). (n.d.). Logic model tip sheet. U.S. Department of Health and
Human Services, Administration for Children and Families. [Link]
files/documents/prep-logic-model-ts_0.pdf
GAO (Government Accountability Office). (2021). Program evaluation: Key terms and concepts. [Link]
[Link]/assets/[Link]
Horn, J. (2001). Checklist for developing and evaluating evaluation budgets. Western Michigan University,
Evaluation Checklist Project. [Link]
[Link]
JRSA (Justice Research and Statistics Association). (2003). Evaluability assessment: Examining the readiness
of a program for evaluation. Office of Juvenile Justice and Delinquency Prevention. [Link]
pubs/juv-justice/[Link]
OPRE (Office of Planning, Research, and Evaluation). (2010). The program manager’s guide to evaluation.
Second Edition. U.S. Department of Health and Human Services. [Link]
report/program-managers-guide-evaluation-second-edition
Palenberg, M. (2011). Tools and methods for evaluating the efficiency of development interventions. Global
Public Policy Institute. [Link]
efficiency-of-development-interventions
Tribal Evaluation Institute. (2016). Using PICO to build an evaluation question. [Link]
evaluation/using-pico/
Watkins, R., Meiers, M., & Visser, Y. (2012). A guide to assessing needs: Essential tools for collecting
information, making decisions, and achieving development results. World Bank. [Link]
[Link]/bitstream/handle/10986/2231/663920PUB0EPI00essing09780821388686.
pdf?sequence=1&isAllowed=y
Wingate, L. & Schroeter, D. (2016). Evaluation questions checklist for program evaluation. Western Michigan
University, Evaluation Checklist Project. [Link]
u350/2018/eval-questions-wingate%[Link]