
Chapter 4. Prepare for the Evaluation

What’s Inside?
What this chapter contains
• An introduction to the importance of planning for an evaluation
• A discussion about deciding what program, component, service, or activity to evaluate
• A description of the basic questions an evaluation can answer
• A guide for developing a logic model that will provide a structural framework for your evaluation
• A plan for stating program objectives in measurable terms
• A discussion of the common cost drivers and cost savers in an evaluation
• Examples of ways to apply culturally responsive and equitable principles when preparing for an evaluation

Who can use this chapter


• Program managers preparing to conduct a program evaluation

Click the links below to view the relevant section:

1. Introduction
2. Decide What to Evaluate
3. Develop Evaluation Questions
4. Build and Use a Logic Model
5. Develop Measurable Objectives
6. Prepare an Evaluation Budget
7. Practice Culturally Responsive and Equitable Evaluation When Preparing for an Evaluation

Introduction
Once you have assembled your evaluation team, the next step is to look closely at the purpose of your
evaluation to determine what evaluation questions can be asked and answered and how to get the best
return on your evaluation investment. A shared understanding of the purpose, use, and users of the
evaluation findings should drive the development of evaluation questions. This understanding should in
turn drive the evaluation design, data collection, analysis, and reporting. Beyond facilitating good evaluation
practice, the planning phase can—

• Foster transparency for the evaluation.
• Increase program staff buy-in for evaluation activities.
• Connect and align various evaluation activities (especially for programs employing different contractors or contracts).
• Improve transitions during staff turnover.
• Establish whether sufficient program resources and time are available to accomplish the intended evaluation activities.

The important decisions of what to evaluate and how should involve the outside evaluator or consultant
(if you decide to hire one), all program staff who are part of the evaluation team, and anyone else in the
agency who will be engaged. As noted in chapter 3, evaluation teams should engage potential users of the
evaluation and community members early and often. Their engagement during the initial decision-making
processes will improve the ultimate usefulness of the evaluation and help balance the power between
evaluators and evaluation participants. Ideally, the planning process should begin before implementing the
program, component, or service you wish to evaluate. When that is not possible (i.e., the program is already
operational), take time to understand and articulate program goals and strategies.

This chapter offers guidance on preparing for the evaluation, including defining its size and scope,
identifying the evaluation questions, building a logic model to provide a structural framework, stating
program objectives in measurable terms, and budgeting for the evaluation. It concludes with strategies to
support conducting a culturally responsive and equitable evaluation.

Decide What to Evaluate


Some programs have many components, while others have only one or two. You can evaluate your entire
program, one or two program components, or even one or two services or activities within a component
(see figure 4.1). Consider, for example, a Head Start grantee providing seasonal Head Start to migrant
farmworker families. A successful evaluation will distinguish whether it is evaluating the early learning
and child development services, health and nutrition services, family well-being services, or all three
components.

Figure 4.1. Potential Evaluation Target Options

Sources: (a) W.K. Kellogg Foundation, 2004, p. 2; (b) Blase & Fixsen, 2013, p. 3

To a large extent, your decision about what to evaluate will depend on program staff and leadership, the
funder, and potentially the local community’s priorities. The decision will also be subject to available financial
resources, staff and contractor availability, and the amount of time committed to the evaluation.

Several options are available to work within limited evaluation resources. For example, you might simplify the design or narrow the scope of your evaluation. It is better to conduct an effective evaluation of a single program component than to attempt to evaluate several components or an entire program without sufficient resources. Sometimes, the decision about what to evaluate is made for you, as when funders require specific evaluation elements as a condition of a grant award. At other times, you or your agency administrators will decide what to evaluate.

Sidebar: Program as shorthand for an evaluand
You can evaluate almost anything. In addition to the examples of a program, program component, service, or activity, you can study policies, laws, websites, a training, etc. In the interest of readability, the Guide uses the term "program" as a placeholder for any evaluand (a generic term for the object or thing that is the subject of an evaluation).

If your program is already operational, you may decide to evaluate a particular service or component
because you are unsure about its effectiveness for some participants. The introduction of a new service
or component may be another reason to focus your evaluation on that specific service or component.
Alternatively, you may choose to evaluate your entire program because you believe it is effective and you
want evidence of effectiveness to help you obtain additional funding to continue or expand it. Defining
what you will evaluate helps you determine at the outset whether your new efforts are being implemented
successfully and are effective at attaining expected participant outcomes.

Develop Evaluation Questions


Once you have decided what programs, components, services, or activities to evaluate, you should decide
which questions you want the evaluation to answer. These questions will play a central role in guiding the
evaluation, so plan them carefully. Strong evaluation questions should be clear, relevant, and rigorous. They
must stem from a program’s objectives.

As described in chapter 1, the two types of objectives are program implementation objectives and
participant outcome objectives. While implementation evaluations help you determine whether program
activities have been implemented as intended, outcome evaluations measure program effects (CDC
[Centers for Disease Control and Prevention], n.d.-b). Sometimes, evaluating program implementation
objectives is referred to as a process evaluation (OPRE [Office of Planning, Research, and Evaluation], 2010).
However, because many types of process evaluations are possible, this guide uses the term implementation
evaluation.

Implementation and outcome evaluations can be used to determine whether you have been successful in
attaining both types of objectives by answering the following questions:

• Has the program been successful in attaining the anticipated implementation objectives? For example, are you implementing the services or training you initially planned to implement? Are you reaching the intended target population? Are you reaching the intended number of participants? Are you developing the planned collaborative relationships?
• Has the program been successful in attaining the anticipated participant outcome objectives? For example, are participants exhibiting the expected changes in knowledge, attitudes, behaviors, or awareness? Can these changes be attributed to the program?

A comprehensive evaluation must answer both questions. You may be successful in attaining your
implementation objectives, but if you do not have information about participant outcomes, you will not
know whether your program is having the intended outcome or effect. Similarly, you may be successful in
changing participants' knowledge, attitudes, or behaviors, but you will need information on implementation
to guide program adoption, replication, and scale-up.

One common framework for formulating concise but rigorous outcome evaluation questions is known as Population, Intervention, Comparison, and Outcome (PICO). This framework encourages evaluators to consider the target population that will participate in the intervention and evaluation, the intervention to be evaluated, the comparison that will be used to assess whether the intervention makes a difference, and the outcomes you expect the intervention to achieve (Tribal Evaluation Institute, 2016). Strong evaluation questions should specify all four of these elements. An example of an evaluation question that specifies the four elements of PICO might be, "Do student parents (P) of children who attend Head Start (I) miss fewer classes (O) than student parents whose children do not attend Head Start (C)?"

Sidebar: PICO framework
The PICO framework is a widely used strategy for breaking down evaluation questions into four elements that facilitate the identification of relevant information: population, intervention, comparison, outcome. To learn more about how PICO can clarify evaluation questions, see the Tribal Evaluation Institute (2016) or the evaluation plan template in Blocklin et al. (2019).
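To make the four elements easier to keep distinct when drafting questions, the following minimal Python sketch records them as named fields. The class name, field names, and example values are illustrative assumptions, not part of the Guide; the content mirrors the Head Start example above.

from dataclasses import dataclass

@dataclass
class PicoQuestion:
    """The four PICO elements of an outcome evaluation question."""
    population: str    # P: who will participate in the intervention and evaluation
    intervention: str  # I: the program, component, or service being evaluated
    comparison: str    # C: the condition used to judge whether the intervention makes a difference
    outcome: str       # O: the change the intervention is expected to produce

# The Guide's Head Start example, broken into its four elements.
q = PicoQuestion(
    population="student parents of children who attend Head Start",
    intervention="children's Head Start attendance",
    comparison="student parents whose children do not attend Head Start",
    outcome="fewer missed classes",
)

# Check that every element is filled in before finalizing the question.
missing = [name for name, value in vars(q).items() if not value.strip()]
print("All four PICO elements specified." if not missing else f"Missing: {missing}")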

Although this section focuses on implementation and outcome evaluations, other categories of questions
may be relevant to your program: questions regarding the need for services (needs assessment) and
questions regarding the program’s economic benefits (economic evaluation). These topics are beyond the
scope of this Guide, but a basic understanding of them may be helpful.

A needs assessment is a study of the problem a program intends to address and the need for the program,
such as determining the number of children who are chronically absent from school and the likely reasons
why they miss school (GAO [Government Accountability Office], 2021). An economic evaluation[1] is a study that measures program costs and compares them with either a monetary value of the program's benefit (cost-benefit analysis) or a measure of the program's effectiveness in achieving its outcome objectives (cost-effectiveness analysis). For more information on these types of assessments, see the resources in the
“To learn more” section at the end of the chapter.

Build and Use a Logic Model


Whether you decide to evaluate an entire program, a single component, or a single service, you will need
to build a logic model. A logic model[2] is typically represented as a flow chart that tracks how inputs drive
activities to produce outputs, outcomes, and ultimate impact (OPRE, 2010). A variety of formats can be
used to create a logic model; the key is to develop a clear understanding of the program and its context for
operation. A logic model may also be referred to as a program model, program theory, and theory of change.

[1] Economic evaluation is an effort to use analytic methods to identify, measure, value, or compare the costs and consequences of one or more alternative programs or interventions (CDC, n.d.-a).
[2] A logic model is a picture of how your organization does its work—the theory and assumptions underlying the program. A program logic model links outcomes (both short and long term) with program activities and processes and the theoretical assumptions and principles of the program (W.K. Kellogg Foundation, 2004).

In general, all logic models represent a series of logically related assumptions about the program's
participant population and the changes you hope to bring about in that population as a result of your
program. Evaluators and program staff should work together to jointly build the logic model to ensure it
reflects how the program will work and how it will influence the target population. Figure 4.2 presents the
basic elements of a logic model.

Figure 4.2. Basic Elements of a Logic Model

Source: Adapted from W.K. Kellogg Foundation Logic Model Development Guide (2004)
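To make these elements concrete, the short Python sketch below records a logic model as a simple structure an evaluation team could review and revise together. This is one hypothetical format, not a prescribed one; the program content is drawn loosely from this chapter's drug abuse education example.

# A minimal sketch: one way to record the basic elements of a logic model
# (inputs -> activities -> outputs -> outcomes -> impact). Content is illustrative.
logic_model = {
    "inputs":     ["certified addictions counselors", "classroom space", "grant funding"],
    "activities": ["deliver 2-hour drug abuse education classes, 5 days a week"],
    "outputs":    ["eight sessions per year", "six youth reached per session"],
    "outcomes":   ["10 percent decrease in alcohol use among completing youth"],
    "impact":     ["reduced substance abuse among runaway and homeless youth"],
}

for element, entries in logic_model.items():
    print(element.upper())
    for entry in entries:
        print(f"  - {entry}")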

Logic models can inform program improvement and program evaluation. Regarding program improvement,
logic models can help advance strategic planning and program management by identifying the target
population (those the program is designed to serve), clarifying the program goals and any conceptual gaps,
tracking progress and changing needs, and describing the program to internal and external audiences.

Regarding program evaluation, logic models can provide a structural framework for your evaluation by informing the development of a data collection plan and helping your evaluation team understand why desired outcomes are or are not attained. For example, tracking program outputs can help evaluators determine whether ineffectiveness is the result of (1) insufficient resources or inputs or other implementation challenges or (2) other issues (i.e., the intervention is implemented with fidelity but did not have the intended effects).

Logic models are not difficult to construct, and they lay the foundation for your evaluation by clearly identifying your program implementation and participant outcome objectives. These objectives can then be stated in measurable terms for evaluation purposes. See "To learn more" and the appendices for resources and templates for building a logic model.

Sidebar: Falsifiable logic model
A logic model is a helpful tool for thinking through causal pathways by linking outcomes with program inputs and activities. Taking this idea one step further, falsifiable logic models expand the role of the logic model by including detailed, falsifiable goals for components of a conventional logic model. Falsifiable logic models can help evaluation teams determine whether a program is satisfying its own stated goals. To learn more about how falsifiable logic models can help a program strengthen its implementation and increase the likelihood of success in a rigorous impact evaluation, see Epstein and Klerman (2012).

Develop Measurable Objectives


The logic model serves as a foundation for identifying your program’s implementation and participant
outcome objectives. Initially, focus your evaluation on assessing whether implementation objectives and
immediate participant outcome objectives were attained. This will help you assess whether it is worthwhile
to commit additional resources to evaluating attainment of intermediate and long-term outcome objectives.

Program managers often believe that stating objectives in measurable terms means establishing
performance standards or some arbitrary “measure” the program must attain. This is not true. Stating
objectives in measurable terms simply means you describe what you plan to do in your program and how
you expect the participants to change in a way you can measure. From this perspective, measurement can
involve anything from counting the number of services (or determining the duration of services) to using a
standardized test that will result in a quantifiable score. Some examples of stating objectives in measurable
terms appear below.

Stating implementation objectives in measurable terms. Examples of implementation objectives follow:

• How will you know the planned activities occurred? For example, the number, duration, and frequency of services or activities implemented
• Who will do it? What the staffing arrangements will be; the characteristics and qualifications of the program staff who will deliver the services, conduct the training, or develop the products; and how these individuals will be recruited and hired
• What population do you plan to reach? How many individuals? A description of the participant population for the program; the number of participants to be reached during a specific timeframe; and how you plan to recruit or reach the participants

To state these objectives in measurable terms, be specific about your program’s operations. The example
in table 4.1 demonstrates how general implementation objectives can be transformed into measurable
objectives. A blank worksheet for stating your implementation objectives in measurable terms is provided in
appendix B.

Table 4.1. Example of Implementation Objectives Stated in Measurable Terms

How will you know the planned activities occurred?
General objective: Provide drug abuse education services.
Measurable objective: Provide 2-hour drug abuse education classes 5 days a week, eight sessions per year.

Who will do it?
General objective: Program staff will be experienced, certified addictions counselors.
Measurable objective: One hundred percent of program staff will have an addictions counseling certification; program staff will have a minimum of 2 years' experience.

What population do you plan to reach? How many individuals?
General objective: Recruit and serve runaway and homeless youth.
Measurable objective: Participants will include youth aged 8–14 residing in a shelter during the time of classes. Reach six participants per session; participants will be recruited to the classes by intake counselors and the clinical director.
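As a rough illustration of how a measurable implementation objective supports the ongoing monitoring described next, the hypothetical Python sketch below compares session attendance with the six-participants-per-session target from table 4.1. The attendance figures are invented.

# A minimal sketch, assuming the evaluation team logs attendance per session
# and checks it against the measurable objective in table 4.1.
TARGET_PER_SESSION = 6

session_attendance = {"session 1": 2, "session 2": 5, "session 3": 6}  # invented counts

for session, attended in session_attendance.items():
    status = "met" if attended >= TARGET_PER_SESSION else "below target"
    print(f"{session}: {attended} of {TARGET_PER_SESSION} participants ({status})")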

From your description of the specific characteristics for each objective, the evaluation will be able to assess
in an ongoing way whether the objectives were attained, the types of problems encountered during program
implementation, and the areas where changes may be needed. Using the example above, you may discover
the first class session included only two youth from the crisis intervention services. Based on the findings
from the evaluation, you might examine your data to gain more insights into the recruitment process:

• How many youth resided in the shelter during that timeframe?
• How many youth agreed to participate?
• What barriers to participation did youth encounter (such as youth or parent reluctance to give permission, lack of transportation, or lack of interest among youth)?

Based on your answers, you may decide to revise your recruitment strategies, train crisis intervention
counselors to be more effective in recruiting youth, visit a youth’s family to encourage the youth’s
participation, or offer transportation to youth to make it easier for them to attend the classes.

Stating participant outcome objectives in measurable terms. Be specific about the changes in
knowledge, attitudes, awareness, or behavior you expect to occur as a result of participation in your
program. One way to be specific about these changes is to ask yourself the following questions:

• What change is expected to occur?
• How much change is expected to occur?
• For whom will the expected change occur?
• How will you know the expected change occurred?

To answer these questions, identify the evidence needed to demonstrate your participants have changed.
The example in table 4.2 demonstrates how participant outcome objectives may be stated in measurable
terms. A worksheet for defining measurable participant outcome objectives appears in appendix B.

Table 4.2. Example of Outcome Objectives Stated in Measurable Terms

How will you know expected change occurred?
General objective: Expect to reduce the use of alcohol by youth.
Measurable objective: Youth who complete the program will demonstrate a 10 percent decrease in alcohol use compared with preprogram, as measured by the Alcohol Timeline Followback instrument.
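As a rough illustration of assessing such an objective, the hypothetical Python sketch below computes the percent change between preprogram and postprogram scores and checks it against the 10 percent threshold. The scores are invented, and the calculation simplifies how an instrument like the Alcohol Timeline Followback would actually be scored.

def percent_change(pre: float, post: float) -> float:
    """Percent change from a preprogram measure to a postprogram measure."""
    return (post - pre) / pre * 100

# Invented pre/post scores (e.g., drinking days reported) for program completers.
pre_scores = [12, 8, 15, 10, 9, 11]
post_scores = [10, 7, 12, 9, 9, 10]

avg_pre = sum(pre_scores) / len(pre_scores)
avg_post = sum(post_scores) / len(post_scores)
change = percent_change(avg_pre, avg_post)

# The measurable objective in table 4.2 specifies a 10 percent decrease.
objective_met = change <= -10
print(f"Average change: {change:.1f}% (objective met: {objective_met})")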

Prepare an Evaluation Budget


Program managers are often concerned about the cost of an evaluation. This is a valid concern. Evaluations
do require time, money, and expertise. Many program managers and staff believe it is unethical to use
program or agency financial resources for an evaluation because available funds should be spent on serving
participants. However, evaluation is essential if you want to know whether your program is benefiting
participants. It is more accurate to view money spent on evaluation as an investment in your program and in
your participants rather than as a diversion of funds away from helping participants.

Unfortunately, there is no fixed formula for calculating evaluation costs. The amount of money needed depends on
many factors:

• Aspects of your program you decide to evaluate
• Number of people who will contribute to the evaluation (e.g., how many evaluators; how many community members and their level of engagement)
• Size of the program (i.e., the number of staff members, participants, components, and services)
• Number of outcomes you want to assess
• Who is conducting the evaluation
• Your agency's available evaluation-related resources

Costs also vary according to economic differences in communities and geographic locations. Table 4.3
describes other common factors that influence the costs and resources needed to conduct a program
evaluation, such as the source and condition of data, how the data will be collected, the statistical complexity
of data analyses, and the program staff’s evaluation capacity.

Table 4.3. Common Cost Drivers and Cost Savers in Program Evaluation

Data source
Considerations: What data are already available (secondary data), and what will need to be collected (primary data)? How many sources of data are needed?
Lower cost: Using previously collected data (e.g., administrative data) that are readily available and inexpensive to obtain; collecting data at a single site and/or from a single informant group.
Higher cost: Using previously collected data that will require an agreement (e.g., data use agreement, memorandum of understanding) to obtain; collecting data at many geographically spread sites and/or from multiple sources (particularly from comparison groups).

Data condition
Considerations: Will the data require extensive cleaning or manipulation? Is the file easy to interpret and use (e.g., is a data dictionary provided)?
Lower cost: Available datafile(s) cleaned and ready for use.
Higher cost: Datafiles will require entry, cleaning, and coding.

Data collection methods
Considerations: How will data be collected? Does the evaluation require computer-assisted data collection methods? Will data collection require travel? Will participants be compensated?
Lower cost: Using current measures; using simple data collection techniques (e.g., existing portals, easily programmed survey software).
Higher cost: Developing new measures; using complex data collection techniques (e.g., building a new data collection portal, programming complicated web surveys, using computer-assisted telephone interviewing).

Statistical complexity
Considerations: What amount of time and level of technical expertise are required to conduct data analysis and interpretation?
Lower cost: Data analysis requires descriptive methods to summarize data (e.g., average participant age, proportion of participants who are employed); evaluation intends to establish evidence of a relationship between intervention and outcomes that is suggestive, not causal.
Higher cost: Data analysis requires advanced inferential statistical methods to establish evidence of a causal relationship.

Evaluation capacity
Considerations: Can the evaluation be conducted by staff from the program or organization being evaluated (internal evaluators), or does it require outside support (external evaluators)? Do any supplies or equipment need to be purchased or rented for the evaluation?
Lower cost: Program staff have sufficient knowledge or experience to design and implement the evaluation (e.g., relevant training, data-driven culture, experience engaging community representatives); program staff can build in time needed to conduct the evaluation and/or have access to evaluation resources.
Higher cost: Need to hire staff with sufficient knowledge, experience, time, and resources to design and implement the evaluation; independent external evaluator needed to enhance credibility of findings.
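To show what the lower-cost end of "statistical complexity" can look like in practice, the hypothetical Python sketch below produces the descriptive summaries named in table 4.3 (average participant age, proportion of participants who are employed). The participant records are invented.

# A minimal sketch, assuming a small cleaned dataset is already available
# (a "lower cost" data source and data condition per table 4.3).
participants = [
    {"age": 24, "employed": True},
    {"age": 31, "employed": False},
    {"age": 27, "employed": True},
    {"age": 19, "employed": True},
]

# Descriptive methods to summarize the data, as named in table 4.3.
average_age = sum(p["age"] for p in participants) / len(participants)
proportion_employed = sum(p["employed"] for p in participants) / len(participants)

print(f"Average participant age: {average_age:.1f}")
print(f"Proportion employed: {proportion_employed:.0%}")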

In general, as you increase the budget for your evaluation, you gain a corresponding increase in knowledge
about your success in attaining your program objectives. In many situations, the lowest cost evaluations
may not be worth the expense, and realistically, the highest cost evaluations may be beyond the scope of
most agencies’ financial resources. When possible, consider dropping evaluation components rather than
reducing the quality of the evidence collected. For example, lowering your study recruitment budget may
reduce your survey response rates because your team does not have time to follow up with nonrespondents.
This would diminish the quality of your data and the conclusions you can draw about your program’s
effectiveness. Instead, maintain data quality and reduce the scope of your evaluation (e.g., focus on one
component rather that an entire program).

Depending on budgeting and planning processes in your organization, you may be asked to roughly
estimate evaluation costs before evaluation planning starts and develop a more detailed budget later.

Practice Culturally Responsive and Equitable Evaluation When Preparing for an Evaluation
Evaluation teams often fail to include community members as co-creators or to consider cultural assumptions and norms, the community's history and context, and structural inequities. Use a culturally responsive and equitable evaluation (CREE) approach to gain a better understanding of your program's setting. While it is important to engage community members, especially those eligible to receive the program's services, they are not responsible for educating evaluators. Evaluators must do the work to understand the factors that can influence an evaluation.

Ideally, systems to collaborate with local organizations and community members will be in place before the planning process begins. Engaging community members in the logic model development can help you identify perspectives previously not explored. This approach can also help program staff understand how community members' expectations may differ from their own. If the evaluation design is already underway (i.e., the logic model and/or objectives are set), it is still worthwhile to include community members and other collaborators to the extent possible.

Sidebar: Sources for understanding factors that could influence a program and its evaluation
• Written materials, such as literature or evaluation reports of similar programs in comparable communities, local news stories, or even blog posts by local influencers
• Local public officials or records
• Business and nonprofit leaders
• Neighborhood associations
• Program partner organizations
• Community members
• Current and past participants
• Other evaluators working in the community

When learning about factors such as historical and current systemic sources of racism, remember that communities cannot be treated as having identical experiences. Collect information from many sources offering a variety of perspectives. Potential, current, or past participants all have valuable perspectives about why they would or would not participate in the program and what they would expect from program participation.

When thinking through what to learn, focus on factors that could influence the program based on the
emerging or final logic model design. New information can help shape overarching objectives and ways
to measure specific implementation and outcome objectives. For example, if an implementation objective
relates to the number of program participants, understanding barriers to participation is important.

When thinking through how to apply what you learned, consider how development of evaluation questions
can reflect a focus on equity based on community members’ experiences of underlying systems of inequity
(e.g., examine how institutional practices or policies affect individuals differently based on race, gender,
income). In addition to shaping the logic model and development of objectives, your understanding will
likely influence the data you seek (e.g., anticipated and actual program access barriers, determination of
whether the program is culturally appropriate and meeting the expectations of participants, and participant
outcomes and feedback).

Following are general considerations when incorporating a CREE approach into program evaluations:

• Allow time in your evaluation development process to learn about factors that could influence the program's implementation or outcomes. Time is needed to develop rapport with community members and include many perspectives.
• Include budget needs for evaluation team time and effort, including community members and other local partners, and any other necessary resources for the planning process.
• Form an inclusive evaluation team as early as possible to gather more diverse perspectives on planning aspects, such as the logic model and program objectives.
• Develop a common understanding of how decisions will be made to ensure all members of the evaluation team, including community members and study participants, can contribute in meaningful and authentic ways.

To learn more …
• A Guide to Assessing Needs (Watkins et al., 2012)
• Budget Preparation Guidelines: Procurement and Grants Office (CDC, n.d.-c)
• Checklist for Developing and Evaluating Evaluation Budgets (Horn, 2001)
• Evaluability Assessment: Examining the Readiness of a Program for Evaluation (JRSA, 2003)
• Evaluation Questions Checklist (Wingate & Schroeter, 2016)
• Logic Model Development Guide (W.K. Kellogg Foundation, 2004)
• Logic Model Tip Sheet (FYSB, n.d.)
• Needs Assessment Guide (WHO, n.d.)
• Refining Your Question (DeCarlo, 2018)
• Tools and Methods for Evaluating the Efficiency of Development Interventions (Palenberg, 2011)

References

Blase, K., & Fixsen, D. (2013). Core intervention components: Identifying and operationalizing what makes programs work. [Link]

Blocklin, M., Hyra, A., Kean, E., & Porowski, A. (2019). Community collaborations evaluation plan template and quality indicators. Abt Associates. [Link]

CDC (Centers for Disease Control and Prevention). (n.d.-a). Program evaluation tip sheet: Economic evaluation. Evaluation and Program Effectiveness Team, Division for Heart Disease and Stroke Prevention. [Link]

CDC. (n.d.-b). Types of evaluations. [Link]

CDC. (n.d.-c). Budget preparation guidelines: Procurement and Grants Office (PGO). [Link]

DeCarlo, M. (2018). Refining your question. Scientific Inquiry in Social Work. [Link]

Epstein, D., & Klerman, J. A. (2012). When is a program ready for rigorous impact evaluation? The role of a falsifiable logic model. Evaluation Review, 36(5), 375–401. [Link]

FYSB (Family and Youth Services Bureau). (n.d.). Logic model tip sheet. U.S. Department of Health and Human Services, Administration for Children and Families. [Link]

GAO (Government Accountability Office). (2021). Program evaluation: Key terms and concepts. [Link]

Horn, J. (2001). Checklist for developing and evaluating evaluation budgets. Western Michigan University, Evaluation Checklist Project. [Link]

JRSA (Justice Research and Statistics Association). (2003). Evaluability assessment: Examining the readiness of a program for evaluation. Office of Juvenile Justice and Delinquency Prevention. [Link]

OPRE (Office of Planning, Research, and Evaluation). (2010). The program manager's guide to evaluation (2nd ed.). U.S. Department of Health and Human Services. [Link]

Palenberg, M. (2011). Tools and methods for evaluating the efficiency of development interventions. Global Public Policy Institute. [Link]

Tribal Evaluation Institute. (2016). Using PICO to build an evaluation question. [Link]

Watkins, R., Meiers, M., & Visser, Y. (2012). A guide to assessing needs: Essential tools for collecting information, making decisions, and achieving development results. World Bank. [Link]

WHO (World Health Organization). (n.d.). Needs assessment. [Link]

Wingate, L., & Schroeter, D. (2016). Evaluation questions checklist for program evaluation. Western Michigan University, Evaluation Checklist Project. [Link]

W.K. Kellogg Foundation. (2004). Logic model development guide. [Link]
