Evaluating a Program
An evaluation is a systematic assessment. Evaluations should follow a systematic, mutually agreed-upon plan. Plans will typically include the following:
● Determining the goal of the evaluation: what is the evaluation question, and what is the evaluation meant to find out?
● Determining how the evaluation will answer the question: what methods will be used?
● Making the results useful: how will the results be reported so that the organization can use them to make improvements?
The first part of the evaluation is to determine the question. Recall that an evaluation is an “assessment of the operation and/or outcomes of a program or policy.” Evaluations can generally answer two types of questions:
1. What is the outcome of the program? Did the program have any impact? Was there any improvement in people's lives?
2. How did the program get to that outcome? Did the program have some set of procedures? Were these procedures followed? Were they reasonable? Was there a better way to reach the outcomes?
One way to do this is for the evaluator and program staff to develop a clear description of:
● what the outcomes should be,
● how the program will get there, and
● why the program leads to the outcome.
This description helps to identify how the program should lead to the outcomes, why the program activities should lead to those outcomes, and where to evaluate the program to check whether it does.
This description is called a program theory. “A program theory explains how and why a
program is supposed to work. ... It provides a logical and reasonable description of why the
things you do – your program activities – should lead to the intended results or benefits.”
A useful tool for working with the program theory is a logic model, which shows the program theory visually: how all the program goals, activities, and expected outcomes link together. Use the program theory or logic model to come up with evaluation questions such as the following (a small sketch of a logic model as a data structure appears after this list):
● Does the program have a positive outcome?
● Are people satisfied?
● How could the program be improved?
● How well is the program working?
● Is the program working the way it was intended to work?
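To make the idea concrete, here is a minimal sketch of a logic model written as a plain data structure in Python. The program content (workshops, field trips, participant counts) is hypothetical; only the stage names follow the conventional logic-model categories.

    # A minimal sketch of a logic model as a plain data structure.
    # The program content below is hypothetical, for illustration only.
    logic_model = {
        "inputs":     ["funding", "two trained educators", "classroom space"],
        "activities": ["weekly workshops", "field trips"],
        "outputs":    ["40 participants complete 10 workshops"],
        "outcomes":   ["participants' knowledge improves",
                       "participants adopt new practices"],
    }

    # Reading the stages in order states the program theory: these inputs
    # enable these activities, which produce these outputs, which should
    # lead to these outcomes.
    for stage, items in logic_model.items():
        print(f"{stage:>10}: {', '.join(items)}")

Writing the model down this way makes the assumed links explicit, which is where evaluation questions come from: each link in the chain is something the evaluation can check.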
There are many methods, each with its own uses, advantages, and difficulties. Methods include:
● Surveys
● Analysis of administrative data
● Key informant interviews
● Observation
● Focus groups
Evaluations could use any, not necessarily all, of these methods, depending on the question and goal of the evaluation.
Surveys are a set of questions that are asked of everyone in the same way. Surveys can answer questions about how many and how often.
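As a small illustration of the “how many and how often” point, the sketch below tallies answers to a single survey question in Python. The question wording and the responses are hypothetical.

    # A minimal sketch: tallying responses to one survey question.
    # The question and responses below are hypothetical.
    from collections import Counter

    responses = ["weekly", "monthly", "weekly", "never",
                 "daily", "weekly", "monthly", "weekly"]

    counts = Counter(responses)
    total = len(responses)

    print("How often do you attend the program?")
    for answer, n in counts.most_common():
        print(f"  {answer:<8} {n:>2}  ({n / total:.0%})")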
Focus groups, interviews, and observation are qualitative research methods, that is, methods that rely less on statistical analysis.
Advantages:
● Useful to help figure out major program problems that cannot be explained by more
formal methods of analysis.
● The evaluator may see things that participants and staff may not see.
● The evaluator can learn about things that participants or staff may be unwilling to reveal through more formal methods.
● Useful when it's not clear what the program problems might be.
● Useful to give good ideas of what topics program participants and staff think are
important.
● Useful in developing surveys, in determining what questions or issues are important to
include.
● Useful when a main purpose is to generate recommendations.
● Useful when quantitative data collected through other methods need to be interpreted.
Disadvantages:
● The evaluator's subjective views can introduce error.
● The focus of the evaluator is only on what is observed at one time in one place.
● Information from observations, interviews, and focus groups can be time-consuming to collect and difficult to interpret.
● Focus groups could be dominated by one individual and their point of view.
● Generally, information from focus groups, interviews, and observations CANNOT be
used to describe the client population.
The ultimate goal of a program is to improve people's lives. How do you know whether it
did? One commonly used way to find out is to ask whether the program caused the outcome. If it did, one can argue that the program improved people's lives; if it did not, one can argue that it did not. How can you figure this out? Determining whether a program
caused the outcome is one of the most difficult problems in evaluation, and not everyone agrees
on how to do it. Some say that randomized experiments are the best way to establish causality.
Others advocate in-depth case studies as best. The approach you take depends on how the
evaluation will be used, who it is for, what the evaluation users will accept as credible evidence
of causality, what resources are available for the evaluation, and how important it is to establish
causality with considerable confidence.
There are three approaches frequently used to establish whether a program causes an outcome:
● comparison groups – comparing people in the program to people not in the program (a minimal numeric sketch follows this list)
● multiple evaluation methods – comparing results from several evaluations, each using different methods
● in-depth case studies of programs and outcomes – showing that the links between what program participants experience and the outcomes attained are reasonable, empirically validated, and based on multiple sources of data
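As an illustration of the comparison-group approach, here is a minimal sketch in Python. The outcome scores are hypothetical, and a real evaluation would also have to argue that the two groups are comparable (for example, through random assignment) before reading the difference as a program effect.

    # A minimal sketch: comparing outcome scores for a program group
    # against a comparison group. All scores below are hypothetical.
    from math import sqrt
    from statistics import mean, stdev

    program_group = [72, 85, 78, 90, 66, 81, 88, 75]
    comparison_group = [65, 70, 62, 74, 68, 71, 60, 73]

    diff = mean(program_group) - mean(comparison_group)

    # Welch's t-statistic: the difference in means scaled by its standard
    # error, a common way to judge whether the difference is just noise.
    se = sqrt(stdev(program_group) ** 2 / len(program_group)
              + stdev(comparison_group) ** 2 / len(comparison_group))

    print(f"Program mean:    {mean(program_group):.1f}")
    print(f"Comparison mean: {mean(comparison_group):.1f}")
    print(f"Difference:      {diff:.1f}  (t = {diff / se:.2f})")

A large difference relative to its standard error suggests the groups really do differ, though by itself this does not establish that the program caused the difference.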
It is important to periodically assess and adapt your activities to ensure they are as
effective as they can be. Evaluation can help you identify areas for improvement and ultimately
help you realize your goals more efficiently. Additionally, when you share your results about
what was more and less effective, you help advance environmental education.
Evaluation: What is it and why do it? | Meera ([Link])