
Understanding Program Evaluation Methods

This document discusses key aspects of program evaluation including: 1) Program evaluation involves systematically investigating the effectiveness of social programs using social research methods to understand problems, design interventions, assess outcomes, and determine efficiency. 2) Developing an evaluation plan requires tailoring it to the specific program by exploring purposes, structures, stakeholders and available resources. 3) Critical considerations for evaluation planning include understanding the purposes of the evaluation, analyzing the program's structure/circumstances, and accounting for available evaluation resources.


Rossi, Lipsey, and Freeman's book

Evaluation: A Systematic Approach

• Programs, policies, and evaluation: What is evaluation research?


Program evaluation is a robust arena of activity directed at collecting, analyzing, interpreting, and communicating information
about the effectiveness of social programs, undertaken for the purpose of improving social conditions. Evaluations are conducted
for a variety of practical reasons: to aid in decisions concerning whether programs should be continued, improved, expanded, or
curtailed; to assess the utility of new programs and initiatives; to increase the effectiveness of program management and
administration; and to satisfy the accountability requirements of program sponsors. Evaluations also may contribute to substantive
and methodological social science knowledge.
Understanding evaluation in contemporary context requires some appreciation of its history, its distinguishing concepts and
purposes, and the inherent tensions and challenges that shape its practice. Program evaluation represents an adaptation of social
research methods to the task of studying social intervention in its natural political and organizational circumstances so that sound
judgments can be drawn about the need for intervention and the design, implementation, impact, and efficiency of programs that
address that need. Individual evaluation studies, and the accumulation of knowledge from many such studies, can make a vital
contribution to informed social action aimed at improving the human condition. The principal purpose of program evaluation,
therefore, is to provide valid findings about the effectiveness of social programs to those persons with responsibilities or interests
related to their creation, continuation, or improvement.

WHAT IS EVALUATION RESEARCH?


Although the broadest definition of evaluation includes all efforts to place value on events, things, processes, or people, we will be
concerned here with the evaluation of social programs. For purposes of orientation, we offer a preliminary definition of social program
evaluation now and will present and discuss a more complete version later in this chapter: Program evaluation is the use of social
research procedures to systematically investigate the effectiveness of social intervention
programs. More specifically, evaluation researchers (evaluators) use social research methods to study, appraise, and help improve social
programs in all their important aspects, including the diagnosis of the social problems they address, their conceptualization and design,
their implementation and administration, their outcomes, and their efficiency.
At various times, policymakers, funding organizations, planners, program managers, taxpayers, or program clientele need to distinguish
worthwhile programs from ineffective ones and launch new programs or revise existing ones so as to achieve certain desirable results. To
do so, they must obtain answers to questions such as the following:

- What are the nature and scope of the problem? Where is it located, whom does it affect, and how does it affect them?
- What is it about the problem or its effects that justifies new, expanded, or modified social programs?
- What feasible interventions are likely to significantly ameliorate the problem?
- What are the appropriate target populations for intervention?
- Is a particular intervention reaching its target population?
- Is the intervention being implemented well? Are the intended services being provided?
- Is the intervention effective in attaining the desired goals or benefits?
- How much does the program cost?
- Is the program cost reasonable in relation to its effectiveness and benefits?

• Tailoring evaluation: evaluation planning, stakeholders/actors involved, evaluation questions, criteria, and methods
Every evaluation must be tailored to its program. The tasks that evaluators undertake depend on the purposes of the evaluation, the
conceptual and organizational structure of the program, and the resources available. Formulating an evaluation plan therefore
requires the evaluator to first explore these aspects of the evaluation situation with the evaluation sponsor and such other
stakeholders as policymakers, program personnel, and program participants.
Based on this reconnaissance and negotiation with the key stakeholders, the evaluator can then develop a plan that identifies the
evaluation questions to be answered, the methods to be used to answer them, and the relationships to be developed with the
stakeholders during the course of the evaluation.
No hard and fast guidelines direct the process of investigating the evaluation situation and designing an evaluation—it is
necessarily a creative and collaborative endeavor. Nonetheless, achieving a good fit between the evaluation plan and the program
circumstances usually involves attention to certain critical themes. It is essential, for instance, that the evaluation plan be
responsive to the purposes of the evaluation as understood by the evaluation sponsor
and other central stakeholders. An evaluation intended to provide feedback to program decision makers so that the program can be
improved will take a different approach than one intended to help funders determine if a program should be terminated. In
addition, the evaluation plan must reflect an understanding of how the program is designed and organized so that the questions
asked and the data collection arranged will be appropriate to the
circumstances. Finally, any evaluation, of course, will have to be designed within the constraints of available time, personnel,
funding, and other such resources.
Although the particulars are diverse, the basic program circumstances for which evaluation is requested typically represent one of
a small number of recognizable variations. Consequently, the evaluation designs that result from the tailoring process tend to be
adaptations of one or more of a set of familiar evaluation approaches or schemes. In practice, therefore, tailoring an evaluation is
often primarily a matter of selecting and adapting these schemes to the specific circumstances of the program to be evaluated. One
set of evaluation approaches is defined around the nature of the evaluator-stakeholder interaction. Evaluators may function
relatively independently or work quite collaboratively with stakeholders in designing and conducting the evaluation. Another
distinct set of evaluation approaches is organized around common combinations of evaluation questions and the usual methods for
answering them. Among these are evaluation schemes for assessing social problems and needs, program theory, program process
or implementation, program impact or outcome, and program efficiency.

Evaluation designs may be quite simple and direct, perhaps addressing only one narrow question such as whether using a computerized
instructional program helps a class of third graders read better. Or they may be prodigiously complex, as in a national evaluation of the
operations and effects of a diverse set of programs for reducing substance abuse in multiple urban sites. Fundamentally, however, we can
view any evaluation as structured around three issues:
- The questions the evaluation is to answer.
- The methods and procedures the evaluation will use to answer the questions.
- The nature of the evaluator-stakeholder relationship.

WHAT CONSIDERATIONS SHOULD GUIDE EVALUATION PLANNING?


Many aspects of the program and the circumstances of the evaluation will necessarily shape the evaluation design. Some of these involve
general considerations of almost universal relevance to evaluation planning, but others will be specific to the particular situation of each
evaluation. Development of the evaluation plan, therefore, must be guided by a careful analysis of the evaluation context. The more
significant considerations for that analysis can be organized into three categories, having to do with (a) the purposes of the evaluation,
(b) the program structure and circumstances, and (c) the resources available for the evaluation.

The Program Structure and Circumstances


No two programs are identical in their organizational structure and environmental, social, and political circumstances, even when they
ostensibly provide the "same" service. The particulars of a program's structure and circumstances constitute major features of the
evaluation situation to which the evaluation plan must be tailored. Although there is a myriad of such particulars, three broad categories
are especially important to evaluators because of their pervasive influence on evaluation design and implementation:
- The stage of program development: whether the program being evaluated is new or innovative, established but still developing or
undergoing restructuring, or established and presumed stable.
- The administrative and political context of the program: in particular, the degree of consensus, conflict, or confusion among
stakeholders about the values or principles the program embodies, its mission and goals, or its social significance.
- The structure of the program, including both its conceptual and organizational makeup: the nature of the program rationale; the
diversity, scope, and character of the services provided and of the target populations for those services; the location of service
sites and facilities; administrative arrangements; recordkeeping procedures; and so forth.

The Resources Available for the Evaluation


Conducting a program evaluation requires resources: person-hours devoted to evaluation activities, plus the materials, equipment,
and facilities needed to support data collection, analysis, and reporting, whether drawn from the existing resources of the program
or evaluation sponsor or separately funded. An important aspect of planning an evaluation, therefore, is to break down the tasks
and timelines so that a detailed estimate can be made of the personnel, materials, and expenses associated with completing the
steps essential to the plan. The total resources required must then, of course, fit within what is available, or changes must be
made to either the plan or the resources. Practical guidance on resource planning, budgeting, and setting timelines is worth
consulting during this process.
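The task-and-timeline breakdown described above can be sketched as a simple cost estimate. This is a hypothetical illustration only; the task list, hourly rate, and budget figures are invented, not drawn from the text.

```python
# Hypothetical sketch of a task-and-timeline breakdown for an evaluation:
# estimate person-hours and direct expenses per task, then check the
# total against the available budget. All figures are illustrative.

HOURLY_RATE = 60.0  # assumed blended rate for evaluation staff

tasks = [
    # (task, person_hours, materials_and_expenses)
    ("Stakeholder interviews and planning", 80, 500.0),
    ("Data collection", 240, 3000.0),
    ("Analysis", 160, 0.0),
    ("Reporting and dissemination", 60, 800.0),
]

def estimated_cost(task_list, rate=HOURLY_RATE):
    """Total cost: person-hours at the assumed rate plus direct expenses."""
    return sum(hours * rate + expenses for _, hours, expenses in task_list)

available_budget = 40000.0
total = estimated_cost(tasks)
print(f"Estimated cost: ${total:,.0f} of ${available_budget:,.0f} available")
if total > available_budget:
    print("Either the plan or the resources must be revised.")
```

If the total exceeds the budget, the plan or the resources must change, which is exactly the fit the text describes.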

THE NATURE OF THE EVALUATOR-STAKEHOLDER RELATIONSHIP


One of the matters requiring early attention in the planning of an evaluation is the nature of the relationship between the evaluator and
the primary stakeholders. Every program is necessarily a social structure in which various individuals and groups engage in the roles and
activities that constitute the program: Program managers administer, staff provide service, participants receive service, and so forth. In
addition, every program is a nexus in a set of political and social relationships among those with an association or interest in the
program, such as relevant policymakers, competing programs, and advocacy groups. These parties are typically involved in, affected by,
or interested in the evaluation, and interaction with them must be anticipated as part of the evaluation. Who are the parties typically
involved in, or affected by, evaluations? Listed below are some of the stakeholder groups that often either participate directly or become
interested in the
evaluation process and its results:
- Policymakers and decision makers: persons responsible for deciding whether the program is to be started, continued, discontinued,
expanded, restructured, or curtailed.
- Program sponsors: organizations that initiate and fund the program; they may also overlap with policymakers and decision makers.
- Evaluation sponsors: organizations that initiate and fund the evaluation (sometimes the evaluation sponsors and the program
sponsors are the same).
- Target participants: persons, households, or other units that receive the intervention or services being evaluated.
- Program managers: the personnel responsible for overseeing and administering the intervention program.
- Program staff: personnel responsible for delivering the program services or serving in supporting roles.
- Program competitors: organizations or groups that compete with the program for available resources. For instance, an educational
program providing alternative schools will attract the attention of the public schools, which see the new schools as competitors.
- Contextual stakeholders: organizations, groups, individuals, and other social units in the immediate environment of a program with
interests in what the program is doing or what happens to it (e.g., other agencies or programs, public officials, or citizens' groups
in the jurisdiction in which the program operates).
- Evaluation and research community: evaluation professionals who read evaluations and pass judgment on their technical quality and
credibility, and academic and other researchers who work in areas related to a program.

EVALUATION QUESTIONS AND EVALUATION METHODS


A program evaluation is essentially an information gathering and interpreting endeavor that attempts to answer a specified set of
questions about a program's performance and effectiveness. An important step in designing an evaluation, therefore, is determining the
questions the evaluation must answer. This is sometimes done
in a very perfunctory manner, but we advocate that it be given studious and detailed attention. A carefully constructed set of evaluation
questions gives structure to the evaluation, leads to appropriate and thoughtful planning, and serves as a basis for informative discussions
about who is interested in the answers and how they will be used. Indeed, constructing such questions and planning how to answer them
is the primary way in which an evaluation is tailored to the unique circumstances associated with each program that comes under
scrutiny.
Generally, the evaluation sponsor puts forward some initial evaluation questions when proposing or commissioning an evaluation or, in
the case of a competition to select an evaluator, as part of the request for proposals that goes out to prospective evaluators. Those initial
declarations are the obvious starting point for defining the questions
around which the evaluation will be designed but usually should not be taken as final for purposes of evaluation planning. Often the
questions presented at this stage are too general or abstract to function well as a basis for evaluation planning. Or the questions, as
worded, may be beyond the capability of the evaluator to answer within the operative constraints on time, resources, available
information, and organizational or political arrangements.
Beyond the specifics, however, evaluation questions fall into recognizable types according to the program issues they address. Five such
types are readily distinguished:
- Questions about the need for program services
- Questions about program conceptualization or design
- Questions about program operations and service delivery
- Questions about program outcomes
- Questions about program cost and efficiency

Evaluators have developed relatively distinct conceptual frameworks and associated methods to address each type of evaluation question.
Evaluators use these schemes to organize their thinking about how to approach different program evaluation situations. For planning
purposes, an evaluator will typically select the general evaluation approach that corresponds to the types of questions to be answered in
an evaluation, then tailor the particulars to the specifics of the questions and the program situation. To complete our discussion of
tailoring evaluations, therefore, we must introduce the common evaluation approaches or schemes and review the circumstances in
which they are most applicable.
The common conceptual and methodological frameworks in evaluation correspond to the types of frequent evaluation questions, as
follows:

- Needs assessment: answers questions about the social conditions a program is intended to address and the need for the program.
- Assessment of program theory: answers questions about program conceptualization and design.
- Assessment of program process (process evaluation): answers questions about program operations, implementation, and service
delivery.
- Impact assessment (impact evaluation or outcome evaluation): answers questions about program outcomes and impact.
- Efficiency assessment: answers questions about program cost and cost-effectiveness.

Typical Evaluation Questions


As should be evident from the discussions above, well formulated evaluation questions are very concrete and specific to the program at
issue and the circumstances of the prospective evaluation. It follows that the variety of questions that might be relevant to some social
program or another is enormous. In practice, however, evaluation questions typically deal with one of five general program issues. Some of the more
common questions in each category, stated in summary form, are as follows.

Questions about the need for program services:


- What are the nature and magnitude of the problem to be addressed?
- What are the characteristics of the population in need?
- What are the needs of the population?
- What services are needed?
- How much service is needed, over what time period?
- What service delivery arrangements are needed to provide those services to the population?

Questions about program conceptualization or design:
- What clientele should be served?
- What services should be provided?
- What are the best delivery systems for the services?
- How can the program identify, recruit, and sustain the intended clientele?
- How should the program be organized?
- What resources are necessary and appropriate for the program?

Questions about program operations and service delivery:
- Are administrative and service objectives being met?
- Are the intended services being delivered to the intended persons?
- Are there needy but unserved persons the program is not reaching?
- Once in service, do sufficient numbers of clients complete service?
- Are the clients satisfied with the services?
- Are administrative, organizational, and personnel functions handled well?

Questions about program outcomes:
- Are the outcome goals and objectives being achieved?
- Do the services have beneficial effects on the recipients?
- Do the services have adverse side effects on the recipients?
- Are some recipients affected more by the services than others?
- Is the problem or situation the services are intended to address made better?

Questions about program cost and efficiency:
- Are resources used efficiently?
- Is the cost reasonable in relation to the magnitude of the benefits?
- Would alternative approaches yield equivalent benefits at less cost?
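The last question, comparing alternative approaches on cost and benefits, is the territory of efficiency assessment. A toy cost-effectiveness comparison illustrates the basic arithmetic; the program names and figures are invented for illustration.

```python
# Hypothetical sketch: comparing two program alternatives on cost per
# unit of outcome (a simple cost-effectiveness ratio). All figures are
# invented; "outcomes" might be, e.g., clients achieving a target goal.

programs = {
    "Program A": {"cost": 500_000.0, "outcomes": 400},
    "Program B": {"cost": 300_000.0, "outcomes": 200},
}

def cost_per_outcome(p):
    """Cost-effectiveness ratio: dollars spent per outcome achieved."""
    return p["cost"] / p["outcomes"]

for name, p in programs.items():
    print(f"{name}: ${cost_per_outcome(p):,.0f} per successful outcome")
```

Note that the cheaper program overall is not necessarily the more cost-effective one: here the larger program costs more in total but less per outcome achieved.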

COLLATING EVALUATION QUESTIONS AND SETTING PRIORITIES


The evaluator who thoroughly explores stakeholder concerns and conducts an analysis of program issues guided by carefully developed
descriptions of program theory will turn up many questions that the evaluation might address. The task at this point becomes one of
organizing those questions according to distinct themes and setting priorities among them.
Setting priorities to determine which questions the evaluation should be designed to answer can be much more challenging. Once
articulated, most of the questions about the program that arise during the planning process are likely to seem interesting to some
stakeholder or another, or to the evaluators themselves. Rarely will resources be available to address them all, however. At this juncture,
it is especially important for the evaluator to focus on the purpose of the evaluation and the expected uses to be made of its findings.
There is little point to investing time and effort in developing information that is of little use to any stakeholder.
To conclude, when these various procedures have generated a full set of candidate evaluation questions, the evaluator must organize
them into related clusters and draw on stakeholder input and professional judgment to set priorities among them. With the priority
evaluation questions for a program decided on through some reasonable process, the evaluator is then ready to design the substantial
part of the evaluation that will be devoted to answering them.
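The prioritizing step described above can be sketched as a weighted scoring exercise. This is purely a hypothetical illustration: the criteria (stakeholder interest, expected use, feasibility), the weights, and the sample questions are all assumptions, not a method prescribed by the text.

```python
# Hypothetical sketch: ranking candidate evaluation questions by a
# weighted score. Criteria and weights are illustrative assumptions.

# Each question is rated 1-5 on three criteria suggested by the discussion:
# stakeholder interest, expected use of the answer, and feasibility
# within available time and resources.
WEIGHTS = {"stakeholder_interest": 0.4, "expected_use": 0.4, "feasibility": 0.2}

candidate_questions = [
    {"text": "Is the program reaching its target population?",
     "stakeholder_interest": 5, "expected_use": 4, "feasibility": 4},
    {"text": "Are clients satisfied with the services?",
     "stakeholder_interest": 3, "expected_use": 3, "feasibility": 5},
    {"text": "Would alternative approaches cost less?",
     "stakeholder_interest": 4, "expected_use": 2, "feasibility": 2},
]

def priority_score(q):
    """Weighted sum of the criterion ratings for one question."""
    return sum(WEIGHTS[c] * q[c] for c in WEIGHTS)

# Highest-priority questions first.
ranked = sorted(candidate_questions, key=priority_score, reverse=True)
for q in ranked:
    print(f"{priority_score(q):.1f}  {q['text']}")
```

The point of such an exercise is not the arithmetic but the discussion it forces about who wants each answer and how it will be used; questions that score low on expected use are candidates for dropping when resources run short.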

• Assessing the need for a program:


There are various methods and approaches evaluators use to address different categories of evaluation questions.
The category of evaluation questions that is logically most fundamental to program evaluation has to do with the nature of the social
problem the program is expected to ameliorate and the needs of the population experiencing that problem. These questions follow from
the assumption that the purpose of social programs is to bring about improvement in problematic social conditions and that they are
accountable to those who fund and support them for making a good faith effort to do so.
Needs assessment, in general, is a systematic approach to identifying social problems, determining their extent, and accurately defining
the target population to be served and the nature of their service needs. From a program evaluation perspective, needs assessment is the
means by which an evaluator determines if, indeed, there is a need for a program and, if so, what program services are most
appropriate to that need. Such an assessment is critical to the effective design of new programs. However, it is equally relevant to
established programs because there are many circumstances in which it cannot merely be assumed that the program is needed or that
the services it provides are well suited to the nature of the need. What makes the assessment of the need for a program so fundamental,
of course, is that a program cannot be effective at ameliorating a social problem if there is no problem to begin with or if the program
services do not actually relate to the problem.
The family of procedures used by evaluators and other social researchers to systematically describe and diagnose social needs is
generally referred to as needs assessment. Its purpose is to determine if there is a need or problem and, if so, what its nature, depth, and
scope are. In addition, needs assessment often encompasses the process of comparing and prioritizing needs according to how serious,
neglected, or salient they are.
Within the context of program evaluation, however, the primary focus of needs assessment is not on human needs broadly defined but,
rather, on social conditions deemed unsatisfactory through some process of social judgment and presumed remediable by social
programs. The essential tasks for the program evaluator as needs assessor are to identify the decision makers and claimants who
constitute the primary stakeholders in the program domain of interest, describe the "problem" that concerns them in a manner that is as
careful, objective, and meaningful to both groups as possible, and help draw out the implications of that diagnosis for structuring
effective intervention, whether through new or ongoing programs.

- Role of evaluators in diagnosing social conditions and services needed, defining and specifying the social problems, defining and identifying targets of intervention, and describing the nature of services needed:
In the grand scheme of things, evaluators' contributions to the identification and alleviation of social problems are modest compared with
the weightier actions of political bodies, advocacy groups, investigative reporters, and sundry charismatic figures. The impetus for
attending to social problems most often comes from political and moral leaders and community advocates who have a stake, either
personally or professionally, in dealing with a particular condition.
All social programs rest on a set of assumptions and representations of the nature of the problem they address and the characteristics,
needs, and responses of the target population they intend to serve. Any evaluation of a plan for a new program, a change in an existing
program, or the effectiveness of an ongoing program must necessarily engage those assumptions and representations. Of course, the
problem diagnosis and target population description may already be well and convincingly established, in which case the evaluator can
move forward with that as a given. Or the nature of the evaluation task may be stipulated in such a way that the need for the program and
the nature of that need is not a matter for independent investigation. Indeed, program personnel and sponsors often believe they know the
social problems
and target population needs so well that further inquiry is a waste of time. Such situations must be approached cautiously. In all
instances, therefore, the evaluator should scrutinize the assumptions about the target problem and population that shape the nature of a
program. Where there is any ambiguity, it may be advisable for the evaluator to work with key stakeholders to
formulate those assumptions explicitly so that they may serve as touchstones for assessing the adequacy of the program design and
theory. Often it will also be useful for the evaluator to conduct at least some minimal independent investigation of the nature of the
program's target problem and population. For new program initiatives, or established programs whose utility has been called into
question, it may be essential to conduct a thorough assessment of the social need and target population to be served by the program at
issue. In other cases, a needs assessment may be virtually mandated. It should be noted that needs assessment is not always done with
reference to a specific social program or program proposal. The techniques of needs assessment are also used as planning tools and
decision aids for policymakers who must prioritize among competing needs and claims.
Indeed, the social definition of a problem is so central to the political response that the preamble to proposed legislation usually shows
some effort to specify the conditions for which the proposal is designed as a remedy. Also, an important role evaluators may play at this
stage is to provide policymakers and program managers with a critique of the problem definition inherent in their policies and programs
and propose alternative definitions that may be more serviceable.

SPECIFYING THE EXTENT OF THE PROBLEM: WHEN, WHERE, AND HOW BIG?
The design and funding of a social program should be geared to the size, distribution, and density of the problem it addresses. It is much
easier to establish that a problem exists than to develop valid estimates of its density and distribution.
Through their knowledge of existing research and data sources and their understanding of which designs and methods lead to conclusive
results, evaluation researchers are in a good position to collate and assess whatever information already exists on a given social problem.
Here we stress both collate and assess—unevaluated information can be as bad as no information at all.

Correctly defining and identifying the targets for intervention is crucial to the success of social programs from the very early stage when
stakeholders begin to converge in their definition of a social problem to the extended period over which the program is operated.
Specifying those targets is complicated by the fact that the definition and corresponding estimates of the size of the population may shift
over this period. As a new social problem emerges or becomes increasingly visible, one definition of the targets of an intervention may
be adopted; as stakeholders plan and eventually implement a program initiative, however, that definition may well be modified or
abandoned.
The targets of social programs are usually individuals. But they also may be groups (families, work teams, organizations), geographically
and politically related areas (such as communities), or physical units (houses, road systems, factories). Whatever the target, it is
imperative at the outset of a needs assessment to define the units in question clearly.
In the case of individuals, targets are usually identified in terms of social and demographic characteristics, location, or their problems,
difficulties, and conditions. When aggregates (groups or organizations) are targets, they are often defined in terms of the characteristics
of the individuals that constitute them: their informal and formal collective properties and their shared problems.

The central function of needs assessment research is to develop estimates of the extent and distribution of a given problem and the
associated target population. However, it is also often important for such research to yield useful descriptive information about the
specific character of the need within that population.
This is important because it is often not sufficient for a social program to merely deliver some standard services in some standard way
presumed to be responsive to a given problem or need. To be effective, a program may need to adapt its services to the local nature of the
problem and the distinctive circumstances of the persons in
need. This, in turn, requires information about the way in which the problem is experienced by those in need, their perceptions and
attributions about relevant services and programs, and the barriers and difficulties they encounter in attempting to access services. A
needs assessment might, for instance, probe into the matter of why the problem exists and what other problems are linked with it.

Qualitative Methods for Describing Needs


Qualitative research can be especially useful for obtaining detailed, textured knowledge of the specific needs in question. Such research
can range in complexity from interviews of a few persons or group discussions to elaborate ethnographic research such as that employed
by anthropologists. As an example of the utility of such research, qualitative data on the structure of popular beliefs can contribute
substantially to the effective design of educational campaigns. What, for instance, are the tradeoffs people believe exist between the
pleasures of cigarette smoking and the resulting health risks? A good educational program must be adapted to those perceptions.
Carefully and sensitively conducted qualitative studies are particularly important for uncovering process information of this sort. Thus,
ethnographic studies of disciplinary problems within high schools may not only provide some indication of how widespread disciplinary
problems are but also suggest why some schools have fewer disciplinary problems than others. The findings on how schools differ might
have implications for the ways programs are designed. Or consider the qualitative research on household energy consumption that revealed that few householders had any information about the energy consumption characteristics of their appliances. Not knowing how they consumed energy, these householders could not very well develop effective strategies for reducing their consumption. Methods for gathering such information include ethnographic studies and focus groups with selected representatives of various stakeholders and observers.
Because of the distinctive advantages of qualitative and quantitative approaches, a useful and frequently used strategy is to conduct
needs assessment in two stages. The initial, exploratory stage uses qualitative research approaches to obtain rich information on the
nature of the problem. The second stage, estimation, builds on this information to design a more quantitative assessment that provides
reliable estimates of the extent and distribution of the problem.
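The estimation stage described above typically produces a prevalence estimate with some indication of its precision. A minimal sketch of that calculation, assuming a simple random sample and using the normal approximation for the confidence interval (the function name and figures are hypothetical illustrations, not from the text):

```python
import math

def prevalence_estimate(n_affected, n_sampled, z=1.96):
    """Estimate the prevalence of a problem and a 95% confidence
    interval from a simple random sample (normal approximation)."""
    p = n_affected / n_sampled
    se = math.sqrt(p * (1 - p) / n_sampled)
    return p, (p - z * se, p + z * se)

# e.g., 120 of 800 surveyed households report the problem
p, (lo, hi) = prevalence_estimate(120, 800)
print(f"estimated prevalence: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Estimates like this quantify the extent of the problem; the distributional questions (who is affected, and where) require cross-tabulating the same survey data by the social and demographic characteristics used to define the target population.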

• Monitoring program processes and performances


The program must implement its plan; that is, it must actually carry out the intended functions in the intended way.
Although implementing a program concept may seem straightforward, in practice it is often very difficult. Social programs
typically must contend with many adverse influences that can compromise even well-intentioned attempts to conduct program business appropriately. The result can easily be substantial discrepancies between the program as
intended and the program actually implemented.
An important evaluation function, therefore, is to assess program implementation: the program activities that actually take place
and the services that are actually delivered in routine program operation. Program monitoring and related procedures are the
means by which the evaluator investigates these issues.
Program monitoring is usually directed at one or more of three key questions: (a) whether a program is reaching the appropriate
target population, (b) whether its service delivery and support functions are consistent with program design specifications or other
appropriate standards, and (c) whether positive changes appear among the program participants and social conditions the
program addresses. Monitoring may also examine what resources are being, or have been, expended in the conduct of the program.
Program monitoring is an essential evaluation activity. It is the principal tool for formative evaluation designed to provide
feedback for program improvement and is especially applicable to relatively new programs attempting to establish their
organization, clientele, and services. Also, adequate monitoring (process evaluation) is a vital complement to impact evaluation,
helping distinguish cases of poor program implementation from ineffective intervention concepts.
Program monitoring also informs policymakers, program sponsors, and other stakeholders about how well programs perform their
intended functions.
Increasingly, some form of program performance monitoring is being required by government and nonprofit agencies as a way of
demonstrating accountability to the public and the program stakeholders.
WHAT IS PROGRAM MONITORING?
Program monitoring is the systematic documentation of key aspects of program performance that are indicative of whether the program
is functioning as intended or according to some appropriate standard. It generally involves program performance in the domains of service utilization, program organization, and/or outcomes. Monitoring service utilization consists of examining the extent to which the intended target population receives the intended services. Monitoring program organization requires comparing the plan for what the program should be doing, especially with regard to providing services, with what is actually done. Monitoring program outcomes entails surveying the status of program participants after they have received services to determine whether it is in line with what the program intended to accomplish.
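Monitoring service utilization often reduces to a few simple coverage indicators. A minimal sketch, assuming hypothetical counts of eligible targets and persons served (the function name and figures are illustrative, not from the text):

```python
def utilization_indicators(eligible, served, served_eligible):
    """Compute simple service-utilization indicators.

    eligible        -- estimated size of the target population
    served          -- total number of persons who received services
    served_eligible -- number served who belong to the target population
    """
    coverage = served_eligible / eligible            # share of targets reached
    off_target = (served - served_eligible) / served # clients outside the target group
    return {"coverage": coverage, "off_target": off_target}

# e.g., 1,000 eligible persons, 400 clients, 320 of whom are eligible:
# coverage is 0.32, and 20% of clients fall outside the target population
print(utilization_indicators(1000, 400, 320))
```

Low coverage points to targets not being reached; a high off-target share suggests services are flowing to people outside the intended population, both of which are central monitoring questions listed below.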
In addition to these primary domains, program monitoring may include information about resource expenditures that bear on whether the
benefits of a program justify its cost. Monitoring also may include an assessment of whether program activities comply with legal and
regulatory requirements—for example, whether affirmative action requirements have been met in the recruitment of staff. More
specifically, program monitoring schemes are designed to answer such evaluation questions as these:

 How many persons are receiving services?
 Are those receiving services the intended targets?
 Are they receiving the proper amount, type, and quality of services?
 Are there targets who are not receiving services?
 Are members of the target population aware of the program?
 Are necessary program functions being performed adequately?
 Is program staffing sufficient in numbers and competencies for the functions that must be performed?
 Is the program well organized? Do staff work well with each other?
 Does the program coordinate effectively with the other programs and agencies with which it must interact?
 Are program resources, facilities, and funding adequate to support important program functions?
 Are program resources used effectively and efficiently?
 Are costs per service unit delivered reasonable?
 Is the program in compliance with requirements imposed by its governing board, funding agencies, and higher-level
administration?
 Is the program in compliance with applicable professional and legal standards?
 Is program performance at some program sites or locales significantly better or poorer than at others?
 Are participants satisfied with their interactions with program personnel and procedures?
 Are participants satisfied with the services they receive?
 Do participants engage in appropriate follow-up behavior after service?
 Are participants' conditions, status, or functioning satisfactory in areas the service addresses after service is completed?
 Do participants retain satisfactory conditions, status, or functioning for an appropriate period after completion of services?

It is especially important to recognize the evaluative themes in program monitoring questions such as those listed above. Virtually all
involve words such as appropriate, adequate, sufficient, satisfactory, reasonable, intended, and other phrasing that indicates that an
evaluative judgment is required. To answer these questions, therefore, the evaluator or other responsible parties must not only describe
the program performance but assess whether it is satisfactory. This, in turn, requires that there be some basis for making a judgment, that
is, some defensible criteria or standards to apply. In situations where such criteria are not already articulated and endorsed, the evaluator
may find that establishing workable criteria is as difficult as determining program performance on the pertinent dimensions.

Common Forms of Program Monitoring


Monitoring and assessment of program performance are quite common in program evaluation, but the approaches used are rather varied
and there is little uniformity in the terminology for the different variants. The commonality among these variants is a focus on indicators
(qualitative or quantitative) of how well the program performs its critical functions. An assessment of this sort may be conducted as a one-shot endeavor or may be continuous so that information is produced regularly over an
extended period of time. It may be conducted by an outside evaluator or an evaluator employed within the program agency and may,
indeed, be set up as a management tool with little involvement by professional evaluators. Moreover, its purpose may be to provide
feedback for managerial purposes, to demonstrate accountability to sponsors and decision-makers, to provide a freestanding process
evaluation, or to augment an impact evaluation. Amid this variety, we distinguish three
principal forms of program monitoring:

1- Process or Implementation Evaluation: Evaluators often distinguish between process (or implementation) evaluation and outcome (or impact) evaluation. Process evaluation "verifies what the program is and whether or not it is delivered as intended to the targeted recipients."
2- Routine Program Monitoring and Management Information Systems: Continuous monitoring of indicators of selected aspects of program process can be a useful tool for effective management of social programs by providing regular feedback about how well the program is performing its critical functions. Such feedback allows managers to take corrective action when problems arise and can also provide stakeholders with regular assessments of program performance.
3- Performance Measurement and Monitoring: Increased public and political demands for accountability from social service agencies in recent years have brought forth a variety of initiatives to require such agencies to demonstrate that their programs accomplish something worthwhile.

COLLECTING DATA FOR MONITORING


A variety of techniques may be used singly and in combination to gather data on program implementation. As in all aspects of
evaluation, the particular approaches used must take into account the resources available and the expertise of the evaluator. There may be
additional restrictions on data collection, however. One concerns issues of privacy and confidentiality. Program services that depend
heavily on person-to-person delivery methods, such as mental health, family planning, and vocational education, cannot be directly
observed without violating privacy. In other contexts, self-administered questionnaires might, in theory, be an economical means of
studying a program's implementation, but functional illiteracy and cultural norms may prohibit their use.
Several data sources should be considered for program monitoring purposes: data collected directly by the evaluator, program records,
and information from program participants or their associates. The approaches used to collect and analyze the data overlap from one data
source to the next. A comprehensive monitoring evaluation might include data from all three sources.

• Strategies for impact assessment: - Randomized designs/experiments impact assessment


Impact assessments are undertaken to find out whether interventions actually produce the intended effects. Such assessments cannot be
made with certainty but only with varying degrees of plausibility. A general principle applies: The more rigorous the research design,
the more plausible the resulting estimate of intervention effects.
The design of impact evaluations needs to take into account two competing pressures: On the one hand, evaluations should be
undertaken with sufficient rigor that relatively firm conclusions can be reached; on the other hand, practical considerations of time,
money, cooperation, and protection of participants limit the design options and methodological procedures that can be employed.
Ordinarily, evaluators assess the effects of social programs by comparing information about outcomes for participants and
nonparticipants, by making repeated measurements on participants before and after intervention, or by other methods that attempt to
achieve the equivalent of such comparisons. The basic aim of an impact assessment is to produce an estimate of the net effects of an
intervention—that is, an estimate of the impact of the intervention uncontaminated by the influence of other processes and events that
also may affect the behavior or conditions at which a program is directed. The strategies available for isolating the effects attributable
to an intervention and estimating their magnitude are introduced in this chapter, together with issues surrounding their use.

Impact assessment can be relevant at many points throughout the life course of social programs. At the stage of policy and program
formation, impact assessments of pilot demonstrations are sometimes commissioned to determine whether the proposed program would
have the intended effects. At the stage of program design, impact evaluations may be undertaken to test for the most effective ways to
develop and integrate the various program elements. For example, the relative impact of different durations of service, of one type of
practitioner versus another, and of providing follow-up services or not to targets are all issues that can be addressed through impact
assessment.
When a new program is authorized, it is often started initially in a limited number of sites. Obviously, it is unwise to implement a new
program widely without some knowledge of its effects. Impact assessments may be called for to show that the program has the expected
effects before extending it to broader coverage. Furthermore, in many cases the sponsors of innovative programs, such as private
foundations, implement programs on a limited scale with a view to promoting their
adoption by government agencies if their effects can be demonstrated. Moreover, knowledge of program effects is critical to decisions
about whether a particular initiative should be supported in preference to competing social action efforts.
Also, programs may be modified and refined to enhance effectiveness or to accommodate revised program goals. Sometimes the changes
made are major and the assessments of the modified program resemble those of innovative programs. At other times, the modifications
are modest "fine-tuning" efforts and the skeleton of the
program remains fundamentally the same. In either case, the modifications can be subjected to impact assessments.
Finally, many established programs can be subjected to impact assessments, either continually or periodically. For example, the high
costs of certain medical treatments make it essential that their efficacy be continually evaluated and compared with other means of
dealing with the same medical problem. In other cases, long established programs are evaluated at regular intervals either because of
"sunset" legislation requiring demonstration of effectiveness if funding is to be renewed or as a means of defending the programs against
attack by supporters of alternative interventions or other uses for the public funds involved.

KEY CONCEPTS IN IMPACT ASSESSMENT

The Experimental Model:


Although there are many ways in which impact assessments can be conducted, the options available are not equal: Some
characteristically produce more credible estimates of impact than others. The options also vary in cost and level of technical skill
required. As in other matters, the better approaches to impact assessment generally require more skills and more time to complete, and
they cost more. In this and subsequent chapters, our discussion of the available options is rooted in the view that the optimal way to
establish the effects caused by an intervention is a randomized field experiment. The laboratory model of such experiments is no doubt
familiar. Subjects in laboratory experiments are randomly sorted into two or more
groups. One group is designated the control group and receives no intervention or an innocuous one; the other group or groups, called the
experimental group(s), are given the intervention(s) being tested. Outcomes are then observed for both the experimental and the control
groups, with any differences being attributed to the experimental intervention.
This research model underlies impact evaluations as well, because such evaluations, like laboratory experiments, are efforts to establish
whether certain effects are caused by the intervention. Sometimes impact evaluations closely follow the model of randomized
experiments; at other times, practical circumstances, time pressures, and cost constraints necessitate compromises with the ideal.
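The logic of the randomized experiment described above — random assignment, intervention, and comparison of group outcomes — can be sketched in a few lines. This is a simplified simulation under stated assumptions (a fixed seed, a hypothetical score-raising intervention), not an implementation of any particular study design from the text:

```python
import random

def randomized_experiment(subjects, treat, outcome, seed=0):
    """Randomly assign subjects to treatment or control, apply the
    intervention to the treatment group, and estimate the intervention
    effect as the difference in mean outcomes between the groups.

    treat   -- function applied to each treatment-group subject
    outcome -- function measuring each subject's outcome
    """
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)                       # random assignment
    half = len(pool) // 2
    treatment, control = pool[:half], pool[half:]
    for s in treatment:
        treat(s)
    mean = lambda g: sum(outcome(s) for s in g) / len(g)
    return mean(treatment) - mean(control)  # estimated net effect

# hypothetical illustration: an intervention that raises a score by 5 points
subjects = [{"score": 50 + (i % 10)} for i in range(200)]
effect = randomized_experiment(
    subjects,
    treat=lambda s: s.update(score=s["score"] + 5),
    outcome=lambda s: s["score"],
)
print(f"estimated effect: {effect:.2f}")
```

Because assignment is random, the baseline difference between the groups is chance variation, so the estimated effect hovers near the true effect of 5; this is precisely the property that lets any outcome difference be attributed to the intervention.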

- Quasi-experimental impact assessment


A large class of impact assessment designs consists of nonrandomized quasi-experiments, in which comparisons are made between targets who participate in a program and nonparticipants who are presumed similar to participants in critical ways. These techniques are called quasi-experimental because, although they use "experimental" and "control" groups, they lack the random assignment to conditions essential for true experiments. Four quasi-experimental designs are commonly used: regression-discontinuity designs, matched constructed control groups, statistically equated constructed controls, and designs using generic outcome measures as controls.
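Of the designs listed, the matched constructed control group is perhaps the simplest to illustrate. A minimal sketch of one-to-one nearest-neighbor matching without replacement on a single baseline covariate (the function, covariate, and data are hypothetical illustrations; real matching typically uses many covariates or propensity scores):

```python
def matched_comparison(participants, candidates, covariate, outcome):
    """Construct a matched control group by pairing each participant with
    the closest still-unmatched nonparticipant on a baseline covariate,
    then compare mean outcomes between participants and their matches."""
    pool = list(candidates)
    matches = []
    for p in participants:
        best = min(pool, key=lambda c: abs(covariate(c) - covariate(p)))
        pool.remove(best)          # match without replacement
        matches.append(best)
    mean = lambda g: sum(outcome(x) for x in g) / len(g)
    return mean(participants) - mean(matches)

participants = [{"age": 30, "y": 12}, {"age": 40, "y": 15}]
candidates = [{"age": 29, "y": 10}, {"age": 50, "y": 14}, {"age": 41, "y": 11}]
effect = matched_comparison(participants, candidates,
                            covariate=lambda s: s["age"],
                            outcome=lambda s: s["y"])
print(effect)  # 3.0: ages 30 and 40 are matched to 29 and 41
```

The weakness the text emphasizes applies here directly: matching only equates the groups on the covariates used, so any unmeasured difference between participants and nonparticipants can still bias the estimate.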

Depending on the nature of an impact assessment and the resources available, evaluators can call on a varied repertoire of design
strategies to minimize the effects of extraneous factors. Different strategies are appropriate for partial- and full-coverage programs, because in full-coverage programs no untreated targets are available to use as controls.
A number of design options are available for impact assessments of full- and partial-coverage programs, respectively, ranging from randomized experiments to time-series analysis. Although the various designs differ widely in their effectiveness, all can be used if proper precautions are taken. Judgmental approaches to assessment include connoisseurial assessments,
administrator assessments, and judgments by program participants. Judgmental
assessments are less preferable than more objective designs, but in some circumstances, they are the only impact evaluation options
available.

A CATALOG OF IMPACT ASSESSMENT DESIGNS


• The social context of evaluation
Evaluation research is a purposeful activity, undertaken to affect policy development, to shape the design and implementation of social
interventions, and to improve the management of social programs. In the broadest sense of politics, evaluation is a political activity.
There are, of course, intrinsic rewards for evaluators, who may derive great pleasure from satisfying themselves that they have done as
good a technical job as possible—like artists whose paintings hang in their attics and never see the light of day, and poets whose
penciled foolscap is hidden from sight in their desk drawers. But that is not really what it is all about. Evaluations are a real-world
activity. In the end, what counts is the critical acclaim with which an evaluation is judged by peers in the field and the extent to which it
leads to modified policies, programs, and practices—ones that, in the short or long term, improve the conditions of human life.

SUMMARY
Evaluation is purposeful, applied social research. In contrast to basic research, evaluation is undertaken to solve practical problems. Its
practitioners must be conversant with methods from several disciplines and able to apply them to many types of problems. Furthermore,
the criteria for judging the work include its utilization and hence its impact on programs and the human condition.
Evaluators must put a high priority on deliberately planning for the dissemination of the results of their work. In particular, they need to
become "secondary disseminators" who package their findings in ways that are geared to the needs and competencies of a broad range of
relevant stakeholders.
Because the value of their work depends on its utilization by others, evaluators must understand the social ecology of the arena in which
they work.
Evaluation is directed to a range of stakeholders with varying and sometimes conflicting needs, interests, and perspectives. Evaluators
must determine the perspective from which a given evaluation should be conducted, explicitly acknowledge the existence of other
perspectives, be prepared for criticism even from the sponsors of the evaluation, and adjust their communication to the requirements of
various stakeholders.
An evaluation is only one ingredient in a political process of balancing interests and coming to decisions. The evaluator's role is close to
that of an expert witness, furnishing the best information possible under the circumstances; it is not the role of judge and jury.
Two significant strains that result from the political nature of evaluation are (a) the different requirements of political time and
evaluation time, and (b) the need for evaluations to have policymaking relevance and significance. With respect to both of these sets of
issues, evaluators must look beyond considerations of technical excellence and pure science, mindful of the larger context in which they
are working and the purposes being served by the evaluation.
Evaluators are perhaps better described as a "near group" than as a profession. The field is marked by diversity in disciplinary training,
type of schooling, perspectives on appropriate methods, and an absence of strong communication among its practitioners. Although the
field's rich diversity is one of its attractions, it also leads to unevenness in competency, lack of consensus on appropriate approaches, and
justifiable criticism of the methods used by some evaluators.
Among the enduring controversies in the field has been the issue of qualitative and quantitative research. Stated in the abstract, the issue
is a false one; the two approaches are suitable for different and complementary purposes.