DATA ANALYSIS AND REPORTING
PPME 102 MUCHOCHOMA
The term “data” refers to raw, unprocessed information, while “information,” or “strategic information,”
usually refers to processed data or data presented in some sort of context.
Data (primary or secondary) is a term given to raw facts or figures before they have been processed
and analysed.
Information refers to data that has been processed and analysed for reporting and use.
Data analysis is the process of converting collected (raw) data into usable information.
(i) Quantitative and Qualitative Data
Quantitative data measures and explains what is being studied with numbers (e.g. counts,
ratios, percentages, proportions, average scores, etc.).
Qualitative data explains what is being studied with words (documented observations,
representative case descriptions, perceptions, opinions of value, etc.).
Quantitative methods tend to use structured approaches (e.g. coded responses to surveys)
which provide precise data that can be statistically analysed and replicated (copied) for
comparison.
Qualitative methods use semi-structured techniques (e.g. observations and interviews) to
provide in-depth understanding of attitudes, beliefs, motives and behaviours. They tend to
be more participatory and reflective in practice.
Quantitative data is often considered more objective and less biased than qualitative data, but recent debates
have concluded that both quantitative and qualitative methods have subjective (biased) and objective
(unbiased) characteristics.
Therefore, a mixed-methods approach that utilizes the advantages of both is often recommended:
measuring what happened with quantitative data and examining how and why it happened with qualitative
data.
(ii) Some Data Quality Issues in Monitoring and Evaluation
Coverage: Will the data cover all of the elements of interest?
Completeness: Is there a complete set of data for each element of interest?
Accuracy: Have the instruments been tested to ensure validity and reliability of the data?
Frequency: Are the data collected as frequently as needed?
Reporting schedule: Do the available data reflect the periods of interest?
Accessibility: Are the data needed collectable/retrievable?
Power: Is the sample size big enough to provide a stable estimate or detect change?
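On the power question, a standard sample-size calculation is a useful first check. The sketch below is a
minimal illustration, written in Python and assuming the scipy library is available; the coverage figures used
are hypothetical, not drawn from this course. It estimates the sample size needed per group to detect a change
in a proportion with a two-sided test at the usual 5 percent significance level and 80 percent power.

from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided test comparing two proportions
    (standard normal-approximation formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # critical value for the desired power
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return numerator / (p1 - p2) ** 2

# e.g. detecting an increase in coverage from 40% to 55% (hypothetical figures)
print(round(sample_size_two_proportions(0.40, 0.55)))  # about 170 per group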
(iii) Data Analysis
Quantitative or qualitative research methods or a complementary combination of both approaches are used.
Analysis may include:
Content or textual analysis, making inferences by objectively and systematically identifying specified
characteristics of messages.
Descriptive statistical techniques, the most common of which include: graphical description (histograms,
scatter plots, bar charts, …); tabular description (frequency distributions, cross tabulations, …); parametric
description (mean, median, mode, standard deviation, skewness, kurtosis, …).
Inferential statistical techniques, which involve generalizing from a sample to the whole population and
testing hypotheses. Hypotheses are stated in mathematical or statistical terms and tested through one- or
two-tailed tests (t-test, chi-square, Pearson correlation, F-statistic, …). A short worked illustration of both
kinds of technique follows.
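The sketch below illustrates descriptive and inferential analysis of the kind listed above. It is written in
Python and assumes the pandas and scipy libraries are available; the scores and counts are invented example
data, not figures from any project.

import pandas as pd
from scipy import stats

# Invented example data: baseline and endline scores for ten respondents
before = pd.Series([52, 61, 47, 55, 68, 50, 63, 58, 49, 60])
after = pd.Series([66, 70, 59, 72, 75, 64, 69, 70, 62, 74])

# Parametric description: mean, median, mode, standard deviation, skewness, kurtosis
print(after.mean(), after.median(), after.mode().iloc[0],
      after.std(), after.skew(), after.kurtosis())

# Tabular description: a simple frequency distribution of the endline scores
print(pd.cut(after, bins=[55, 60, 65, 70, 75]).value_counts().sort_index())

# Inferential technique 1: two-tailed independent-samples t-test
t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Inferential technique 2: chi-square test on a 2x2 cross tabulation
# (e.g. participation vs. outcome category; the counts are invented)
table = [[30, 10],
         [18, 22]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")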
TERMS OF REFERENCE IN M&E AND EVALUATION REPORT TEMPLATE
(i) Terms of Reference in Evaluation
Evaluation organizers are usually the ones who are in charge of a particular project and want to have the
project evaluated to better manage project operations. The responsibilities of the evaluation organizers differ
from those of the evaluators, who are usually consultants contracted for the evaluation.
Tasks of the evaluation organizers include:
Preparing the TOR; the TOR is a written document presenting the purpose and scope of the evaluation, the
methods to be used, the standard against which performance is to be assessed or analyses are to be conducted,
the resources and time allocated, and reporting requirements. The TOR also defines the expertise and tasks
required of a contractor as an evaluator, and serves as a job description for the evaluator.
Appointing evaluator(s);
Securing budget for evaluation;
Monitoring the evaluation work;
Providing comments on the draft;
Publicizing the evaluation report; and
Providing feedback from the results to concerned parties.
The role of the evaluator includes:
Preparing the detailed evaluation design;
Collecting and analyzing information, and
Preparing an evaluation report.
The role of Management includes:
Management response
Action on recommendations
Tracking status of implementation of recommendations
(ii) Evaluation Report Template
There is no single universal format for an M&E report, but the template is intended to serve as a guide for preparing
meaningful, useful and credible evaluation reports that meet quality standards. It only suggests the content
that should be included in a quality evaluation report but does not purport to prescribe a definitive section-
by-section format that all evaluation reports should follow.
E.g.
Formal reports developed by evaluators typically include six major sections:
(1) Background
(2) Evaluation study questions
(3) Evaluation procedures
(4) Data analyses
(5) Findings
(6) Conclusions (and recommendations)
Or, in more detail:
I. Summary sections
A. Abstract
B. Executive summary
II. Background
A. Problems or needs addressed
B. Literature review
C. Stakeholders and their information needs
D. Participants
E. Project’s objectives
F. Activities and components
G. Location and planned longevity of the project
H. Resources used to implement the project
I. Project’s expected measurable outcomes
J. Constraints
III. Evaluation study questions
A. Questions addressed by the study
B. Questions that could not be addressed by the study (when relevant)
IV. Evaluation procedures
A. Sample
1. Selection procedures
2. Representativeness of the sample
3. Use of comparison or control groups, if applicable
B. Data collection
1. Methods
2. Instruments
C. Summary matrix
1. Evaluation questions
2. Variables
3. Data gathering approaches
4. Respondents
5. Data collection schedule
V. Findings
A. Results of the analyses organized by study question
VI. Conclusions
A. Broad-based, summative statements
B. Recommendations, when applicable
Or
Table of contents
Executive summary
Introduction
Evaluation scope, focus and approach
Project facts
Findings and lessons learned
Findings
Lessons learned
Conclusions and recommendations
Conclusions
Recommendations
Annexes/appendices
The report should also include the following:
1. Title and opening pages—Should provide the following basic information:
Name of the evaluation intervention
Time frame of the evaluation and date of the report
Country/Organization/Entity of the evaluation intervention
Names and organizations of evaluators
Name of the organization commissioning the evaluation
Acknowledgements
2. Table of contents
Should always include lists of boxes, figures, tables and annexes with page references.
3. List of acronyms and abbreviations
4. Executive summary
A stand-alone section of two to three pages that should:
Briefly describe the intervention (the project(s), programme(s), policies or other interventions) that
was evaluated.
Explain the purpose and objectives of the evaluation, including the audience for the evaluation and
the intended uses.
Describe key aspects of the evaluation approach and methods.
Summarize principal findings, conclusions, and recommendations.
5. Introduction
Should:
Explain why the evaluation was conducted (the purpose), why the intervention is being evaluated at
this point in time, and why it addressed the questions it did.
Identify the primary audience or users of the evaluation, what they wanted to learn from the
evaluation and why, and how they are expected to use the evaluation results.
Identify the intervention (the project(s), programme(s), policies or other interventions) that was
evaluated—see upcoming section on intervention.
Acquaint the reader with the structure and contents of the report and how the information contained
in the report will meet the purposes of the evaluation and satisfy the information needs of the report’s
intended users.
6. Description of the intervention/project/process/programme—Provide the basis for report users to
understand the logic and assess the merits of the evaluation methodology and understand the applicability
of the evaluation results. The description needs to provide sufficient detail for the report user to derive
meaning from the evaluation. The description should:
Describe what is being evaluated, who seeks to benefit, and the problem or issue it seeks to address.
Explain the expected results map or results framework, implementation strategies, and the key
assumptions underlying the strategy.
Link the intervention to national priorities, Development partner priorities, corporate strategic plan
goals, or other project, programme, organizational, or country specific plans and goals.
Identify the phase in the implementation of the intervention and any significant changes (e.g., plans,
strategies, logical frameworks) that have occurred over time, and explain the implications of those
changes for the evaluation.
Identify and describe the key partners involved in the implementation and their roles.
Describe the scale of the intervention, such as the number of components (e.g., phases of a project)
and the size of the target population for each component.
Indicate the total resources, including human resources and budgets.
Describe the context of the social, political, economic and institutional factors, and the geographical
landscape within which the intervention operates and explain the effects (challenges and
opportunities) those factors present for its implementation and outcomes.
Point out design weaknesses (e.g., intervention logic) or other implementation constraints (e.g.,
resource limitations).
7. Evaluation scope and objectives - The report should provide a clear explanation of the evaluation’s scope,
primary objectives and main questions.
Evaluation scope—The report should define the parameters of the evaluation, for example, the time
period, the segments of the target population included, the geographic area included, and which
components, outputs or outcomes were and were not assessed.
Evaluation objectives—The report should spell out the types of decisions evaluation users will make,
the issues they will need to consider in making those decisions, and what the evaluation will need to
achieve to contribute to those decisions.
Evaluation criteria—The report should define the evaluation criteria or performance standards used.
The report should explain the rationale for selecting the particular criteria used in the evaluation.
Evaluation questions—Evaluation questions define the information that the evaluation will
generate. The report should detail the main evaluation questions addressed by the evaluation and
explain how the answers to these questions address the information needs of users.
8. Evaluation approach and methods - The evaluation report should describe in detail the selected
methodological approaches, theoretical models, methods and analysis; the rationale for their selection; and
how, within the constraints of time and money, the approaches and methods employed yielded data that
helped answer the evaluation questions and achieved the evaluation purposes. The description should help
the report users judge the merits of the methods used in the evaluation and the credibility of the findings,
conclusions and recommendations.
The description of the methodology should include discussion of each of the following:
Data sources—The sources of information (documents reviewed and stakeholders), the rationale for
their selection and how the information obtained addressed the evaluation questions.
Sample and sampling frame—If a sample was used: the sample size and characteristics; the sample
selection criteria (e.g., single women, under 45); the process for selecting the sample (e.g., random,
purposive); if applicable, how comparison and treatment groups were assigned; and the extent to
which the sample is representative of the entire target population, including discussion of the
limitations of the sample for generalizing results.
Data collection procedures and instruments—Methods or procedures used to collect data, including
discussion of data collection instruments (e.g., interview protocols), their appropriateness for the
data source and evidence of their reliability and validity.
Performance standards/indicators—The standard or measure that will be used to evaluate
performance relative to the evaluation questions (e.g., national or regional indicators, rating scales).
Stakeholder engagement—Stakeholders’ engagement in the evaluation and how the level of
involvement contributed to the credibility of the evaluation and the results.
Ethical considerations—The measures taken to protect the rights and confidentiality of informants
Background information on evaluators—The composition of the evaluation team, the background
and skills of team members and the appropriateness of the technical skill mix, gender balance and
geographical representation for the evaluation.
Major limitations of the methodology—Major limitations of the methodology should be identified
and openly discussed as to their implications for evaluation, as well as steps taken to mitigate those
limitations.
9. Data analysis—The report should describe the procedures used to analyse the data collected to answer the
evaluation questions. It should detail the various steps and stages of analysis that were carried out, including
the steps to confirm the accuracy of data and the results. The report also should discuss the appropriateness
of the analysis to the evaluation questions. Potential weaknesses in the data analysis and gaps or limitations
of the data should be discussed, including their possible influence on the way findings may be interpreted
and conclusions drawn.
10. Findings and conclusions—The report should present the evaluation findings based on the analysis and
conclusions drawn from the findings.
Findings—Should be presented as statements of fact that are based on analysis of the data. They
should be structured around the evaluation criteria and questions so that report users can readily
make the connection between what was asked and what was found. Variances between planned and
actual results should be explained, as well as factors affecting the achievement of intended results.
Assumptions or risks in the project or programme design that subsequently affected implementation
should be discussed.
Conclusions—Should be comprehensive and balanced, and highlight the strengths, weaknesses and
outcomes of the intervention. They should be well substantiated by the evidence and logically
connected to evaluation findings. They should respond to key evaluation questions and provide
insights into the identification of and/or solutions to important problems or issues pertinent to the
decision making of intended users.
11. Recommendations—The report should provide practical, feasible recommendations directed to the
intended users of the report about what actions to take or decisions to make. The recommendations should
be specifically supported by the evidence and linked to the findings and conclusions around key questions
addressed by the evaluation. They should address sustainability of the initiative and comment on the
adequacy of the project exit strategy, if applicable.
12. Lessons learned—As appropriate, the report should include discussion of lessons learned from the
evaluation, that is, new knowledge gained from the particular circumstances (intervention, context, outcomes,
even evaluation methods) that is applicable to a similar context. Lessons should be concise and based
on specific evidence presented in the report.
13. Report annexes—Suggested annexes should include the following to provide the report user with
supplemental background and methodological details that enhance the credibility of the report:
TOR for the evaluation
Additional methodology-related documentation, such as the evaluation matrix and data collection
instruments (questionnaires, interview guides, observation protocols, etc.) as appropriate
List of individuals or groups interviewed or consulted and sites visited
List of supporting documents reviewed
Project or programme results map or results framework
Summary tables of findings, such as tables displaying progress towards outputs, targets, and goals
relative to established indicators
Short biographies of the evaluators and justification of team composition
Code of conduct signed by evaluators