My Psy 310 Note Phew

The document outlines the significance of research in psychology, emphasizing systematic and scientific methods for knowledge acquisition. It discusses various research types, methods, and settings, highlighting the importance of empirical evidence in understanding human behavior. Additionally, it covers the process of formulating and evaluating research problems and hypotheses, as well as the distinctions between experimental and correlational research designs.

TOPIC 1

Sub 1.1. Meaning of Research

Research is a systematic and organized effort to investigate a specific problem or question using scientific methods. It involves the collection, analysis, and interpretation of data to increase understanding and generate knowledge.

In psychology, research helps us understand behavior, thoughts, and emotions using objective evidence rather than assumptions or opinions.

It's systematic (planned, organized steps).
It's empirical (based on observation or experience).
It aims at solving problems, testing theories, or creating new knowledge.

Sub 1.2. Need for Research in Psychology

Psychological research is essential because:

It helps understand human behavior and mental processes.
It provides evidence for psychological theories and treatments.
It improves the accuracy of psychological assessments and interventions.
It aids policy-making, especially in areas like education, mental health, and criminal justice.

For example: Research helped show that positive reinforcement is more effective than punishment in shaping behavior (behavioral psychology).

Sub 1.3. Methods of Knowledge Acquisition

Humans acquire knowledge in several ways, but not all are scientific. Here's a breakdown:

Method | Description | Limitation
Tenacity | Believing something because it has always been that way. | Can lead to outdated or wrong beliefs.
Intuition | Relying on gut feeling or common sense. | Often biased and unverified.
Authority | Accepting information from experts or respected figures. | Authority can be wrong.
Rationalism | Using logic and reasoning. | Logic is only reliable if based on true premises.
Empiricism | Gaining knowledge through observation and experience. | Can be subjective or limited.
Scientific Method | Combining observation, reasoning, and experimentation. | Most reliable and objective method.

Psychology relies mostly on scientific methods because it demands objectivity and evidence.

Sub 1.4. Levels of Research in Psychology

Research in psychology can be divided into various levels or categories based on the depth and type of inquiry:
1. Basic Research:
Focuses on building theory and knowledge.
Not directly aimed at solving practical problems.
Example: Studying memory processes.

2. Applied Research:
Aims to solve real-world problems.
Example: Researching effective treatments for depression.

3. Descriptive Research:
Describes behavior without manipulating variables.
Includes surveys, case studies, naturalistic observation.

4. Experimental Research:
Involves manipulating variables to determine cause-effect.
Example: Testing how sleep affects concentration.

Sub 1.5. Research Settings

The setting where research is conducted influences the method and data collected.

1. Laboratory Settings:
Controlled environment.
Easier to manipulate variables.
Higher internal validity, lower external validity.
Example: Memory test in a lab.

2. Field Settings:
Natural, real-world environments.
Less control over variables.
Higher external validity.
Example: Observing classroom behavior.

3. Online/Virtual Settings:
Emerging in modern psychology.
Includes online surveys, experiments.

Sub 1.6. Types of Research

Research in psychology can take many forms:

Type | Description | Example
Descriptive | Describes characteristics of a population or situation. | Survey on stress among students.
Correlational | Examines relationships between variables, without implying causation. | Relationship between sleep and GPA.
Experimental | Tests causal relationships by manipulating variables. | Impact of music on mood.
Quasi-Experimental | Like experimental, but lacks full control (e.g., random assignment). | Studying different classrooms without assigning students.
Qualitative | Explores in-depth understanding using interviews, focus groups. | Exploring why victims of trauma seek therapy.
Quantitative | Focuses on numerical data and statistics. | Survey with Likert scale responses.
Mixed Methods | Combines qualitative and quantitative approaches. | Interview + questionnaire study on anxiety.

Summary Slide 1

Research is systematic, scientific, and evidence-based.
Psychology needs research to remain accurate, relevant, and effective.
Scientific methods of knowledge acquisition are more reliable than intuition or authority.
Understand the differences between types of research and where they're applied (lab vs. field).
Be able to give examples of each type of research.

TOPIC 2

2.1. Sources of Research Problem

A research problem is the question or issue the researcher wants to investigate. It's the foundation of any research work.

Sources include:

Personal experiences – Issues you've encountered in real life.
Literature reviews – Gaps or contradictions in existing studies.
Theories – Testing or expanding existing psychological theories.
Social issues – Problems affecting society, like mental health or addiction.
Professional practices – Issues observed in workplaces, schools, therapy, etc.
Replications – Repeating earlier studies in different contexts or populations.

Example: A psychologist may study stress in students after observing increased dropout rates.

2.2. Process of Problem Identification

This involves narrowing down broad ideas into a clear, researchable question.

Steps:

1. Observe a phenomenon or issue.
2. Review literature – What has been studied? What remains?
3. Identify a gap or contradiction in the existing research.
4. Narrow down the scope to something specific and manageable.
5. Formulate the research problem in clear terms.

Example: Instead of "mental health," refine to: "How does social media use affect anxiety levels among university students?"

2.3. Evaluating Research Problem

Before moving forward, check if the problem is worth researching. Consider:

Feasibility: Do you have time, access, resources?
Significance: Will the results be useful or important?
Clarity: Is the problem well-defined and specific?
Ethical: Is it safe and responsible to research?
Novelty: Does it contribute something new?

A good problem should be original, manageable, and relevant.

2.4. Formulating Hypotheses

A hypothesis is a tentative answer or prediction about the relationship between two or more variables. It is testable and guides the direction of the study.

Tips:

Should be clear and specific.
Must be testable (can be proven true or false).
Should reflect a relationship between variables.

Example: Students who sleep fewer than 6 hours perform worse in memory tests than those who sleep 8 hours.

2.5. Characteristics of Hypotheses

A good hypothesis should be:

Testable – measurable through data.
Falsifiable – possible to prove wrong.
Clear and concise – no vague language.
Directional (if applicable) – states the expected outcome (e.g., increase, decrease).
Relevant – tied to the research problem.

2.6. Sources of Hypotheses

Hypotheses can arise from:

Existing theories (e.g., behaviorism, cognitive psychology).
Previous studies – findings that need to be retested.
Observation – patterns noticed in real life.
Personal experience – insights from work or life.
Logical reasoning – applying deduction or induction.
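A hypothesis like the sleep-and-memory example in 2.4 is testable in the statistical sense: we can ask how often a difference as large as the observed one would appear if H₀ (no effect) were true. A minimal permutation-test sketch in Python, with all scores invented for illustration:

```python
import random
from statistics import mean

# Invented memory-test scores for illustration only.
sleep_8h = [14, 16, 15, 17, 13, 16]
sleep_4h = [11, 12, 14, 10, 12, 13]

observed = mean(sleep_8h) - mean(sleep_4h)

# Under H0 the group labels are arbitrary: shuffle them many times
# and count how often a difference this large appears by chance.
random.seed(0)
pooled = sleep_8h + sleep_4h
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
```

With groups as clearly separated as these invented ones, the estimated p-value comes out well below 0.05, so H₀ would be rejected.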
2.7. Functions of Hypotheses

Hypotheses play several key roles in research:

Guide investigation – Focus your study.
Provide structure – Help define variables and method.
Predict outcomes – Direct expectations.
Aid analysis – Help interpret results.
Link theory to practice – Test if theory works in reality.

2.8. Types of Hypotheses

Type | Description | Example
Null Hypothesis (H₀) | States there is no relationship/effect. | "There is no difference in memory scores between students who sleep 4 hours and 8 hours."
Alternative Hypothesis (H₁) | States there is a relationship/effect. | "Students who sleep 8 hours perform better than those who sleep 4 hours."
Directional Hypothesis | Predicts the direction of the effect. | "Students who listen to music while studying score lower on tests."
Non-directional Hypothesis | States there will be a difference but doesn't say which way. | "There is a difference in performance between students who study with and without music."
Research Hypothesis | The hypothesis the researcher actually believes. | Same as H₁, usually directional.
Statistical Hypothesis | Formal statements tested with data. | H₀ and H₁ written in statistical language.

Final Takeaways:

Always start with a clear problem backed by real sources and evidence.
Form hypotheses that are testable, specific, and based on logical foundations.
Know the types and roles of hypotheses — they are crucial for exam questions.

TOPIC 3

3.1. Measurement

Definition:

Measurement is the process of assigning numbers or symbols to characteristics of objects or people according to specific rules. In research, it's crucial for quantifying variables so they can be analyzed statistically.

Types of Measurement Scales:

Nominal: Categories (e.g., gender, religion)
Ordinal: Ranked order (e.g., class position)
Interval: Equal intervals without true zero (e.g., temperature in Celsius)
Ratio: Equal intervals with true zero (e.g., weight, height)
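The scale determines which summary statistic is meaningful: mode for nominal, median for ordinal, mean from interval upward, and ratios only where a true zero exists. A small Python sketch with invented data:

```python
from collections import Counter
from statistics import mean, median

# Invented data illustrating each scale.
religion = ["A", "B", "A", "C", "A"]          # nominal: categories only
class_position = [1, 2, 3, 4, 5, 2, 3]        # ordinal: order matters, spacing doesn't
temp_celsius = [21.0, 23.5, 22.0, 24.0]       # interval: no true zero
weight_kg = [60, 72, 55, 80]                  # ratio: true zero exists

mode_religion = Counter(religion).most_common(1)[0][0]  # mode suits nominal data
median_position = median(class_position)                # median suits ordinal data
mean_temp = mean(temp_celsius)                          # mean is fine from interval up,
                                                        # but 20 C is not "twice" 10 C
weight_ratio = max(weight_kg) / min(weight_kg)          # ratios require a true zero

print(mode_religion, median_position, mean_temp, round(weight_ratio, 2))
```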
Importance in Research:

Ensures objectivity
Allows for replication
Supports statistical analysis

3.2. Reliability and Validity

Reliability

Definition: The consistency of a measuring instrument.

Types:

Test-Retest Reliability: Stability over time
Inter-Rater Reliability: Agreement between different observers
Internal Consistency: Consistency of items within a test (e.g., Cronbach's alpha)

Validity

Definition: The degree to which an instrument measures what it's intended to measure.

Types:

Face Validity: Appears effective at face value
Content Validity: Covers all aspects of the concept
Construct Validity: Measures the theoretical construct
Criterion-Related Validity:
  Predictive Validity: Predicts future performance
  Concurrent Validity: Correlates with current performance

Key Insight: A test can be reliable but not valid. However, for a test to be valid, it must be reliable.

3.3. Psychological Tests and Inventories

Definition: Standardized tools used to assess mental functions and behaviors.

Examples:

Intelligence Tests (e.g., WAIS, Stanford-Binet)
Personality Inventories (e.g., MMPI, Big Five Inventory)
Attitude Scales, Interest Inventories, etc.

Characteristics:

Standardized administration and scoring
Norm-referenced (compared to a standard group)
Used in clinical, educational, and occupational settings

Advantages:

Objectivity
Comparability
Diagnostic utility
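Internal consistency, listed under reliability in 3.2, is typically quantified with Cronbach's alpha, which compares the summed per-item variances to the variance of respondents' total scores. A minimal sketch with invented item scores (a hand-check, not a substitute for SPSS or R output):

```python
from statistics import pvariance

# Rows = respondents, columns = items of a 3-item scale (invented data).
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
]

k = len(scores[0])                          # number of items
items = list(zip(*scores))                  # per-item score lists
totals = [sum(row) for row in scores]       # per-respondent total scores

# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
item_var = sum(pvariance(item) for item in items)
alpha = (k / (k - 1)) * (1 - item_var / pvariance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")    # prints "Cronbach's alpha = 0.90"
```

Values near 1 indicate that the items "hang together"; these invented scores give alpha of about 0.90.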
3.4. Nature of Ability Tests

Definition: Tests designed to assess specific kinds of cognitive or motor abilities.

Types:

Aptitude Tests: Predict future performance (e.g., SAT, GRE)
Achievement Tests: Assess learned knowledge (e.g., school exams)
Intelligence Tests: General cognitive ability (e.g., IQ tests)

Key Concepts:

Often timed and standardized
Focused on skills like reasoning, memory, comprehension, and verbal/math abilities
Frequently used in school and employment settings

3.5. Nature and Measurement of Personality

Personality: The unique and stable pattern of behaviors, thoughts, and emotions shown by an individual.

Theories of Personality:

Trait Theory (e.g., Big Five)
Psychoanalytic Theory (Freud)
Humanistic Theory (Rogers, Maslow)

Measurement Tools:

Objective Tests: Structured (e.g., MMPI, NEO-PI)
Projective Tests: Open-ended (e.g., Rorschach Inkblot Test, TAT)

Application:

Clinical diagnosis
Job screening
Counseling

3.6. Observation as a Research Tool

Definition: Systematic watching and recording of behaviors.

Types:

Participant Observation: Researcher is involved in the activity
Non-Participant Observation: Researcher is not involved
Structured Observation: Uses a predefined coding scheme
Unstructured Observation: Open-ended, exploratory

Advantages:

Captures natural behavior
Useful in exploratory stages
Complements other methods
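In software terms, a structured observation with a predefined coding scheme reduces to tallying codes per time interval. A sketch with an invented scheme and an invented observation record:

```python
from collections import Counter

# Predefined coding scheme for a classroom observation (illustrative only).
SCHEME = {"ON": "on-task", "OFF": "off-task", "Q": "asks question", "H": "raises hand"}

# One code recorded per 30-second interval (invented record).
recorded = ["ON", "ON", "OFF", "Q", "ON", "H", "ON", "OFF", "ON"]

tally = Counter(recorded)
for code, label in SCHEME.items():
    n = tally[code]
    print(f"{label:>13}: {n} intervals ({n / len(recorded):.0%})")
```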
Limitations:

Observer bias
Hawthorne effect (change in behavior when observed)
Time-consuming

Summary for Quick Revision:

Concept | Key Idea
Measurement | Assigning numbers to variables
Reliability | Consistency of measurement
Validity | Accuracy of measurement
Psychological Tests | Standardized tools for behavior assessment
Ability Tests | Measures cognitive or physical skills
Personality Measures | Trait and projective tools
Observation | Naturalistic or controlled behavioral tracking

TOPIC 4

4.1 Designs of Descriptive Research

(As referenced: Hassan Pg. 129)

Descriptive research is primarily concerned with describing the characteristics of a phenomenon or a population. It doesn't test hypotheses but answers the "what" rather than the "why" or "how."

Key features:

No manipulation of variables.
Observational in nature.
Data is usually collected through surveys, case studies, observational methods, or content analysis.

Types:

1. Case Studies – In-depth investigation of a single case (individual, group, or event).
2. Surveys – Collection of data from a large group using questionnaires/interviews.
3. Observational Studies – Watching and recording behaviors without interference.
4. Content Analysis – Systematic coding of textual or visual material.

Use Case:

Descriptive research is widely used in education, social sciences, and marketing to understand opinions, behaviors, or demographics.
4.2 Experimental and Correlational Research

These two research designs are often compared because they both deal with variables, but they differ in how they handle cause and effect and control.

Experimental Research

This is the most rigorous and controlled type of research. It is used to determine causal relationships between variables.

Key Characteristics:

1. Manipulation: The researcher manipulates one variable (independent variable, IV) to see its effect on another (dependent variable, DV).
2. Control: All other variables are controlled to isolate the effect of the IV.
3. Randomization: Participants are randomly assigned to groups (e.g., treatment vs. control) to reduce bias.
4. Cause-Effect: The goal is to establish causality.

Example:
A psychologist wants to know if sleep affects memory. One group sleeps 8 hours, another 4 hours, then both take a memory test. The results can show if sleep caused a difference in memory performance.

Types of Experimental Designs:

Pre-experimental design: Little to no control group or randomization.
True experimental design: Randomized groups, strong control, most valid for causal claims.
Quasi-experimental design: Lacks random assignment but tries to test cause-effect relationships (more on this later).

POTENTIAL SOURCES OF EXPERIMENTAL ERROR

1. Pre-measurement (solution: randomization)
2. Interaction error
3. Maturation
4. History
5. Instrumentation
6. Statistical selection
7. Mortality
8. Measurement timing

Correlational Research

This research examines the relationship between two or more variables without manipulation.

Key Characteristics:

1. No manipulation: Researchers observe what naturally exists.
2. No causality: Correlation does not imply cause.
3. Statistical Relationship: Uses correlation coefficients (from -1 to +1) to show strength and direction.

Example:

A study finds that students who spend more time studying tend to have higher GPAs. But it doesn't mean studying causes higher GPA — other factors (like motivation or prior knowledge) might play a role.
Types of Correlation:

Positive: Both variables increase together.
Negative: One variable increases as the other decreases.
Zero correlation: No relationship between the variables.

Comparison: Experimental vs Correlational

Feature | Experimental | Correlational
Manipulation | Yes | No
Control | High | Low
Randomization | Often used | Not used
Determines Cause | Yes | No
Example | Drug trials | Studying & GPA

4.3 Design of True Experiments

A true experiment is the gold standard for research that aims to determine cause-and-effect relationships. It is characterized by strict control and randomization, ensuring that the results are reliable and internally valid.

Core Characteristics of True Experimental Design

1. Random Assignment

Participants are randomly placed into different groups (e.g., experimental or control). This ensures that every participant has an equal chance of being in any group, reducing bias.

2. Control Group

This group does not receive the treatment or intervention. It serves as a baseline for comparison.

3. Manipulation of the Independent Variable (IV)

The researcher deliberately changes the IV to see its effect on the dependent variable (DV).

4. Pre- and Post-Testing (sometimes)

Measurements can be taken before and after the experiment to observe the effect of the IV.

Basic Structure of a True Experiment

Group | Pre-test | Treatment (IV)
Experimental | Yes | Yes
Control | Yes | No

Example

A researcher wants to test if a new teaching method improves students' math performance. She randomly assigns students to two groups:

Group A uses the new method (experimental group),
Group B uses the traditional method (control group).
Both groups take a math test before and after the teaching. If Group A shows significantly greater improvement, the new method is considered effective.

Types of True Experimental Designs

1. Pretest-Posttest Control Group Design

Measures change over time in both groups.

2. Posttest-Only Control Group Design

No pretest; only post-intervention results are compared. Good when pretests might "tip off" participants.

3. Solomon Four-Group Design

Combines both pretest-posttest and posttest-only designs. Helps control for the effect of pretesting itself.

Advantages

Strongest evidence for causality
High internal validity
Control over extraneous variables

Limitations

Can be expensive and time-consuming
Not always ethical or practical (e.g., randomly denying a life-saving drug to a control group)

4.4 Design of Quasi-Experiments

A quasi-experimental design is similar to a true experiment, but without random assignment. It is often used in real-world settings where randomization isn't possible, yet the researcher still wants to assess cause-and-effect relationships.

Key Characteristics of Quasi-Experiments

1. No Random Assignment

Participants are placed into groups based on existing characteristics (e.g., age, location, classroom). Groups may differ in important ways before the study begins.

2. Manipulation of the Independent Variable

The researcher still introduces an intervention or treatment (like in a true experiment).

3. Use of Control or Comparison Groups

While not randomly assigned, these help to draw comparisons and control for confounding variables.

Example

Suppose a school introduces a new teaching method in only one classroom (due to scheduling constraints), while another classroom continues with the traditional method. Students aren't randomly assigned — but their performance can still be compared.

Common Quasi-Experimental Designs

1. Non-Equivalent Groups Design

Two or more groups are compared, but they are not randomly assigned. Often used in educational settings.

2. Time Series Design

Multiple observations are made before and after the intervention. Helps to track trends and changes over time.

3. Interrupted Time Series Design

A "natural" intervention (e.g., policy change) occurs, and data is analyzed before and after it.

4. Regression Discontinuity Design

Participants are assigned to groups based on a cutoff score (e.g., test results, income level), not randomly.

Comparison with True Experiments

Feature | True Experiment | Quasi-Experiment
Random Assignment | ✅ Yes | ❌ No
Manipulation of Variables | ✅ Yes | ✅ Yes
Use of Control Group | ✅ Often | ✅ Often
Internal Validity | ✅ High | ⚠️ Moderate
Real-world Feasibility | ⚠️ Sometimes limited | ✅ High

Advantages

More practical in real-life settings
Useful when ethical constraints prevent randomization
Still allows for testing causal hypotheses

Limitations

Selection bias (because of no random assignment)
Lower internal validity compared to true experiments
Harder to rule out confounding variables

4.5 Guide to Selecting Appropriate Research Design
Choosing the right research design is about aligning your research question with the right method that will yield reliable and valid answers. It's like choosing the right tool for a specific job — no design is universally best.

Step-by-Step Guide to Selecting a Research Design

1. Understand Your Research Problem

Start by asking:

What is the central research question?
Are you trying to describe, explain, predict, or control something?

Examples:

"What are students' attitudes toward online learning?" → Descriptive
"Does exercise reduce anxiety?" → Experimental
"Is there a link between income and education level?" → Correlational

2. Clarify the Type of Data You Need

Quantitative: Numbers, statistics (e.g., test scores, age, income).
Qualitative: Words, meanings, opinions (e.g., interviews, open-ended surveys).

Design Match:

Quantitative → Experimental, Quasi-Experimental, Correlational, Descriptive (survey-based)
Qualitative → Case studies, Phenomenology, Grounded Theory

3. Determine the Level of Control You Need

Do you want to control variables and test causality? → Choose experimental.
Are you working in a natural setting without control over variables? → Consider quasi-experimental or correlational.
Is the aim just to describe a phenomenon or group? → Go for descriptive research.

4. Consider Ethical and Practical Constraints

Can you randomly assign people to treatment or control groups?
Is it ethical to withhold treatment from one group?
Are you studying something you can't manipulate (e.g., gender, age)?

These may rule out true experiments, guiding you toward quasi-experiments or correlational designs.

5. Identify the Time Frame

Are you studying changes over time? → Consider longitudinal or time series designs.
Is your data collection happening at one point in time? → Use a cross-sectional design.

6. Match Design to Research Goal
Goal | Suitable Design(s)
Describe characteristics | Descriptive
Explore relationships | Correlational
Explain causality | Experimental, Quasi-experimental
Understand experiences | Qualitative (case studies, ethnography)

TOPIC 5

5. Methods of Data Analysis

Data analysis is the process of making sense of the data you've collected, either to describe it or to draw conclusions. There are two broad categories:

1. Descriptive Data Analysis
2. Inferential Data Analysis

Let's take them one after the other.

5.1 Descriptive Data Analysis

Descriptive data analysis involves techniques used to summarize, organize, and present data in a meaningful way. It does not make predictions or generalizations — it simply tells you what is happening in your data.

Key Objectives:

Describe the central tendency of the data.
Show how spread out the values are.
Identify patterns or trends.
Present data in a way that's easy to understand (e.g., charts, tables).

Common Descriptive Statistics

Concept | What It Shows | Examples
Mean (Average) | Central value of a dataset | Average test score
Median | Middle value in ordered data | Middle income of a group
Mode | Most frequent value | Most common age
Range | Spread between lowest and highest | Highest vs. lowest salary
Standard Deviation | How much values deviate from the mean | Consistency in test scores
Frequency | Number of times a value occurs | Number of students with "A"
Percentage | Proportional representation | 40% of students passed

Presentation Tools:

Tables, bar charts, pie charts, histograms, line graphs

Example:
If a researcher collects test scores from 100 students, descriptive statistics will help them find:

The average score (mean)
The most frequent score (mode)
The score range (difference between highest and lowest)
The distribution using graphs

Takeaway:

> Descriptive statistics help you understand what's in the data, but they don't let you say what's true for a wider population.

5.2 Inferential Data Analysis

Inferential data analysis goes a step further — it allows researchers to make predictions or generalizations about a population based on a sample.

It answers questions like:

Is this difference real or just by chance?
Can we say this result applies beyond the sample?

Key Objectives:

Generalize findings from sample to population.
Test hypotheses.
Determine if relationships or differences are statistically significant.
Estimate confidence levels for your results.

Common Inferential Statistics

Concept | Purpose | Example
Hypothesis Testing | Determines if observed effects are real | Does a new teaching method improve scores?
T-test | Compares means between 2 groups | Boys vs. girls in math scores
ANOVA (Analysis of Variance) | Compares means among 3 or more groups | Performance across different schools
Chi-square Test | Tests relationships between categorical data | Does gender influence course choice?
Correlation | Measures strength of relationship | Study time and test score (r = +0.85)
Regression Analysis | Predicts one variable from another | Predict GPA from study hours and attendance
Confidence Intervals | Range where population value likely falls | We're 95% sure the mean score is 70–75
P-value | Tells how likely a result is due to chance | p < 0.05 means statistically significant result

Example:

If a sample of 100 students scored higher using a new app, inferential analysis helps decide whether this improvement is:
Real (due to the app)
Or just happened by chance

Key Assumptions in Inferential Statistics:

Your sample is representative of the population.
Data meets certain conditions (e.g., normal distribution for t-tests).
You're clear about the null and alternative hypotheses.

Descriptive vs Inferential Statistics

Feature | Descriptive | Inferential
Purpose | Describe data | Make predictions or conclusions
Data Source | Uses entire dataset | Uses sample to represent population
Examples | Mean, Mode, Frequency, Graphs | T-test, ANOVA, Regression, p-values
Generalizes to population? | ❌ No | ✅ Yes
Tests hypotheses? | ❌ No | ✅ Yes

Summary: Choosing Which to Use

Situation | Use Descriptive When… | Use Inferential When…
Data analysis | You just need to summarize or describe data | You want to draw conclusions or test relationships
Example | Presenting age or gender breakdown of a class | Testing if a new teaching method improves performance
Sample | Whole population or large dataset | Subset of a population (sample)

Final Tip:

> "Descriptive tells you what IS in the data.
Inferential tells you what COULD BE true beyond the data."

TOPIC 6

6.1. Preparing for the Use of Computer in Social and Educational Research

This refers to the foundational steps needed before employing computers for research. It involves:

Understanding research goals and design: Clarifying what the study is about and the type of data needed.
Learning relevant software tools (e.g., SPSS, Excel, R, Python).
Training researchers: Ensuring that the individuals involved are skilled in using the required computer tools.
Setting up infrastructure: Computers, software, backups, and data protection protocols.

6.2. Data Coding

Data coding is the process of transforming collected qualitative or quantitative data into a format that computers can process.

For quantitative data: Assigning numerical values to responses (e.g., Male = 1, Female = 2).
For qualitative data: Grouping open-ended responses into themes or categories, then assigning codes.

Essential for data cleaning, analysis, and interpretation.

6.3. Data Sorting

Sorting refers to organizing data in a specific order to make analysis easier.

For example: Sorting student test scores from highest to lowest.
Sorting can be alphabetical, numerical, or based on date/time.
Helps in identifying trends, outliers, or inconsistencies.

6.4. Data Entering

This is the step where coded data is input into a software tool or database.

Manual entry: Typing data into Excel, SPSS, or other software.
Automated entry: Importing data from online forms, survey platforms, etc.

Accuracy is vital — errors at this stage can corrupt results.

6.5. Roistering Data (likely a typographical error; possibly meant "Registering Data" or "Roster Data")

Assuming it means registering or organizing data:

Creating a structured format or database for the data.
Ensuring each data point is assigned to the right respondent or category.
Important for longitudinal studies and tracking individual participants across multiple data points.

6.6. Data Analysis Applications and Data Output Interpretations

This is the core of the research process:

Applications: Using software (like SPSS, STATA, R, Python, Excel) to perform:

Descriptive analysis (mean, mode, frequency, etc.)
Inferential statistics (t-tests, ANOVA, regression, etc.)

Interpretation: Making sense of the results:

What do the numbers mean in the context of the research?
Are the hypotheses supported or rejected?
What patterns or insights emerge?
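Steps 6.2, 6.3, and 6.6 in miniature, using invented responses and an invented codebook:

```python
from collections import Counter

# 6.2 Data coding: map questionnaire responses to numbers (codebook invented).
CODEBOOK = {"Male": 1, "Female": 2}
raw_responses = ["Female", "Male", "Female", "Female", "Male"]
coded = [CODEBOOK[r] for r in raw_responses]

# 6.3 Data sorting: order test scores to spot trends and outliers.
scores = [67, 88, 45, 91, 73]
ranked = sorted(scores, reverse=True)    # highest to lowest
print(ranked)

# 6.6 Descriptive output: frequency of each coded value.
freq = Counter(coded)
for value, n in sorted(freq.items()):
    print(f"code {value}: {n} respondents")
```

A real project would do the same with SPSS variable labels or a pandas DataFrame; the logic is identical.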
6.7. Selection of Suitable Programmes for Data Analysis

Different tools are used depending on:

Type of research (qualitative or quantitative)
Complexity of data
User proficiency

Examples:

SPSS: Good for social sciences and education research.
R/Python: More flexible, ideal for complex or large datasets.
Excel: User-friendly, best for basic analysis and data handling.
NVivo/ATLAS.ti: For qualitative data analysis.

TOPIC 7

7.1. What is a Research Report?

A research report is a structured document that presents the process, findings, and conclusions of a research study. It:

Summarizes the purpose of the study
Describes the methodology used
Presents the results
Discusses the implications of the findings

It serves as a formal way to communicate research outcomes to academics, practitioners, funders, or the general public.

7.2. Format of a Research Report

A standard research report typically follows this structure:

🔹 Title Page

Includes the title of the study, researcher's name, institution, and date.

🔹 Abstract

A concise summary (usually 150–300 words) of the entire research — including the purpose, methods, findings, and conclusions.

🔹 Introduction

Background of the study
Statement of the problem
Objectives/research questions/hypotheses
Significance of the study

🔹 Literature Review

Overview of previous research
Identification of knowledge gaps
Theoretical or conceptual framework

🔹 Methodology

Research design
Population and sample
Instruments for data collection
Procedures
Methods of data analysis

🔹 Results/Findings

Presentation of analyzed data (tables, charts, graphs)
Focus on what was discovered without interpretation

🔹 Discussion

Interpretation of findings
Connection to literature reviewed
Explanation of implications

🔹 Conclusion and Recommendations

Summary of key findings
Suggestions for practice, policy, or further research

🔹 References/Bibliography

List of all sources cited using a specific referencing style (APA, MLA, etc.)

🔹 Appendices

Extra materials such as questionnaires, raw data, charts, etc.

7.3. Discussing and Interpreting Research Data

This involves:

Making sense of statistical results: Explaining trends, patterns, and what they mean.
Linking results to research questions: Do the findings answer the questions or support/reject the hypotheses?
Contextualizing findings: Comparing with previous studies to see if they align or differ.
Identifying implications: What do these findings mean for policy, education, society, or further research?

7.4. Writing the Research Report

This stage requires:

Organizing content logically based on the format
Maintaining clarity and coherence: Using proper grammar, transitions, and scholarly tone
Referencing properly: Avoiding plagiarism and citing all sources
Editing and proofreading: Checking for structure, flow, formatting, and errors

🔸 Tip: Use visuals (charts, tables) to enhance understanding, but always explain what they show.

Topic 8: Evaluating Research Report

Evaluation of a research report is a systematic process of assessing the quality, accuracy, clarity, and contribution of the report. It helps determine whether the research meets academic and professional standards.
8.1. Aims and Objectives of Evaluating a Research Report

The purpose of evaluation is to:

Ensure the research process followed ethical and methodological standards.
Determine whether the research objectives were clearly stated and achieved.
Assess the validity and reliability of the findings.
Identify the strengths and weaknesses of the report.
Encourage improvements in future research practices.
Support grading or decision-making (in academic or policy settings).

8.2. Procedure for Evaluating Research Report

A step-by-step approach typically includes:

🔹 Reviewing the Research Title and Abstract

Is the title clear and appropriate?
Does the abstract provide a concise summary?

🔹 Checking the Introduction

Are the problem, objectives, and research questions/hypotheses well stated?

🔹 Evaluating the Literature Review

Is the literature relevant, up-to-date, and properly cited?
Is there a theoretical or conceptual framework?

🔹 Assessing Methodology

Was the research design appropriate for the problem?
Are the population, sample, and instruments well defined?
Are data collection and analysis procedures clearly explained?

🔹 Analyzing the Results and Discussion

Are the results clearly presented (with tables/figures)?
Are findings interpreted logically?
Are limitations acknowledged?

🔹 Inspecting the Conclusion and Recommendations

Are conclusions drawn from the data?
Are recommendations realistic and useful?

🔹 Checking References and Appendices

Are sources correctly cited?
Are appendices used to support the report?

8.3. Method of Scoring

Scoring is usually based on a rubric or assessment guide with criteria such as:

Component | Marks Allocation (example)
Title and Abstract | 5 marks
Introduction | 10 marks
Literature Review | 10 marks
Methodology | 20 marks
Data Analysis & Interpretation | 20 marks
Conclusion & Recommendation | 10 marks
Presentation & Format | 10 marks
Referencing & Appendices | 5 marks
Total | 100 marks

Other scoring approaches might include grading scales (A–F) or rating systems (Excellent to Poor), depending on the institution.
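Rubric marks are often converted to the letter grades mentioned above; a sketch of one possible mapping (the cut-off marks are invented for illustration, not institutional policy):

```python
def grade(total_marks: int) -> str:
    """Map a rubric total (out of 100) to a letter grade (illustrative cut-offs)."""
    cutoffs = [(70, "A"), (60, "B"), (50, "C"), (45, "D"), (40, "E")]
    for cutoff, letter in cutoffs:
        if total_marks >= cutoff:
            return letter
    return "F"

print(grade(82), grade(55), grade(30))   # prints "A C F"
```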
