My Psy 310 Note Phew
Research is a systematic and organized effort to investigate a specific problem or question using scientific methods. It involves the collection, analysis, and interpretation of data to increase understanding and generate knowledge.

It's systematic (planned, organized steps).

It's empirical (based on observation or experience).

It aims at solving problems, testing theories, or creating new knowledge.

Humans acquire knowledge in several ways, but not all are scientific. Here's a breakdown:

Method | Description | Limitation
Intuition | Relying on gut feeling or common sense. | Often biased and unverified.
Authority | Accepting information from experts or respected figures. | Authority can be wrong.

Example: Studying memory processes.

2. Applied Research:

1.5. Research Settings

The setting where research is conducted influences the method and data collected. Laboratory settings offer higher internal validity, lower external validity.

3. Online/Virtual Settings:

Research in psychology can take many forms:
Type | Description | Example
Descriptive | Describes characteristics of a population or situation. | Survey on stress among students.
Quantitative | Focuses on numerical data and statistics. | Survey with Likert scale responses.
Mixed Methods | Combines qualitative and quantitative approaches. | Interview + questionnaire study on anxiety.

Sources of research problems include:

Theories – Testing or expanding existing psychological theories.

Social issues – Problems affecting society, like mental health or addiction.

Professional practices – Issues observed in workplaces, schools, therapy, etc.
Scientific methods of knowledge acquisition are more reliable than intuition or authority.

Understand the differences between types of research and where they're applied (lab vs. field).

Be able to give examples of each type of research.

2.2. Process of Problem Identification

This involves narrowing down broad ideas into a clear, researchable question.

Steps:
1. Observe a phenomenon or issue.

4. Narrow down the scope to something specific and manageable.

5. Formulate the research problem in clear terms.

Example: Instead of "mental health," refine to: "How does social media use affect anxiety levels among university students?"

Before moving forward, check if the problem is worth researching. Consider:

Feasibility: Do you have time, access, resources?

Significance: Will the results be useful or important?

Clarity: Is the problem well-defined and specific?

Ethical: Is it safe and responsible to research?

Novelty: Does it contribute something new?

A good hypothesis:

Should reflect a relationship between variables.

Testable – measurable through data.

Falsifiable – possible to prove wrong.

Clear and concise – no vague language.

Directional (if applicable) – states expected outcome (e.g., increase, decrease).

Relevant – tied to the research problem.

Example: Students who sleep fewer than 6 hours perform worse in memory tests than those who sleep 8 hours.
2.7. Functions of Hypotheses

Hypotheses play several key roles in research:

Supports statistical analysis

Type | Description | Example
Statistical Hypothesis | Formal statements tested with data. | H₀ and H₁ written in statistical language.
Directional Hypothesis | Predicts the direction of effect. | "Students who listen to music while studying score lower on tests."
Non-directional Hypothesis | States there will be a difference but doesn't say which way. | "There is a difference in performance between students who study with and without music."

Levels of measurement:

Nominal: Categories (e.g., gender, religion)

Ordinal: Ranked order (e.g., class position)

Interval: Equal intervals without true zero (e.g., temperature in Celsius)

Ratio: Equal intervals with true zero (e.g., weight, height)

Key Insight: A test can be reliable but not valid. However, for a test to be valid, it must be reliable.
Definition: The degree to which an instrument measures what it's intended to measure.

Types:

Face Validity: Appears effective at face value

Content Validity: Covers all aspects of the concept

Construct Validity: Measures the theoretical construct

Used in clinical, educational, and occupational settings

Advantages:

Objectivity

Comparability

Diagnostic utility
3.4. Nature of Ability Tests

Definition: Tests designed to assess specific kinds of cognitive or motor abilities.

Types:

Aptitude Tests: Predict future performance (e.g., SAT, GRE)

Achievement Tests: Assess learned knowledge (e.g., school exams)

Intelligence Tests: General cognitive ability (e.g., IQ tests)

Key Concepts:

Often timed and standardized

Focused on skills like reasoning, memory, comprehension, and verbal/math abilities

Frequently used in school and employment settings

3.5. Nature and Measurement of Personality

Personality: The unique and stable pattern of behaviors, thoughts, and emotions shown by an individual.

(As referenced: Hassan Pg. 129)

Measurement Tools:

Objective Tests: Structured (e.g., MMPI, NEO-PI)

Projective Tests: Open-ended (e.g., Rorschach Inkblot Test, TAT)

Application:

Clinical diagnosis

Job screening

Counseling

3.6. Observation as a Research Tool

Definition: Systematic watching and recording of behaviors.

Types:

Participant Observation: Researcher is involved in the activity

Structured Observation: Uses a predefined coding scheme

Unstructured Observation: Open-ended, exploratory

Observational in nature.

Data is usually collected through surveys, case studies, observational methods, or content analysis.

Summary for Quick Revision:

Concept | Key Idea
4.2 Experimental and Correlational Research

These two research designs are often compared because they both deal with variables, but they differ in how they handle cause and effect and control.

Experimental Research

This is the most rigorous and controlled type of research. It is used to determine causal relationships between variables.

Pre-experimental design: Little to no control group or randomization.

True experimental design: Randomized groups, strong control, most valid for causal claims.

Quasi-experimental design: Lacks random assignment but tries to test cause-effect relationships (more on this later).

POTENTIAL SOURCES OF EXPERIMENTAL ERROR

1. Pre-measurement (solution: randomization)

2. Interaction error

3. Maturation

4. History

Correlational Research

3. Statistical Relationship: Uses correlation coefficients (from -1 to +1) to show strength and direction.

Example:

A study finds that students who spend more time studying tend to have higher GPAs. But it doesn't mean studying causes higher GPA; other factors (like motivation or prior knowledge) might play a role.
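The coefficient mentioned in point 3 can be computed directly. Here is a minimal sketch in plain Python; the study-hours and GPA values are invented for illustration, not data from any real study:

```python
# Pearson correlation between two made-up variables: hours studied per
# week and GPA. Values near +1 mean a strong positive relationship,
# near -1 a strong negative one, near 0 no linear relationship.
import math

study_hours = [5, 10, 15, 20, 25, 30]
gpa = [2.1, 2.6, 2.8, 3.2, 3.5, 3.7]

n = len(study_hours)
mean_x = sum(study_hours) / n
mean_y = sum(gpa) / n

# covariance numerator and the two spread terms of the denominator
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(study_hours, gpa))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in study_hours))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in gpa))

r = cov / (sd_x * sd_y)
print(f"r = {r:.2f}")
```

Here r comes out close to +1, a strong positive correlation; as the example above stresses, even a very high r by itself says nothing about causation.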
Types of Correlation:

Positive: Both variables increase together.

Negative: One variable increases as the other decreases.

Zero correlation: No relationship between the variables.

Comparison: Experimental vs Correlational

Feature | Experimental | Correlational
Manipulation | Yes | No
Control | High | Low
Randomization | Often used | Not used
Determines Cause | Yes | No

4.3 Design of True Experiments

A true experiment is the gold standard for research that aims to determine cause-and-effect relationships. It is characterized by strict control and randomization, ensuring that the results are reliable and internally valid.

Core Characteristics of True Experimental Design

1. Random Assignment

Participants are randomly placed into different groups (e.g., experimental or control). This ensures that every participant has an equal chance of being in any group, reducing bias.

2. Control Group

This group does not receive the treatment or intervention. It serves as a baseline for comparison.

3. Manipulation of the Independent Variable (IV)

The researcher deliberately changes the IV to see its effect on the dependent variable (DV).

4. Pre- and Post-Testing (sometimes)

Measurements can be taken before and after the experiment to observe the effect of the IV.

Basic Structure of a True Experiment

Example

A researcher wants to test if a new teaching method improves students' math performance. She randomly assigns students to two groups:

Group A uses the new method (experimental group),

Group B uses the traditional method (control group).

Both groups take a math test before and after the teaching. If Group A shows significantly greater improvement, the new method is considered effective.
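The random assignment step can be sketched in a few lines of Python. The student names are hypothetical placeholders; shuffling the list and splitting it in half is one simple way to give every participant an equal chance of landing in either group:

```python
# Random assignment for the teaching-method example: shuffle the roster,
# then split it so each student is equally likely to be in either group.
import random

participants = [f"student_{i:02d}" for i in range(1, 21)]  # 20 students

random.shuffle(participants)          # randomize the order
half = len(participants) // 2
group_a = participants[:half]         # experimental: new teaching method
group_b = participants[half:]         # control: traditional method

print(len(group_a), len(group_b))
```

Every run produces a different split, which is exactly the point: group membership is decided by chance, not by any pre-existing characteristic.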
Types of True Experimental Designs

1. Pretest-Posttest Control Group Design

Measures change over time in both groups.

2. Posttest-Only Control Group Design

No pretest, only post-intervention results are compared. Good when pretests might "tip off" participants.

3. Solomon Four-Group Design

Combines both pretest-posttest and posttest-only designs. Helps control for the effect of pretesting itself.

4.4 Design of Quasi-Experiments

A quasi-experimental design is similar to a true experiment, but without random assignment. It is often used in real-world settings where randomization isn't possible, yet the researcher still wants to assess cause-and-effect relationships.

Key Characteristics of Quasi-Experiments

1. No Random Assignment

Participants are placed into groups based on existing characteristics (e.g., age, location, classroom). Groups may differ in important ways before the study begins.

Two or more groups are compared, but they are not randomly assigned. Often used in educational settings.

A "natural" intervention (e.g., policy change) occurs, and data is analyzed before and after it.

Feature | True Experiment | Quasi-Experiment
Internal Validity | ✅ High | ⚠️ Moderate
Real-world Feasibility | ⚠️ Sometimes limited | ✅ High

Limitations

Selection bias (because of no random assignment)
Choosing the right research design is about aligning your research question with the right method that will yield reliable and valid answers. It's like choosing the right tool for a specific job: no design is universally best.

Step-by-Step Guide to Selecting a Research Design

1. Understand Your Research Problem

Start by asking: Are you trying to describe, explain, predict, or control something?

"What are students' attitudes toward online learning?" → Descriptive

"Does exercise reduce anxiety?" → Experimental

"Is there a link between income and education level?" → Correlational

3. Determine the Level of Control You Need

Are you working in a natural setting without control over variables? → Consider quasi-experimental or correlational.

Is the aim just to describe a phenomenon or group? → Go for descriptive research.

4. Consider Ethical and Practical Constraints

Is it ethical to withhold treatment from one group?

Are you studying something you can't manipulate (e.g., gender, age)?

These may rule out true experiments, guiding you toward quasi-experiments or correlational designs.
Goal | Suitable Design
Describe characteristics | Descriptive
Understand experiences | Qualitative (case studies, ethnography)

If a researcher collects test scores from 100 students, descriptive statistics will help them find:

Mean (Average) | Central value of a dataset | Average test score

Present data in a way that's easy to understand (e.g., charts, tables).

5.2 Inferential Data Analysis

Inferential data analysis goes a step further; it allows researchers to make predictions or generalizations about a population based on a sample.

It answers questions like:

Is this difference real or just by chance?

Can we say this result applies beyond the sample?

Key Objectives:

Determine if relationships or differences are statistically significant.

Estimate confidence levels for your results.

Technique | What It Does | Example
Correlation | Measures strength of relationship | Study time and test score (r = +0.85)
Regression Analysis | Predicts one variable from another | Predict GPA from study hours and attendance
Confidence Intervals | Range where population value likely falls | We're 95% sure the mean score is 70–75
P-value | Tells how likely a result is due to chance | p < 0.05 means statistically significant result

If a sample of 100 students scored higher using a new app, inferential analysis helps decide whether this improvement is:

Real (due to the app)

Or just happened by chance

Key Assumptions in Inferential Statistics:

Your sample is representative of the population.

Data meets certain conditions (e.g., normal distribution for t-tests).

You're clear about the null and alternative hypotheses.

Feature | Descriptive | Inferential
Purpose | Describe data | Make predictions or conclusions
Data Source | Uses entire dataset | Uses sample to represent population
Sample | Whole population or large dataset | Subset of a population (sample)
Example | Presenting age or gender breakdown of a class | Testing if a new teaching method improves performance
Use when… | You just need to summarize or describe data | You want to draw conclusions or test relationships

> "Descriptive tells you what IS in the data.
> Inferential tells you what COULD BE true beyond the data."
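The distinction above can be made concrete with a small simulation of the new-app example. The scores below are invented, and the permutation test shown is a standard-library stand-in for formal tests such as the t-test; the question it answers is the same: how often would chance alone produce a difference this large?

```python
# Descriptive step: summarize each group with its mean.
# Inferential step: a simple permutation test asks whether the observed
# difference in means could plausibly arise by chance.
# All scores are invented for illustration.
import random
from statistics import mean

old_app = [62, 65, 58, 70, 64, 61, 66, 63]   # control group scores
new_app = [72, 75, 69, 78, 74, 71, 77, 73]   # scores with the new app

observed_diff = mean(new_app) - mean(old_app)   # descriptive summary

# Shuffle the pooled scores many times; count how often a random split
# produces a mean difference at least as large as the observed one.
pooled = old_app + new_app
random.seed(0)                                  # reproducible runs
count = 0
trials = 5000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[8:]) - mean(pooled[:8])
    if diff >= observed_diff:
        count += 1

p_value = count / trials
print(f"observed difference = {observed_diff:.1f}, p ~ {p_value:.4f}")
```

A p-value below 0.05 here mirrors the table's rule of thumb: the improvement is very unlikely to be chance alone, so the sample result can be generalized with some confidence.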
For qualitative data: Grouping open-ended responses into themes or categories, then assigning codes.

Essential for data cleaning, analysis, and interpretation.

Ensuring each data point is assigned to the right respondent or category. Important for longitudinal studies and tracking individual participants across multiple data points.

6.3. Data Sorting

Sorting refers to organizing data in a specific order to make analysis easier. For example: Sorting student test scores from highest to lowest.

Sorting can be alphabetical, numerical, or based on date/time.

Helps in identifying trends, outliers, or inconsistencies.

6.6. Data Analysis Applications and Data Output Interpretations

This is the core of the research process:

Applications: Using software (like SPSS, STATA, R, Python, Excel) to perform:

Descriptive analysis (mean, mode, frequency, etc.)

Inferential statistics (t-tests, ANOVA, regression, etc.)
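The score-sorting example from 6.3 can be written directly in Python; the student names here are hypothetical:

```python
# Sorting student test scores from highest to lowest, as in the notes'
# example. A sorted list makes trends and outliers easy to spot.
scores = {"Ada": 78, "Bayo": 92, "Chi": 55, "Dayo": 85, "Efe": 31}

ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score}")
```

Sorting by value (the score) rather than by key gives the numerical ranking; dropping `reverse=True` would sort lowest to highest, and sorting by `kv[0]` instead would give an alphabetical ordering.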
Different tools are used depending on:

Complexity of data

SPSS: Good for social sciences and education research.

R/Python: More flexible, ideal for complex or large datasets.

Excel: User-friendly, best for basic analysis and data handling.

NVivo/ATLAS.ti: For qualitative data analysis.

TOPIC 7

7.1. What is a Research Report?

A research report is a structured document that presents the process, findings, and conclusions of a research study. It serves as a formal way to communicate research outcomes to academics, practitioners, funders, or the general public.

A standard research report typically follows this structure:

🔹 Title Page

Includes the title of the study, researcher's name, institution, and date.

🔹 Abstract

A concise summary (usually 150–300 words) of the entire research, including the purpose, methods, findings, and conclusions.

🔹 Introduction

Background of the study

Statement of the problem

Objectives/research questions/hypotheses

Significance of the study

🔹 Literature Review

Overview of previous research

🔹 Methodology

Population and sample

Instruments for data collection

Procedures

Methods of data analysis

🔹 Results/Findings

Presentation of analyzed data (tables, charts, graphs)

Focus on what was discovered without interpretation

🔹 Discussion

Interpretation of findings

Connection to literature reviewed

Explanation of implications

🔹 Conclusion and Recommendations

Summary of key findings

Suggestions for practice, policy, or further research

🔹 References/Bibliography

List of all sources cited using a specific referencing style (APA, MLA, etc.)

🔹 Appendices

7.3. Discussing and Interpreting Research Data

This involves:

Making sense of statistical results: Explaining trends, patterns, and what they mean.

Linking results to research questions: Do the findings answer the questions or support/reject the hypotheses?

Contextualizing findings: Comparing with previous studies to see if they align or differ.

Identifying implications: What do these findings mean for policy, education, society, or further research?

7.4. Writing the Research Report

This stage requires:

Organizing content logically based on the format

Maintaining clarity and coherence: Using proper grammar, transitions, and scholarly tone

Referencing properly: Avoiding plagiarism and citing all sources

Editing and proofreading: Checking for structure, flow, formatting, and errors

🔸 Tip: Use visuals (charts, tables) to enhance understanding but always explain what they show.

Evaluation of a research report is a systematic process of assessing the quality, accuracy, clarity, and contribution of the report. It helps determine whether the research meets academic and professional standards.
8.1. Aims and Objectives of Evaluating a Research Report

The purpose of evaluation is to:

Ensure the research process followed ethical and methodological standards.

Determine whether the research objectives were clearly stated and achieved.

Assess the validity and reliability of the findings.

Identify the strengths and weaknesses of the report.

Support grading or decision-making (in academic or policy settings).

8.2. Procedure for Evaluating Research Report

🔹 Reviewing the Research Title and Abstract

Is the title clear and appropriate?

Are the problem, objectives, and research questions/hypotheses well stated?

Is the literature relevant, up-to-date, and properly cited?

🔹 Assessing Methodology

Was the research design appropriate for the problem?

Are the population, sample, and instruments well defined?

Are data collection and analysis procedures clearly explained?

🔹 Analyzing the Results and Discussion

Are the results clearly presented (with tables/figures)?

Are findings interpreted logically?

🔹 Inspecting the Conclusion and Recommendations

Are recommendations realistic and useful?

Are sources correctly cited?

Are appendices used to support the report?

Scoring is usually based on a rubric or assessment guide with criteria such as:

Title and Abstract | 5 marks
Literature Review | 10 marks
Methodology | 20 marks