
Module II

Research Design: Reading and Reviewing Research Literature, Finding Research Papers, Critical Reading, Developing a Literature Review, Guidelines for Research Skills and Awareness, Validity of Research, Reliability in Research, Meaning of Research Design, Need for Research Design, Features of a Good Design, Different Research Designs.

READING AND REVIEWING

Reading and reviewing is a process that encompasses more than just skimming through texts. It involves several detailed steps that enhance comprehension, critical thinking, and the ability to connect ideas across different sources. Here's a more detailed explanation of the process:

1. Pre-Reading

To effectively read and review, first set a clear purpose for your
reading, whether it's for a broad understanding, critical analysis, or
specific information gathering. Then, survey the text by reviewing
titles, subtitles, and summaries to grasp its overall structure and
prepare mentally for the key concepts and organization of the
information.

2. Active Reading

Engage with the text as you read rather than passively absorbing it: highlight or annotate key passages, question the author's claims, and paraphrase difficult sections in your own words. Pause periodically to check that you can summarize what you have just read, and relate each section back to your reading purpose to keep attention focused and improve retention.

3. Note-Taking

When reading and reviewing, adopt a systematic approach for note-taking, such as using the Cornell method, mind mapping, charting, or annotating directly in the margins to organize and retain information efficiently. Concentrate on extracting key concepts, significant data, main arguments and counterarguments, and the conclusions the author presents. This focused approach helps in thoroughly understanding and critically analyzing the text.

4. Critical Review

Post-reading, engage in a detailed analysis to assess the strength of the arguments, research validity, and the author's objectivity. Critically examine the text for biases, logical fallacies, and reasoning gaps. Reflect on the relevance and overall impact of the work within its respective field. This process is crucial for a deeper understanding and evaluation of the content.

5. Synthesis

Integrate new insights with existing knowledge to enhance understanding and consider practical applications in academic, professional, personal, or societal contexts.

6. Reviewing
Summarizing the main points and sharing your thoughts with others helps solidify your understanding and fosters a collaborative learning environment, enhancing the collective knowledge on the topic.

7. Further Exploration

Identify new questions from your reading to guide further exploration, and seek out additional resources mentioned in the text to deepen your understanding.

Through these detailed steps, reading and reviewing transforms from the passive reception of information into an active, critical, and engaging learning process. This approach is particularly beneficial for academic success, professional development, and personal growth.

RESEARCH LITERATURE

Research literature refers to scholarly articles, books, and other sources that provide information and insight on specific topics, forming the foundation for new research and studies. This literature includes a variety of document types:

1. Primary Literature: Original research articles, case studies, and clinical trials that present new findings directly from researchers.

2. Secondary Literature: Reviews, systematic reviews, and meta-analyses that summarize and synthesize findings from primary literature.
3. Tertiary Literature: Encyclopedias, textbooks, and handbooks that
summarize information from primary and secondary sources for
educational purposes.

4. Grey Literature: Material that is not formally published, such as theses, dissertations, conference papers, and government reports.

Access to these resources allows researchers to evaluate existing knowledge and theories, identify gaps in the field, and propose further research. They rely on this literature for a comprehensive background, contextual analysis, and as a basis for generating hypotheses, designing studies, and drawing conclusions relevant to their own investigations.

FINDING RESEARCH PAPERS

To effectively find research papers, use university libraries, academic databases such as PubMed, IEEE Xplore, and Google Scholar, and platforms such as ResearchGate and Academia.edu. Consider preprint repositories like arXiv for early research, professional associations for specialized papers, and directly contacting authors if access is restricted. Utilize social media and forums for additional resources and community advice.

CRITICAL READING

Critical reading involves a thorough analysis and evaluation of a text. It starts with previewing the text to grasp its main themes and structure. Contextualizing the text within the author's background and historical period helps in understanding the underlying influences. Active questioning during reading assesses the author's arguments, identifying biases or assumptions. Evaluating these arguments for logical consistency and evidentiary support, and cross-checking them with other sources, enhances understanding and identifies discrepancies.

Reflection on the text's implications reveals its impact on the field and suggests future directions. Integrating this new understanding with existing knowledge allows for deeper insight, while critiquing the text helps formulate a personal perspective, assessing the persuasiveness of the argument and potential improvements to it. This methodical approach fosters the nuanced comprehension necessary for informed discussion.

DEVELOPING A LITERATURE REVIEW

Developing a literature review involves defining your research topic, systematically searching for relevant scholarly articles, and critically analyzing these sources to identify trends, gaps, and conflicts in the existing research. It includes organizing the literature by themes or methodologies, synthesizing the findings, and clearly articulating how they relate to your research question. The review should be structured with a clear introduction, a coherent body that discusses the sources, and a conclusion that highlights the study's implications and potential future research directions. Proper citation and careful revision are crucial to ensure accuracy and credibility.

GUIDELINES FOR RESEARCH SKILLS AND AWARENESS


Effective research skills and awareness are essential for academic and
professional success. Begin by clearly defining your research question
or objective to guide the scope and direction of your study. Develop a
systematic strategy for searching relevant literature, using databases
and citation indexes efficiently, and ensure to use reliable sources to
gather data. Employ critical thinking to analyze and synthesize the
information you collect, being mindful of biases and limitations in the
sources. It's crucial to maintain ethical standards throughout your
research, especially when dealing with sensitive or proprietary
information. Regularly update your knowledge on research
methodologies and tools, and seek feedback from peers or mentors to
refine your approach. Finally, effective communication of your
findings, whether through written reports, presentations, or
discussions, is key to demonstrating the value and reliability of your
research.

VALIDITY OF RESEARCH

The validity of research refers to how accurately a study measures what it intends to measure and the truthfulness of its conclusions. Internal validity assesses whether the study accurately demonstrates a causal relationship without outside influence, while external validity concerns whether the study's findings can be generalized beyond its specific context. Construct validity examines whether the test measures the concept it is supposed to measure. Conclusion validity ensures that the relationships observed in the study are accurately depicted and supported by statistical analysis. Lastly, face validity is a more subjective measure that evaluates whether the test appears effective and suitable at first glance. Ensuring the validity of research requires meticulous planning, reliable measurement methods, controlled variables, and clear, replicable designs to uphold the integrity and applicability of the study's findings.

RELIABILITY IN RESEARCH

Reliability in research refers to the consistency of a measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. Essentially, it is about the repeatability of measurements and the extent to which results are consistent over time and across different observers. Key aspects include test-retest reliability, which measures stability over time; inter-rater reliability, which assesses consistency across different observers; parallel-forms reliability, which compares different versions of a test to ensure equivalence; and internal consistency, which checks whether all parts of a test consistently measure the same underlying construct. Ensuring high reliability is critical, as it strengthens the credibility of the research, providing a trustworthy foundation for the study's conclusions and for further research.
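Internal consistency, the last aspect listed above, is commonly quantified with Cronbach's alpha. A minimal pure-Python sketch, using an invented four-item questionnaire answered by six respondents:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of columns: one list of scores per test item."""
    k = len(items)
    # Total score per respondent (sum across the k items).
    totals = [sum(person) for person in zip(*items)]
    sum_item_vars = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical data: rows are items, columns are respondents.
items = [
    [3, 4, 2, 5, 4, 3],   # item 1
    [3, 5, 2, 4, 4, 3],   # item 2
    [2, 4, 3, 5, 5, 3],   # item 3
    [3, 4, 2, 5, 4, 4],   # item 4
]
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above ~0.7 are usually read as acceptable
```

For this toy data the items move together closely, so alpha comes out high; real questionnaires require far larger samples before such a figure is meaningful.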

MEANING OF RESEARCH DESIGN

The concept of "research design" is central to the process of organizing and executing a research project. It involves making detailed decisions about the what, where, when, how much, and by what means of the study. Essentially, a research design serves as the structural blueprint that guides the entire process of data collection, measurement, and analysis. It ensures that the research methods chosen are both appropriate for the research purpose and cost-effective. The design encompasses everything from the formulation of hypotheses and their operationalization to the final stages of data analysis, ensuring that each step is methodically planned and linked to the overall research objectives. This careful planning is crucial, as it affects the validity, reliability, and overall success of the research findings.

More explicitly, the design decisions concern the following questions:

(i) What is the study about?
(ii) Why is the study being made?
(iii) Where will the study be carried out?
(iv) What type of data is required?
(v) Where can the required data be found?
(vi) What periods of time will the study include?
(vii) What will be the sample design?
(viii) What techniques of data collection will be used?
(ix) How will the data be analysed?
(x) In what style will the report be prepared?

NEED FOR RESEARCH DESIGN

Research design is essential because it helps to manage research operations efficiently, maximizing information while minimizing effort, time, and money. It acts as a blueprint, prepared in advance, that guides the collection and analysis of data based on the study's goals and available resources. A well-planned research design enhances the reliability of the results and forms the foundation of the entire project. However, the importance of a thoughtful research design is often overlooked, which can lead to ineffective research or misleading conclusions. Careful preparation of the research design is therefore critical, allowing researchers to organize their ideas, identify potential flaws, and make necessary adjustments. This preparation is vital to ensuring the research fulfils its intended purpose and withstands critical evaluation.

FEATURES OF A GOOD DESIGN

A good research design is characterized by flexibility, appropriateness, efficiency, and economy. It minimizes bias and maximizes the reliability of the data, reducing experimental error and providing comprehensive information. The suitability of a research design depends on the specific objectives and the nature of the problem being studied, meaning that no single design fits all types of research. Each design is tailored to the unique demands of different research questions, ensuring that it addresses specific requirements effectively.

Choosing a research design appropriate for a particular research problem usually involves consideration of the following factors:

(i) the means of obtaining information;
(ii) the availability and skills of the researcher and his staff, if any;
(iii) the objective of the problem to be studied;
(iv) the nature of the problem to be studied;
(v) the availability of time and money for the research work.

The choice of a research design depends on the type of study: exploratory studies need flexible designs for broad inquiries, descriptive studies require designs that minimize bias for accurate data, and hypothesis-testing studies need designs that allow for causal inferences. However, classifying a study can be challenging, as many studies encompass elements of different types. The design choice must also consider practical constraints such as time, budget, and team skills, which shape the specifics of the experimental, survey, and sample designs.

IMPORTANT CONCEPTS RELATING TO RESEARCH DESIGN

Before describing the different research designs, it is appropriate to explain the various concepts relating to designs so that they may be more easily understood.

1. Dependent and independent variables:
A variable is a concept that can assume different quantitative
values, such as weight, height, and income. Variables that can
vary in decimal points are called 'continuous variables', while
those that take only integer values are 'discrete variables', like
the number of children. Variables can also be categorized based
on dependency; if one variable depends on another, it is called a
dependent variable, whereas the influencing variable is termed
an independent variable. For example, if height depends on age,
then height is a dependent variable and age is an independent
variable. Additionally, if height is influenced by both age and
sex, then both age and sex are independent variables, while
height remains dependent. Similarly, educational tools like films
and lectures can act as independent variables, with resulting
behavioral changes being the dependent variables.
2. Extraneous variable:
In research, extraneous variables are those independent variables
that are not the focus of the study but can still affect the outcome
or the dependent variable. For instance, if a study aims to
examine the relationship between children's self-concept
(independent variable) and their achievement in social studies
(dependent variable), intelligence could influence the social
studies achievement but is not the main focus of this study.
Therefore, intelligence would be considered an extraneous
variable. Any impact on the dependent variable caused by
extraneous variables is known as 'experimental error'. Effective
research design aims to minimize the influence of extraneous
variables so that any changes in the dependent variable can be
confidently attributed to the independent variables under study.
3. Control:
In research, "control" refers to the method of minimizing the
impact of extraneous variables to ensure that observed effects on
the dependent variable are genuinely due to the independent
variables.
4. Confounded relationship:
When the dependent variable is not free from the influence of
extraneous variable(s), the relationship between the dependent
and independent variables is said to be confounded by an
extraneous variable(s).
5. Research hypothesis:
A research hypothesis is a predictive statement that tests a
relationship between an independent variable and a dependent
variable using scientific methods; it is distinct from assumptions
or predictions not intended for empirical testing.
6. Experimental and non-experimental hypothesis-testing research:
When research aims to test a hypothesis, it is called hypothesis-
testing research, which can be experimental or non-
experimental. In experimental hypothesis-testing research, the
researcher manipulates the independent variable, like altering a
training program to assess its impact on performance.
Conversely, in non-experimental hypothesis-testing research, the
independent variable is not manipulated, such as observing the
relationship between intelligence and reading ability without
altering the intelligence factor.
7. Experimental and control groups:
In experimental hypothesis-testing research, the group that
experiences standard conditions is called the 'control group,'
while the one under new conditions is the 'experimental group.'
In scenarios where both groups are subjected to special
conditions, both are labeled as 'experimental groups.' Studies
can be designed to include only experimental groups or a mix of
both experimental and control groups.
8. Treatments:
In experimental research, the specific conditions applied to
experimental and control groups are known as 'treatments.' For
example, using different study programs or types of fertilizers to
assess their effects on outcomes like student performance or
wheat yield represents different treatments.
9. Experiment:
The process of testing the validity of a statistical hypothesis
through experimentation can involve either absolute
experiments, which assess a single variable's impact, or
comparative experiments, which compare the effects of different
variables. For instance, comparing the effectiveness of two
fertilizers on crop yield is a comparative experiment.

10. Experimental unit(s):
The pre-determined plots or blocks where different treatments are applied are known as experimental units.
DIFFERENT RESEARCH DESIGNS

Different research designs can be conveniently described if we categorize them as:

(1) research design in case of exploratory research studies;
(2) research design in case of descriptive and diagnostic research studies;
(3) research design in case of hypothesis-testing research studies.

1. Research design in case of exploratory research studies:

Exploratory research, also known as formulative research, aims to formulate a problem for detailed investigation and to develop hypotheses. It requires a flexible research design that can adapt as insights are gained, typically through literature reviews, experience surveys, and the analysis of insight-stimulating examples.

1. Literature Review: Reviewing existing literature helps in formulating precise research problems or hypotheses by evaluating previous work and suggesting new hypotheses based on past findings. This method builds on existing knowledge and theories from various contexts to enhance the researcher's current project.

2. Experience Survey: This method involves surveying individuals with practical experience related to the research problem to gain insights and new ideas. Selected respondents offer valuable perspectives through interviews, which are structured yet flexible enough to explore unanticipated topics, enhancing the definition of the research problem and the formulation of hypotheses.

3. Analysis of Insight-Stimulating Examples: This approach involves studying specific cases or instances that offer unique insights into the research problem, especially where little precedent exists. The method relies on examining records, conducting interviews, and other approaches to gather diverse information that can lead to new hypotheses, focusing on cases that show striking contrasts or notable features for their relevance and insight potential.

2. Research design in case of descriptive and diagnostic research studies:

Descriptive research studies focus on describing the traits or characteristics of specific individuals or groups. In contrast, diagnostic research looks at how often something happens or its relationship with other factors. Descriptive studies detail facts and features about a person, group, or situation, and are common in social research. Both descriptive and diagnostic studies need clear definitions of what is being measured and of the group (population) being studied. They also require carefully planned methods to ensure the information is accurate and free from bias, while also being cost-effective.

The design in such studies must be rigid rather than flexible and must focus attention on the following:

(a) Formulating the objective of the study (what is the study about and why is it being made?)

(b) Designing the methods of data collection (what techniques of gathering data will be adopted?)

(c) Selecting the sample (how much material will be needed?)

(d) Collecting the data (where can the required data be found and with
what time period should the data be related?)

(e) Processing and analysing the data.

(f) Reporting the findings.

In descriptive or diagnostic studies, the research process begins with clearly defining the study objectives to ensure relevant data collection. The choice of data collection methods, such as surveys, interviews, or observations, is crucial, and measures must be taken to prevent bias and ensure accuracy. The data instruments should be pre-tested for clarity and efficiency. Sampling methods are then designed to accurately represent the population, often using probability sampling techniques.
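The simplest probability sampling technique, simple random sampling, can be sketched in a few lines. The sampling frame below (500 numbered households) and the sample size are hypothetical:

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible for the sketch

# Hypothetical sampling frame: 500 numbered households in the study area.
population = list(range(1, 501))

# Simple random sampling: every household has the same
# chance of selection, which guards against selection bias.
sample = random.sample(population, k=50)
print(f"sampled {len(sample)} of {len(population)} households")
```

Real surveys often layer stratification or clustering on top of this basic draw, but the equal-probability selection step is the core idea.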

During data collection, it is important to closely supervise field staff to maintain the integrity of the data. Once collected, data must be processed and analyzed, which includes coding, tabulating, and statistical analysis to ensure accuracy and reliability. Finally, the findings are reported in a structured manner to effectively communicate the results. This process forms a comprehensive research design, often characterized as a survey design, aimed at minimizing bias and maximizing reliability.

3. Research design in case of hypothesis-testing research studies:

Hypothesis-testing research, often referred to as experimental study, involves testing hypotheses about causal relationships between variables. These studies are designed to minimize bias and enhance reliability, enabling researchers to make valid causal inferences. The concept of experimental design, fundamental to these studies, originated with Professor R. A. Fisher at the Rothamsted Experimental Station in England, primarily in agricultural research. Fisher's innovative approach involved dividing agricultural fields into blocks to conduct experiments that provided more reliable data and inferences. This method evolved into the various experimental designs now applied across multiple disciplines, which retain technical agricultural terms such as treatment, yield, plot, and block to describe elements of the experiments.

BASIC PRINCIPLES OF EXPERIMENTAL DESIGNS

Professor Fisher enumerated three principles of experimental designs:

(1) the Principle of Replication;
(2) the Principle of Randomization;
(3) the Principle of Local Control.


The Principle of Replication in experimental research means
repeating the experiment multiple times to increase statistical
accuracy. For instance, in testing two rice varieties, instead of planting
each variety in one part of the field only, you would divide the field
into several parts, planting each variety in multiple parts. This method
produces more reliable results because the effect of each variety is
observed across different samples, thus enhancing the conclusions'
credibility.
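A small numerical sketch of why replication matters: with several plots per variety, the comparison of means comes with a standard error; with only one plot per variety, no such error estimate is possible. The yields below are invented:

```python
from statistics import mean, stdev

# Hypothetical yields (quintals/plot) from a replicated trial:
# each rice variety grown on 5 scattered plots rather than one plot each.
variety_a = [52, 49, 55, 51, 53]
variety_b = [47, 50, 46, 49, 48]

diff = mean(variety_a) - mean(variety_b)

# Replication lets us attach a standard error to the comparison;
# more replicate plots shrink this error and sharpen the inference.
se = (stdev(variety_a) ** 2 / 5 + stdev(variety_b) ** 2 / 5) ** 0.5
print(f"mean difference = {diff:.1f} quintals, SE = {se:.2f}")
```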

The Principle of Randomization protects against the influence of external factors by randomly assigning treatments to experimental units. This ensures that any variation among the units can be attributed to chance rather than to systematic differences such as soil fertility. By randomizing, the results more accurately reflect the true effects of the treatments being tested.
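Random assignment of treatments to units can be sketched with a shuffle. The field layout below (12 plots, 3 treatments, 4 replicates each) is hypothetical:

```python
import random

random.seed(7)  # reproducible for the sketch

# Hypothetical field split into 12 plots; 3 treatments with 4 replicates each.
plots = list(range(1, 13))
treatments = ["T1", "T2", "T3"] * 4

# Shuffling breaks any link between plot position (e.g. a fertility
# gradient) and treatment, so remaining differences are due to chance.
random.shuffle(treatments)
assignment = dict(zip(plots, treatments))
for plot, t in sorted(assignment.items()):
    print(f"plot {plot:2d} -> {t}")
```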

The Principle of Local Control involves managing known sources of variability by dividing the experimental field into homogeneous blocks. Each block is then further divided, and treatments are randomly assigned within these blocks. This approach allows researchers to measure and eliminate the variability caused by extraneous factors, ensuring that the observed effects are due to the treatments alone and not to external variables.
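Local control differs from plain randomization in that the shuffle happens within each homogeneous block, so every treatment appears once per block. A minimal sketch with a hypothetical field of 4 blocks and 3 treatments:

```python
import random

random.seed(3)  # reproducible for the sketch

# Hypothetical field divided into 4 homogeneous blocks (e.g. by fertility);
# each of the 3 treatments appears exactly once in every block.
treatments = ["A", "B", "C"]
layout = {}
for block in range(1, 5):
    order = treatments[:]
    random.shuffle(order)      # randomize only WITHIN the block
    layout[block] = order

for block, order in layout.items():
    print(f"block {block}: {order}")
```

Because block-to-block differences affect all treatments equally, their variability can later be separated out in the analysis.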

Important Experimental Designs

Experimental design refers to the framework or structure of an experiment, and as such there are several experimental designs. We can classify experimental designs into two broad categories: informal experimental designs and formal experimental designs. Informal experimental designs normally use a less sophisticated form of analysis based on differences in magnitudes, whereas formal experimental designs offer relatively more control and use precise statistical procedures for analysis. Important experimental designs are as follows:

(a) Informal experimental designs:

(i) Before-and-after without control design.

(ii) After-only with control design.

(iii) Before-and-after with control design.

(b) Formal experimental designs:

(i) Completely randomized design (C.R. Design).

(ii) Randomized block design (R.B. Design).

(iii) Latin square design (L.S. Design).

(iv) Factorial designs.

1. Before-and-after without control design:
This type of research design is a pre-test/post-test single group
design, where a dependent variable is measured before and after
a treatment is applied to the same group. The effect of the
treatment is determined by the difference in measurements of
the dependent variable before and after the treatment. A major
challenge with this design is that over time, external factors
might cause variations that could affect the measurement and
interpretation of the treatment's effect, thus complicating the
ability to attribute changes directly to the treatment.
2. After-only with control design:
This design uses a control group and a test group, where the
treatment is applied only to the test group. The effect of the
treatment is determined by comparing the changes in the
dependent variable between the two groups. The key assumption
is that both groups are identical in how they respond to the
phenomenon being studied. This method reduces the influence
of time-related variables and is more robust than designs
without a control group.
3. Before-and-after with control design:
This design measures a dependent variable in both a test and
control area before and after introducing a treatment only to the
test area. The effectiveness of the treatment is assessed by
comparing the changes in the dependent variable in both areas,
effectively accounting for both time-related changes and
differences between the test and control areas. This method
generally provides more reliable results by controlling for more
variables, though it may be impractical if historical data, time,
or a comparable control area are unavailable.
4. Completely randomized design (C.R. design):
The Completely Randomized (C.R.) design involves randomly
assigning subjects to treatments, using one-way ANOVA for
analysis, and is best suited for homogeneous experimental
conditions, maximizing error degrees of freedom and attributing
all extraneous variation to chance.
5. Randomized block design (R.B. design)
The Randomized Block (R.B.) design groups subjects into
homogeneous blocks and assigns one subject per treatment
within each block, using two-way ANOVA for analysis,
enhancing control over variability and ensuring each treatment
is equally represented in all blocks.
6. Latin square design (L.S. design)
The Latin Square (L.S.) design manages variability in
experiments by allocating treatments across rows and columns,
ensuring each treatment is used only once per row and column,
ideal for addressing two major extraneous factors like varying
soil fertility and seed types.
7. Factorial designs:
Factorial designs in experiments allow for the study of effects
from multiple factors simultaneously and come in two forms:
simple and complex factorial designs.
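The one-way ANOVA mentioned for the completely randomized design (item 4) can be sketched with the standard library alone. The three treatments and their plot yields below are invented; this computes the F statistic by hand rather than via a statistics package:

```python
from statistics import mean

# Hypothetical completely randomized design: 3 treatments,
# 4 plots each, plots assigned to treatments at random.
groups = {
    "T1": [20, 22, 19, 21],
    "T2": [25, 27, 26, 24],
    "T3": [18, 17, 19, 18],
}

all_obs = [y for ys in groups.values() for y in ys]
grand = mean(all_obs)
k, n = len(groups), len(all_obs)

# Between-treatment and within-treatment sums of squares.
ss_between = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in groups.values())
ss_within = sum((y - mean(ys)) ** 2 for ys in groups.values() for y in ys)

# F = (between-groups mean square) / (within-groups mean square),
# with k-1 and n-k degrees of freedom respectively.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.2f}")
```

A large F, compared against the F distribution with these degrees of freedom, indicates that treatment means differ by more than chance variation would suggest.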
