MAC 413 - Summary

The document discusses the importance of data editing in communication research, emphasizing its role in enhancing data quality, eliminating coding difficulties, and facilitating accurate data analysis and interpretation. It also highlights the benefits and downsides of using computers for data analysis, noting issues such as potential data loss due to viruses and system crashes, as well as the misuse of statistical techniques. Additionally, it explains key data analysis concepts, including independent and dependent variables, types of errors, and the significance of qualitative coding in research.

COURSE CODE: MAC 413

COURSE TITLE: DATA ANALYSIS IN COMMUNICATION RESEARCH

1. Why do you think data editing is essential in communication research?

1. Enhance the quality of field data by making sure that all information is complete, unambiguous, legible, uniform, accurate and true:

Any incomplete data could render analysis useless. At this stage, the appropriate research assistants or data editors are expected to ensure that the information supplied is detailed enough for what the researcher wants to achieve, and that the information gathered is not ambiguous, illegible, inaccurate or untrue, before it becomes difficult to reach the participants or respondents for further clarification.

Data analysis is essentially analysis of what was brought in as information; so any wrong information that creeps in unnoticed at this stage will surely make the analysis wrong as well. The further implication is that data interpretation, or interpretation of the resulting findings, also becomes deceptive because of the faulty premise on which it was based.


2. Eliminate Coding Difficulties:

Eliminating coding difficulties is another very important objective of data editing. At this stage, the investigators are keen to make sure that any potential difficulty that might creep in at the coding stage is eliminated before that stage is reached. The reason is simple: at the coding stage, any unresolved difficulty may endanger the analysis, because it may be difficult to get back to the field to seek clarification. Coding, as we will later find out in this Unit, must take place for meaningful data entry and subsequent analysis to be carried out effectively.

3. Facilitate Data Analysis and Interpretation of Results:

Data editing is primarily meant to facilitate data analysis. The research process is so linked together that all of its procedures are interrelated and aim at a single primary purpose: to find answers to the research questions! So, data editing helps the next procedural step by making sure its own loose ends are tied up against possible errors that could be traced back to poor editing.

4. Detect Errors/Incorrect Entries and Correct Them:

Editing in normal or other situations is done to identify errors and to correct them, and this is no different in data editing. Humans are naturally prone to errors. This should not be a problem at all if a proper editorial team is in place to proofread materials before they are moved to the next level. Therefore, detecting errors, incorrect entries, or incomplete entries, with the aim of correcting them before any form of analysis is performed on the data, should be a primary objective of data editing.
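Several of these editing objectives (completeness, plausibility, consistency of entries) can be partially automated once responses are in electronic form. The sketch below is a minimal illustration in Python; the field names and the validity rules are hypothetical, invented for this example rather than taken from the course material.

```python
# Minimal sketch of automated data-editing checks.
# The record fields ("age", "gender") and the validity rules are
# hypothetical examples, not drawn from the course text.

REQUIRED_FIELDS = ["respondent_id", "age", "gender"]
VALID_GENDERS = {"male", "female"}

def edit_record(record):
    """Return a list of problems found in one questionnaire record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):          # incomplete entry
            problems.append(f"missing {field}")
    age = record.get("age")
    if age is not None and not (0 < age < 120):      # implausible value
        problems.append("age out of plausible range")
    gender = record.get("gender")
    if gender and gender not in VALID_GENDERS:       # inconsistent coding
        problems.append("unrecognised gender code")
    return problems

records = [
    {"respondent_id": 1, "age": 34, "gender": "female"},
    {"respondent_id": 2, "age": 250, "gender": "male"},   # implausible age
    {"respondent_id": 3, "age": 19, "gender": ""},        # missing gender
]
for r in records:
    print(r["respondent_id"], edit_record(r))
```

Automated checks of this kind complement, rather than replace, the human editor: they catch incomplete and implausible entries quickly, while ambiguity and untruthfulness still require human judgement.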


5. Facilitate Interpretation of Results:

Interpretation of research results is a fundamental purpose of research meant to address specific needs. The outcomes of research cannot really make any sense if they are not properly put in context by way of interpretation. Interpretation helps the researcher to find out whether the objectives have been met and whether the research questions have been answered. This makes interpretation of research findings vital in the research business. Having established the importance of result interpretation, every effort should be made at the editing stage to facilitate it.

Discuss the benefits and the downsides of using computers for data analysis in communication research.

1. It Is Subject to Abuse:

The computer is often abused by inexperienced users who simply get excited about the many things they can do with it. I have seen a situation where an undergraduate instructed the computer to print out simple frequency tables, bar charts and pie charts for a 30-item questionnaire constructed to address only five research questions. So she had 30 frequency distribution tables, 30 similar bar charts and 30 pie charts in the same chapter four of her work. This is a typical abuse found among undergraduates using computers for their projects.

2. Virus Attack Can Wipe Off Your Entire Data:

Viruses and other types of attack could completely wipe out stored data. What is the use of saving data that will never be used again because it has been deleted by a virus that attacked the system? This is a core disadvantage of computer technology. The earlier researchers begin addressing this issue, the better for all of us, because once people become convinced that their private, confidential and sensitive information is no longer safe on the computer, they will begin reinventing the wheel!

3. System Crash Could Also Cause the Loss of Data:

Apart from a virus attack, which wipes out data, a worse scenario exists: system crash! A system could crash as a result of various forms of virus attack, and this will result in the loss of data.

4. Data Could Also Easily Fall into the Hands of Wrong Persons Who Have Access to the Computer System in Which Such Data Are Stored:

Such persons could take the opportunity to go through your confidential and personal data, or even steal your personal data for their own use.

1. a. Justify with not less than five reasons why you will need a computer set while carrying out your final year research project. (17½ Marks)

1. It Makes Data Storage and Retrieval Easy:

One critical advantage of the computer is its capacity to store large amounts of data and produce them on request many days or years after such data were stored.

2. It Makes Data Processing Easy to Perform:

Data processing and management are easily performed using computer technology, unlike in the past when people attempted the same tasks manually. Then, the job would take days, with many errors and omissions. Today, the computer brings speed, accuracy and efficiency to processing and managing large amounts of data.

3. It Facilitates Simulation:

The computer can generate data that take the place of the actual behaviour of an object or of a temporal process, especially for research procedures.

4. It Increases the Accuracy of Statistical/Mathematical Calculations as Well as Maintaining Speed in Doing Such Calculations:

The computer has the advantage of speed, accuracy and efficiency in its application. Unless there is a mechanical fault, which is not usually the case, a computer computation cannot be wrong. This is without prejudice to the "garbage in, garbage out" syndrome the technology is associated with.

5. It Enhances Research Results and Subsequent Interpretation of Such Results:

A good data analysis will definitely lead to a good interpretation of the results. The computer has the capacity to generate valid results which will help the researcher or user make valid inferences and interpretations from those results.

6. It Facilitates Data Presentation Through the Application of Appropriate Techniques:

The computer offers the researcher or presenter diverse presentation applications that enhance delivery and skill development. The computer also has other presentation techniques and modules that make data appear in much more appealing and comprehensible formats.

Discuss at least two ways in which the abuse of computers can affect your research work.

1. Wrong Application of Statistical Techniques for Data Analysis:

Evidence abounds to support the argument that available statistical software or packages are being wrongly applied by both students and researchers who understand little or nothing about the assumptions or implications of such software. This often leads to the "garbage in, garbage out" syndrome in social science research (Obikeze, 1990).

2. The Tendency to Overproduce Statistical Tables:

The second type of abuse is the tendency of the inexperienced researcher to overproduce tables, graphs, charts, etc. without regard to the main issues or variables under study. This is seen most often in the overproduction of cross-tabulation tables and correlation matrices. The result of this type of abuse is a truckload of computer print-outs that would take a decade to go through (Obikeze, 1990).


Discuss with relevant examples and/or illustrations, where applicable, any six of the following data analysis terms:

a. Independent variable
b. Dependent variable
c. Error Type 1
d. Error Type 2
e. Primary data
f. Secondary data
g. Null hypothesis
h. Alternative hypothesis

Variables can be classified in different ways. However, the classification according to their relationship to one another is the most popular and often used in mass media research. In this regard, variables can be classified into two:

1. Independent Variables
2. Dependent Variables

According to Kerlinger (1973:35), cited in Wilson, Esiri & Onwubere (2008), the most important and useful way to classify variables is to see them as either independent or dependent. This classification, according to him, is highly useful because of its general applicability, simplicity and special importance in conceptualizing and designing research and in communicating the results of the research.

Kerlinger (1973:35) defines an independent variable (IV) as the presumed cause of the dependent variable (DV), the presumed effect. The independent variable is the antecedent, while the dependent variable is the consequent (Wilson, Esiri & Onwubere, 2008).

According to Wilson, Esiri & Onwubere (2008), the independent and dependent variables can be distinguished thus:

The dependent variable is observed and its value is presumed to depend on the effects of the independent variable. In other words, the dependent variable is what the researcher wishes to explain. The independent variable may be manipulated or it may just be measured. In contrast, the dependent variable is what we are studying, with respect to how it is related to or influenced by the independent variable, or how it can be explained or predicted by the independent variable. It is sometimes called the response variable or the criterion variable. It is never manipulated as part of the study. DVs are the things we measure about people.

Wilson, Esiri & Onwubere (2008) went on to explain the concepts of independent and dependent variables using the following example:

Suppose two investigators are studying the relationship between criminal behaviour in adolescents and parental guidance, to determine what kinds of advice to give parents. The two investigators may have the same data. This data includes: (1) the police records of a group of adolescents, giving the number of times each child has entered the criminal justice system (such as by being arrested, questioned by the police, etc.), and (2) information from a questionnaire about the kinds of information or advice each adolescent has received from his or her parents. One investigator might be examining whether parents who give advice focusing on walking away from interpersonal conflicts differ from parents who advise the child to "stand up for yourself". Here, the independent variable is the kind of advice the parents give and the dependent variable is whether the child has a criminal record or not.

Continuing, Wilson, Esiri & Onwubere (2008) note that a second investigator might be asking a different question, such as "What types of parental advice and guidance distinguish adolescents who get into the criminal system from those who don't?" In this case, whether or not the child has a criminal record is the independent variable and the type of parental advice is the dependent variable. From this example, it should be clear that the distinction between the independent and dependent variable is based not on manipulation but on the questions one is asking of the data.

Type 1 Error:

A Type 1 Error has been committed if we reject the null hypothesis, Ho, when in fact it is true. This error is sometimes called the α-error (Alpha error).


Type 2 Error:

A Type 2 Error has been committed if we accept the null hypothesis, Ho, when in fact it is false. This error is sometimes called the β-error (Beta error). In other words, a Type 2 error is made when Ho is erroneously accepted.

It is pertinent to mention here that these two types of errors arise because the truth or falsity of Ho is unknown, even after it is accepted or rejected. Consequently, that we accept Ho does not necessarily mean it is true. In the same manner, that we reject Ho does not mean it is false (Nwabuokei, 1999).
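The two error types can also be seen numerically. The following sketch is an illustration of the concept rather than part of the course text; the test used, the sample size and the effect size are all arbitrary assumptions. It repeatedly draws samples and counts how often a simple z-test rejects a true Ho (a Type 1 error) or accepts a false Ho (a Type 2 error):

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def z_test_rejects(sample, mu0=0.0, sigma=1.0, critical_z=1.96):
    """Two-sided z-test: does the sample reject Ho (mean == mu0) at the 5% level?"""
    n = len(sample)
    sample_mean = sum(sample) / n
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return abs(z) > critical_z

def rejection_rate(true_mean, trials=2000, n=25):
    """Fraction of repeated samples in which Ho (mean == 0) is rejected."""
    rejections = sum(
        z_test_rejects([random.gauss(true_mean, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return rejections / trials

# Ho is actually true (mean really is 0): every rejection is a Type 1 error.
type1 = rejection_rate(true_mean=0.0)
# Ho is actually false (mean is 0.7): every acceptance is a Type 2 error.
type2 = 1 - rejection_rate(true_mean=0.7)

print(f"Type 1 error rate: about {type1:.3f} (near alpha = 0.05)")
print(f"Type 2 error rate: about {type2:.3f}")
```

With these settings the Type 1 rate settles near the 5% significance level, which is precisely what α means: the long-run rate at which a true Ho is wrongly rejected.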

Primary Data:

Primary data are the type of data gathered directly from the field by the researcher. In other words, the researcher himself collects primary data. He collects them through interviews, experiments, direct observations, surveys, etc. Primary data are collected for a specific purpose; after this need has been met, the data may become useless for any other purpose(s). In conclusion, one can comfortably say that primary data come in raw form for processing, and some even end up as secondary data in certain situations or circumstances.

Secondary Data:

Secondary data already exist in a database or some other form of storage, but were put there for a purpose other than research. They are called secondary because the researcher himself does not directly obtain them. The researcher merely secures the permission of the custodian of such data to use them for a purpose different from that for which they were originally obtained and stored.
THE NULL AND THE ALTERNATIVE HYPOTHESES

The hypothesis being tested is called the Null Hypothesis and is denoted by Ho. The hypothesis that we are willing to accept if we reject the null hypothesis is called the Alternative Hypothesis, denoted by H1.

We shall now illustrate these two hypotheses with the following example.

Example:

Suppose that during the Alpha Semester, a class of 100 students was taught statistics using a certain method. At the end of the semester, the class recorded an average score of 65% in the subject, with a standard deviation of 5%. During the Omega Semester, a new statistics lecturer was employed to handle the course. The new lecturer claims that he has developed a new method of teaching the course which, according to him, is more effective than the first one. In order to test his claim, a sample of 30 students taught by the new method was examined at the end of the Omega Semester. The average score was found to be 68%. The question arises: is this new method really more effective than the old one? In other words, should a decision be taken to adopt the new method of teaching statistics? To answer this question, we need to carry out a test of hypothesis (Nwabuokei, 1999). Such a test will enable us to verify the following possibilities:

1. Whether the observed higher sample average of 68% was actually due to the fact that the second method is better than the first one, that is, whether there is a significant difference between the two methods; or

2. Whether the observed sample average occurred by chance. For example, it might have happened that the sample was made up of the very brilliant students. If that is the case, it would be wrong to attribute the higher average score observed for the 30 students to the superiority of the second method (Nwabuokei, 1999).

On the other hand, according to Nwabuokei (1999), suppose the sample of 30 students so selected recorded an average score of, say, 63%, which is lower than the average scored by the entire class taught by the first method during the Alpha Semester. Here, again, a test of hypothesis will enable us to verify:

1. Whether the observed low sample average of 63% occurred because the new method is by no means more effective than the old one; or

2. Whether the observed low sample average occurred because the sample happened to be made up of less brilliant students, and not because the new method is less effective than the old one.

The null hypothesis is usually specified in terms of the population parameter of interest. It is the hypothesis of no difference and, consequently, it is stated with the equality sign. Throughout the process of analysis, Ho is assumed true. Evidence provided from the sample that is inconsistent with the null hypothesis leads to its rejection. On the other hand, evidence supporting the hypothesis leads to its acceptance (Nwabuokei, 1999).
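The Alpha/Omega Semester example can be worked through numerically. Below is a minimal Python sketch, assuming a one-tailed one-sample z-test with the Alpha Semester figures treated as the population parameters; the text does not name a specific test, so this choice of test is an assumption made for illustration:

```python
import math

# Figures taken from the worked example in the text
pop_mean = 65.0      # Alpha Semester class average (%), taken as the population mean
pop_sd = 5.0         # standard deviation (%)
n = 30               # sample size (Omega Semester)
sample_mean = 68.0   # sample average under the new method (%)

# Ho: the new method is no better (mu = 65); H1: it is better (mu > 65).
z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
critical_z = 1.645   # one-tailed critical value at the 5% significance level

print(f"z = {z:.3f}")
if z > critical_z:
    print("Reject Ho: the higher sample average is significant at the 5% level.")
else:
    print("Accept Ho: the difference could plausibly have occurred by chance.")
```

Since z ≈ 3.29 exceeds the one-tailed 5% critical value of 1.645, the sample evidence is inconsistent with Ho, so under these assumptions the higher average is unlikely to be due to chance alone.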

Discuss at least four (4) reasons why you think qualitative data coding is essential in data analysis.

Uses of Qualitative Coding:

Qualitative coding has many uses. Some of these uses are summarized by Obikeze (1990) thus:

1. It helps the researcher to clarify, sharpen and specify the key concepts, issues and overall research objectives.

2. It helps the researcher to detect errors, omissions, inconsistencies, etc. in the construction and administration of research instruments.

3. It helps the researcher to replicate data analysis procedures and to verify research findings and interpretations.

4. It helps the researcher to apply statistics in the analysis of qualitative data.

5. It helps in the development of new and specialized social research techniques and methodologies.

Discuss the two sources of data.

SOURCES OF DATA

The sources of data also serve as its types. In other words, the various sources of statistical data also function as types: we derive the type of data from the source from which such data are gathered. On this basis, we shall now discuss the two sources of data from which the two types of data emanate:

1. Primary Data:

Primary data are the type of data gathered directly from the field by the researcher. In other words, the researcher himself collects primary data. He collects them through interviews, experiments, direct observations, surveys, etc. Primary data are collected for a specific purpose; after this need has been met, the data may become useless for any other purpose(s). In conclusion, one can comfortably say that primary data come in raw form for processing, and some even end up as secondary data in certain situations or circumstances.

2. Secondary Data:

Secondary data already exist in a database or any other storage type but put

there for a different purpose other than for research purposes. They are called

Secondary because the researcher himself does not directly obtain them. The

researcher merely secures permission of the custodian of such data to use

them for a different purpose for which it was originally obtained and stored.

Secondary data, however, come in first time as primary data except that they

are data collected for use, for a purpose different from that for which they

were originally collected. They are usually obtained from existing records

like medical records of patients in a particular area could now be used to

determine the cause of maternal mortality of that particular area in a related

study. Those medical records were merely kept there as a hospital policy but

now a medical researcher who is interested in maternal mortality could make

use of them with appropriate approvals.

Describe briefly two types of data analysis.

Two broad types of data analysis, derived from the two types of statistics, are possible in the Social and Behavioural Sciences. These are:

1. Descriptive data analysis
2. Inferential data analysis

1. Descriptive Data Analysis:

Descriptive data analysis occurs when data are analysed in such a way as to describe and summarise their content. The tools used in descriptive data analysis are descriptive statistics. Descriptive statistics is the aspect of statistics that studies a body of statistical data without making generalizations from the results obtained. It only seeks to describe and analyze a given set of data without drawing any conclusions or inferences about the population. Population in statistics refers to any finite or infinite collection of objects under study (Nwabuokei, 1990).

In other words, everything dealing with the collection, processing, analysis, presentation and interpretation of numerical data belongs to this aspect of statistics. Descriptive statistics include tabulation, graphical representation of data (e.g. bar chart, histogram, pie chart, etc.) and measures of central tendency (Wilson, Esiri & Onwubere, 2008).
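As a concrete illustration of descriptive analysis, the short Python sketch below builds a frequency distribution table and computes measures of central tendency. The scores are invented purely for this example:

```python
from collections import Counter
from statistics import mean, median, mode

# Hypothetical responses on a 5-point scale (invented for illustration)
scores = [4, 5, 3, 4, 2, 5, 4, 3, 4, 1, 5, 4]

# Tabulation: a simple frequency distribution table
freq = Counter(scores)
for value in sorted(freq):
    print(f"score {value}: {freq[value]} respondents")

# Measures of central tendency
print("mean:", round(mean(scores), 2))
print("median:", median(scores))
print("mode:", mode(scores))
```

Note that nothing here generalizes beyond the twelve responses themselves; the output merely describes the data at hand, which is exactly the boundary between descriptive and inferential analysis.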

2. Inferential Data Analysis:

Inferential data analysis happens when the researcher is interested in doing more than just describing the data. At this stage, parametric or inferential statistics is used. Inferential statistics is the branch of statistics that studies a group of data in order to use the results obtained to make generalizations about a larger group of data. In other words, statistical inference is the use of sample results to reach conclusions about the populations from which the samples have been drawn (Nwabuokei, 1990).

Inferential statistics, according to Wilson, Esiri & Onwubere (2008), involves making generalizations about the whole population based on information or data obtained from a sample. Inferential statistics includes estimation theory, hypothesis testing, parametric tests, etc. Citing Olaitan et al. (2000), Wilson, Esiri & Onwubere (2008) also note that "analyses done on this basis are used for testing hypotheses and making inferential decisions, based on some sample data. Thus, on the basis of analyzed sample data, generalization can be made about the overall population from which the sample was originally drawn."
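The step from sample to population can be illustrated with a confidence interval, one of the estimation tools mentioned above. In the sketch below the sample values are invented, and the normal-approximation multiplier of 1.96 is a simplifying assumption (a t-multiplier would be more precise for so small a sample):

```python
import math
from statistics import mean, stdev

# Hypothetical sample of exam scores drawn from a larger population
sample = [62, 70, 68, 65, 74, 66, 71, 63, 69, 67]

n = len(sample)
m = mean(sample)
se = stdev(sample) / math.sqrt(n)  # standard error of the sample mean

# 95% confidence interval using the normal approximation (z = 1.96)
low, high = m - 1.96 * se, m + 1.96 * se
print(f"sample mean = {m:.1f}")
print(f"95% CI for the population mean: roughly ({low:.1f}, {high:.1f})")
```

The interval is a statement about the population, not merely about the ten observed scores, which is what makes this inferential rather than descriptive analysis.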

For an investigation to qualify as research, it must satisfy some qualities. Discuss in detail five (5) characteristics of research.

These include:

1. Research is Systematic and Procedural

Research follows a definite set of procedures. These procedures are standards that are generally accepted the world over. Anyone who engages in research is therefore expected to adhere strictly to such procedures. This adherence has given research the reputation of being systematic and organized in a peculiar and particular way.

2. Research is Logical

Wilson, Esiri & Onwubere (2008) see logic as a system of constructing proofs to get reliable confirmation of the truth of a set of hypotheses. Generally, they see it as the rational way of drawing or arriving at a reasonable conclusion on any subject matter through research. Hence in research, according to Wilson, Esiri & Onwubere (2008), the following logical laws are maintained:

1. Hypotheses derived are expressed in a formal language.
2. The allowable steps of inference are codified formally so that well-formed proofs are obtained.
3. The permitted inferences and conclusions are sound.

3. A Research Activity Is Purposeful, Well Planned and Well Thought-Out

Research has the characteristic of being purposeful and aimed at achieving well-defined and specific objectives. A research activity is ordered and systematic and follows well-known, clearly laid-out procedures. This ensures replicability and generalizability.

4. Research is Reductive

The reductive nature of research makes it possible to summarize complex observations in logically related propositions which attempt to explain a subject matter. According to Wilson, Esiri & Onwubere (2008), observations are converged in such a way that irrelevant variables are excluded while relevant variables are included. Hence, research has the characteristic of controlling the flux of things and establishing facts (Wilson, Esiri & Onwubere, 2008).

5. Research is Empirical

Empiricism suggests that research is objective, observed, experimental, experiential, and pragmatic in nature. According to Wilson, Esiri & Onwubere (2008), "A common image of 'research' is a person in a laboratory wearing a white overcoat, mixing chemicals or looking through a microscope to find a cure for an exotic illness. Research ideas are accepted and rejected based on evidence. Hence, one of the most outstanding features or characteristics of research is its empirical nature."

6. Research is Replicable and Generalizable

Wilson, Esiri & Onwubere (2008) see replication as a critical step in validating research, building evidence and promoting the use of findings in practice. Replication involves the process of repeating a study using the same methods, with the assurance that you will get the same or similar results. According to Wilson, Esiri & Onwubere (2008), replication is important for a number of reasons. These include:

(1) Assurance that results are valid and reliable;
(2) Determination of generalizability or the role of extraneous variables;
(3) Application of results to real-world situations;
(4) Inspiration of new research combining previous findings from related studies; and
(5) Assurance that the schema used is available, reusable and reliable in terms of producing the same or similar results.

On the other hand, generalizability ensures that the findings of a particular piece of research conducted using a selected sample can be extended to the larger population. To meet the demands of this characteristic, researchers make sure their research samples are representative of the main population. This representativeness ensures the generalizability of findings to the population.

Mistakes in the data gathering process can render research worthless. That is why researchers take their time to edit data gathered. Critically review five (5) objectives of editing in the data gathering process.

1. Enhance the quality of field data by making sure that all information is complete, unambiguous, legible, uniform, accurate and true:

Any incomplete data could render analysis useless. At this stage, the appropriate research assistants or data editors are expected to ensure that the information supplied is detailed enough for what the researcher wants to achieve, and that the information gathered is not ambiguous, illegible, inaccurate or untrue, before it becomes difficult to reach the participants or respondents for further clarification.

Data analysis is essentially analysis of what was brought in as information; so any wrong information that creeps in unnoticed at this stage will surely make the analysis wrong as well. The further implication is that data interpretation, or interpretation of the resulting findings, also becomes deceptive because of the faulty premise on which it was based.

2. Eliminate Coding Difficulties:

Eliminating coding difficulties is another very important objective of data editing. At this stage, the investigators are keen to make sure that any potential difficulty that might creep in at the coding stage is eliminated before that stage is reached. The reason is simple: at the coding stage, any unresolved difficulty may endanger the analysis, because it may be difficult to get back to the field to seek clarification. Coding, as we will later find out in this Unit, must take place for meaningful data entry and subsequent analysis to be carried out effectively.

3. Facilitate Data Analysis and Interpretation of Results:

Data editing is primarily meant to facilitate data analysis. The research process is so linked together that all of its procedures are interrelated and aim at a single primary purpose: to find answers to the research questions! So, data editing helps the next procedural step by making sure its own loose ends are tied up against possible errors that could be traced back to poor editing.

4. Detect Errors/Incorrect Entries and Correct Them:

Editing in normal or other situations is done to identify errors and to correct them, and this is no different in data editing. Humans are naturally prone to errors. This should not be a problem at all if a proper editorial team is in place to proofread materials before they are moved to the next level. Therefore, detecting errors, incorrect entries, or incomplete entries, with the aim of correcting them before any form of analysis is performed on the data, should be a primary objective of data editing.

5. Facilitate Interpretation of Results:

Interpretation of research results is a fundamental purpose of research meant to address specific needs. The outcomes of research cannot really make any sense if they are not properly put in context by way of interpretation. Interpretation helps the researcher to find out whether the objectives have been met and whether the research questions have been answered. This makes interpretation of research findings vital in the research business. Having established the importance of result interpretation, every effort should be made at the editing stage to facilitate it.

A case study is the investigation of a particular subject matter. This research method is used to study individuals, institutions, organisations, events, issues, or some type of phenomenon. The implication of this, therefore, is that there are different kinds of case studies.

a. State and explain five (5) types of case study research with examples.

Wilson, Esiri and Onwubere (n.d.), citing Osuala (2005:187-188), identify types of case studies as follows:

1. Historical Case Study: These studies trace the development of an organisation/system over time.

2. Observational Case Study: These studies often focus on a classroom, group, teacher or pupil, typically using a variety of observation and interview methods as their major tools.

3. Oral History: These are normally first-person narratives that the researcher collects using extensive interviewing of a single individual.

4. Situational Analysis: Particular events are studied in this form of case study; for example, an act of student vandalism could be studied by interviewing those concerned, the parents, the teacher, the chairman, witnesses, etc.

5. Clinical Case Study: This approach aims to understand in depth a particular individual, such as a child having problems with reading, a newly transferred student in his first term at school, or a teacher with disciplinary difficulties.

6. Multi-Case Study: A collection of case studies, that is, the multi-case study, is based on the sampling logic of multiple subjects in one experiment.

What are the purposes of case study research? Explain five (5) of them.
PURPOSE OF CASE STUDY

According to Wilson, Esiri and Onwubere (n.d.) citing Osuala (2005:186-187), the

purposes of case study research include the following:

1. Case studies are valuable as preliminaries to major investigations. Because

they are so intensive and generate rich subjective data, they may

bring to light variables, phenomena, processes and relationships

that deserve more intensive investigation. In this way a case study

may be a source of hypotheses for future research.

2. A case study fits many purposes, but most case studies are based

on the premise that a case can be located that is typical of many

other cases. Once such a case is studied it can provide insights into

the class of events from which the case has been drawn.

3. A case study may provide anecdotal evidence that illustrates

more general findings.

4. A case study may refute a universal generalization. A single

case can represent a significant contribution to theory building and

assist in refocusing the direction of future investigations in the

area.

5. A case study is preferred when the relevant behaviours cannot

be manipulated.

6. A case study may be valuable in its own right as a unique case.

Gruber’s (1974) study of Darwin and the processes by which he

arrived at the theory of evolution is an example of this.

1. A research topic, “Influence of social media use on the academic performance of

Nigerian undergraduates” has just been approved for you in your final year project.

Discuss and justify the relevance of the topic under each of the following classifications

of research:

a. Classification by practice(6 Marks)

b. Classification by the method used in gathering data (6 Marks)

c. Classification by measurement (6 Marks)

d. Classification by discipline (6 Marks)

CLASSIFICATION BY PRACTICE

One way scholars have classified research in communication is to look at it from

the perspective of practice. From this school of thought, communication research

could be divided into two thus:

A. Academic research

B. Applied research.

3.1.1. (A) Academic Research

Academic research by practice is the type of research faculty and students of

higher institutions of learning conduct in their institutions to meet certain career

and graduation demands respectively. It is usually theoretical in nature. In other

words, it is conducted for academic purposes rather than for its intrinsic value to
society as a whole (Wilson, Esiri & Onwubere, 2008).

Wilson, Esiri & Onwubere (2008) further note that, notwithstanding this

description, the assumption that academic research has no intrinsic value is not

entirely true, because researchers who later consult such studies as literature for a

current study have sometimes found something of value in them.

3.1.1. (B) Applied Research

Applied Research is the opposite of Academic Research. This research usually sets

out to address specific needs in the society at large. It is research conducted for its

intrinsic values rather than to meet some obscure academic expectations. This type

of research is also enterprising in nature because of the availability of funds to

support it. When compared to Academic Research, most companies and funding

agencies will always prefer Applied Research to Academic

Research in their sponsorship budgeting. In essence, there are usually enough

grants, funds and resources to support or fully sponsor Applied Research because

of the value it holds.

CLASSIFICATION BY MEASUREMENT

The way data for research is generated and measured could also be used to classify

it. This leads us to the classification by measurement. In this category, we have

two types as presented below:

A. Quantitative Research

B. Qualitative Research

3.1.2 (A) Quantitative Research


Quantitative Research takes numerical values and uses rigorous statistical tools for

its measurement. The research is therefore designed to yield numerical data or

expected to turn the variables into numbers (Wilson, Esiri & Onwubere, 2008).

According to Wilson, Esiri & Onwubere (2008) Quantitative Research “is

concerned with how often a variable is present and generally uses figures to

communicate this amount. In other words, the quantitative approach involves the

collection of numerical data in order to explain, predict and/or control the

phenomena of interest. Data analysis in quantitative research is mainly statistical

or deductive process”. Quantitative research techniques include Field Experiments,

Surveys, and Content Analysis.

3.1.2 (B) Qualitative Research

Qualitative Research is the opposite of Quantitative Research. While Quantitative

takes numerical values, Qualitative does not. Rather, it gives a more detailed and

in-depth analysis of the subject as a result of the closer interaction with the subject

of investigation. Wilson, Esiri & Onwubere (2008) observe that Qualitative

Research involves the collection of extensive narrative data in order to gain

insights into the phenomena of interest. According to them, data analysis in

Qualitative Research “involves the coding of data and production of a verbal

synthesis or inductive process. In other words, it does not depend on the

measurement of variables or research elements.” Qualitative research methods

include Focus Group Discussion (FGD), in-depth interview, Field Observation,

Case Study approach, Historical Analysis and Ethnography.

3.1.3 CLASSIFICATION BY THE METHOD USED IN GATHERING

DATA
Based on actual practice and literature review, this type of classification is the most

popular in the field of Mass Communication and most other disciplines in the Social

Sciences and Humanities because no matter how research is classified, it is

fundamentally identified by the method used in collecting the data for the study

(Wilson, Esiri & Onwubere, 2008). By the method used in data gathering for a

particular research, the following methods, already identified as either quantitative

or qualitative suffice as examples: Surveys, Field Experiments, Content Analysis,

Interviews, Focus Group Discussion (FGD), Field Observation, Ethnography,

Historical Research and Case Study Approach.

3.1.4 CLASSIFICATION BY DISCIPLINE

Research could also be classified by the discipline or subject orientation. Here, the

subject matter becomes the focus and label for such research endeavour. Using this

perspective, we have the following types of research: communication research,

media research, public relations research, advertising research, social science

research, clinical research, marketing research, operations research, legal research,

population research, psychological research, political research, opinion research etc.

3.1.5 OTHER RESEARCH CLASSIFICATION TYPES

There are other research classification types that do not fall under the categories

already discussed in this Unit. Some do, and even overlap across more than two different

levels of classification. However, we will merely list them here because this course is

not about communication research per se; it is about data analysis in communication

research.

The other research classifications that we are yet to identify under any of the existing
categories include: Longitudinal Research, Administrative Research, Critical

Research, Exploratory Research, Ethno Methodological Research, Primary Research,

Social Research, Cultural Research and Secondary Research amongst others.

You are a member of a research team using qualitative technique to establish the need for

social media regulation in Nigeria. Discuss with other members of the team at least five

factors that can help increase the credibility of your study.

Factors That Help Increase the Credibility of Qualitative Data

1) The Use Of Multiple Methods Of Data Collection:

This factor is similar to the notion of triangulation. The use of interviews along

with field observation and analysis of existing documents suggests that the topic

was examined holistically. This helps to build credibility in the findings as well as

introduces other perspectives that a single method may have ignored.

2) Audit Trail:

This is essentially a permanent record of the original data used for analysis and the

researcher’s comments and analysis methods. Audit trail allows others to examine

the thought process involved in the researcher’s work and allows them to assess

the accuracy of his/her conclusions. This is basically about getting an outside

viewpoint on any procedure that may have influenced the position taken.
3) Member Checks:

This allows research participants access to the notes and conclusions of the

researcher. This helps the participants to determine whether the researcher has

accurately described what he/she was told. This is quite effective and corrective in

approach. The research participants have an opportunity to determine if the

researchers have adequately captured what they were thinking or what they

actually meant in responses they gave. Any error of judgment or misconception or

misinterpretation identified during this process could be handled at this stage

making this a practical way of increasing the credibility and acceptability of a

research finding. Member check is a solid validation process in qualitative studies.

4) Research Team:

This method or factor assumes that team members keep one another honest and on

target when describing and interpreting their data. Sometimes an outside person is

asked to observe the process and raise questions of possible bias or

misinterpretation where appropriate.

5) Debriefing:

This consists of having an individual outside the project question the meanings,

methods, and interpretations of the researcher. The researcher is obligated to

report the entire process to this individual who is at liberty to ask all manner of

questions regarding how the research was conducted and the final outcome. His or

her views on the entire process are very vital to the credibility of the research

outcome.
Examine the four major procedures for quantitative analysis in social science research.

Preliminary Analysis of Data:

This provides a rough overview of the pattern of response for all the variables in

the study. The preliminary analysis of data in quantitative data analysis does not

aim at answering any of the substantive research questions (Obikeze, 1990). It

usually includes the calculation of response rate, a check for response bias, and the

compilation of simple frequency distributions for all the variables. The response

rate measures the proportion of total respondents who successfully filled out and

returned their questionnaires, or who were available for interview and were actually

interviewed (Obikeze, 1990).
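The response-rate calculation described above is a simple proportion. A minimal sketch in Python (the figures used are invented purely for illustration):

```python
def response_rate(returned: int, distributed: int) -> float:
    """Proportion of distributed questionnaires that were completed and returned."""
    if distributed <= 0:
        raise ValueError("distributed must be a positive number")
    return returned / distributed

# Hypothetical figures: 384 copies distributed, 350 usable copies returned.
rate = response_rate(350, 384)
print(f"Response rate: {rate:.1%}")  # prints "Response rate: 91.1%"
```

A very low response rate would prompt the check for response bias that this preliminary stage also calls for.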

Summarizing Sample Characteristics:

This stage of analysis highlights the main structural features of the study

population which are likely to impinge on the behaviors of, and relationships among

its members thus providing necessary background for proper understanding and

explanation of social realities. Covered in this analysis are the so-called

background variables like age, sex, marital status, occupation and income level.

Descriptive statistics, in the form of simple frequency distributions, are

usually used here (Obikeze, 1990).

Thematic Analysis:
Obikeze (1990:92) identifies this stage of the quantitative data process as the crux

of the data analysis procedure because at this stage the researcher or investigator

“attempts to find answers to the various research issues and questions, as well as

test specific hypothesis. The outcome of this particular process determines the

extent to which the study objectives have been met.”
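Since thematic analysis is the stage at which specific hypotheses are tested, a brief sketch may help. The code below computes a Pearson chi-square statistic for a 2x2 cross-tabulation, one common test at this stage; the cell counts are invented for illustration:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical cross-tabulation, e.g. sex vs. agreement with a statement.
stat = chi_square_2x2(30, 10, 20, 40)
print(round(stat, 2))  # prints 16.67
```

The statistic would then be compared against a chi-square critical value (one degree of freedom for a 2x2 table) to decide whether to reject the null hypothesis.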

Interpretation Of Result And Inferences:

Interpretation of results and drawing inferences happen after the data have

been analyzed using descriptive statistics. If the research has followed the

scientific processes diligently, it will be easy to generalize at this level or

draw inferences from the analyzed data.

Relate any five characteristics of social science research to data analysis procedure.

There are many such characteristics of research but for a communication/social science

student, about six core characteristics are vital to your

further exploration in data analysis. These include:

Research is Systematic and Procedural

Research follows a definite set of procedures. These procedures are standards that

are generally accepted the world over. Anyone who engages in research is

therefore expected to adhere strictly to such procedures. This adherence has made

research acquire the notion of being systematic and organized in a peculiar and
particular way.

Research is Logical

Wilson, Esiri & Onwubere (2008) see logic as a system of constructing proofs to

get reliable confirmation of the truth of a set of hypotheses. Generally, they see it

as the rational way of drawing or arriving at a reasonable conclusion on any

subject matter through research. Hence in research, according to Wilson, Esiri &

Onwubere (2008), the following logical laws are maintained:

1. Hypotheses derived are expressed in a formal language.

2. The allowable steps of inference are codified formally so that well-formed

proofs are obtained.

3. The permitted inferences and conclusions are sound.

A Research Activity Is Purposeful, Well Planned And Well

Thought – Out

Research has the characteristics of being purposeful and aimed at achieving

well defined and specific objectives. A research activity is ordered,

systematic and follows well known and clearly laid-out procedures. This

ensures replicability and generalizability.

Research is Reductive

The reductive nature of research makes it possible to summarize complex

observations in logically related propositions which attempt to explain a

subject matter. According to Wilson, Esiri & Onwubere (2008),

observations are converged in a way that irrelevant variables are excluded

while relevant variables are included. Hence, research has the

characteristics of controlling the flux of things and establishing facts

(Wilson, Esiri & Onwubere, 2008).

Research is Empirical

Empiricism suggests that research is objective, observed, experimental,

experiential, and pragmatic in nature. According to Wilson, Esiri &

Onwubere (2008), “A common image of ‘research’ is a person in a

laboratory wearing a white overcoat, mixing chemicals or looking through

a microscope to find a cure for an exotic illness. Research ideas are

accepted and rejected based on evidence. Hence, one of the most

outstanding features of research is its empirical nature”.

Research is Replicable and generalizable

Wilson, Esiri & Onwubere (2008) see Replication as a critical step in

validating research to build evidence and to promote use of findings in

practice. Replication involves the process of repeating a study using the

same methods with the assurance that you will get the same or similar results.

According to Wilson, Esiri & Onwubere (2008), Replication is important for

a number of reasons. These include:

(1) Assurance that results are valid and reliable,

(2) Determination of generalizability or the role of extraneous

variables,

(3) Application of results to real world situations,

(4) Inspiration of new research combining previous findings from

related studies; and

(5) Assurances that the schema used is available, reusable and

reliable in terms of producing the same or similar results.

As a researcher, discuss at least four factors that can help you to increase the credibility of

your qualitative data. Explain the elements of qualitative data analysis.

Many elements make up the data analysis process in qualitative studies.

However, the following points discussed below generally situate qualitative

data analysis in the parlance of empiricism. These elements are:

1. Analysis Is Holistic And Multi-Dimensional: Rather than concentrate

on specific key variables, qualitative analysis attempts to grasp the totality

of the socio-cultural condition, the historical development, and the entirety

of what that research subject represents. In essence, all of these help the

researcher to address the problems or objectives holistically.

2. Analysis Is Descriptive, In-Depth And Longitudinal: The analysis is

also very descriptive in terms of specifying what transpired; in-depth in

terms of going deep in revealing insightful details; and longitudinal in terms

of the length of time and period the analysis covers so as to situate the

research as well as the findings in a period in history.

3. Analysis Is Naturalistic: This actually implies that the analysis deals

with direct observations under the natural settings of the subject under
investigation. The implication again is that the data gathered are natural

behaviors of people or the subject and so the analysis produces findings that

relate to the real world; not something imagined in the rarefied mind of the

researcher. It is devoid of artificialities and computational manipulations of

the original information to meet some standardized way of doing research.

4. Analysis Is Humanistic: Qualitative data analysis permits the researcher

to interact closely with the data and to look at the problems from the insider

perspective. By making human beings and their behavior become only

statistical figures or numbers, quantitative analysis dehumanizes human

beings. In contrast, qualitative analysis is dialectical and interactive.

5. Data For Analysis May Be Recorded And Presented In Oral Or

Written Forms, In Audio Forms Or In Any other Visual Or Art

Forms: This enriches the accessibility mode for all those who are interested

in the outcome. Its ability to be produced in all of these forms is also a plus

in terms of acceptability and credibility.

6. Analysis Is Generally Not Aimed At Testing Specific Hypotheses

And The Data Usually Cover A Small Number Of Cases: This is

another unique element of qualitative data analysis. The analysis does not

concentrate on testing any particular hypothesis as is the case in quantitative

data analysis. Here, the so-called hypothesis is never a finished outcome; it

undergoes continuous refinement even while the analysis remains ongoing.


1. Using relevant examples or illustrations where applicable, explain the three major types

of Frequency Distribution Tables (FDT). Describe the well-defined procedure of

constructing Frequency Distribution Table.

Frequency Distribution Tables (FDT)

There are basically three FDT namely:

1. Univariate FDT

2. Bivariate FDT

3. Multivariate FDT.

UNIVARIATE FDT means that only one variable or questionnaire item is

being considered.

BIVARIATE FDT means that only two variables are being considered

together. More specifically, it is the cross-tabulation of responses to two

questionnaire items or two variables simultaneously.

MULTIVARIATE FDT helps to describe, explain or understand relationships

among three or more variables (containing one dependent and two or more

independent and intervening variables).
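The univariate and bivariate FDTs described above can be sketched with Python's standard library `collections.Counter` (a spreadsheet or a pandas cross-tabulation would do the same job); the variable names and responses below are invented for illustration:

```python
from collections import Counter

# Hypothetical responses to two questionnaire items.
sex = ["M", "F", "F", "M", "F", "M", "F"]
uses_social_media = ["Yes", "Yes", "No", "Yes", "Yes", "No", "Yes"]

# Univariate FDT: frequencies of a single variable.
univariate = Counter(sex)
print(univariate)  # Counter({'F': 4, 'M': 3})

# Bivariate FDT: cross-tabulation of responses to two items simultaneously.
bivariate = Counter(zip(sex, uses_social_media))
for (s, u), freq in sorted(bivariate.items()):
    print(s, u, freq)
```

A multivariate FDT would extend the same idea by zipping three or more variables together, with one treated as dependent.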
