Quantitative Methods Assignment Answers

The document covers key concepts in quantitative methods, including conditional probability, Bayes' Theorem, and various statistical distributions. It also discusses sampling methods, the Central Limit Theorem, hypothesis testing, and the application of t-tests and ANOVA in analyzing data. The use of R programming for data structures and operations is highlighted as a tool for statistical computing.

Uploaded by AjayBolleddu

QUANTITATIVE METHODS – CO1 ASSIGNMENT

1. Conditional Probability and Independence of Events

Conditional probability is a fundamental concept in probability theory that describes the
likelihood of an event occurring, given that another related event has already occurred. It is
represented as P(A|B), meaning the probability of event A happening, assuming that event B
has occurred. This is particularly useful in real-world scenarios where events are
interlinked.

For example, suppose there is a 30% chance that it will rain today, and given that it is
cloudy, the chance of rain increases to 70%. This means the probability of rain (A), given
that it is cloudy (B), is P(A|B) = 0.70. The event of rainfall is not independent of cloudiness;
one affects the likelihood of the other.

In contrast, independence of events refers to situations where the occurrence of one event
has no effect on the occurrence of another. Mathematically, events A and B are independent
if:
P(A ∩ B) = P(A) × P(B)

A classic example is flipping a coin and rolling a die. The outcome of the coin toss (heads or
tails) does not influence the number that appears on the die. These are two entirely
independent events.

The key difference lies in dependence: conditional probability implies dependence between
events, whereas independent events do not influence each other.
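The multiplication rule above can be checked with a quick R sketch, using the coin-and-die example (the probabilities are the standard ones for a fair coin and a fair six-sided die):

```r
# Independence: P(A and B) = P(A) * P(B)
p_heads <- 1/2        # P(A): fair coin lands heads
p_four  <- 1/6        # P(B): fair die shows a 4
p_both  <- p_heads * p_four
print(p_both)         # 1/12, about 0.0833
```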

2. Application of Bayes’ Theorem in a Medical Test Scenario

Bayes’ Theorem is an essential statistical tool used to revise the probability of a hypothesis
when new evidence is introduced. In medical diagnostics, it is widely used to interpret test
results, especially when tests are not 100% accurate.

Consider a situation where:

- The prevalence of a disease (P(Disease)) is 1% (0.01).
- The test correctly identifies the disease in 99% of cases (P(Positive|Disease) = 0.99).
- The test gives a false positive 5% of the time (P(Positive|No Disease) = 0.05).

We want to find the true probability that a person actually has the disease if they test
positive. Using Bayes’ Theorem:
P(Disease|Positive) = (P(Positive|Disease) × P(Disease)) / [(P(Positive|Disease) ×
P(Disease)) + (P(Positive|No Disease) × P(No Disease))]

= (0.99 × 0.01) / [(0.99 × 0.01) + (0.05 × 0.99)] ≈ 0.167


This means there's only a 16.7% chance that the person has the disease, even though the
test was positive. This shows the importance of understanding test accuracy and disease
prevalence when interpreting results.
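As a quick sketch, the same calculation can be reproduced in R (all numbers taken from the scenario above):

```r
# Bayes' Theorem with the values from the medical-test example
p_disease     <- 0.01   # prevalence
p_pos_disease <- 0.99   # sensitivity
p_pos_healthy <- 0.05   # false-positive rate

posterior <- (p_pos_disease * p_disease) /
  (p_pos_disease * p_disease + p_pos_healthy * (1 - p_disease))
round(posterior, 3)     # 0.167
```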

3. Characteristics and Applications of Binomial, Poisson, and Normal Distributions

Binomial Distribution:
- Applies to discrete data where there are two outcomes: success or failure.
- Used when the number of trials is fixed.
- Example: Tossing a coin 10 times and counting how many times it lands heads.

Poisson Distribution:
- Describes the number of times an event occurs within a fixed interval (time, distance, etc.).
- Used when events are rare and occur independently.
- Example: Number of cars passing through a toll booth in an hour.

Normal Distribution:
- A continuous distribution with a symmetric, bell-shaped curve.
- Most natural phenomena follow this distribution: heights, weights, test scores.
- Defined by its mean and standard deviation.
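R provides built-in density and distribution functions for all three; a minimal sketch (the specific parameter values here are illustrative):

```r
# Binomial: P(exactly 6 heads in 10 fair coin tosses)
dbinom(6, size = 10, prob = 0.5)    # about 0.205

# Poisson: P(exactly 3 cars in an hour when the average is 5)
dpois(3, lambda = 5)                # about 0.140

# Normal: P(value below 180 when X ~ N(mean = 170, sd = 10))
pnorm(180, mean = 170, sd = 10)     # about 0.841
```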

4. R Programming – Data Structures and Basic Operations

R is a powerful language designed for statistical computing. It supports various data
structures:

- Vectors: Simplest form. e.g., v <- c(1, 2, 3)
- Lists: Heterogeneous data. e.g., l <- list(name="John", age=28)
- Matrices: 2D numeric arrays. e.g., m <- matrix(1:6, nrow=2)
- Data Frames: Table-like structures. e.g., df <- data.frame(ID=1:3, Score=c(80, 85, 90))

Example Script:
# Vector addition
a <- c(10, 20, 30)
b <- c(1, 2, 3)
result <- a + b
print(result)

This script creates two vectors, adds them element-wise, and prints the result. R’s intuitive
syntax and built-in functions make data analysis accessible and efficient.
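The data frame from the list above can be queried in the same style (a small sketch; the ID and Score columns are the illustrative ones used earlier):

```r
# Build and query a small data frame
df <- data.frame(ID = 1:3, Score = c(80, 85, 90))
mean(df$Score)          # 85
df[df$Score > 82, ]     # rows with Score above 82
```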
QUANTITATIVE METHODS – CO2 ASSIGNMENT

1. Sampling Methods and Central Limit Theorem

Sampling methods are techniques to select a subset of individuals from a population to
draw conclusions about the whole.

- Simple Random Sampling: Every individual has an equal chance.
- Stratified Sampling: Population divided into strata; random samples taken from each.
- Systematic Sampling: Every k-th member selected.
- Cluster Sampling: Population divided into clusters; some clusters randomly selected.

The sampling distribution refers to the probability distribution of a statistic (e.g., mean)
obtained from many samples.

The Central Limit Theorem (CLT) states that the sampling distribution of the sample mean
approaches a normal distribution as the sample size increases, regardless of the
population’s distribution. This is foundational for performing inference.
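The CLT can be illustrated with a short simulation in R (a sketch using an exponential population, which is strongly skewed; the sample size and replication count are arbitrary choices):

```r
# CLT sketch: means of samples from a skewed population look normal
set.seed(1)
sample_means <- replicate(10000, mean(rexp(50, rate = 1)))
hist(sample_means, main = "Means of n = 50 exponential samples")
mean(sample_means)      # close to the population mean of 1
```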

2. One-Sample t-Test – Apple Weight Example

Given:
- Sample size (n) = 25
- Sample mean (x̄) = 145g
- Population mean (µ) = 150g
- Sample standard deviation (s) = 10g
- Significance level (α) = 0.05

a. Hypotheses:
- Null Hypothesis (H₀): µ = 150g
- Alternative Hypothesis (H₁): µ ≠ 150g

b. Test Statistic:
t = (x̄ - µ) / (s / √n) = (145 - 150) / (10 / √25) = -5 / 2 = -2.5

c. Decision:
- Degrees of freedom = 24
- Critical t-value (two-tailed) ≈ ±2.064
- Since -2.5 < -2.064, we reject H₀.

d. p-value:
- Approximate p-value ≈ 0.02 (less than 0.05)
- This means there’s significant evidence that the average weight is different from 150g.
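The same test can be reproduced from the summary figures in R, using pt() for the two-tailed p-value:

```r
# One-sample t-test from summary statistics
n <- 25; xbar <- 145; mu <- 150; s <- 10
t_stat  <- (xbar - mu) / (s / sqrt(n))       # -2.5
p_value <- 2 * pt(-abs(t_stat), df = n - 1)  # two-tailed
round(p_value, 3)                            # about 0.02
```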

3. Hypothesis Testing and Z-Tests vs. t-Tests

Hypothesis testing helps determine if observed data deviates significantly from expected
results.

- One-sample z-test: Used when population standard deviation is known and sample size is
large.
- Two-sample z-test: Compares means from two large samples.
- t-test: Used when sample size is small and population standard deviation is unknown.

Example:
- One-sample t-test: Testing if average employee satisfaction score ≠ 75.
- Two-sample t-test: Comparing average salaries between departments.

Use a t-test when:

- Sample size < 30
- Population SD is unknown
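A two-sample t-test of the salary comparison might look like this in R (the salary figures are hypothetical, invented purely for illustration):

```r
# Hypothetical salaries (in thousands) for two departments
dept_a <- c(52, 55, 58, 61, 54, 57)
dept_b <- c(48, 50, 53, 49, 51, 52)
t.test(dept_a, dept_b)   # Welch two-sample t-test by default
```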

4. One-Way ANOVA – Fertilizer Yield Test

Data:

Fertilizer A: 80, 85, 90, 88, 92
Fertilizer B: 78, 82, 84, 81, 83
Fertilizer C: 85, 89, 93, 91, 87

a. Hypotheses:
- H₀: All fertilizers yield the same average crop.
- H₁: At least one fertilizer has a different mean yield.

b. ANOVA Result (computed via software such as Excel or R):

- F-statistic ≈ 5.89
- p-value ≈ 0.017

c. Conclusion:
Since p-value < 0.05, we reject the null hypothesis. There is a significant difference in yields
between at least two fertilizers.
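The ANOVA can be reproduced in R with aov() on the data above:

```r
# One-way ANOVA on the fertilizer yields
yield <- c(80, 85, 90, 88, 92,   # Fertilizer A
           78, 82, 84, 81, 83,   # Fertilizer B
           85, 89, 93, 91, 87)   # Fertilizer C
fertilizer <- factor(rep(c("A", "B", "C"), each = 5))
summary(aov(yield ~ fertilizer))
```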