
Experiment – 2

Name: Muskan Soni UID: 22BCS16851


Branch: BE-CSE Section/Group: DL-902/B
Semester: 6th Date:
Subject: Deep Learning Lab Subject Code: 22CSP-368

1. Aim: To implement Bayes' theorem using Python to compute posterior probabilities and demonstrate its application in decision-making.

2. Objective:

• To understand and apply Bayes' theorem to infer probabilities based on prior knowledge and observed data.
• To calculate the posterior probability of an event.
Bayes' theorem is mathematically expressed as:

P(H|E) = P(E|H) · P(H) / P(E)

Where:
• P(H|E): Posterior Probability - the probability of the hypothesis H being true given the evidence E.
• P(E|H): Likelihood - the probability of observing the evidence E given that the hypothesis H is true.
• P(H): Prior Probability - the initial probability of the hypothesis H before observing the evidence.
• P(E): Marginal Likelihood - the total probability of observing the evidence E under all possible hypotheses.

3. Implementation:

def bayes_theorem(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    # P(¬H) = 1 - P(H)
    prior_not_h = 1 - prior_h
    # Marginal likelihood: P(E) = P(E|H)·P(H) + P(E|¬H)·P(¬H)
    p_e = (likelihood_e_given_h * prior_h) + (likelihood_e_given_not_h * prior_not_h)
    # Posterior: P(H|E) = P(E|H)·P(H) / P(E)
    posterior_h_given_e = (likelihood_e_given_h * prior_h) / p_e
    return posterior_h_given_e

# Given values (disease-testing example)
prior_h = 0.01                   # P(H): disease prevalence
likelihood_e_given_h = 0.9       # P(E|H): test sensitivity
likelihood_e_given_not_h = 0.05  # P(E|¬H): false-positive rate

# Calculate posterior probability
posterior_probability = bayes_theorem(prior_h, likelihood_e_given_h,
                                      likelihood_e_given_not_h)

print(f"The posterior probability of having the disease given a positive "
      f"test result is: {posterior_probability:.4f}")
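A useful property of Bayes' theorem is that updates can be chained: the posterior from one observation becomes the prior for the next. The sketch below is an illustrative extension (not part of the original assignment) that applies the same bayes_theorem function to a second, independent positive test:

```python
def bayes_theorem(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    prior_not_h = 1 - prior_h
    p_e = (likelihood_e_given_h * prior_h) + (likelihood_e_given_not_h * prior_not_h)
    return (likelihood_e_given_h * prior_h) / p_e

# First positive test: the prior is the 1% prevalence
p1 = bayes_theorem(0.01, 0.9, 0.05)  # ≈ 0.1538

# Second independent positive test: the previous posterior is the new prior
p2 = bayes_theorem(p1, 0.9, 0.05)    # ≈ 0.7660

print(f"After one positive test:  {p1:.4f}")
print(f"After two positive tests: {p2:.4f}")
```

Two positive results raise the probability far more than one, which is why confirmatory tests are commonly ordered after an initial positive.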

# Same function, applied to a spam-classification example
def bayes_theorem(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    prior_not_h = 1 - prior_h
    p_e = (likelihood_e_given_h * prior_h) + (likelihood_e_given_not_h * prior_not_h)
    posterior_h_given_e = (likelihood_e_given_h * prior_h) / p_e
    return posterior_h_given_e

# Given values for spam classification
prior_h = 0.2                   # P(H): probability that an email is spam
likelihood_e_given_h = 0.8      # P(E|H): probability of "discount" given spam
likelihood_e_given_not_h = 0.1  # P(E|¬H): probability of "discount" given not spam

# Calculate posterior probability
posterior_probability = bayes_theorem(prior_h, likelihood_e_given_h,
                                      likelihood_e_given_not_h)

print(f"The posterior probability that the email is spam given it contains "
      f"'discount' is: {posterior_probability:.4f}")
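Both examples assume exactly two hypotheses (H and ¬H). As a hedged sketch of the general case: with several mutually exclusive hypotheses, the marginal likelihood P(E) becomes a sum over all of them. The function name and the three-way email categories below are purely illustrative, with made-up probabilities:

```python
def bayes_multi(priors, likelihoods):
    """Posterior over several mutually exclusive hypotheses.

    priors: dict mapping hypothesis -> P(H)
    likelihoods: dict mapping hypothesis -> P(E|H)
    """
    # Marginal likelihood: P(E) = sum over H of P(E|H) * P(H)
    p_e = sum(likelihoods[h] * priors[h] for h in priors)
    # Posterior for each hypothesis: P(H|E) = P(E|H) * P(H) / P(E)
    return {h: likelihoods[h] * priors[h] / p_e for h in priors}

# Illustrative values: three email categories, evidence = word "discount"
priors = {"spam": 0.2, "newsletter": 0.3, "personal": 0.5}
likelihoods = {"spam": 0.8, "newsletter": 0.4, "personal": 0.05}

posteriors = bayes_multi(priors, likelihoods)
for h, p in posteriors.items():
    print(f"P({h} | 'discount') = {p:.4f}")
```

The posteriors always sum to 1, since P(E) normalizes over every hypothesis considered.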
4. Output:

5. Learning Outcomes:

• Result: For the disease-testing example, the posterior probability of having the disease given a positive test result is approximately 0.1538 (about 15%).
• Insight: Even with a highly sensitive and specific test, the low prevalence of the disease significantly reduces the posterior probability, because most positive results come from the much larger healthy population.
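The insight about prevalence can be checked numerically by sweeping the prior while holding the test characteristics fixed. A small illustrative sketch (the chosen prevalence values are arbitrary):

```python
def bayes_theorem(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    # Marginal likelihood over H and ¬H
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Same test (90% sensitivity, 5% false-positive rate), varying prevalence
for prevalence in (0.001, 0.01, 0.1, 0.5):
    posterior = bayes_theorem(prevalence, 0.9, 0.05)
    print(f"prevalence {prevalence:>5}: P(disease | positive) = {posterior:.4f}")
```

The posterior grows monotonically with the prior: at 0.1% prevalence a positive test implies under 2% probability of disease, while at 50% prevalence it implies about 95%.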
