Probability Theory

Given two events A and B from the sigma-field of a probability space, with P(B) > 0, the conditional probability of A given B is defined as the quotient of the probability of the joint occurrence of A and B (their intersection) and the probability of B:

P(A|B) = P(A ∩ B) / P(B)
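To make the definition concrete, here is a minimal Python sketch that computes P(A|B) as the quotient above by counting outcomes in a finite, equally likely sample space. The two-dice example is illustrative and not taken from the text.

```python
# Minimal sketch of the definition P(A|B) = P(A ∩ B) / P(B),
# illustrated on two fair dice (an assumed example, not from the text).
from itertools import product

# Sample space: all 36 equally likely outcomes of rolling two dice.
omega = list(product(range(1, 7), repeat=2))

A = {(d1, d2) for (d1, d2) in omega if d1 + d2 == 8}  # event A: the sum is 8
B = {(d1, d2) for (d1, d2) in omega if d1 == 3}       # event B: the first die shows 3

p_B = len(B) / len(omega)            # P(B) = 6/36
p_A_and_B = len(A & B) / len(omega)  # P(A ∩ B) = 1/36, only the outcome (3, 5)

p_A_given_B = p_A_and_B / p_B        # the defining quotient
print(p_A_given_B)                   # 0.1666..., i.e. 1/6
```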

In probability theory, conditional probability is a measure of the probability of an event given that (by assumption, presumption, assertion or evidence) another event has occurred.[1] If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A|B), or sometimes P_B(A). For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person has a cold, then they are much more likely to be coughing; the conditional probability of coughing given that the person has a cold might be a much higher 75%.

The concept of conditional probability is one of the most fundamental and important in probability theory.[2] But conditional probabilities can be quite slippery and require careful interpretation.[3] For example, there need not be a causal or temporal relationship between A and B.

P(A|B) may or may not be equal to P(A) (the unconditional probability of A). If P(A|B) = P(A) (or, equivalently, P(B|A) = P(B)), then events A and B are said to be independent: in such a case, knowledge about either event does not change our knowledge about the other. Also, in general, P(A|B) (the conditional probability of A given B) is not equal to P(B|A). For example, if you have dengue you might have a 90% chance of testing positive for dengue. In this case, what is being measured is the probability of A (testing positive) given that B (having dengue) has occurred: P(A|B) = 90%. Alternatively, if you test positive for dengue you may have only a 15% chance of actually having dengue, because most people do not have dengue and the false positive rate for the test may be high. In this case, what is being measured is the probability of the event B (having dengue) given that the event A (testing positive) has occurred: P(B|A) = 15%. Falsely equating the two probabilities causes various errors of reasoning, such as the base rate fallacy. Conditional probabilities can be correctly reversed using Bayes' theorem.
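The reversal follows from Bayes' theorem, P(B|A) = P(A|B) · P(B) / P(A). The sketch below reproduces a figure in the neighbourhood of the 15% quoted above; since the text gives only the two conditional probabilities, the prevalence and false positive rate used here are illustrative assumptions.

```python
# Hedged sketch of reversing a conditional probability with Bayes' theorem,
# using the dengue figures from the text: P(A|B) = 0.90 and P(B|A) ≈ 0.15.
# The prevalence and false positive rate are assumed for illustration only.

p_B = 0.02              # assumed prevalence: P(B), probability of having dengue
p_A_given_B = 0.90      # from the text: P(A|B), chance of testing positive if ill
p_A_given_not_B = 0.10  # assumed false positive rate: P(A | not B)

# Law of total probability: P(A) = P(A|B)P(B) + P(A|not B)P(not B)
p_A = p_A_given_B * p_B + p_A_given_not_B * (1 - p_B)

# Bayes' theorem: P(B|A) = P(A|B) * P(B) / P(A)
p_B_given_A = p_A_given_B * p_B / p_A
print(round(p_B_given_A, 3))  # ~0.155, roughly the 15% quoted in the text
```

With these assumed inputs, most positive tests come from the large healthy majority, which is exactly the base rate effect described above.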
