Unit 4
---
**Key Terms**:
1. **Experiment**: A process that leads to an outcome.
Example: Tossing a coin, rolling a die.
2. **Sample Space (\(S\))**: The set of all possible outcomes.
Example: For a die roll, \(S = \{1, 2, 3, 4, 5, 6\}\).
3. **Event (\(E\))**: A subset of the sample space representing one or more outcomes.
Example: Getting an even number when rolling a die (\(E = \{2, 4, 6\}\)).
---
1. **Classical Approach**:
- Based on equally likely outcomes.
- Formula:
\[
P(E) = \frac{\text{Favorable outcomes}}{\text{Total outcomes}}
\]
Example: Probability of getting a "head" in a coin toss = \( \frac{1}{2} \).
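The classical formula can be checked with a short Python snippet (illustrative only; the helper name `classical_probability` is chosen here, not part of the notes):

```python
from fractions import Fraction

# Classical probability: favorable outcomes / total outcomes,
# assuming all outcomes in the sample space are equally likely.
def classical_probability(event, sample_space):
    return Fraction(len(event & sample_space), len(sample_space))

die = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}
print(classical_probability(even, die))  # 1/2
```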
2. **Empirical Approach**:
- Based on the observed relative frequency of an outcome in repeated trials.
Example: If 45 out of 100 tosses show heads, \(P(\text{head}) = \frac{45}{100} = 0.45\).
3. **Subjective Approach**:
- Based on personal judgment or belief.
Example: The probability of a team winning a match based on expert opinion.
---
1. **Mutually Exclusive Events**: Events that cannot occur together.
Example: Getting a head and a tail on a single coin toss.
2. **Independent Events**: Events where the occurrence of one does not affect the other.
Example: Tossing two coins.
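Independence can be verified by enumerating the sample space of two coin tosses; this sketch (the names `prob`, `A`, and `B` are illustrative) checks that \(P(A \cap B) = P(A)\,P(B)\):

```python
from fractions import Fraction
from itertools import product

space = list(product("HT", repeat=2))  # all 4 outcomes of tossing two coins

def prob(event):
    return Fraction(sum(1 for o in space if event(o)), len(space))

def A(o):
    return o[0] == "H"  # first coin shows heads

def B(o):
    return o[1] == "H"  # second coin shows heads

# Independence: P(A and B) equals P(A) * P(B)
assert prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)
```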
---
### **Conditional Probability**
- **Definition**: The probability of an event \(B\) occurring, given that event \(A\) has already occurred.
- **Formula**:
\[
P(B|A) = \frac{P(A \cap B)}{P(A)}, \quad P(A) \neq 0
\]
Example: Probability of drawing a red card from a deck given it’s a face card.
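The card example can be worked out by enumerating the deck (a quick numerical check, not part of the original notes): of the 12 face cards, 6 are red, so the conditional probability is \( \frac{1}{2} \).

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))  # 52 cards

face = [c for c in deck if c[0] in {"J", "Q", "K"}]             # 12 face cards
red_face = [c for c in face if c[1] in {"hearts", "diamonds"}]  # 6 are red

# P(red | face) = P(red and face) / P(face)
p = Fraction(len(red_face), len(deck)) / Fraction(len(face), len(deck))
print(p)  # 1/2
```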
---
### **Bayes' Theorem**
- **Definition**: Bayes' theorem gives the probability of an event based on prior knowledge of conditions related to the event.
- **Formula**:
\[
P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}
\]
**Applications**:
1. Medical diagnosis (probability of disease given symptoms).
2. Spam detection (probability of an email being spam given certain keywords).
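The medical-diagnosis application can be sketched numerically. The figures below (1% prevalence, 95% sensitivity, 5% false-positive rate) are hypothetical, chosen only to illustrate the formula:

```python
# Hypothetical numbers for illustration:
p_disease = 0.01          # prior P(D): 1% of people have the disease
p_pos_given_d = 0.95      # sensitivity P(+|D)
p_pos_given_not_d = 0.05  # false-positive rate P(+|not D)

# Total probability: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

# Bayes' theorem: P(D|+) = P(+|D)P(D) / P(+)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 4))  # ≈ 0.161
```

Even with a fairly accurate test, the posterior probability is only about 16% because the disease is rare, which is why Bayes' theorem matters in diagnosis.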
---
| **Concept** | **Formula/Description** |
|------------------------------|-----------------------------------------------------|
| Classical Probability        | \(P(E) = \frac{\text{Favorable outcomes}}{\text{Total outcomes}}\)       |
| Empirical Probability        | \(P(E) = \frac{\text{Observed favorable cases}}{\text{Total cases}}\)    |
| Addition Law (Exclusive) | \(P(A \cup B) = P(A) + P(B)\) |
| Addition Law (Non-Exclusive) | \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\) |
| Multiplication Law | \(P(A \cap B) = P(A) \cdot P(B|A)\) |
| Conditional Probability | \(P(B|A) = \frac{P(A \cap B)}{P(A)}\) |
| Bayes’ Theorem | \(P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}\) |
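The addition law for non-exclusive events can be verified by enumerating the two-dice sample space (an illustrative check; the event names `A` and `B` are chosen here):

```python
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), repeat=2))  # two dice: 36 outcomes

def prob(event):
    return Fraction(sum(1 for o in space if event(o)), len(space))

def A(o):
    return o[0] == 6          # first die shows 6

def B(o):
    return o[0] + o[1] == 7   # the sum is 7

# Non-exclusive addition law: P(A or B) = P(A) + P(B) - P(A and B)
lhs = prob(lambda o: A(o) or B(o))
rhs = prob(A) + prob(B) - prob(lambda o: A(o) and B(o))
assert lhs == rhs
print(lhs)  # 11/36
```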
---
### **Binomial Distribution**
The probability of exactly \(r\) successes in \(n\) independent trials, each with success probability \(p\):
\[
P(X = r) = \binom{n}{r} p^r (1-p)^{n-r}, \quad r = 0, 1, 2, \dots, n
\]
Where:
- \(n\): Number of trials
- \(p\): Probability of success in a single trial
- \(r\): Number of successes
- \(\binom{n}{r}\): Combination, given by \(\frac{n!}{r!(n-r)!}\)
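The formula above translates directly into Python using `math.comb` for \(\binom{n}{r}\) (a short sketch, not part of the original notes):

```python
from math import comb

# Binomial pmf: P(X = r) = C(n, r) * p^r * (1 - p)^(n - r)
def binomial_pmf(n, r, p):
    return comb(n, r) * p**r * (1 - p)**(n - r)

# e.g. probability of exactly 2 heads in 4 fair coin tosses
print(binomial_pmf(4, 2, 0.5))  # 0.375
```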
---
### **Poisson Distribution**
The probability of \(r\) events occurring in a fixed interval, when events occur independently at an average rate \(\lambda\):
\[
P(X = r) = \frac{\lambda^r e^{-\lambda}}{r!}, \quad r = 0, 1, 2, \dots
\]
Where:
- \(r\): Number of events
- \(\lambda\): Average number of events per interval (mean)
The Poisson distribution also approximates the binomial when \(n\) is large and \(p\) is small, with \(\lambda = np\). In such cases:
\[
P(X = r) = \frac{\lambda^r e^{-\lambda}}{r!}
\]
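The Poisson formula can be computed with a few lines of Python (illustrative; the rate \(\lambda = 3\) is an arbitrary example):

```python
from math import exp, factorial

# Poisson pmf: P(X = r) = lambda^r * e^(-lambda) / r!
def poisson_pmf(r, lam):
    return lam**r * exp(-lam) / factorial(r)

# e.g. lam = 3 calls per hour on average; probability of exactly 2 calls
print(round(poisson_pmf(2, 3), 4))  # ≈ 0.224
```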
### **Normal Distribution**
The probability density function of a normal random variable with mean \(\mu\) and variance \(\sigma^2\):
\[
f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\]
Where:
- \(x\): Random variable
- \(\mu\): Mean
- \(\sigma^2\): Variance
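The density formula can be evaluated directly (a minimal sketch; note this gives the density at a point, not a probability):

```python
from math import sqrt, pi, exp

# Normal density: f(x) = 1 / sqrt(2 * pi * sigma2) * exp(-(x - mu)^2 / (2 * sigma2))
def normal_pdf(x, mu, sigma2):
    return exp(-(x - mu)**2 / (2 * sigma2)) / sqrt(2 * pi * sigma2)

# The density peaks at the mean: for the standard normal, f(0) = 1/sqrt(2*pi)
print(normal_pdf(0, 0, 1))  # ≈ 0.3989
```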
---
### **Applications**
1. **Binomial**: Quality control, survey results, pass/fail scenarios.
2. **Poisson**: Defects in manufacturing, number of calls in a call center, rare events.
3. **Normal**: Heights, weights, exam scores, natural phenomena.