PS Lecture 19-03-2021
International Burch University
Counting Methods, Independent Trials, Reliability Problems
Suppose we have a shuffled full deck and we deal seven cards. What is the probability that we draw no queens?
It is too difficult to enumerate all 133 million combinations of seven cards. (In fact, you may wonder if 133 million is
even approximately the number of such combinations.)
To solve this problem, we need to develop procedures that permit us to count how many seven-card combinations
there are and how many of them do not have a queen.
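Once the counting tools below are in hand, the queens question reduces to a ratio of two combination counts: hands with no queens are exactly the seven-card subsets of the 48 non-queen cards. A minimal check in Python:

```python
from math import comb

total = comb(52, 7)       # all seven-card hands: 133,784,560
no_queens = comb(48, 7)   # hands drawn entirely from the 48 non-queens
prob = no_queens / total
print(total, round(prob, 4))  # 133784560 0.5504
```

So "133 million" is indeed approximately right, and the probability of drawing no queens is about 0.55.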
The results we will derive all follow from the fundamental principle of counting.
Applying the multiplication principle, the number of ways to order the n objects is equal to the product n(n − 1)(n − 2) ... (3)(2)(1).
Each order is called a permutation, and this product is called the number of permutations of n objects.
Because products of the form n(n -1)(n - 2) ... (3)(2)(1) occur frequently when counting objects, a special symbol n!,
called n factorial, is used to denote this product.
Suppose that k objects will be selected from a set of n objects, where k ≤ n and the k objects will be placed in order
from 1st to kth.
Then there are n choices for the first object, n - 1 choices for the second object, and so on, until there are n - k + 1
choices for the kth object.
Applying the multiplication principle, the number of ways to select and order k objects from a set of n objects is:
n(n - 1)(n - 2) ... (n - k +1)
This product equals n!/(n − k)!, which represents the number of permutations of n objects taken k at a time.
Counting Methods cont.
Combinations
Suppose that k objects will be chosen from a set of n objects, where k ≤ n, but that the k objects will not be put in
order. The number of ways in which this can be done is called the number of combinations of n objects taken k at a
time and is given by the formula n!/(k!(n − k)!).
We will use the symbol C(n, k), read as "n choose k," to denote the number of k-combinations of n objects.
Theorem 1.12
The number of k-permutations of n distinguishable objects is (n)k = n(n − 1)(n − 2) · · · (n − k + 1) = n!/(n − k)!
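Theorem 1.12 can be checked directly; Python's `math.perm` computes the same quantity as the factorial formula:

```python
from math import factorial, perm

n, k = 52, 7
by_formula = factorial(n) // factorial(n - k)  # n!/(n - k)!
print(by_formula, perm(n, k) == by_formula)    # 674274182400 True
```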
Sampling without Replacement
Choosing objects from a collection is also called sampling, and the chosen objects are known as a sample.
A k-permutation is a type of sample obtained by specific rules for selecting objects from the collection.
Sampling without Replacement cont.
In particular, once we choose an object for a k-permutation, we remove the object from the collection and we cannot
choose it again.
Sampling without replacement means that once an item is chosen, it is not returned to the collection; in other words,
you do not replace the first item you choose before choosing the second.
Theorem 1.13
The number of ways to choose k objects out of n distinguishable objects is:
C(n, k) = n!/(k!(n − k)!)
The logic behind this identity is that choosing k out of n elements to be part of a subset is equivalent to choosing n
− k elements to be excluded from the subset.
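The exclusion argument implies the symmetry C(n, k) = C(n, n − k), which is easy to verify numerically:

```python
from math import comb

# Choosing k of n to include equals choosing n - k to exclude.
n = 10
for k in range(n + 1):
    assert comb(n, k) == comb(n, n - k)
print(comb(10, 3), comb(10, 7))  # 120 120
```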
Sampling with Replacement
When an object can be chosen repeatedly, we have sampling with replacement.
Each object can be chosen repeatedly because a selected object is replaced by a duplicate.
Sampling with Replacement cont.
Theorem 1.14
Given m distinguishable objects, there are m^n ways to choose with replacement an ordered sample of n objects.
When drawing a sample from a population, there are many different combinations of people that could be selected.
The formula N^n gives the number of possible samples drawn with replacement, where N is the size of the total
population and n is the number of units being sampled.
For example, when selecting three persons from a population of nine addicts, the sample could have been 1-2-3,
or 4-5-6, or 7-8-9, or any of many other possibilities.
To be exact, in sampling with replacement from this population, there are N^n = 9^3 = 729 different ordered samples
of three addicts that could have been selected.
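The 9^3 = 729 count can be confirmed by enumerating every ordered sample with replacement:

```python
from itertools import product

addicts = range(1, 10)                      # population of nine
samples = list(product(addicts, repeat=3))  # ordered samples, with replacement
print(len(samples))  # 729
```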
Sampling with Replacement cont.
Theorem 1.15
For n repetitions of a subexperiment with sample space S = {s0, . . . , sm−1}, there are m^n possible observation
sequences.
For five subexperiments with sample space S = {0, 1}, how many observation sequences are there in which 0 appears
n0 = 2 times and 1 appears n1 = 3 times?
The set of five-letter words with 0 appearing twice and 1 appearing three times is {00111, 01011, 01101, 01110,
10011, 10101, 10110, 11001, 11010, 11100}.
There are exactly C(5, 2) = 10 such words.
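The ten words can be generated rather than listed by hand, and the count matches "5 choose 2":

```python
from itertools import product
from math import comb

words = ["".join(w) for w in product("01", repeat=5) if w.count("0") == 2]
print(len(words), comb(5, 2))  # 10 10
```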
Sampling with Replacement cont.
Theorem 1.16
The number of observation sequences for n subexperiments with sample space S = {0, 1}, with 0 appearing n0 times
and 1 appearing n1 = n − n0 times, is C(n, n0) = n!/(n0! n1!).
The theorem above can be generalized to subexperiments with m > 2 elements in the sample space.
For n trials of a subexperiment with sample space S = {s0, . . . , sm−1}, we want to find the number of observation
sequences in which s0 appears n0 times, s1 appears n1 times, and so on.
Theorem 1.17
For n repetitions of a subexperiment with sample space S = {s0, . . . , sm−1}, the number of length n = n0 + · · · + nm−1
observation sequences with si appearing ni times is the multinomial coefficient n!/(n0! n1! · · · nm−1!)
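The multinomial coefficient of Theorem 1.17 is a few lines to implement; `multinomial` below is a hypothetical helper name, not something from the lecture:

```python
from math import factorial

def multinomial(*counts):
    """n!/(n0! n1! ... nm-1!) with n = sum of the counts."""
    result = factorial(sum(counts))
    for c in counts:
        result //= factorial(c)
    return result

print(multinomial(2, 3), multinomial(2, 2, 1))  # 10 30
```

With two counts this reduces to the binomial coefficient of Theorem 1.16.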
Independent Trials
• An independent event is an event that has no connection to another event’s chances of happening (or
not happening).
• In other words, the event has no effect on the probability of another event occurring.
• Independent events in probability are no different from independent events in real life.
• Buying a lottery ticket has no effect on having a child with blue eyes.
• When two events are independent, one event does not influence the probability of another event.
• A trial in an experiment is independent if the likelihood of each possible outcome does not change from
trial to trial.
Independent Trials cont.
Suppose you draw cards one at a time from a standard deck of cards without putting the cards back into the deck.
If you draw an ace on the first draw, your chance of drawing an ace on the second draw changes from 4/52 to 3/51.
We now apply the counting methods to derive probability models for experiments consisting of independent
repetitions of a subexperiment.
We start with a simple subexperiment in which there are two outcomes: a success occurs with probability p;
otherwise, a failure occurs with probability 1− p.
Theorem 1.18
The probability of n0 failures and n1 successes in n = n0 + n1 independent trials is
P[Sn0,n1] = C(n, n1) (1 − p)^(n−n1) p^n1 = C(n, n0) (1 − p)^n0 p^(n−n0)
Independent Trials cont.
The second formula in this theorem is the result of multiplying the probability of n0 failures in n trials by the number
of outcomes with n0 failures.
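Theorem 1.18 in code, as a sketch (the function name is mine):

```python
from math import comb

def binomial_prob(n, n1, p):
    """Probability of n1 successes in n independent trials."""
    return comb(n, n1) * p**n1 * (1 - p)**(n - n1)

# Sanity checks: C(5,3)/2^5 = 10/32, and the probabilities over all
# possible success counts must sum to 1.
print(binomial_prob(5, 3, 0.5))                         # 0.3125
print(sum(binomial_prob(5, k, 0.3) for k in range(6)))  # 1.0 up to rounding
```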
Now suppose we perform n independent repetitions of a subexperiment for which there are m possible outcomes
for any subexperiment.
That is, the sample space for each subexperiment is {s0, . . . , sm−1} and every event in one subexperiment is
independent of the events in all the other subexperiments.
Therefore, in every subexperiment the probabilities of corresponding events are the same and we can use the
notation P[sk] = pk for all of the subexperiments.
In the probability tree of the experiment, each node has m branches and branch i has probability pi. The probability
of an experimental outcome is just the product of the branch probabilities encountered on a path from the root of
the tree to the leaf representing the outcome.
Independent Trials cont.
For example, the experimental outcome s2 s0 s3 s2 s4 occurs with probability p2 p0 p3 p2 p4. We want to find the
probability of the event Sn0,...,nm−1 = {s0 occurs n0 times, . . . , sm−1 occurs nm−1 times}
Next, we observe that any other experimental outcome that is a reordering of the preceding sequence has the same
probability because on each path through the tree to such an outcome there are ni occurrences of si .
As a result, P[Sn0,...,nm−1] = M p0^n0 p1^n1 · · · pm−1^nm−1, where M, the number of such outcomes, is the multinomial coefficient
Theorem 1.19
A subexperiment has sample space S = {s0, . . . , sm−1} with P[si] = pi.
For n = n0 + · · · + nm−1 independent trials, the probability of ni occurrences of si, i = 0, 1, . . . , m − 1, is
P[Sn0,...,nm−1] = (n!/(n0! · · · nm−1!)) p0^n0 · · · pm−1^nm−1
Independent Trials cont.
A call that arrives at a telephone switch is either a voice call with probability pv, a fax call with probability pf, or a
modem call with probability pm.
Let Snv,nf,nm be the event that, out of n observed calls, there are nv voice calls, nf fax calls, and nm modem calls.
Applying the previous theorem to this case, the formula becomes:
P[Snv,nf,nm] = (n!/(nv! nf! nm!)) pv^nv pf^nf pm^nm
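A numeric sketch of the call-classification example; the probabilities (0.7 voice, 0.2 fax, 0.1 modem) and the counts are illustrative assumptions, not values from the lecture:

```python
from math import factorial

p = {"voice": 0.7, "fax": 0.2, "modem": 0.1}  # assumed call-type probabilities
counts = {"voice": 5, "fax": 2, "modem": 1}   # assumed counts in n = 8 calls

n = sum(counts.values())
coeff = factorial(n)        # becomes the multinomial coefficient n!/(5! 2! 1!)
prob = 1.0
for call_type, k in counts.items():
    coeff //= factorial(k)
    prob *= p[call_type] ** k

print(coeff, round(coeff * prob, 4))  # 168 orderings, probability about 0.1129
```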
Reliability Problems
It is well known that a reliability program increases the initial cost of every device, instrument, or system, and that
reliability decreases as the complexity of the system increases.
In such complex situations, the reliability of a product or service is best assured when it is designed in by the design
engineer and built in by the production engineer, rather than verified afterwards through external experiments by a
reliability engineer.
Once the product is accepted by the buyer and put into operation, either by itself or as part of a larger assembly, the
quality of its performance is judged by how long the product gives useful service; this is what the word
"reliability" indicates.
Reliability is probability that a component, device, equipment or a system will perform its intended function
adequately for a specific period of time under a given set of conditions. According to the definition, the basic
elements of reliability are probability, adequate performance, duration of adequate performance and operating
conditions.
Reliability Problems cont.
Many objects consist of several parts or elements. From the reliability point of view, an element is any component or
object that is considered in the investigated case as a whole and is not decomposed into simpler objects.
An element can be a lamp bulb, the connecting point of two electric components, a screw, an oil hose, a piston in an
engine, and even the complete engine in a diesel locomotive.
Also, the individual operations or their groups in a complex manufacturing or building process can be considered as
elements.
Each of them can fail, and every additional element that can fail increases the probability that the whole system fails.
Independent trials can also be used to describe reliability problems in which we would like to calculate the
probability that a particular operation succeeds.
The operation consists of n components and each component succeeds with probability p, independent of any other
component. Let Wi denote the event that component i succeeds.
Reliability Problems cont.
The resultant reliability depends on the reliability of the individual elements and their number and mutual
arrangement.
A suitable arrangement can even increase the reliability of the system.
Two basic systems are series and parallel, and their combinations are also possible.
Reliability Problems - Components in series
From the reliability point of view, a series system (fig. a) is one that fails if any of its elements fails.
For example, a motorcycle cannot go if any of the following parts cannot serve: engine, tank with fuel, chain, frame,
front or rear wheel, etc., and, of course, the driver.
All these elements are thus arranged in series; screws and many other parts are elements as well. If the failure of any
component does not depend on any other component, the reliability of the system is obtained simply as the product
of the reliabilities of the individual elements.
A practical conclusion is that “the reliability of a series system is always lower than the reliability of any of its
components”
A parallel system, by contrast, fails only if all of its elements fail. An example is a four-cylinder engine: it will fail only
if all four cylinders are unable to run. If one, two, or even three cylinders do not work, the fourth one is still able to
put the car into motion (though with significantly reduced power).
The probability of a simultaneous occurrence of mutually independent events equals the product of individual
probabilities.
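The product rule for series reliability can be sketched as follows; the component reliabilities are assumed values for illustration:

```python
# A series system works only if every component works; for independent
# components its reliability is the product of the component reliabilities.
reliabilities = [0.99, 0.97, 0.95]  # assumed values

system = 1.0
for r in reliabilities:
    system *= r

print(round(system, 6))  # 0.912285 -- below every individual component
```

This illustrates the conclusion above: a series system is always less reliable than its weakest component.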
We can analyze complicated combinations of components in series and in parallel by reducing several components
in parallel or components in series to a single equivalent component.
The probability of success for the first system is 1 − (1 − p^2)^2: the system does not work only if both parallel
branches do not work; each branch consists of two serial components and works with probability p^2, so a branch
does not work with probability 1 − p^2, and the probability that the system works is 1 minus the probability that
both branches fail.
The probability of success for the second system is (1 − (1 − p)^2)^2, by a direct argument for the parallel pairs
similar to the first system.
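The two combined arrangements can be compared numerically. This sketch assumes four identical components with success probability p, with the first system two serial pairs placed in parallel and the second two parallel pairs placed in series (an assumption about the figures, which are not reproduced here):

```python
p = 0.9  # assumed component reliability

# First system: two serial pairs placed in parallel.
branch = p * p                 # a serial pair works with probability p^2
first = 1 - (1 - branch) ** 2  # fails only if both branches fail

# Second system: two parallel pairs placed in series.
pair = 1 - (1 - p) ** 2        # a parallel pair fails only if both elements fail
second = pair ** 2             # works only if both pairs work

print(round(first, 4), round(second, 4))  # 0.9639 0.9801
```

For identical components, the series-of-parallel-pairs arrangement is the more reliable of the two.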
Thank you.