Department of Mathematics, Statistics
and Computer Science
Wim Ruitenburg's Fall 2010 MATH 1300-101
Last updated: November 2010
Comments and suggestions: Email wimr@mscs.mu.edu
Book, chapter 15 on probability, plus extra notes
Probability from chapter 15
From the chapter we can learn:
- The sample space is the collection of possible outcomes.
Each time the probabilistic experiment is performed, exactly one outcome occurs.
- Each potential outcome has a probability value, which is a value of at
least 0 and at most 1.
- The sum of all probabilities of all outcomes equals 1.
- An event is a collection of possible outcomes.
Some rules for computing the number of elements of a sample space are:
- The multiplication rule (page 561).
- Counting permutations (page 565).
- Counting combinations (page 565).
These formulas are particularly useful when the probability space is
equiprobable (page 569).
In class we did two examples:
- When we roll 3 fair dice, what is the probability that none of the 3
dice shows a 6? (Use the product rule.)
- When we roll 3 fair dice, what is the probability that at least one of
the 3 dice shows a 6? (Use the previous result.)
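Worked out: each die fails to show a 6 with chance 5/6, so by the product rule
the chance that none of the 3 dice shows a 6 is
(5/6) x (5/6) x (5/6) = 125/216, close to 0.5787.
The chance that at least one of the 3 dice shows a 6 is then
1 - 125/216 = 91/216, close to 0.4213.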
Taking small risks often
Here is a special limit case of the product rule.
Suppose we take the same small risk of, say, 1/n to our lives, and take this
risk n independent times.
What is our chance to survive?
- Suppose we take the same small risk of, say, 1/6 to our lives, and take
this risk 6 times.
What is the chance to survive?
By the multiplication rule we multiply (1 - 1/6) exactly 6 times by
itself to find the answer:
- (1 - 1/6)^6 is close to 0.334898, or about 33.5%.
- Suppose we take the same small risk of, say, 1/52 to our lives, and take
this risk 52 times.
What is the chance to survive?
By the multiplication rule we multiply (1 - 1/52) exactly 52 times by
itself to find the answer:
- (1 - 1/52)^52 is close to 0.364314, or about 36.4%.
- Suppose we take the same small risk of, say, 1/10000 to our lives, and
take this risk 10000 times.
What is the chance to survive?
By the multiplication rule we multiply (1 - 1/10000) exactly 10000 times by
itself to find the answer:
- (1 - 1/10000)^10000 is close to 0.367861, or about 36.8%.
The answers converge to a rather famous number.
Let n be some very big number.
Suppose we take the same small risk of, say, 1/n to our lives, and take this
risk n times.
What is the chance to survive?
The formula looks like (1 - 1/n)^n.
For big n this value approaches the inverse of the so-called Euler number e.
This number e is close to 2.71828182846.
So the `survival' answer goes to
- 1/e is close to 0.367879, or about 36.8%.
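A few lines of Python (a quick check, not part of the course notes) reproduce
the numbers above and the limit 1/e:

    import math

    # survival chance after taking a risk of 1/n exactly n independent times
    def survival(n):
        return (1 - 1 / n) ** n

    for n in [6, 52, 10000]:
        print(n, survival(n))     # 0.334898..., 0.364314..., 0.367861...

    print("1/e =", math.exp(-1))  # 0.367879...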
The birthday paradox
During the class we had about 60 people.
- What is the chance that some of us have the same birthday?
No, it is not about 0.13.
- It is easier to first compute the chance that all our
birthdays are different.
Recall that we use a formula that looks like
(365/365) * (364/365) * (363/365) * and so on * ((365 - 58)/365) * ((365 - 59)/365),
with one factor for each of the 60 people.
The chance that we all have different birthdays is less than 1%.
So the chance that at least two of us share a birthday is therefore more than
100% - 1% = 99%.
- When there are about 23 people, the chance that two of them share the
same birthday is about 50%.
- Easy exercise:
Suppose the class had 367 people.
What is the chance that some of us have the same birthday?
What are the hidden assumptions about the distribution of birthdays of people?
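A short Python check of the birthday numbers; the hidden assumption asked
about above, 365 equally likely and independent birthdays, is built into the
formula:

    def all_different(k):
        # chance that k people all have different birthdays,
        # assuming 365 equally likely, independent birthdays
        p = 1.0
        for i in range(k):
            p *= (365 - i) / 365
        return p

    print(1 - all_different(23))  # about 0.507, the 50% point
    print(1 - all_different(60))  # about 0.994, more than 99%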
The other child
Suppose a parent has two children.
- If the oldest is a boy, what are the chances that the other is a girl?
- If, instead, we only know that one of the children is a boy, what are
the chances that the other is a girl?
Again there are hidden assumptions.
In this case, the `reasonable' hidden assumption is that each next child has an
equal and independent chance of being a boy or a girl.
Assuming this, the two chances above are 1/2 and 2/3.
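A tiny Python enumeration (a sketch, not from the notes) makes the two answers
concrete under that assumption:

    # the four equally likely birth orders, oldest child listed first
    pairs = ["BB", "BG", "GB", "GG"]

    oldest_is_boy = [p for p in pairs if p[0] == "B"]  # BB, BG
    at_least_one_boy = [p for p in pairs if "B" in p]  # BB, BG, GB

    # chance that the other child is a girl in each situation
    print(sum("G" in p for p in oldest_is_boy) / len(oldest_is_boy))        # 1/2
    print(sum("G" in p for p in at_least_one_boy) / len(at_least_one_boy))  # 2/3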
Three Doors
This problem is also referred to as the Monty Hall problem.
The game host shows us three doors, a red door, a white door, and a blue door.
Behind one of these doors there is a big prize.
We are asked to pick one of the three doors.
Once our choice is final, we win if the prize is behind our door.
Obviously, our chance of winning is one-third, or 1/3.
What happens with our chances when the game show host adapts the rules a bit?
Suppose we have picked a door, say the red door.
Then the game host opens another door, say the white door, and reveals that
there is no prize behind the white door.
Now we are offered the options of sticking with the red door or switching to
the blue door.
Should we switch, or should we stay?
What are our chances for the red door, what for the blue door?
- Here is the standard answer.
There is a 1/3 chance that the prize is behind the red door.
There is a 2/3 chance that the prize is behind the white door or behind the
blue door.
Although the show host has ruled out the white door, the prize is still behind
the white or the blue door with chance 2/3.
The difference is that we also know that the white door covers no prize.
So the chance for the prize being behind the blue door is 2/3.
So switch to the blue door! (A simulation sketch after these remarks checks
the numbers.)
- Many people have an intuitive resistance to the answer above.
This may be because of `hidden' assumptions.
For example, we assume that the three doors are not made of glass, for
otherwise we could see the prize through them.
We also assume that the show host is telling the truth.
This and similar restrictions are usually perceived as obvious from the
context.
Other assumptions may be less obvious.
- Suppose the show host always makes the observation about there
being another door without a prize behind it, for all participants.
Then the argument above applies.
- Suppose the show host randomly decides whether to make the observation
about there being another door without a prize behind it, independently of
where the prize is and of our choice.
Then the argument above still applies.
- Suppose the show host only makes the observation about there
being another door without a prize behind it, for participants whose original
choice was correct.
Then the above argument is obviously false: Never change.
- Suppose this is the first time the show host has ever made the
observation about there being another door without a prize behind it.
Then it depends on whether or not the show host is trying to trick us.
We may not know.
- A little extra information goes a long way.
For example, suppose we are in the standard situation of the three doors
problem, where the show host always makes the observation about there
being another door without a prize behind it.
Now before we make our first choice of one of the three doors, we discover that
there is no prize behind the blue door without anyone else realizing that we
know that there is no prize behind the blue door.
Question: Which door should we first pick, and why?
Remarkably, the answer is to deliberately pick the blue (bad) door.
Can you explain how we can thus guarantee winning the prize?
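Here is a simulation sketch in Python (mine, not the course's) of the standard
situation, where the host always opens a no-prize door that we did not pick;
it confirms the 1/3 versus 2/3 split:

    import random

    def play(switch, trials=100000):
        wins = 0
        for _ in range(trials):
            doors = ["red", "white", "blue"]
            prize = random.choice(doors)
            pick = random.choice(doors)
            # the host opens a door that is neither our pick nor the prize
            opened = random.choice([d for d in doors if d != pick and d != prize])
            if switch:
                pick = next(d for d in doors if d != pick and d != opened)
            wins += pick == prize
        return wins / trials

    print(play(switch=False))  # staying wins about 1/3 of the time
    print(play(switch=True))   # switching wins about 2/3 of the time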
Let us change the problem in a seemingly irrelevant way as follows.
As before, we are asked to pick one of the three doors.
Once our choice is final, we win if the prize is behind our door.
Obviously, our chance of winning is 1/3.
Suppose we have picked the red door.
Then a storm blows through the hall, and one of the doors is randomly blown
open.
Suppose that the white door blew open.
Behind it there is no prize.
Now we are offered the options of sticking with the red door or switching to
the blue door.
Should we switch, or should we stay?
- Answer: It makes no difference.
The chance that the prize is behind the red door is 1/2.
The chance that the prize is behind the blue door is 1/2.
A door flew open randomly, so the search has been reduced to the remaining two
doors, without preference.
It is only coincidence that the door with the prize did not fly open.
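The storm version can be checked the same way; in the simulation we must
discard the runs where the randomly opened door is our own or reveals the
prize, since the question assumes neither happened (again just a sketch):

    import random

    def storm(switch, trials=200000):
        wins = kept = 0
        for _ in range(trials):
            doors = ["red", "white", "blue"]
            prize = random.choice(doors)
            pick = "red"                  # our first choice, as in the story
            blown = random.choice(doors)  # the storm has no preference
            if blown == pick or blown == prize:
                continue                  # not the situation described above
            kept += 1
            if switch:
                pick = next(d for d in doors if d != pick and d != blown)
            wins += pick == prize
        return wins / kept

    print(storm(switch=False))  # about 1/2
    print(storm(switch=True))   # about 1/2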
Spinning Wheels
Page 580 illustrates some spinning wheels.
We are interested in finding probabilities and expected values of such spinning
wheels.
When is the game fair?
When is the game unfair in our favor?
When is the game unfair in favor of the wheel owner?
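The notes do not reproduce the wheels from page 580, but the computation is
the same for any wheel. Here is a sketch with a made-up wheel (all numbers
invented for illustration): half of the wheel pays 1 dollar, a quarter pays 3
dollars, a quarter pays nothing, and a spin costs 1.50 dollars.

    # a hypothetical wheel: (probability of the region, payout in dollars)
    wheel = [(0.50, 1), (0.25, 3), (0.25, 0)]
    cost = 1.50  # invented cost per spin

    expected_payout = sum(p * v for p, v in wheel)
    print(expected_payout)         # 1.25 dollars per spin
    print(expected_payout - cost)  # -0.25 dollars: in favor of the wheel owner

The game is fair exactly when the expected payout equals the cost of a spin;
it is unfair in our favor when the expected payout exceeds the cost.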
The Best of Three Dice
Suppose we have three fair but unusual dice, as follows.
Instead of having six sides with values 1, 2, 3, 4, 5, and 6, their sides are
allowed to have other values.
We have a red, a white, and a blue one.
The red one has numbers 2, 2, 2, 2, 6, and 6 on its six sides.
The white one has numbers 1, 1, 5, 5, 5, and 5 on its six sides.
The blue one has numbers 3, 3, 3, 3, 3, and 3 on its six sides.
When we list the six sides in a table, we have a situation like this:

    die     side 1  side 2  side 3  side 4  side 5  side 6
    red        2       2       2       2       6       6
    white      1       1       5       5       5       5
    blue       3       3       3       3       3       3
The following game is played between two players.
The first player picks one of the dice.
Then the second player picks another.
Next, both players throw their dice simultaneously.
Whoever throws the higher number wins.
With the three dice above, draws are not possible.
Question: Which one of these three dice is best?
- Let us compare dice two at a time.
- Red versus white.
When we toss red versus white, there is a 1/3 chance for red to throw 6, and so
certainly win.
If red throws 2 (chance 2/3), there still is a 1/3 chance for white to throw 1,
and red still wins.
So the chances for red to beat white are (1/3) + (2/3) x (1/3) = 5/9.
So red is better than white.
- White versus blue.
Blue always throws a 3.
There is a 2/3 chance that white throws a 5, and wins.
Otherwise, white throws a 1 and loses.
So the chances for white to beat blue are 2/3.
So white is better than blue.
Are we done yet?
If red is better than white, and white is better than blue, shouldn't the red
die be the winner?
Let us check to make sure.
- Blue versus red.
Blue always throws 3.
If red throws 2 (chance 2/3), then blue wins.
Otherwise red wins.
So the chances for blue to beat red are 2/3.
So blue is better than red.
- There is no `best' one among the three dice.
On average, red beats white, white beats blue, and blue beats red.
In the game above, it is good to be the second player.
No matter which color the first player picks, the second player can always
pick another color which is better than the first color.
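All three win chances can be confirmed by brute force over the 36 equally
likely pairs of faces (a quick Python sketch, not from the book):

    from itertools import product

    red = [2, 2, 2, 2, 6, 6]
    white = [1, 1, 5, 5, 5, 5]
    blue = [3, 3, 3, 3, 3, 3]

    def beats(a, b):
        # fraction of the 36 face pairs where die a shows the higher number
        return sum(1 for x, y in product(a, b) if x > y) / 36

    print(beats(red, white))   # 20/36 = 5/9
    print(beats(white, blue))  # 24/36 = 2/3
    print(beats(blue, red))    # 24/36 = 2/3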
Winning a Losing Game
Suppose you play a color at roulette.
You can pick either black or red.
Your chance of doubling your money is 18/38; your chance of losing your money
is 20/38.
- If you play the game 3800 times by each time betting 1 dollar on a
color, you should expect to win 1 dollar about 1800 times and lose 1 dollar
about 2000 times.
So on balance you lose about 200 dollars.
The casino wins 200 dollars.
- Instead of repeatedly playing for the same amount, let us adapt our
gambling strategy as follows.
- The first time you bet some amount, say 5 dollars, on a color.
If you win, you get 10 dollars, and you stop with an overall profit of 5
dollars.
Otherwise, your total loss is 5 dollars, and you continue as follows.
- The second time you double your bet to 10 dollars.
If you win, you get 20 dollars, and you stop with an overall profit of 20 minus
15 is 5 dollars.
Otherwise, your total loss is 15 dollars, and you continue as follows.
- The third time you double your bet to 20 dollars.
If you win, you get 40 dollars, and you stop with a profit of 40 minus 35 is 5
dollars.
Otherwise, your total loss is 35 dollars, and you continue as follows.
- The fourth time you double your bet to 40 dollars.
If you win, you get 80 dollars, and you stop with a profit of 80 minus 75 is 5
dollars.
Otherwise, your total loss is 75 dollars, and you continue as follows.
- The fifth time you double your bet to 80 dollars.
If you win, you get 160 dollars, and you stop with a profit of 160 minus 155 is
5 dollars.
Otherwise, your total loss is 155 dollars, and you continue as follows.
- The sixth time you double your bet to 160 dollars.
If you win, you get 320 dollars, and you stop with a profit of 320 minus 315 is
5 dollars.
Otherwise, ......
- The seventh time you double your bet to 320 dollars.
If you win, you get 640 dollars, and you stop with ......
- The eighth time you double your bet to 640 dollars. ......
The chance that you keep losing is very small, so there is a very big chance
that you will win 5 dollars.
The strategy is very dangerous, because there is a small risk of a gigantic
loss, when there is no more opportunity to double your bet.
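A simulation of the doubling strategy shows both sides of this trade-off; the
limit of eight doublings is an assumption for illustration, standing in for a
table limit or an empty wallet:

    import random

    def martingale(base=5, max_rounds=8, p_win=18/38):
        # net result of one run of the doubling strategy at roulette
        bet, lost = base, 0
        for _ in range(max_rounds):
            if random.random() < p_win:
                return base  # one win: stop with 5 dollars overall profit
            lost += bet
            bet *= 2
        return -lost         # out of doublings: a loss of 1275 dollars

    results = [martingale() for _ in range(100000)]
    print(sum(r > 0 for r in results) / len(results))  # win chance, about 0.994
    print(sum(results) / len(results))                 # average is still negative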
Check Swapping
Suppose we have a stack of checks with denominations of 1 dollar, 2
dollars, 4 dollars, 8 dollars, 16 dollars, 32 dollars, 64 dollars, and 128
dollars.
(We rather arbitrarily stop at 128; you may change the problem by extending the
pile of possible checks with many more doublings.)
We also have two blank envelopes.
In each of the envelopes I put one check such that, with equal probability, I
either pick 1 and 2, or 2 and 4, or 4 and 8, or 8 and 16, or 16 and 32, or
32 and 64, or 64 and 128.
You can see the two envelopes.
You do not know which two consecutive checks I picked.
You know that one envelope contains twice the amount of the other, but you
don't know which one.
- Suppose we have the two blank envelopes in front of us.
Now pick one of the envelopes and open it, say we pick the left one.
Let us assume it contains a check for 4 dollars.
So now you know that the left envelope holds 4 dollars, while the other one
remains unknown.
What is in the other envelope?
With chance 1/2 it contains a 2 dollar check, and with chance 1/2 it contains
an 8 dollar check.
So if you were allowed to switch checks, then your expected return, the
expected value, is (1/2) x 2 + (1/2) x 8 = 5 dollars, which is 25 percent
above the 4 dollars of your original choice.
So switch if you are allowed to!
If your originally chosen check shows any amount other than 128 dollars then,
by a similar argument, switch and on average expect a profit of 25 percent, or
more.
- In the scenario above, if you had only opened the right envelope, with
its 2 dollar check or 8 dollar check, you would also have wanted to switch,
with an expected value 25 percent above what you have with your original
choice.
So if two parties had each picked up and opened different envelopes, then they
would both want to switch with the other, each expecting a 25 percent profit
on average.
- Suppose we start all over, and again have two blank envelopes in front of
us as before, without knowing anything about the contents except that one of
the checks has twice the value of the other.
Now pick one of the envelopes, say you pick the left one.
Do not open it.
We mark the chosen envelope with an x, where x stands for the amount on its
check, still unknown to you.
Would it benefit you to switch envelopes?
It is not likely that you picked the check with the maximal amount of 128
dollars.
Assuming that, you may expect that the other check holds, with probability
1/2, an amount of at least x/2, and, with probability 1/2, an amount of 2x.
So on average it holds at least (1/2) x (x/2) + (1/2) x 2x = 5x/4.
So it appears reasonable to switch.
Let us switch.
We mark the change in choice by putting a y on the newly chosen envelope,
where y stands for the amount on its check, still unknown to you.
You may expect an increase in return by about 25 percent.
However, would it benefit you to switch envelopes again, and return to your
first choice?
If we believed in the switch from the check with x dollars to the check with y
dollars, then by the same argument we ought to believe in switching back.
You end up where you started, with the check marked x.
How can this be?
The reason is that we carelessly assumed that we could ignore the possibility
that the check in your chosen envelope happens to be the one of 128 dollars.
- The game can be made much more complicated.
Suppose we start again from the beginning with two checks in two envelopes.
You pick an envelope, say the left one, and open it.
Assume it contains a check for 64 dollars.
So now you know that your envelope holds 64 dollars.
Another student receives the other envelope, and looks inside.
Before you can say whether or not you want to switch envelopes, the other
student says that she would like to switch.
Would it be beneficial to agree with her and switch?
Your original argument for switching was that the other envelope has equal
chances to contain either a 32 dollar check or a 128 dollar check, which
averages to (1/2) x 32 + (1/2) x 128 = 80 dollars.
However, if the owner of the other envelope wants to switch, then it stands to
reason that she did not see the maximal amount of 128 dollars in her envelope.
You probably should not switch.
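A simulation of the whole game (a sketch of the setup above; the names are
mine) separates the two claims: after seeing a middle amount such as 4
dollars, switching does pay 25 percent on average, yet switching blindly gains
nothing, because the rare 128 dollar case is exactly costly enough to cancel
all the 25 percent gains:

    import random

    pairs = [(1, 2), (2, 4), (4, 8), (8, 16), (16, 32), (32, 64), (64, 128)]

    def deal():
        # pick a pair of consecutive checks, then an envelope, at random
        low, high = random.choice(pairs)
        return random.choice([(low, high), (high, low)])  # (chosen, other)

    # switching after seeing 4 dollars: the other check averages 5 dollars
    seen4 = [o for c, o in (deal() for _ in range(200000)) if c == 4]
    print(sum(seen4) / len(seen4))  # about 5, a 25 percent gain

    # switching blindly: no gain at all
    deals = [deal() for _ in range(200000)]
    print(sum(c for c, o in deals) / len(deals))  # the same average ...
    print(sum(o for c, o in deals) / len(deals))  # ... as this one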