**By Robert Seawright, Proprietor, Above the Market**

As illustrated (see image attached), you are faced with three urns, each containing 2000 balls. A has 2 reds and the rest black; B has 20 blues and the rest black; C contains 1 red, 10 blues and the rest black. You may reach into the urn of your choice and remove a ball at random. If you draw red, you get $1000; if you draw blue, you get $100; if you draw black, you get nothing.

Which urn do you pick and why?

Source: The Big Questions (from a paper by Armen Alchian, with the exercise attributed to Harry Markowitz)

As you can see, the urns are arranged such that on average you’ll win exactly $1 regardless of which urn you pick. Therefore, a wholly rational, risk-neutral person (in a vacuum) should be indifferent among the urns, while anyone with a strict risk preference should never choose C, since C’s payoffs are simply a 50/50 blend of A’s and B’s. However, none of us is ever wholly rational, and our preferences have various components, some rational and some not. For example, if I have a pressing and immediate need for $1000 ($100 won’t do it), I would (quite rationally) choose A.
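As a quick check (a minimal Python sketch of my own, not part of the original puzzle), all three urns do share the same $1 expected payout:

```python
# Ball counts per urn (out of 2000); red pays $1000, blue pays $100.
urns = {
    "A": {1000: 2},            # 2 reds
    "B": {100: 20},            # 20 blues
    "C": {1000: 1, 100: 10},   # 1 red, 10 blues
}

for name, counts in urns.items():
    ev = sum(prize * n for prize, n in counts.items()) / 2000
    print(f"Urn {name}: expected payout ${ev:.2f}")  # $1.00 for every urn
```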

But for now, let’s stick to the scenario we’ve been given without adding anything to it.

Choosing B offers the lowest variance and provides the best chance to collect *something*, so the risk-averse should look there. But since the likelihood of winning anything isn’t all that high, the bigger prize might be more attractive: A offers the best chance to win $1000, so gamblers should look there. C makes no sense to utility maximizers (a 50/50 mix of A and B provides the same odds as C) but it does split the difference. So C could work for those who hate making decisions (think *Sophie’s Choice* for a particularly excruciating example) or perhaps for a fairly typical couple with a gambling husband and a risk-averse wife (the other way around doesn’t happen much in real life).
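The variance claim is easy to verify. This sketch (mine, not from the post) computes the payout variance for each urn as Var(X) = E[X²] − E[X]²:

```python
# Payout variance per urn, 2000 balls each; red pays $1000, blue pays $100.
urns = {"A": {1000: 2}, "B": {100: 20}, "C": {1000: 1, 100: 10}}
N = 2000

for name, counts in urns.items():
    ev = sum(p * n for p, n in counts.items()) / N        # $1 for every urn
    ev2 = sum(p * p * n for p, n in counts.items()) / N   # E[X^2]
    print(f"Urn {name}: variance {ev2 - ev ** 2:.0f}")
# Urn A: variance 999
# Urn B: variance 99
# Urn C: variance 549
```

B is indeed the lowest-variance choice, and C sits between the two.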

In general, our biases are such that we want both to win big and to avoid losing. That’s why lottery drawings and slot machines are designed the way they are. They typically feature a single very large prize, many small prizes, and not too many intermediate prizes (plus a very significant cut for the house). Last year, in an attempt to bring in more players, the Powerball lottery increased both the size of the grand prize (from $20 to $40 million minimum) and the odds of winning something (from 1/35 to 1/32).

As an aside, note that our inherent biases, as usual, have more and deeper impacts than we understand or expect. The Wason selection tasks show that even the symbols used to present the choices offered create variance in how rational (as typically defined) the responses are.

If nothing else, this exercise demonstrates how difficult it can be to translate a mathematical/probability problem into a set of *human* actions. The probabilities are what they are, but the farther you move the problem description from the realm of probability and toward real human concerns, the less well it matches and the less useful it seems.

If some of you would like to explain which urn you’d pick and why in the comments, I’d appreciate it.

Great post. Thanks Robert.

I picked B for highest odds. Marilyn Vos Savant had a similar mental exercise many years ago:

You are on a game show and there are three doors. Only one has a prize. You pick door number one, and the host opens door two to show you it is empty. He then gives you the option to stay with your original choice or switch to door three. What should you do?

A. Door number three now has a two-in-three chance of containing the prize because the host has given you information (door two is empty) that you didn’t have when you made your choice; your choice (number one) still has a one-in-three chance.

Except that Marilyn was wrong. The original distribution of the prize doesn’t change simply because one of the doors is opened. Your odds have gone up to 50/50 regardless of which door you have.

Rich is correct. This is a version of the Monty Hall problem based on the TV program Let’s Make a Deal.

In the program, there are three doors, one has a good prize while the other two have booby prizes. You pick door 1. Monty opens door 2 and it doesn’t have the good prize. He gives you a choice of switching to door 3 or staying with door 1. Switching always has twice as much chance of winning the good prize.

I think this is the best way to think about it: You pick door 1. Instead of opening door 2, Monty gives you the option of staying with door 1 or switching to the other TWO doors. So who wouldn’t take two doors instead of one? Then Monty says he can’t let you have two doors, so he’ll take one back. But he always takes the door with a booby prize, because he knows where the good prize is. So by switching you have a 2/3 chance of winning.

Are you sure about that? Isn’t this a case of the gambler’s fallacy? The first choice is a pure 33% chance of winning. But the second bet is a totally independent choice with 50% odds of winning. Your second choice is completely independent of the first. It’s like rolling a six-sided die with three numbers and trying to guess the number, then swapping the three-number die out for a two-number die. The odds change because the circumstances are different.

I just did a search and found an entry in Wikipedia under the Monty Hall Problem. Savant’s column was about the TV program. Even mathematicians didn’t believe that she was correct: “Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation confirming the predicted result (Vazsonyi 1999).”

Here is how it is explained in Wikipedia:

Contestants who switch have a 2/3 chance of winning the car, while contestants who stick have only a 1/3 chance. One way to see this is to notice that there is a 2/3 chance that the initial choice of the player is a door hiding a goat. When that is the case, the host is forced to open the other goat door, and the remaining closed door hides the car. “Switching” only fails to give the car when the player had initially picked the door hiding the car, which only happens one third of the time.

Quite sure. This is the classic example of Bayesian thinking — updating a posterior probability by adding new information. It’s not immediately intuitive, but it’s true. A few simulations will show it.
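For anyone who wants to run one of those simulations, here is a minimal sketch (mine, not from the thread). The key line is the host’s move: he knowingly opens a losing door that isn’t your pick.

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """One round of Monty Hall; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    # The host deliberately opens a door that is neither the pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
switch = sum(play(True, rng) for _ in range(n)) / n
print(f"stay: {stay:.3f}, switch: {switch:.3f}")  # roughly 0.333 vs 0.667
```

Switching wins about two-thirds of the time, exactly as the Wikipedia passage above explains.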

I didn’t believe it. I also ran a simulation. Now it seems almost obvious. A great problem!

Btw, a great way to educate yourself is to play it as a game for a few rounds. Try to find someone who doesn’t believe it to play against.

What are the odds that the prize is behind door 1 or door 3 if you open door 2 and the prize is there?

If Monty opens door 2, the good prize will not be there. That is the critical point. He is not randomly picking a door. He always picks the booby prize. That’s why you can think of trading your initial pick for the other two doors, before he opens any doors. Opening the door before or after you switch doesn’t matter because he is not doing it randomly.

I wrote about the Monty Hall problem here: http://rpseawright.wordpress.com/2011/11/17/lets-make-a-deal/

I’d like to relate Urn A to trend trading and Urn B to swing trading, though with 2000 balls the odds only very generally apply. Trend trading has a few large winners that more than make up for many small losses. You need a high tolerance for losses and a deep enough pocket to ride out long strings of losing trades. Swing trading can have many more winners than losers, but the profits are relatively small.

I picked B as it has the best chance of winning something.

If you want to take this toward markets, there are implications for position size. Having five stable companies (type B) in your portfolio may be sufficient diversification against drawdowns, while twenty “lottery ticket” positions (type A) is far too risky.

I hope you might touch on position sizing in future posts.

The odds of winning anything are pretty bad regardless of which urn you pick (worse than Powerball, actually), but A has the best chance of winning a decent-sized prize. So I pick A.

Same here! Even if you pick B, you have only a 1% chance to win. Way too small! At least the prize in A is substantial.

How do you win $1, when you get nothing for drawing a black ball?

No one will win exactly $1. But the average (expected) return — in the aggregate — is $1.

B.

And this is the way to invest.

Let’s say you have this choice once a month. Every month, you will ‘win.’

If you always select A, then your variation is far more random. You could go two years without winning. You could win three months in a row.

Over time, of course, it will ‘even out,’ but events can be random for longer than you think.
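That “longer than you think” point is easy to quantify. Assuming one draw per month (my framing, not the commenter’s), a short sketch of the drought probabilities:

```python
# Probability of going 24 straight months with no win, one draw per month.
p_win_a = 2 / 2000    # urn A: draw any red
p_win_b = 20 / 2000   # urn B: draw any blue
months = 24

print(f"Urn A: {(1 - p_win_a) ** months:.1%} chance of a 2-year drought")
print(f"Urn B: {(1 - p_win_b) ** months:.1%} chance of a 2-year drought")
# Urn A: ~97.6% -- a two-year dry spell is the norm, not the exception
# Urn B: ~78.6% -- still more likely than not
```

Even with the "safe" urn, a two-year losing streak is the most likely outcome, which is exactly why the monthly variation feels so random.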