Are you serious? Are you buying this? Ok—let me make this easy: There NEVER WAS a 33% chance. Ever. The 1-in-3 choice is a ruse. No matter what door you choose, Monty has at least one door with a goat behind it, and he opens it. At that point, you are presented with a 1-in-2 choice. The prior choice is completely irrelevant at this point! You have a 50% chance of being right, just as you would expect. Your first choice did absolutely nothing to influence the outcome! This argument reminds me of the time I bet $100 on black at a roulette table because it had come up red for like 20 consecutive times, and of course it came up red again and I lost my $$. A guy at the table said to me “you really think the little ball remembers what it previously did and avoids the red slots??”. Don’t focus on the first choice, just look at the second—there’s two doors and you have to choose one (the one you already picked, or the other one). You got a 50% chance.
Think about it this way. Let’s say that, before we play Monty’s game, you precommit to not switching. Then you win 1/3 of the time, exactly when your first pick was the correct door, yes?
Now suppose you precommit to switching. Under what circumstances will you win? You’ll win exactly when you didn’t pick the correct door to start with. That gives you a 2/3 chance of winning, since your first pick misses the car two times out of three.
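If it helps, here is a minimal sketch (my addition, not part of the comment above) that just enumerates the nine equally likely (car location, first pick) pairs; it takes as given the point above that a precommitted switcher wins exactly when the first pick missed the car:

```python
from itertools import product

# All nine (car, first_pick) pairs are equally likely.
pairs = list(product(range(3), repeat=2))

stay_wins = sum(pick == car for car, pick in pairs)      # stayer wins iff the first pick hit the car
switch_wins = sum(pick != car for car, pick in pairs)    # switcher wins iff the first pick missed it

print(stay_wins / len(pairs), switch_wins / len(pairs))  # 0.333..., 0.666...
```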
Your comparison to the roulette wheel doesn’t work: the roulette wheel has no memory, but in this case the car isn’t reallocated between the two remaining doors; its location was fixed before the game started.
Your analogy doesn’t hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.
If you’ve really thought about XiXiDu’s analogies and they haven’t helped, here’s another; this is the one that made it obvious to me.
Omega transmutes a single grain of sand in a sandbag into a diamond, then pours the sand equally into three buckets. You choose one bucket for yourself. Omega then pours the sand from one of his two buckets into the other one, throws away the empty bucket, and offers to let you trade buckets.
Each bucket analogizes to a door that you may choose; the sand analogizes to probability mass. Seen this way, it’s clear that what you want is to get as much sand (probability mass) as possible, and Omega’s bucket has more sand in it. Monty’s unopened door doesn’t inherit anything tangible from the opened door, but it does inherit the opened door’s probability mass.
That works better for you? That’s deeply surprising. Using entities like Omega and transmutation seems to make things more abstract and makes it much harder to understand what the heck is going on. I must need to massively update my notions about what sort of descriptions can make things clear to people.
I use entities outside human experience in thought experiments for the sake of preventing Clever Humans from trying to game the analogy with their inferences.
“If Monty ‘replaced’ a grain of sand with a diamond then the diamond might be near the top, so I choose the first bucket.”
“Monty wants to keep the diamond for himself, so if he’s offering to trade with me, he probably thinks I have it and wants to get it back.”
It might seem paradoxical, but using ‘transmute at random’ instead of ‘replace’, or ‘Omega’ instead of ‘Monty Hall’, actually simplifies the problem for me by establishing that all the facts relevant to the problem have already been included. That never seems to happen in the real world, so the world of the analogy is usefully unreal.
I really like this technique.
I’m not keen on this analogy because you’re comparing the effect of the new information to an agent freely choosing to pour sand in a particular way. A confused person won’t understand why Omega couldn’t decide to distribute sand some other way—e.g. equally between the two remaining buckets.
Anyway, I think JoshuaZ’s explanation is the clearest I’ve ever seen.
“Your analogy doesn’t hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.”
No, they are not causally linked. It does not matter what door you choose; you don’t influence the outcome in any way at all. Ultimately, you have to choose between two doors. In fact, you don’t really “choose” a door at first at all. Because there is always at least one goat behind a door you didn’t choose, you cannot influence the next action, which is for Monty to open a door with a goat behind it. At that point it’s a choice between two doors.
At this point you’ve had this explained to you multiple times. May I suggest that, if you still don’t get it, you be a bit of an empiricist: write a computer program that plays the game many times and see what fraction of the time switching wins. Or, if you don’t have the skill to do that (in which case learning to program should be on your list of things to learn; it is very helpful and forces certain forms of careful thinking), play the game out with a friend in real life.
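For what it’s worth, here is a minimal sketch of such a simulation, assuming the standard rules (Monty knows where the car is, always opens an unpicked goat door, and always offers the switch); the function name and trial count are just placeholders:

```python
import random

def play(switch, n_trials=100_000):
    wins = 0
    for _ in range(n_trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Monty opens a door that is neither the contestant's pick nor the car
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # move to the one remaining closed, unpicked door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / n_trials

print("stay:  ", play(switch=False))   # ~0.33
print("switch:", play(switch=True))    # ~0.67
```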
If logical wants to play for real money I volunteer my services.
If (and I do mean if; I wouldn’t want to spoil the empirical test) logical doesn’t understand the situation well enough to predict the correct outcome, there’s a good chance he won’t be able to program it into a computer correctly regardless of his programming skill. He’ll program the computer to perform his misinterpretation of the problem, and it will return the result he expects.
On the other hand, if he’s right about the Monty Hall problem and he programs it correctly… it will still return the result he expects.
He could try one of many already-written programs if he lacks the skill to write one.
Sure, but then the question becomes whether the other programmer got the program right...
My point is that if you don’t understand a situation, you can’t reliably write a good computer simulation of it. So if logical believes that (to use your first link) James Tauber is wrong about the Monty Hall problem, he has no reason to believe Tauber can program a good simulation of it. And even if he can read Python code, and has no problem with Tauber’s implementation, logical might well conclude that there was just some glitch in the code that he didn’t notice—which happens to programmers regrettably often.
I think playing the game out with a friend is the better option here, both for ease of implementation and for strength of evidence. That’s all :)
The thing you might be overlooking is that Monty does not open a door at random; he opens a door that is guaranteed to have a goat behind it. When I first heard this problem, I didn’t get it until that was explicitly pointed out to me.
If Monty opened a door at random (so the opened door could reveal the car), then there would be no causal link, and the probability would be as you describe.
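For contrast, a quick sketch of that variant (again my addition, not part of the comment above): if Monty opens one of the two unpicked doors uniformly at random and we discard the rounds where he happens to reveal the car, switching wins only about half of the rounds that remain:

```python
import random

def random_monty(n_trials=100_000):
    switch_wins = kept = 0
    for _ in range(n_trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])  # may reveal the car
        if opened == car:
            continue                      # round spoiled by Monty; discard it
        kept += 1
        final = next(d for d in range(3) if d != pick and d != opened)
        switch_wins += final == car
    return switch_wins / kept

print(random_monty())   # ~0.5: with a careless Monty, switching no longer helps
```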
Fail.