The problem here is that you don’t KNOW that the probability is 90%. What if it’s 80%? Or 60%? Or 12%? In real life you will only run the experiment once. The probabilities are just a GUESS. The person making the guess has no idea what the real probabilities are. And as Mr. Yudkowsky has pointed out elsewhere, people consistently underestimate the difficulty of a task. They can’t even estimate with any accuracy how long it will take them to finish their homework. If you aren’t in the business of saving people’s lives in EXACTLY this same way, on a regular basis, the estimate of 90% is probably crap. And so is the estimate of a 100% probability of saving 400 lives. All you can really say is that you see fewer difficulties that way, from where you are standing now. It’s a crap shoot either way, because once you get started, no matter which option you choose, difficulties you hadn’t anticipated will arise.
This reminds me of ‘the bridge experiment’, where a test subject is given the opportunity to throw a fat person off a bridge in front of a train, and thereby save the lives of five people trapped on the tracks up ahead. The psychologists bemoaned the lack of rationality of the test subjects, since most of them wouldn’t throw the fat person off the bridge, and thus trade the life of one person for five. I was like, ‘ARE YOU CRAZY? Do you think one fat person would DERAIL A TRAIN? What do you think cow catchers are for, fool? What if he BOUNCED a couple of times, and didn’t end up on the rails? It’s preposterous. The odds are 1000 to 1 against success. No sane person would take that bet.’
The psychologists supposedly fixed this concern by telling the test subjects that it was guaranteed that throwing the fat person off the bridge would succeed. Didn’t work, because people STILL wouldn’t buy into their preposterous plan.
Then the psychologists changed the experiment so that the test subject would just have to throw a switch on the track, diverting the train from the track where the five people were trapped to a track where just one person was trapped (still fat, by the way). Far more of the test subjects said they would flip the switch than had said they would throw someone off the bridge. The psychologists suggested some preposterous-sounding reason for the difference, I don’t even remember what, but it seemed to me that the difference was that the switch plan just seemed a lot more likely to succeed. The test subjects DISCOUNTED the assurances of the psychologists that the ‘throw someone off the bridge’ plan would succeed. And quite rationally too, if you ask me. What rational person would rely on the opinion of a psychologist on such a matter?
When the 90%/500 or 100%/400 question was posed, I felt myself having exactly the same reaction. I immediately felt DUBIOUS that the odds were actually 90%. I immediately discounted the odds. By quite a bit, in fact. Perhaps that was because of lack of self-confidence, or hard-won pessimism from years of real-life experience, but I immediately discounted the odds. I bet a lot of other people did too. And I wouldn’t take the bet, for exactly that reason. I didn’t BELIEVE the odds as given. I was skeptical. Interestingly enough, though, I was less skeptical of the ‘can’t fail/100%’ estimate than of the 90% estimate. Maybe I could easily imagine a scenario where there was no chance of failure at all, but couldn’t easily imagine a scenario where the odds were, reliably, 90%. Once you start throwing around numbers like 90%, in an imperfect world, what you’re really saying is ‘there is SOME chance of failure’. Estimating how much chance would be very much a judgement call.
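The arithmetic behind this intuition can be sketched out. As stated, the risky option wins on expected value (0.9 × 500 = 450 > 400), but it only takes a modest discount of the stated odds to flip the decision. The break-even figure below is my own illustration, not anything from the original question:

```python
def expected_lives(p_success: float, lives_if_success: int) -> float:
    """Expected lives saved for a single all-or-nothing attempt."""
    return p_success * lives_if_success

# As stated, the risky plan wins on expected value: 450 vs. 400.
stated = expected_lives(0.90, 500)

# Break-even: the risky plan ties the sure thing when p * 500 == 400,
# i.e. at p == 0.80. Any discount below that flips the decision.
break_even = 400 / 500

# A skeptic who mentally discounts the announced 90% down to, say, 70%
# now rationally prefers the guaranteed 400: 350 vs. 400.
discounted = expected_lives(0.70, 500)

print(stated, break_even, discounted)
```

So a listener who shaves even a quarter off the announced odds is not failing to multiply; they are multiplying with a different, more pessimistic input.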
So maybe what you’re looking at here isn’t irrationality, or the inability to multiply, but rather rational pessimism about it being as easy as claimed.