Even if the prior probability of your saving 3↑↑↑3 people and killing 3↑↑↑3 people, conditional on my giving you five dollars, exactly balanced down to the log(3↑↑↑3) decimal place, the likelihood ratio for your telling me that you would “save” 3↑↑↑3 people would not be exactly 1:1 for the two hypotheses down to the log(3↑↑↑3) decimal place.
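(Spelling out the Bayes arithmetic implicit here: with prior odds of exactly 1:1, the posterior odds for “save” versus “kill” are

$$\frac{P(\text{save} \mid \text{claim})}{P(\text{kill} \mid \text{claim})} = \underbrace{\frac{P(\text{save})}{P(\text{kill})}}_{1:1} \times \frac{P(\text{claim} \mid \text{save})}{P(\text{claim} \mid \text{kill})} = 1 + \varepsilon,$$

and the gap in expected lives is then on the order of $\varepsilon \cdot 3\uparrow\uparrow\uparrow 3$, which remains astronomical unless $\varepsilon$ is as small as $1/3\uparrow\uparrow\uparrow 3$, i.e., unless the likelihood ratio really is 1:1 down to the log(3↑↑↑3) decimal place.)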
The scenario is already so outlandish that it seems unwarranted to assume the mugger is telling the truth with more than 0.5 certainty. The motives of such a being, if it truly were in such a powerful position, to engage in this kind of prank would have to be very convoluted. Isn’t it at least as likely that the opposite will happen if I hand over the five dollars?
Okay, I guess if that’s my answer, I’ll have to hand over the money if the mugger says “don’t give me five dollars!” Or do I?
Reversed stupidity is not intelligence. You are not so confused as to guess the opposite of what will happen more often than what will actually happen. All your confusion means is that it is almost as likely that the opposite will happen.
This is one of many reasons that the “discover novel physics that implies the ability to affect (really big number) lives” version of this thought experiment works better than the “encounter superhuman person who asserts the ability to affect (really big number) lives” version. That said, if I’m looking for reasons for incredulity and am prepared to stop thinking about the scenario once I’ve found them, I can find them easily enough in both cases.
Well, one of my responses to the superhuman scenario is that my prior depends on the number, so you can’t exceed my prior just by raising the number.
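One way to cash that out (my own sketch; the 1/N scaling is an assumption about the prior, not a derived fact): if my prior on any agent having leverage over N lives falls at least as fast as 1/N, then

$$P(\text{leverage over } N \text{ lives}) \le \frac{c}{N} \quad\Longrightarrow\quad \text{expected lives at stake} \le \frac{c}{N} \cdot N = c,$$

so the expected payoff stays bounded by the constant $c$ no matter how large a number the mugger names.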
The reasons I gave for having my prior depend on the number don’t carry over to the physics scenario, but there are new reasons that do. For instance, the human mind is not good at estimating or comprehending very small probabilities and very large numbers; if I had to pay $5 for research that had a very tiny probability of producing a breakthrough that would improve lives by a very large amount of utility, I would have little confidence in my ability to properly compute those numbers, and the more extreme the numbers, the less confidence I would have.
(And “I have no confidence” also means I don’t know how my own errors are distributed, so you can’t easily fix this up by factoring my confidence into the expected value calculation.)
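To illustrate the problem (a minimal sketch with made-up numbers; the probability estimate, the payoff, and especially the log-normal error model are all assumptions, and the point above is precisely that I don’t know the real error model):

```python
# Sketch: how much does a "confidence-adjusted" expected value depend on
# the error model I assume for my own probability estimate?
import random

random.seed(0)

P_ESTIMATE = 1e-12   # my stated probability of the breakthrough (made up)
UTILITY = 1e15       # lives improved if it pans out (made up)
N = 100_000          # Monte Carlo samples

def adjusted_ev(log10_error_sd):
    """Expected value if my estimate is off by a log-normally
    distributed multiplicative factor (an assumed error model)."""
    total = 0.0
    for _ in range(N):
        noise = random.gauss(0.0, log10_error_sd)  # orders of magnitude of error
        p = min(1.0, P_ESTIMATE * 10 ** noise)     # "true" probability, capped at 1
        total += p * UTILITY
    return total / N

# The same $5 question under three guesses about how wrong I might be:
for sd in (1.0, 3.0, 6.0):
    print(f"error sd = {sd} orders of magnitude -> EV ~ {adjusted_ev(sd):.3g} lives")
```

Under these made-up numbers the answer swings by roughly ten orders of magnitude depending purely on which error model I picked, which is the sense in which the adjustment can’t be done “easily.”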
Yes, agreed, a researcher saying “give me $5 to research technology with implausible payoff” is just some guy saying “give me $5 to use my implausible powers” with different paint and has many of the same problems.
The scenario I’m thinking of is “I have, after doing a bunch of research, discovered some novel physics which, given my understanding of it and the experimental data I’ve gathered, implies the ability to improve (really big number) lives,” which raises the possibility that I ought to reject the results of my own experiments and my own theorizing, because the conclusion is just so bloody implausible (at least when expressed in human terms; EY loses me when he starts talking about quantifying the implausibility of the conclusion in terms of bits of evidence and/or bits of sensory input and/or bits of cognitive state).
And in particular, the “you could just as easily harm (really big number) lives!” objection simply disappears in this case; it’s no more likely than anything else, and vanishes into unconsiderability when compared to “nothing terribly interesting will happen,” unless I posit that I actually do know what I’m doing.