There are a number of good reasons to refuse to pay Pascal’s Mugger, and one is based on computational complexity: small probabilities are costly. One needs more computing resources to evaluate tiny probabilities accurately, and once they are tiny enough, the calculation takes too long to be of any use. The numbers under consideration exceed 10^122, roughly the number of distinguishable quantum states in the observable universe, by so much that you have no chance of making an accurate evaluation before the heat death of the universe.
Incidentally, this is one of the arguments in physics against the AMPS black hole firewall paradox: collecting and decoding the outgoing Hawking radiation takes far longer than the black hole’s evaporation time.
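To make the cost concrete, here is a toy sketch of my own (not part of the original argument), assuming the crudest possible estimator: naive Monte Carlo sampling, which needs on the order of 1/p samples to pin a probability p down to within a constant relative error.

```python
# Toy sketch: compare the samples a naive Monte Carlo estimate of p needs
# (on the order of 1/p) against the ~10^122 states available to compute with.

def log10_samples_needed(log10_p: float) -> float:
    """log10 of the sample count needed to estimate a probability whose
    log10 is log10_p, under the naive 1/p Monte Carlo scaling."""
    return -log10_p

UNIVERSE_LOG10_STATES = 122  # the rough bound cited above

# A truly superexponential probability like 1/3^^^3 cannot even be written
# down here; 10^-1000 is already far beyond hopeless.
for log10_p in (-10, -122, -1000):
    needed = log10_samples_needed(log10_p)
    verdict = "conceivable" if needed < UNIVERSE_LOG10_STATES else "hopeless"
    print(f"p = 1e{log10_p}: ~1e{needed:.0f} samples vs "
          f"~1e{UNIVERSE_LOG10_STATES} states in the universe ({verdict})")
```

Of course a clever analytic shortcut might beat brute sampling, but finding and verifying one is exactly the kind of work the argument says you do not have time for.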
This also matches the intuitive approach. If you are confronted by something as unbelievable as “In the sky above, a gap edged by blue fire opens with a horrendous tearing sound”, as Eliezer colorfully puts it, then the first thing to realize is that you have no way to evaluate the probability of this event being “real” without far more work than you can possibly do in your lifetime, or at least in the time the Mugger gives you to decide. Even choosing whether to precommit to something like that requires more resources than you are likely to have. So Eliezer’s statement “If you assign superexponentially infinitesimal probability to claims of large impacts, then apparently you should ignore the possibility of a large impact even after seeing huge amounts of evidence” has two points of failure based on computational complexity:
Assigning an accurate superexponentially infinitesimal probability is computationally very expensive.
Figuring out the right amount of updating after “seeing huge amounts of [extremely surprising] evidence” is also computationally very expensive (see the sketch after this list).
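To put a rough number on the second point, here is another toy sketch of my own: Bayesian updating in log-odds. The 10^-122 prior and the 1% target posterior are purely illustrative choices, and 10^-122 is nowhere near superexponentially small.

```python
import math

# Even for a prior as "large" as 1e-122, the likelihood ratio needed to
# reach a modest posterior is astronomical, and every bit of that evidence
# has to be weighed, to matching precision, against the chance that it is
# faked, hallucinated, or simulated.

def required_log10_likelihood_ratio(log10_prior: float,
                                    target_posterior: float) -> float:
    """log10 of the likelihood ratio needed to move a prior with the given
    log10 up to target_posterior, using the odds form of Bayes' theorem."""
    log10_prior_odds = log10_prior - math.log10(1 - 10 ** log10_prior)
    log10_target_odds = math.log10(target_posterior / (1 - target_posterior))
    return log10_target_odds - log10_prior_odds

lr = required_log10_likelihood_ratio(log10_prior=-122, target_posterior=0.01)
print(f"likelihood ratio ~1e{lr:.0f}, i.e. ~{lr * math.log2(10):.0f} bits "
      f"of evidence you must trust essentially without error")
# -> roughly 1e120, about 400 bits.
```

For a genuinely superexponentially small prior, the required likelihood ratio is itself superexponential, which is where the computational cost bites.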
So the naive approach, “this is too unbelievable to take seriously; there is no way I can figure out what is true, what is real, and what is fake with any degree of confidence, so I might as well not bother”, is actually the reasonable one.
If you have an unbounded utility function, then putting a lot of resources into accurately estimating arbitrarily tiny probabilities can be worth the effort, and if you can’t estimate them very accurately, then you just have to make do with as accurate an estimate as you can make.