Are you classifying 10% as a Pascal-level probability? How big does a probability have to get before you don’t think Pascal-type considerations apply to it?
Are you suggesting that if there were (for example) a ten percent probability of an asteroid hitting the Earth in 2025, we should devote fewer resources to asteroid prediction/deflection than simple expected utility calculations would predict?
No, he’s saying that 10% and 1% are non-Pascalian probabilities for x-risks, but that 1-in-10,000 is effectively Pascalian.
I don’t think it counts as “Pascalian” until it starts to scrape below the threshold of probabilities you can meaningfully assert about propositions. If we were basically assured of a bright astronomical future so long as person X doesn’t win the lottery, I wouldn’t say that worrying that X might win the lottery was a Pascalian risk.
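To make the arithmetic behind these thresholds concrete, here is a toy expected-value sketch (a minimal illustration; the stand-in utility and the probabilities are made-up numbers for the sake of the example, not anyone's actual estimates):

```python
# Toy expected-value comparison across probabilities of different sizes.
# All numbers are illustrative placeholders, not real estimates.

FUTURE_VALUE = 1e15  # hypothetical stand-in utility for a "bright astronomical future"

for p in (0.10, 0.01, 1e-4):
    expected_loss = p * FUTURE_VALUE
    print(f"p = {p:>7}: expected loss = {expected_loss:.3g}")

# Naive expected utility says even p = 1e-4 justifies enormous effort,
# since the stakes dwarf the small probability. The "Pascalian" worry
# above is that below some threshold you can no longer meaningfully
# assert the probability at all, so the multiplication stops being
# trustworthy, regardless of what the product says.
```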
I didn’t like his anecdote, either.
I think you’ve read him wrong. He’s opposed to the heuristic “don’t pay attention to high-utility * small-probability scenarios”, on the basis of heroism.
I’m usually fine with dropping a one-time probability of 0.1% from my calculations. 10% is much too high to drop from a major strategic calculation, but even so I’d be uncomfortable building my life around one. If this were a very well-defined number, as in the asteroid calculation, then it would be more tempting to build a big reference class of risks like that one and work on stopping them collectively. If an asteroid were genuinely en route, large enough to wipe out humanity, possibly stoppable, and nobody were doing anything about this 10% probability, I would still be working on FAI, but I would be screaming pretty loudly about the asteroid on the side. If the asteroid were just going to wipe out a country, I’d make sure I’m not in that country and then keep working on x-risk.
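The reference-class move in that last comment has a simple arithmetic core, sketched below. The 0.1% figure comes from the comment itself; the class size of 100 is an invented number purely for illustration:

```python
# Sketch: why many individually-droppable 0.1% risks can be worth
# addressing collectively. The class size is a hypothetical.

p_single = 0.001   # a one-time risk you'd drop from an individual calculation
n_risks = 100      # illustrative size of the reference class

# Probability that at least one of n independent risks occurs:
p_any = 1 - (1 - p_single) ** n_risks
print(f"P(at least one of {n_risks} risks occurs) = {p_any:.3f}")  # ~0.095

# Each 0.1% risk is ignorable on its own, but a portfolio of a hundred
# of them carries roughly a 10% chance that at least one fires, which
# is back in the range the thread treats as clearly non-Pascalian.
```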