In parallel, if I compare two independent scenarios, the at-least-one-in-ten-billion odds that I’m hallucinating all this, and the darned-near-zero odds that a Pascal’s Mugging attempt is genuine, then I should be spending proportionately that much more time dealing with the Matrix scenario than with the Pascal’s Mugging attempt.
That still sounds wrong. You appear to be deciding what to precompute for purely by probability, without considering that some possible futures will give you the chance to shift far more utility around.
If I don’t know anything about Newcomb’s problem and estimate a 10% chance of Omega showing up and posing it to me tomorrow, I’ll definitely spend more than 10% of my planning time for tomorrow reading up on and thinking about it. Why? Because I’ll be able to make far more money in that possible future than in the others, which means the expected utility differentials are larger, and so it makes sense to spend more resources on preparing for it.
The I-am-undetectably-insane case is the opposite of this: a scenario that is pretty much impossible to usefully prepare for.
And a PM scenario is (at least for an expected-utility maximizer) a more extreme variant of my first scenario: low probabilities of ridiculously large outcomes, which for that very reason are still worth thinking about.
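To make the allocation rule concrete, here is a minimal sketch in Python. All probabilities and utility numbers are made-up illustrations, not anyone’s actual estimates; the point is only that planning time tracks probability times utility differential, not probability alone, so the 10%-probability Newcomb day can claim far more than 10% of the time while the undetectably-insane case claims none.

```python
# Allocate planning time by probability * utility differential
# (how much preparation can swing the outcome), not by probability alone.
# All numbers below are invented purely for illustration.

scenarios = {
    # name: (probability, utility gained by preparing vs. not preparing)
    "Omega poses Newcomb's problem": (0.10, 1_000_000),
    "ordinary day":                  (0.90, 50_000),
    "undetectably insane":           (1e-10, 0),  # no useful prep exists
}

total = sum(p * gain for p, gain in scenarios.values())

for name, (p, gain) in scenarios.items():
    share = p * gain / total if total else 0.0
    print(f"{name}: probability {p:.0%}, planning-time share {share:.0%}")
```

With these invented numbers the Newcomb scenario gets roughly 69% of the planning time despite its 10% probability, and the insanity scenario gets zero despite being possible.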
I agree, but I think I see where DataPacRat is going with his/her comments.
First, it seems we think only about the Pascalian scenarios that are actually presented to us. If we are presented with one such scenario, e.g. the mugging, we should consider all other scenarios of equal or greater expected impact.
In addition, low-probability events that we fail to consider can render the dilemma posed by PM moot. For example, say a mugger demands your wallet or he will destroy the universe. There is a nonzero probability that he has the capability to destroy the universe, but it is important to consider the much greater, though still low, probability that he dies of a heart attack right before your eyes.
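A toy comparison of that point, with probabilities invented purely for illustration: any standard of “unlikely but worth modeling” loose enough to admit the mugger’s threat also admits a long list of likelier events we normally never consider.

```python
# Invented probabilities, for illustration only: once you model one
# low-probability event (the mugger's capability), consistency demands
# you put the likelier ignored events on the same ledger.

low_probability_events = {
    "mugger can actually destroy the universe": 1e-30,
    "mugger dies of a heart attack mid-threat": 1e-6,
    "police car happens to drive past":         1e-3,
}

for event, p in sorted(low_probability_events.items(),
                       key=lambda kv: kv[1], reverse=True):
    print(f"{p:.0e}  {event}")
```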