Thus, I can never be more than one minus one-in-ten-billion sure that my sensory experience is even roughly correlated with reality. It would therefore require extraordinary circumstances for me to have any reason to worry about any probability smaller than one-in-ten-billion in magnitude.
No. The reason not to spend much time thinking about the I-am-undetectably-insane scenario is not, in general, that it’s extraordinarily unlikely. The reason is that you can’t make good predictions about what would be good choices for you in worlds where you’re insane and totally unable to tell.
This holds even if the probability for the scenario goes up.
/A/ reason not to spend much time thinking about the I-am-undetectably-insane scenario is as you describe; however, it’s not the /only/ reason not to spend much time thinking about it.
I often have trouble explaining myself, and need multiple descriptions of an idea to get a point across, so allow me to try again:
There is roughly a 30-in-1,000,000 chance that I will die in the next 24 hours. Over a week, simplifying a bit, that works out to roughly 200-in-1,000,000 odds of my dying. If I were to buy a 1-in-a-million lottery ticket each week, then, by one rule of thumb, I should spend 200 times as much of my attention on my forthcoming demise as on buying that ticket and imagining what to do with the winnings.
In parallel, if I am to compare two independent scenarios, the at-least-one-in-ten-billion odds that I’m hallucinating all this, and the darned-near-zero odds of a Pascal’s Mugging attempt, then I should spend proportionately more time dealing with the Matrix scenario than with the possibility that the Pascal’s Mugging attempt is genuine; which works out to darned-near-zero seconds spent bothering with the Mugging, no matter how much or how little time I spend contemplating the Matrix.
(There are, of course, alternative viewpoints which may make it worth spending more time on the low-probability scenarios in each case; for example, buying a lottery ticket can be viewed as one of the few low-cost ways to funnel money from most of your parallel-universe selves so that a certain few of your parallel-universe selves have enough resources to work on certain projects that are otherwise infeasibly expensive. But these alternatives require careful consideration and construction, at least enough to carry sufficient logical weight to counter the standard rule of thumb I’m trying to propose here.)
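A minimal numeric sketch of the proportional-attention rule of thumb described above, using the figures from the comment; the one-hour attention budget and the near-zero mugging probability are arbitrary illustrative assumptions:

```python
# Sketch of allocating attention in direct proportion to probability.
p_death_per_day = 30 / 1_000_000           # rough daily mortality risk cited above
p_death_per_week = 7 * p_death_per_day     # ~210 in a million; "roughly 200"
p_lottery_win = 1 / 1_000_000              # one 1-in-a-million ticket per week
p_hallucinating = 1 / 10_000_000_000       # the one-in-ten-billion Matrix scenario
p_mugging_genuine = 1e-50                  # "darned-near-zero"; placeholder value

def attention_split(probabilities, total_minutes=60.0):
    """Divide an attention budget in direct proportion to probability."""
    total_p = sum(probabilities.values())
    return {name: total_minutes * p / total_p for name, p in probabilities.items()}

print(attention_split({"own demise": p_death_per_week, "lottery win": p_lottery_win}))
# demise gets roughly 200 times the minutes the lottery does

print(attention_split({"Matrix scenario": p_hallucinating, "Pascal's Mugging": p_mugging_genuine}))
# the Mugging's share is effectively zero, as the comment argues
```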
In parallel, if I am to compare two independent scenarios, the at-least-one-in-ten-billion odds that I’m hallucinating all this, and the darned-near-zero odds of a Pascal’s Mugging attempt, then I should spend proportionately more time dealing with the Matrix scenario than with the possibility that the Pascal’s Mugging attempt is genuine
That still sounds wrong. You appear to be deciding on what to precompute for purely by probability, without considering that some possible futures will give you the chance to shift more utility around.
If I don’t know anything about Newcomb’s problem and estimate a 10% chance of Omega showing up and posing it to me tomorrow, I’ll definitely spend more than 10% of my planning time for tomorrow reading up on and thinking about it. Why? Because I’ll be able to make far more money in that possible future than the others, which means that the expected utility differentials are larger, and so it makes sense to spend more resources on preparing for it.
The I-am-undetectably-insane case is the opposite of this, a scenario that it’s pretty much impossible to usefully prepare for.
And a PM scenario is (at least for an expected-utility maximizer) a more extreme variant of my first scenario: low probabilities of ridiculously large outcomes, which for precisely that reason are still worth thinking about.
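A hedged sketch of this counterargument: weight planning time by expected stakes (probability times the utility that preparation could shift), rather than by probability alone. The 10% Newcomb estimate is from the comment above; every other number is an illustrative placeholder:

```python
def prep_time_split(scenarios, total_minutes=60.0):
    """Divide planning time in proportion to probability * utility-at-stake."""
    weights = {name: p * stakes for name, (p, stakes) in scenarios.items()}
    total = sum(weights.values())
    return {name: total_minutes * w / total for name, w in weights.items()}

scenarios = {
    # (probability, rough utility swing that preparing could capture)
    "Newcomb's problem tomorrow": (0.10, 1_000_000),  # 10% chance, huge payoff if prepared
    "ordinary day":               (0.90, 1_000),      # routine stakes
    "undetectably insane":        (1e-10, 0),         # no useful preparation is possible
}

print(prep_time_split(scenarios))
# Newcomb's problem gets the overwhelming majority of the hour despite its mere
# 10% probability, and the undetectable-insanity case gets zero, matching the
# reasoning above.
```

On these made-up stakes, the probability-only rule and the stakes-weighted rule give very different allocations, which is the crux of the disagreement.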
In parallel, if I am to compare two independent scenarios, the at-least-one-in-ten-billion odds that I’m hallucinating all this, and the darned-near-zero odds of a Pascal’s Mugging attempt, then I should spend proportionately more time dealing with the Matrix scenario than with the possibility that the Pascal’s Mugging attempt is genuine
That still sounds wrong. You appear to be deciding on what to precompute for purely by probability, without considering that some possible futures will give you the chance to shift more utility around.
I agree, but I think I see where DataPacRat is going with his/her comments.
First, it seems as if we only think about the Pascalian scenarios that are presented to us. If we are presented with one of these scenarios, e.g. mugging, we should consider all other scenarios of equal or greater expected impact.
In addition, low-probability events that we fail to consider can render the dilemma posed by PM moot. For example, say a mugger demands your wallet or he will destroy the universe. There is a nonzero probability that he has the capability to destroy the universe, but it is important to consider the much greater, though still low, probability that he dies of a heart attack right before your eyes.
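A rough numeric illustration of the mugger example; both probabilities below are loose order-of-magnitude placeholders, not figures from the comment:

```python
# Compare two low-probability events attached to the same encounter.
p_mugger_can_destroy_universe = 1e-30  # nonzero, but astronomically small (assumed)
p_heart_attack_this_minute = 1e-9      # sudden cardiac death in any given minute,
                                       # very roughly, for a typical adult (assumed)

ratio = p_heart_attack_this_minute / p_mugger_can_destroy_universe
print(f"The heart-attack scenario is about {ratio:.0e} times more probable.")
# If low-probability outcomes are worth reasoning about at all, the far likelier
# ones we never bothered to imagine deserve attention first.
```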
In the I-am-undetectably-insane scenario, your predictions about the worlds where you’re insane don’t even matter, because your subjective experience doesn’t actually take place in those worlds anyway.