I have a simple solution for Pascal muggers. I assume that, unless I have a good and specific reason to believe otherwise, the probability of achieving an extreme utility u is bounded above by O(1/u). After some point, therefore, I can replace whatever extreme utility is quoted in an argument with a constant expected value, which might be arbitrarily close to 0.
In some contexts this is obvious. For example, if someone offers you $1,000 to build a fence in their yard, you might reasonably believe them, and you might or might not choose to do it. If they offered you $10,000, that's suspiciously high for the job: you would reasonably worry that it is a lie, and might or might not prefer the believable $1,000 offer as having higher expected value. If you were offered $1,000,000 to build the same fence, you'd assume it was a lie and definitely wouldn't take the job.
By this reasoning, what should you think in the limit as the job stays finite, and the reward tends towards infinity? You hit that limit with the claim that for a finite amount of worship in your lifetime (and a modest tithe) you’ll get an infinite reward in Heaven. This is my favorite argument against Pascal’s wager.
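A minimal sketch of how that bound caps the expected value in the fence example (the base probability and the cap constant are made-up numbers, purely for illustration):

```python
# Toy model of the "P(extreme utility u) is at most O(1/u)" heuristic.
# base_prob and cap are invented illustration numbers, not claims about the world.

def credibility(offer, base_prob=0.95, cap=2000.0):
    """Probability I assign to actually being paid `offer` for a day of fence-building."""
    return min(base_prob, cap / offer)

for offer in [1_000, 10_000, 1_000_000, 10**12]:
    p = credibility(offer)
    print(f"offer ${offer:,}: P(real) ~ {p:.2e}, expected value ~ ${offer * p:,.0f}")

# The expected value saturates around `cap` no matter how large the quoted reward
# gets, which is why even an infinite promised reward doesn't dominate the decision.
```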
But now let's bring it back to something more reasonable. Longtermism makes moral arguments in terms of improving the prospects of a distant future teeming with consciousness. But if you trot out the Doomsday argument, the a priori odds of this future are proportional to 1 / the amount of future consciousness. Given the many ways that humanity could go extinct, and the possibilities of future space, I don't have a strong opinion on how to adjust that prior given current evidence. Therefore I treat this as a case where we hit the upper bound: the probability of the reward from the scenario is bounded above by a reasonably small constant.
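To sketch the arithmetic behind that (here N is the number of future minds, v a per-mind value, and k the constant from the 1/N prior; all three symbols are placeholders I'm introducing, not anything from the argument itself):

$$\mathbb{E}[\text{longtermist payoff}] \;\approx\; \underbrace{P(\text{a future with } N \text{ minds})}_{\lesssim\, k/N} \;\cdot\; \underbrace{N\,v}_{\text{size of the prize}} \;\lesssim\; k\,v,$$

so the astronomical N cancels out and the argument reduces to haggling over the size of k (and of v).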
My constant is small in this case because I also believe that large amounts of future consciousness go hand in hand with high likelihood of extreme misery from a Malthusian disaster scenario.
If we pretend for a moment that things like AI x-risk aren't a thing, then the factor balancing the large utility of longtermism is the small chance of finding ourselves this early.
Taking this as anthropic evidence that x-risk is high seems wrong.
I think longtermism is, to a large extent, held up by the strong evidence that we are at the beginning. If I found out that 50% of people are living in ancestor simulations, the case for longtermism would weaken a lot, because we would probably be in yet another sim.
I am well aware that my assumptions are my assumptions. They work for me, but you may want to assume something different.
I’ve personally interpreted the Doomsday argument that way since I ran across it about 30 years ago. Honestly AI x-risk is pretty low on my list of things to worry about.
The simulation argument never impressed me. Every simulation that I've seen ran a lot more slowly than the underlying reality. So even if you do get a lengthy regress of simulations stacked on simulations, most of the experience to be had is in the underlying reality, not in the simulations. Therefore I've concluded that I probably exist in reality, not a simulation.
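A minimal sketch of why the slowdown does the work here (the per-level slowdown factor f is a number I'm making up for illustration):

```python
# If every simulation runs at no more than a fraction f of its host's speed, an
# infinite stack of nested sims still only accumulates a geometric series of runtime.
# f = 0.3 is an invented illustration value, not an estimate.

f = 0.3
total_sim_time = f / (1 - f)                        # f + f^2 + f^3 + ..., relative to base reality's 1
frac_in_sims = total_sim_time / (1 + total_sim_time)
print(f"fraction of all experience that is inside sims: {frac_in_sims:.0%}")
# In this toy model the fraction works out to exactly f, so as long as each level
# runs at less than half its host's speed, most experience is in base reality.
```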
OK, I will grant you the "simulations run slower / with more energy, so they are less common" argument as approximately true.
(I think there are big caveats to that, and I think it would be possible to run a realistic sim of you for less than your metabolic power use of ~100 watts. And of course, giving you your exact experiences without cheating requires a whole universe of stars lit up, just so you can see some dots in an astronomy magazine.)
Imagine a universe with one real early earth, and 10^50 minds in an intergalactic civilization that runs a million simulations of early earth (amongst a billion sims of other stuff).
In this universe it is true both that most beings are in the underlying reality, and that we are likely in a simulation.
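The arithmetic behind both halves of that, using the numbers from the thought experiment (the 8 billion population per sim is an assumption of mine, just for illustration):

```python
# Numbers from the thought experiment above; the population per early-earth sim is assumed.

real_early_earth_pop = 8e9     # one real early earth
sims_of_early_earth  = 1e6     # a million sims of early earth
sim_pop_each         = 8e9     # assumed population of each sim
total_minds          = 1e50    # the intergalactic civilization

simulated_minds = sims_of_early_earth * sim_pop_each
print(f"fraction of all minds that are simulated early-earthers: {simulated_minds / total_minds:.0e}")
# ~8e-35: almost every mind in this universe lives in base reality.

p_sim_given_early_earth = simulated_minds / (simulated_minds + real_early_earth_pop)
print(f"P(sim | you experience being on early earth): {p_sim_given_early_earth:.6f}")
# ~0.999999: conditional on experiencing early earth, you are almost certainly in a sim.
```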
This relies on us being unusually interesting to potential simulators.
Hypothetically this is possible.
But, based on current human behavior, I would expect such simulations to focus on the great and famous, or on situations that make for fun gameplay.
My life does not qualify as any of those. So I heavily discount this possibility.
I would expect that more notable events would tend to get more sim time.
It might or might not be hard to sim one person without surrounding social context.
(I.e. maybe humans interact in such complicated ways that it's easiest to just sim all 8 billion.)
But the main point is that you are still extremely special, compared to a random member of a 10^50 person galactic civilization.
You aren't maximally special, but you are still bloomin' special.
You aren't looking at just how tiny our current world is on the scale of a billion Dyson spheres.
If we scale up resources from here to K3 (a Kardashev type III civilization) without changing the distribution of things people are interested in, then everything anyone has ever bothered to say or think would get (I think at least 10) orders of magnitude more compute than is needed to simulate our civilization up to this point.
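For a rough sense of that scale gap, here is a back-of-the-envelope of my own; every number in it is an order-of-magnitude estimate or an outright assumption, not something taken from this discussion:

```python
# Back-of-the-envelope: power budget of a K3 civilization vs. a crude proxy for
# "simulate every human who has ever lived at brain-level efficiency".
# All numbers are rough order-of-magnitude estimates / assumptions.

stars_per_galaxy   = 1e11     # order of magnitude for a large galaxy
solar_luminosity_w = 4e26     # watts, roughly the Sun's output
k3_power_w         = stars_per_galaxy * solar_luminosity_w    # ~4e37 W

humans_ever        = 1e11     # common demographic estimate
brain_power_w      = 20       # metabolic power draw of a human brain
sim_all_history_w  = humans_ever * brain_power_w              # ~2e12 W, assuming brain-parity efficiency

print(f"K3 budget / cost of simming all of human history: {k3_power_w / sim_all_history_w:.0e}")
# ~2e25 under these assumptions, i.e. dozens of orders of magnitude of headroom.
```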
I agree with your caveats.
However I’m not egocentric enough to imagine myself as particularly interesting to potential simulators. And so that hypothetical doesn’t significantly change my beliefs.
"Particularly interesting" in a sense in which all humans currently on earth (or in our history) are unusually interesting. The point is that, compared to the scale of the universe, simulating pre-singularity history doesn't take much.
I don’t know the amount of compute needed, but I strongly suspect it’s <1 in 10^20 of the compute that fits in our universe.
In a world of 10^50 humans in a galaxy-spanning empire, you are interesting just for being so early.