I have a simple solution for Pascal’s muggers. I assume that, unless I have a good and specific reason to believe otherwise, the probability of achieving an extreme utility u is bounded above by O(1/u). Therefore, past some point, I can replace whatever extreme utility is quoted in an argument with a constant, one which might be arbitrarily close to 0.
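To make that cap concrete, here is a minimal numerical sketch (my own illustration, not the author’s formalism): if the credence assigned to a payout u is clipped at c/u, the expected value of the mugger’s offer saturates at c no matter how large the quoted utility gets.

```python
# Minimal sketch of the O(1/u) cap: if P(payout = u) <= c / u, then the
# expected contribution u * P(u) can never exceed c, however big u gets.

def capped_expected_value(claimed_utility, naive_prob, c=1.0):
    """Expected value when credence is clipped at c / claimed_utility."""
    prob = min(naive_prob, c / claimed_utility)   # the O(1/u) upper bound
    return prob * claimed_utility

for u in [1.0, 1e3, 1e6, 1e12, 1e100]:
    print(f"u = {u:.0e}  EV = {capped_expected_value(u, naive_prob=0.5)}")
# The expected value pins at c = 1.0 once c/u drops below the naive credence,
# so quoting ever-larger utilities buys the mugger nothing.
```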
In some contexts this is obvious. For example, if someone offers you $1000 to build a fence in their yard, you might reasonably believe them, and you might or might not choose to do it. If they offered you $10,000, that’s suspiciously high for the job; you would reasonably worry that it’s a lie, and might or might not choose the believable $1000 offer over it as having higher expected value. If you were offered $1,000,000 to build the same fence, you’d assume it was a lie and definitely wouldn’t take the job.
By this reasoning, what should you think in the limit as the job stays finite, and the reward tends towards infinity? You hit that limit with the claim that for a finite amount of worship in your lifetime (and a modest tithe) you’ll get an infinite reward in Heaven. This is my favorite argument against Pascal’s wager.
But now let’s bring it back to something more reasonable. Longtermism makes moral arguments in terms of improving the prospects of a distant future teeming with consciousness. But if you trot out the Doomsday argument, the a priori odds of this future are proportional to 1 / the amount of future consciousness. Given the many ways that humanity could go extinct, and the possibilities of future space, I don’t have a strong opinion on how to adjust that prior given current evidence. Therefore I treat this as a case where we hit the upper bound: the probability of the reward from the scenario is bounded above by a reasonably small constant.
My constant is small in this case because I also believe that large amounts of future consciousness go hand in hand with a high likelihood of extreme misery from a Malthusian disaster scenario.
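As a rough illustration of that Doomsday-style penalty, here is a toy self-sampling calculation (my own sketch; the count of humans born so far is an assumed round figure): the ~1/N chance of a random observer finding itself this early roughly cancels a payoff that grows linearly in N.

```python
# Toy self-sampling arithmetic: if there will be N observers in total and I am
# a uniformly random one of them, the chance of landing among the first n is
# about n / N.  That 1/N factor offsets a utility proportional to N.

HUMANS_SO_FAR = 1e11    # assumed round figure for humans born to date

def p_being_this_early(total_observers, early_observers=HUMANS_SO_FAR):
    return early_observers / total_observers

for total in [1e12, 1e20, 1e50]:
    p = p_being_this_early(total)
    print(f"N = {total:.0e}   P(early) = {p:.0e}   P * N = {p * total:.0e}")
# P * N stays at ~1e11 no matter how grand the promised future is.
```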
If we pretend for a moment that things like AI x-risk aren’t a thing, then what balances the large utility of longtermism is the small chance of finding ourselves this early.
Taking this as anthropic evidence that x-risk is high seems wrong.
I think longtermism is, to a large extent, held up on the strong evidence that we are at the beginning. If I found out that 50% of people are living in ancestor simulations, the case for longtermism would weaken a lot, since we would probably be in yet another sim.
I am well aware that my assumptions are my assumptions. They work for me, but you may want to assume something different.
I’ve personally interpreted the Doomsday argument that way since I ran across it about 30 years ago. Honestly AI x-risk is pretty low on my list of things to worry about.
The simulation argument never impressed me. Every simulation that I’ve seen ran a lot more slowly than the underlying reality. Therefore even if you do get a lengthy regress of simulations stacked on simulations, most of the experience to be had is in the underlying reality, not in the simulations. Therefore I’ve concluded that I probably exist in reality, not a simulation.
Ok, I will grant the “simulations run slower / use more energy, so are less common” argument as approximately true.
(I think there are big caveats to that, and I think it would be possible to run a realistic sim of you for less than your metabolic power use of ~100 watts. And of course, giving you your exact experiences without cheating requires a whole universe of stars lit up, just so you can see some dots in an astronomy magazine.)
Imagine a universe with one early earth and 10^50 minds in an intergalactic civilization, including a million simulations of early earth (amongst a billion sims of other stuff).
In this universe it is true both that most beings are in the underlying reality, and that we are likely in a simulation.
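To make the counting explicit, here is a quick numerical sketch of that scenario (the million early-earth sims and the billion total sims come from the comment above; the 10^10 minds-per-sim figure is my own assumption):

```python
# Counting minds in the toy universe above: base reality dominates the total
# population, yet almost every copy of "early earth" is a simulation.

base_reality_minds = 10**50          # the intergalactic civilization
early_earth_sims   = 10**6           # simulated copies of early earth
real_early_earths  = 1               # the single original
minds_per_sim      = 10**10          # assumed population of one early-earth sim
total_sims         = 10**9           # sims of early earth plus everything else

fraction_simulated = (total_sims * minds_per_sim) / base_reality_minds
p_sim_given_early_earth = early_earth_sims / (early_earth_sims + real_early_earths)

print(f"fraction of all minds that are simulated: {fraction_simulated:.0e}")      # ~1e-31
print(f"P(simulated | experiencing early earth):  {p_sim_given_early_earth:.6f}")  # ~0.999999
```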
This relies on us being unusually interesting to potential simulators.
Hypothetically this is possible.
But, based on current human behavior, I would expect such simulations to focus on the great and famous, or on situations that make for fun gameplay.
My life does not qualify as any of those. So I heavily discount this possibility.
I would expect that more notable events would tend to get more sim time.
It might or might not be hard to sim one person without surrounding social context.
(I.e. maybe humans interact in such complicated ways that it’s easiest to just sim all 8 billion.)
But the main point is that you are still extremely special, compared to a random member of a 10^50 person galactic civilization.
You aren’t maximally special, but are still bloomin special.
You aren’t looking at just how tiny our current world is on the scale of a billion Dyson spheres.
If we scale up resources from here to K3 (a Kardashev type III civilization) without changing the distribution of things people are interested in, then everything anyone has ever bothered to say or think would get (I think at least 10) orders of magnitude more compute than is needed to simulate our civilization up to this point.
I agree with your caveats.
However I’m not egocentric enough to imagine myself as particularly interesting to potential simulators. And so that hypothetical doesn’t significantly change my beliefs.
“Particularly interesting” in a sense in which all humans currently on earth (or in our history) are unusually interesting. The point is that, compared to the scale of the universe, simulating pre-singularity history doesn’t take much.
I don’t know the amount of compute needed, but I strongly suspect it’s <1 in 10^20 of the compute that fits in our universe.
In a world of 10^50 humans in a galaxy spanning empire, you are interesting just for being so early.
There are a couple of potential solutions here.
One solution is computational personhood.
The human mind contains roughly 10^12 bits, give or take. So there are at most 2^(10^12) minds we would recognize as human. If you think that simulating the exact same mind 10 times is no better than simulating it once, and you deny the moral relevance of vast incomprehensible transhuman minds, then you have a finite bound on your utility: some finite set of things that might or might not be simulated. This lets you just deny that 3^^^^3 distinct humans exist in the platonic space of possible minds.
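A back-of-envelope check of that bound (my own sketch): climbing the 3^^k tower in log10 space shows that even 3^^4 already dwarfs 2^(10^12), and 3^^^3 = 3^^7625597484987 is unimaginably further up, so almost none of those 3^^^^3 “people” could be distinct human-recognizable minds.

```python
# Compare the space of human-recognizable minds, ~2^(10^12) states, with the
# Knuth up-arrow tower 3^^k, working in log10 space to avoid huge integers.
import math

log10_human_minds = 1e12 * math.log10(2)      # log10(2^(10^12)) ~ 3.0e11

log10_tower = math.log10(3)                   # log10(3^^1) = log10(3)
for k in range(2, 5):
    # log10(3^^k) = 3^^(k-1) * log10(3), and 3^^(k-1) = 10 ** log10(3^^(k-1))
    log10_tower = (10 ** log10_tower) * math.log10(3)
    print(f"log10(3^^{k}) ~ {log10_tower:.3g}   vs   log10(2^(10^12)) ~ {log10_human_minds:.3g}")
# By k = 4 the tower has already passed 2^(10^12); 3^^^3 = 3^^7625597484987
# is astronomically further still.
```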
The other solution is Solomonoff reality fluid. Reality has some magic reality fluid that determines how real something is. A bigger universe doesn’t get more realness; it just spreads the same realness out more widely.
When you see a quantum coin get tossed, you split into 2 versions, but each of those versions is half as real. This removes any incentive to delay pleasurable experiences until after you see a quantum coin flip.
I.e. otherwise, eating an ice cream and then seeing 100 digits of quantum randomness would mean experiencing eating that ice cream once, while seeing the randomness first would mean the universe splitting into 10^100 versions of you, each of which enjoys their own ice cream. So unless you feel compelled to read pages of quantum random numbers before doing anything fun, you must be splitting up your realness among the many quantum worlds.
If you don’t split the realness in your probability distribution, you are constantly surprised by how little quantum randomness you see. E.g. suppose there is a 1 in 100 chance of me putting 50 digits of quantum randomness into this post, and you see that I don’t. If you consider all the worlds with different digit strings equally real, there are about 1% × 10^50 = 10^48 worlds where I added the digits for every world where I didn’t, so seeing no digits should be a roughly 1-in-10^48 surprise.
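Restating that arithmetic numerically (my own sketch, mirroring the numbers in the comment): compare the chance of seeing no random digits under measure-weighting versus naive branch counting, where every distinct digit string counts as one equally real world.

```python
# Probability that no quantum digits appear in the post, under two views of
# "realness": Born-style measure weighting vs naive branch counting.

p_add_digits = 0.01        # 1 in 100 chance the author includes 50 quantum digits
n_strings    = 10 ** 50    # distinct 50-digit strings, one branch each

# Measure-weighted view: the 10^50 branches share the 1% of realness between them.
p_no_digits_weighted = 1 - p_add_digits                  # 0.99

# Branch-counting view: ~1 world without digits vs 1% * 10^50 worlds with them.
worlds_with_digits  = p_add_digits * n_strings           # = 10^48
p_no_digits_counted = 1 / (1 + worlds_with_digits)       # ~ 1e-48

print(p_no_digits_weighted)   # 0.99 -- unsurprising
print(p_no_digits_counted)    # ~1e-48 -- an absurd level of surprise
```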
Now probability distribution realness doesn’t have to be the same as moral realness. There are consistent models of philosophy where these are different. But it actually works out fine if those are the same.
So if we live in a universe with a vast number of people, that universe has to split its realness among the people in it. I.e. if there are 3^^3 people, most of them get < 1/3^^3 measure, making them almost entirely imaginary.
If there is some entity that is taking some large number of people, and then selecting one of those people to decide the fate of all the rest, your prior that you-in-particular are the one that the entity selected should scale inversely with the number of people in the group. If the group contains 3^^^3 people, you need enough evidence to overwhelm the 1/3^^^3 prior that you in particular have been singled out as special within that group.
I find this confusing. My actual strength of belief right now that I can tip an outcome that affects at least 3^^^3 other people is a lot closer to 1/1,000,000 than to 1/(3^^7625597484987). My justification is that while 3^^^3 isn’t a number that fits into any finite multiverse, the universe going on for infinitely long seems kinda possible, anthropic reasoning may not be valid here (I added a 10x factor in case it is), and I have various other ideas. The difference between those two probabilities is large (to put it mildly), and significant (one is worth thinking about and the other isn’t). How to resolve this?
Let’s consider those 3^^^3 other people. Choose one of those people at random. What’s your strength of belief that that particular person can tip an outcome that affects > 0.0001% of those 3^^^3 other people?
Putting it another way: do you expect that the average beliefs among those 3^^^3 people would be more accurate if each person believed that there was a 1/3^^^3 chance that they could determine the fate of a substantial fraction of the people in their reference class, or if each person believed there was a 1/1000000 chance that they could determine the fate of a substantial fraction of the people in their reference class?
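One way to score that comparison is a toy calibration check (my own illustration; 3^^^3 obviously doesn’t fit in memory, so a much smaller stand-in population is used): if at most one person in the group is actually a decider, giving everyone the honest 1/N credence produces a far lower average log loss than giving everyone 1/1,000,000.

```python
# Toy calibration check: everyone in a group of N assigns the same credence p to
# "I can determine the fate of a substantial fraction of the group", and at most
# one of them actually can.  Lower average log loss means better beliefs.
import math

def avg_log_loss(population, p, deciders=1):
    loss_if_true  = -math.log(p)        # the rare actual decider
    loss_if_false = -math.log(1 - p)    # everyone else
    return (deciders * loss_if_true + (population - deciders) * loss_if_false) / population

N = 10 ** 12   # stand-in for 3^^^3, which cannot be represented directly
print("credence 1/N    :", avg_log_loss(N, 1 / N))     # ~3e-11
print("credence 1/1e6  :", avg_log_loss(N, 1e-6))      # ~1e-06
# The population-wide beliefs are far better calibrated with the 1/N credence.
```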
Regarding “the universe going on for infinitely long seems kinda possible”: I think in infinite universes you need to start factoring in stuff like the simulation hypothesis.
This line of reasoning is a lot like the Doomsday argument.