First, I said I’m not a utilitarian; I didn’t say that I don’t value other people. There’s a big difference!
Second, I’m not willing to step behind that veil of ignorance. Why should I? Decision-theoretically, it can make sense to argue “you should help agent X because in some counterfactual, agent X would be deciding whether to help you using similar reasoning”. But there might be important systematic differences between early people and late people (for example, because late people are modified in some ways compared to the human baseline) which break the symmetry. It might be a priori improbable for me to be born as a late person (and still be me in the relevant sense), or for a late person to be born in our generation[1].
Moreover, if there is a valid decision-theoretic argument to assign more weight to future people, then surely a superintelligent AI acting on my behalf would understand this argument and act on it. So, this doesn’t compel me to precommit to a symmetric agreement with future people in advance.
There is a stronger case for intentionally creating and giving resources to people who are early in counterfactual worlds, at least assuming people have meaningful preferences about the state of never being born.
If a future decision is to shape the present, we need to predict it.
The decision-theoretic strategy “Figure out where you are, then act accordingly” is merely an approximation to “Use the policy that leads to the multiverse you prefer”. You *can* bring your present loyalties with you behind the veil; it might just start to feel farcically Goodhartish at some point.
There are of course no probabilities of being born into one position or another; there are only various avatars through which your decisions affect the multiverse. The closest thing to probabilities you’ll find is how much leverage each avatar offers: the least wrong probabilistic anthropics translates “the effect of your decisions through avatar A is twice as important as through avatar B” into “you are twice as likely to be A as B”.
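One toy way to cash that translation out (just a sketch; the notation $L(A_i)$ for the leverage of avatar $A_i$ is mine, not anything standard):

$$P(\text{you are } A_i) \;=\; \frac{L(A_i)}{\sum_j L(A_j)}$$

so “twice the leverage” becomes “twice as likely”, and these pseudo-probabilities automatically sum to 1 over your avatars.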
So if we need probabilities of being born early vs. late, we can compare their leverage. We find:
Quantum physics shows that the timeline splits a bazillion times a second. So each second, you become a bazillion yous, but the portions of the multiverse you could first-order impact are divided among them. Therefore, you aren’t significantly more or less likely to find yourself a second earlier or later.
Astronomy shows that there’s a mazillion stars up there. So we build a Dyson sphere and huge artificial womb clusters, and one generation later we launch one colony ship at each star. But in that generation, the fate of the universe becomes a lot more certain, so we should expect to find ourselves before that point, not after.
Physics shows that several constants are finely tuned to support organized matter. We can infer that elsewhere, they aren’t. Since you’d think there are other, less precarious arrangements of physical law with complex consequences, we can also moderately update towards that very precariousness granting us unusual leverage over something valuable in the acausal marketplace.
History shows that we got lucky during the Cold War. We can slightly update towards:
- Current events are important.
- Current events are more likely after a Cold War.
- Nuclear winter would settle the universe’s fate.
The news shows that ours is the era of inadequate AI alignment theory. We can moderately update towards being in a position to affect that.