An important question here is: what is the point of being ‘more real’? Does having a higher measure give you a better acausal bargaining position? Do you terminally value more realness? Does it make you less vulnerable to catastrophes? Do you want to make sure your values are optimized harder?
I consider these, except for the terminal sense, to be rather weak as far as motivations go.
Acausal Bargaining: Imagine a bunch of nearby universes with instances of ‘you’. They all vary: some are very similar, others have gone in directions that seem a bit strange to the rest, but all are still identifiably ‘you’ by a human notion of identity. Some of them became researchers, others investors, a few artists, writers, and a handful of CEOs.
You can model these as variations on some shared utility function: U + α_i, where U is shared and α_i is the individual component. Some instances are more social, others more cynical, and so on: a believable amount of human variation that won’t necessarily converge to the same utility function on reflection (though it comes quite close).
For a human, losing memories so that you are more real is akin to each branch chopping off its α_i: they lose the memories of a wonderful party that changed their opinion of someone, they no longer remember the horrors of a war, and so on.
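As a concrete toy picture of this decomposition (my own sketch, with made-up outcomes and numbers, not anything from a formal model), here is how the shared U and the idiosyncratic α_i might look, and what “forgetting” does to them:

```python
# Toy model (my own sketch, not from the post): each branch's utility is a shared
# U plus an idiosyncratic alpha_i, and "forgetting" is modeled as chopping off
# alpha_i so that every branch collapses onto the same U. All numbers are made up.

outcomes = ["research_funded", "market_boom", "art_renaissance"]
U = {"research_funded": 1.0, "market_boom": 0.5, "art_renaissance": 0.7}
alphas = [  # idiosyncratic components for a researcher, an investor, an artist
    {"research_funded": 0.4, "market_boom": 0.0, "art_renaissance": 0.1},
    {"research_funded": 0.0, "market_boom": 0.6, "art_renaissance": 0.0},
    {"research_funded": 0.1, "market_boom": 0.0, "art_renaissance": 0.5},
]

def branch_utility(i: int, outcome: str) -> float:
    """Full utility of branch i: U + alpha_i."""
    return U[outcome] + alphas[i][outcome]

def forgetful_utility(outcome: str) -> float:
    """Utility after forgetting: the alpha_i is gone, only the shared U remains."""
    return U[outcome]

for i in range(len(alphas)):
    print(f"branch {i}:", {o: branch_utility(i, o) for o in outcomes})
print("after forgetting, every branch:", {o: forgetful_utility(o) for o in outcomes})
```

The point of the sketch is just that the shared U is already identical across branches before any forgetting happens; forgetting only deletes the α_i terms.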
Everyone might take the simple step of losing all their minor memories, which has no effect on the utility function; but if you want more bargaining power, do you keep going? The hope is that this would make your coalition easier to locate, more visible to “logical sight”, and that the increased bargaining power would ensure that, at the least, your important shared values are optimized harder than they could be if you remained a disparate group of branches.
I think this is sometimes correct, but often not.
From a simple computationalist perspective, increasing the measure of the ‘overall you’ matters little. The part that bargains, your rough algorithm and your utility function, is already shared: U is common to all your instances; some of you just have considerations that pull in other directions (the α_i).
This is the same core idea as the FDT explanation of why people should vote: despite not being clones of you, there is a group of people who reason similarly to you. Getting rid of your memories in the voting case does not help you!
For the acausal bargaining case, there is presumably some value in being simpler. But that more likely means you should bargain ‘nearby’ in order to present a computationally cheaper value function ‘far away’. This is similar to forgetting, in that you appear as if you had some shared utility function, but without actually forgetting, and so you can still optimize for α_i in your local universe. As well, the bargained utility function presented far away (where there is less logical sight into your cluster of universes) is unlikely to be exactly U.
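To make the “bargain nearby, present something cheap far away” idea concrete, here is a hedged sketch in the same toy style as above; the aggregation rule (U plus the average of the α_i) is just one arbitrary choice of how the cluster might compress itself, not a claim about what actual bargaining would produce:

```python
# Sketch (my own toy numbers, not from the post): the cluster bargains 'nearby'
# to present one computationally cheaper utility function 'far away', here just
# U plus the average of the alpha_i, while each branch keeps its full U + alpha_i
# for optimizing inside its own universe. Nothing is actually forgotten.

outcomes = ["research_funded", "market_boom", "art_renaissance"]
U = {"research_funded": 1.0, "market_boom": 0.5, "art_renaissance": 0.7}
alphas = [  # idiosyncratic components for a researcher, an investor, an artist
    {"research_funded": 0.4, "market_boom": 0.0, "art_renaissance": 0.1},
    {"research_funded": 0.0, "market_boom": 0.6, "art_renaissance": 0.0},
    {"research_funded": 0.1, "market_boom": 0.0, "art_renaissance": 0.5},
]

def presented_utility(outcome: str) -> float:
    """Aggregate function offered to far-away bargainers; note it is not exactly U."""
    avg_alpha = sum(a[outcome] for a in alphas) / len(alphas)
    return U[outcome] + avg_alpha

def local_utility(i: int, outcome: str) -> float:
    """What branch i still optimizes for at home: U + alpha_i."""
    return U[outcome] + alphas[i][outcome]

print({o: round(presented_utility(o), 2) for o in outcomes})
print({o: local_utility(0, o) for o in outcomes})  # the researcher's local view
```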
So, overall, my argument is that forgetting does give you more realness. If at 7:59AM a large chunk of universes decide to replace part of their algorithm with a specific coordinated one (like removing a memory), then that algorithm is instantiated across more universes. But from a decision-theoretic perspective, I don’t think that matters much: you already share the important decision-theoretic parts, even if the whole algorithm is not shared.
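A toy measure calculation, with invented branch weights, illustrates the distinction I am drawing between the exact algorithm and the decision-relevant core:

```python
# Toy arithmetic (my own sketch): give each branch a made-up weight ('measure').
# A coordinated memory removal at 7:59AM makes the branches' exact algorithms
# identical, so the exact algorithm gains measure, but the decision-relevant core
# (shared U and decision procedure) already had the full measure beforehand.

weights = [0.4, 0.35, 0.25]   # made-up measures for three branches

# Before forgetting, each exact algorithm (core + its own memories) is unique,
# so the most-instantiated exact algorithm has only one branch's weight.
exact_before = max(weights)   # 0.4

# After coordinated forgetting, all three branches run the same exact algorithm.
exact_after = sum(weights)    # 1.0

# The decision-relevant core was shared all along, so its measure never changed.
core_measure = sum(weights)   # 1.0 before and after

print(exact_before, exact_after, core_measure)
```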
From a human perspective we may care about this as a value of wanting to ‘exist more’ in some sense. I think that is a reasonable enough value to have, but it is often satisfied by recognizing that sharing your decision methods and 99.99% of your personality is already enough.
My main question about whether this is useful beyond a terminal value for existing more concerns quantum immortality, about which I am more uncertain.