The specific question I’ve been asking is ‘What does it mean for me and someone else to live in the same world?’
As best I can tell, a full reduction of “existence” necessarily bottoms out in a mix of mathematical/logical statements about which structures are embedded in each other, and a semi-arbitrary weighting over computations. That weighting can go in two places: in a definition for the word “exist”, or in a utility function. If it goes in the definition, then references to the word in the utility function become similarly arbitrary. So the notion of existence is, by necessity, a structural component of utility functions, and different agents’ utility functions don’t have to share that component.
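To make the “two places” point concrete, here is a minimal sketch (one formalization among many):

$$U(\text{world}) = \sum_{c \,\in\, \text{computations}} w(c)\, u(c)$$

Whether you say “only the computations with $w(c) > 0$ exist” and then sum $u$ over the things that exist, or say that everything exists and fold $w$ directly into the utility function, you end up evaluating the same sum. The semi-arbitrary weighting $w$ does its work inside the utility function either way, and nothing forces two agents to share the same $w$.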
The most common notion of existence around here is the Born rule (and less-formal notions that are ultimately equivalent to it). Everything works out in the standard way, including a shared, symmetric notion of existence, if (a) you accept that there is a quantum-mechanics-like construct with the Born rule that has you embedded in it, (b) you decide that you don’t care about anything which is not that construct, and (c) you decide that when branches of the quantum wavefunction stop interacting with each other, your utility is a linear function of a real-valued function evaluated on each of the branches separately.
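A minimal sketch of what (c) amounts to, on one natural reading (with the weights supplied by the Born rule from (a)):

$$|\psi\rangle = \sum_i c_i\,|\psi_i\rangle \quad\Rightarrow\quad U(|\psi\rangle) = \sum_i |c_i|^2\, u(\psi_i)$$

where the $|\psi_i\rangle$ are decohered, non-interacting branches, $u$ is a real-valued function of a single branch’s contents, and the $|c_i|^2$ are the Born weights. Rejecting (c) means rejecting this linearity across branches, which is separate from rejecting the quantum construct itself.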
Reject any one of these premises, and many notions which are commonly taken as fundamental break down. (Bayes does not break down, but you need to be very careful about keeping track of what your measure is over, because several different measures that share the common name “probability” stop lining up with each other.)
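A minimal sketch of two such measures coming apart (a standard toy setup; the specific numbers are mine): measure a biased qubit $n$ times, and weight the resulting decohered branches either uniformly (“branch counting”) or by squared amplitude (the Born measure).

```python
from math import comb

p = 0.9   # Born weight of outcome "1" on each measurement
n = 20    # number of measurements, each splitting every branch in two

# For each count k of "1"-outcomes, compare two measures over the
# comb(n, k) decohered branches that recorded exactly k ones:
#   branch-count measure: every branch weighted equally (1 / 2**n each)
#   Born measure: each branch weighted by its squared amplitude
for k in (0, n // 2, int(p * n), n):
    branch_count = comb(n, k) / 2**n
    born = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"k={k:2d}  branch-count={branch_count:.6f}  born={born:.6f}")
```

Branch counting concentrates almost all of its measure on branches that saw roughly 50/50 outcomes; the Born measure concentrates on branches that saw roughly 90/10. Both are legitimately “a probability over branches”, and they disagree about almost everything once $n$ is large.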
But it’s possible to regenerate some of this from outside the utility function. (This is good, because I partially reject (b) and totally reject (c).) If you hold a memory which is only ever held by agents that live in a particular kind of universe, then your decisions only affect that kind of universe. If you make an observation that would distinguish between two kinds of universes, then your successors in each see different answers, and can go on to optimize those universes separately. So if you observe whether your memories seem to follow the Born rule, and whether you evolved with respect to an environment that seems to follow it, then one version of you will go on to optimize the contents of universes that follow the Born rule, and another version will go on to optimize the contents of universes that don’t; this is more effective than trying to keep the two tied together. Similarly for deism: if you make the observation, then you can accept that some other version of you had the observation come out the other way, and get on with optimizing your own side of the divide.
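A toy illustration of why splitting wins (my own payoff numbers; nothing hangs on them): give each universe type its own payoffs per action, and compare a policy that conditions on the distinguishing observation against one that must pick a single action for both.

```python
# Hypothetical payoffs: two universe types, two available actions.
payoffs = {
    "born":     {"act_born": 10, "act_other": 2},
    "non_born": {"act_born": 1,  "act_other": 8},
}

# Split policy: each successor picks the best action for the universe
# its observation says it is in.
split_total = sum(max(acts.values()) for acts in payoffs.values())

# Tied policy: ignore the observation, so one action must serve both.
tied_total = max(
    sum(payoffs[u][a] for u in payoffs) for a in ("act_born", "act_other")
)

print(f"split: {split_total}, tied: {tied_total}")  # split: 18, tied: 11
```

The split policy dominates whenever the two universe types disagree about the best action, which is exactly the case where the observation is worth making.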
That is, if you never forget anything. If you model yourself with short-term and long-term memory as separate, and think in TDT-like terms, then all similar agents with matching short-term memories act the same way, and it’s the retrieval of an observation from long-term memory, rather than the observation itself, that splits an agent between universes. (But the act of performing an observation changes the distribution of results agents get when they do this long-term-memory lookup. I think this adds up to normality, eventually and in most cases. But the cases in which it doesn’t add up to normality seem interesting.)
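A sketch of that memory model (my construction, under the assumptions above): the policy reads only short-term memory, so every agent-instance with a matching short-term state acts identically, and the branch point is the retrieval step.

```python
# Policy depends only on short-term memory: all agents with matching
# short-term states act the same way, whatever their universes contain.
def policy(short_term: tuple) -> str:
    if "memories_follow_born_rule" in short_term:
        return "optimize_born_universes"
    if "memories_violate_born_rule" in short_term:
        return "optimize_non_born_universes"
    return "act_for_both"  # observation stored but not yet retrieved

# Retrieval copies a stored observation into short-term memory; this,
# not the original observation, is what splits the agent between universes.
def retrieve(short_term: tuple, long_term: dict, key: str) -> tuple:
    return short_term + (long_term[key],)

# Two agents in different universes, identical short-term states:
long_a = {"born_test": "memories_follow_born_rule"}
long_b = {"born_test": "memories_violate_born_rule"}
st = ()
assert policy(st) == "act_for_both"  # both act alike before retrieval

# After retrieval their short-term states differ, and their actions split.
print(policy(retrieve(st, long_a, "born_test")))  # optimize_born_universes
print(policy(retrieve(st, long_b, "born_test")))  # optimize_non_born_universes
```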