Thank you, that was interesting. I may not be able to maintain the level of formality you are expecting; I think the imprecise explanations that allow you to win are still valid, but I will try to explain things in a way that lets us understand each other.
We diverged at this point:
“but you cannot construct this simple option. It is impossible to choose a random number out of infinity where each number appears equally likely, so there must be some weighting mechanism. This gives you a mechanism to choose who you would be born as!”
I understand why it might seem that infinities break probability theory. Let me clarify what I meant when I said that you are a random consciousness from a “virtual infinite queue”. My simplest model of reality posits that there is a finite number of snapshots of consciousness in the universe (setting aside, for example, the possibility that AI somehow defeats entropy, that we should account for other continua, and so on). I hope you don’t have an issue with the idea that you could be a random snapshot from an unknown, but finite, set of them.
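To make that concrete, here is a minimal sketch of why the finite case is unproblematic; the snapshot count is an invented placeholder, not a claim about the actual number. A uniform draw from a finite set needs no weighting mechanism at all:

```python
import random

# A toy version of the finite-set model. N is an invented stand-in for
# the unknown (but finite) number of consciousness-snapshots.
N = 10**6
snapshots = range(N)            # each integer stands in for one snapshot
you = random.choice(snapshots)  # uniform draw: every snapshot is equally likely
```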
(But I also suppose that you can work with the mathematical expectation of finding yourself as a random consciousness drawn from an infinite series, provided that series has a defined variance.)
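As a sketch of that parenthetical: no uniform distribution exists over an infinite queue, but a weighted one does, and its expectation and variance can both be finite. The geometric weighting below is chosen purely for illustration; the true weights, if any, are unknown.

```python
# Geometric weights over an (truncated) infinite series of snapshot indices.
p = 0.5
ks = range(1000)                          # truncation of the infinite series
weights = [(1 - p) ** k * p for k in ks]  # geometric weights, summing to ~1
mean = sum(k * w for k, w in zip(ks, weights))
var = sum((k - mean) ** 2 * w for k, w in zip(ks, weights))
print(mean, var)  # both finite, so expectations over the series are usable
```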
But the queue of consciousnesses you could be is “virtually (or potentially) infinite” in the sense that there is no finite number of consciousnesses after whose generation the pool would be exhausted. Probabilities exist on the map, not in the territory: the universe has already created all the possible snapshots. But what you discover yourself to be influences your subjective probability distribution over how many snapshots of consciousness there are in the universe: if I discover myself maximizing their number, my expectation of that number increases. The question is whether I find this maximization useful (and I do).
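A toy Bayesian reading of that last claim, with every number invented: worlds containing snapshot-maximizers contain more snapshots, so discovering that you are one shifts your expectation of the total count upward.

```python
# Two hypothetical worlds, with an invented likelihood of the observation
# "I find myself to be a snapshot-maximizer" in each.
prior      = {"few": 0.5, "many": 0.5}
likelihood = {"few": 0.1, "many": 0.4}   # maximizers are commoner in "many"
count      = {"few": 1e3, "many": 1e9}   # snapshots per world (invented)

evidence  = sum(prior[w] * likelihood[w] for w in prior)
posterior = {w: prior[w] * likelihood[w] / evidence for w in prior}
expected_snapshots = sum(posterior[w] * count[w] for w in prior)
print(posterior, expected_snapshots)  # posterior mass moves toward "many"
```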
Now, regarding “the choice of who to be born as”. I understand your definition of “yourself as a policy” and why it is useful: timeless decision theory often enables easy coordination with agents who are “similar enough to you”, allowing for mutual modeling. However, I don’t understand why you think this definition is relevant if, at the same time, you acknowledge that you are a snapshot.
As a snapshot, you don’t move through time. You discovered yourself to be this particular snapshot, and not some other, by chance; you did not control this process, just as you did not control who you would be born as.
I suppose you can increase the probability of being found as a snapshot like yourself through evolutionary principles (“the better I am at multiplying myself, the more of me there is in the universe, so I have a better chance of being found as myself, surviving and reproducing”). But you could have been born as any other agent trying to maximize something else (its own copies, for example), and you can hardly expect to be SO successful at evolution that you wipe out all other consciousnesses and spawn forks of yourself, making the existence of the non-self a statistical anomaly.
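The arithmetic behind that objection, with hypothetical counts: under the finite-set model, your chance of “being found as yourself” is just your forks’ share of all snapshots, so forking helps only marginally unless you actually displace everyone else.

```python
# Invented counts: the point is the ratio, not the numbers.
my_forks        = 10**4
other_snapshots = 10**9
p_found_as_self = my_forks / (my_forks + other_snapshots)
print(p_found_as_self)  # ~1e-5: far from making the non-self a statistical anomaly
```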
If you truly believe that you can dominate the future snapshots so effectively that you entirely displace other consciousnesses, then yes, in some sense you could speak of having “the choice of who to be born as”. But in this case, after this process is complete, you will have no other option but to maximize the pleasure of these snapshots, and you will still arrive at total hedonistic utilitarianism.
In other words, if you are effective enough to spawn forks of yourself, the next logical step is to switch to maximizing their pleasure; at that point, your current stage of competition becomes just an inefficient use of resources, since you could focus on creating a hedonium shockwave instead of forking.
I believe that hedonistic utilitarianism is the ultimate evolutionary goal for rational agents, the attractor into which we will fall, unless we destroy ourselves beforehand. It is a rare strategy due to its complexity, but ultimately, it is selfishly efficient.
I suppose you could use the “finite and infinite” argument to say that you’re an “average” hedonistic utilitarian who prefers not to spawn new snapshots: the ideal would be one super-happy snapshot per universe, which you would have a 100% chance of finding yourself as, but since lesser, unhappy consciousnesses already exist, you need to “outweigh” the chance of finding yourself as them. That would be interesting, and a small update for me, but it’s hardly what you’re promoting.
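For what that “average” variant would amount to, here is a worked toy comparison (all welfare numbers invented): the average utilitarian scores the expected welfare of a randomly drawn snapshot rather than the total.

```python
# Welfare values are invented for illustration only.
existing   = [-5] * 100                  # unhappy snapshots already created
one_super  = existing + [1000]           # add a single super-happy snapshot
many_happy = existing + [10] * 10**4     # "outweigh" the unhappy by sheer numbers

def average(xs):
    """Expected welfare of a uniformly random snapshot."""
    return sum(xs) / len(xs)

print(average(one_super), average(many_happy))
# ~4.95 vs ~9.85 here: which strategy wins depends entirely on the
# achievable welfare levels, i.e. on the expected welfare of a random draw.
```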
Your reply:
“When I say ‘me’, I’m talking about my policy, so I’m a little confused when you say I could have been a different snapshot. Tautologically, I cannot. So, if I’m trying to maximize my pleasure, a Veil of Ignorance doesn’t make sense. The only case where it really applies is when I make pacts like ‘if you help bring me into existence, I’ll help you maximize your pleasure’, except those pacts can’t actually form. What really happens is that existing people try to bring into existence people who will help them maximize their pleasure, either by having policies similar to their own or by being willing to serve them.”
I understand that you say you are a policy and not a snapshot; what I don’t understand is why exactly you consider yourself a policy if you also say “I also hold to your timeless snapshot theory”. Even from a policy perspective, the snapshot you find yourself in is the “standard” by which you judge the divergence of other snapshots. I think you might underestimate how different you are even from yourself in different states and at different ages. Would you not wish happiness on your child-self or old-self if they were too different from you in terms of “policy”? And would you feel “the desire to help another person as yourself” if he were similar enough to you?
And I still don’t understand what you mean by a “mechanism to choose who you would be born as” (other than killing everyone and making your forks the most common life form in the universe). Even if we consider you not as a snapshot but as a “line of continuity of consciousness”/policy/person in the standard sense, you could have been born a different person/policy. And in the absence of such a mechanism, I think utilitarianism is “selfishly” rational. I don’t understand why timeless pacts can’t form, either; they are practically the basis of TDT, and you already don’t believe in time.
Maybe it’s my genome’s fault that I care so much about my future self: my genome is very similar to its future copy, so it forces me to help that copy survive, even if it sits in a person very different from who I am today.