I try to be pragmatic, which means I only find it useful to consider constructive theories; anything else is not well defined, and I would say you cannot even talk about it. This is why I take issue with many simple explanations of utilitarianism: people claim to “sum over everyone equally” while not having a good definition of “everyone” or of “summing equally”. I think these are the two mistakes you are making in your post.
You say something like,
You never had the mechanism to choose who you would be born as, and the simplest option is pure chance.
but you cannot construct this simple option. It is impossible to choose a random number out of infinity where each number appears equally likely, so there must be some weighting mechanism. This gives you a mechanism to choose who you would be born as!
We have to first define what “you” even looks like. I take an approach akin to effective field theory, where I consider you a coarse policy that is being run, detailed only to the point where it is pragmatically useful to consider. I wrote a longer comment in another thread that explains this well enough. The key takeaway is that we can compare two policies with their KL-divergence, and thus we can compare “current you” to “future you”, or “current me” to “current you”.
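As a rough sketch of what that comparison could look like (the action set, the specific distributions, and the idea of flattening a policy into a single discrete distribution are all made-up simplifications for illustration):

```python
import numpy as np

def kl_bits(p, q):
    """KL(p || q) in bits for two discrete distributions over the same actions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # terms where p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Hypothetical coarse policies: probability of picking each of three actions
# in some situation (a real policy would range over many situations).
current_me        = np.array([0.70, 0.20, 0.10])
me_in_ten_seconds = np.array([0.65, 0.25, 0.10])
current_you       = np.array([0.20, 0.30, 0.50])

print(kl_bits(me_in_ten_seconds, current_me))  # small: very similar policies
print(kl_bits(current_you, current_me))        # larger: a more distant person
```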
I also hold to your timeless snapshot theory, though I would like to mention that animals (including humans) are likely cognitively disabled in this regard: processes that realize they are timeless snapshots are the same kinds of processes that have an existential crisis instead of making more of themselves. Anyway, since we’re both timeless snapshots, me now and me ten seconds from now are not the same person. However, we have extremely similar policies, and thus are extremely similar people. By choosing to stay alive now, or choosing to think a certain way, I can choose how a very similar being to myself arises!
If you’re trying to maximise your pleasure, or your utility, you have to include all the beings that are similar to you in your summation. In particular, you should weight them something like
$$U_{\text{overall}} = \sum_{\pi\ \text{is a policy}} 2^{-\operatorname{KL}(\pi \,\|\, \text{snapshot of you})}\, U_\pi$$
If π is a hedonistic sum utilitarian, then
$$U_\pi = \sum_{p\ \text{is a person}} \operatorname{pleasure}(p).$$
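A minimal sketch of the overall sum, assuming the same toy action distributions as above (the policies, their utilities $U_\pi$, and the snapshot are all made-up numbers, not anything derived):

```python
import numpy as np

def kl_bits(p, q):
    """KL(p || q) in bits for two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / q[m])))

snapshot_of_you = np.array([0.7, 0.2, 0.1])

# Hypothetical nearby and distant policies, and the utility U_pi each assigns.
policies  = [np.array([0.7, 0.2, 0.1]),   # your snapshot exactly
             np.array([0.6, 0.3, 0.1]),   # a close variant ("future you")
             np.array([0.1, 0.1, 0.8])]   # a distant stranger
utilities = [10.0, 8.0, -3.0]             # made-up U_pi values

# U_overall = sum over policies of 2^-KL(pi || snapshot of you) * U_pi
U_overall = sum(2.0 ** -kl_bits(pi, snapshot_of_you) * u
                for pi, u in zip(policies, utilities))
print(U_overall)
```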
There’s not really a reason π would be a hedonistic sum utilitarian, unless that’s close to the policy of your current snapshot. Such a policy isn’t evolutionarily stable, since it can be invaded by policies that act the same, except that they turn purely selfish when they can get away with it. In fact, every policy can be invaded like this. So, over time, the policies similar to you will become more and more selfish. However, you usually don’t find yourself to be a selfish egoist, because eventually your snapshot dies and a child with more altruistic brainwashing takes its place as the next most similar policy.
Now, I’d like to poke a little at the difference between selfish egoism and utilitarianism. To make them both constructive, you have to specify who “you” are, what your preferences are, what other people’s preferences you care about, and how much you weigh these preferences. You’ll end up with a double sum,
$$\sum_{\pi} 2^{-\operatorname{KL}(\pi \,\|\, \text{you})} \sum_{i\ \text{is a preference}} w_i u_i$$
Utilitarians claim to weigh others’ preferences so much that they actually end up better off by sacrificing for the greater good. They wouldn’t even think of it as a sacrifice! But, if it’s not a sacrifice, the selfish egoist would take the very same actions! So, are selfish egoists really just sheep in wolves’ clothing? People who get a bad rap because others assume their preferences are misaligned with their own, when the utilitarian’s preferences are misaligned just as often? I think this is the case, but perhaps the difference comes from how they treat fundamental disagreements.
You can build a weight matrix out of everyone’s weights for each other’s preferences. If we have three people, Alice, Bob, and Eve, a matrix
$$W = \begin{bmatrix} 0.9 & 0.2 & -0.1 \\ 0.1 & 0.8 & 0.1 \\ -0.5 & -0.5 & 2 \end{bmatrix}$$
might say Alice and Bob are mildly friendly to one another, while Eve hates their guts. Since
$$\begin{bmatrix} U_{\text{Alice}} \\ U_{\text{Bob}} \\ U_{\text{Eve}} \end{bmatrix} \propto W \begin{bmatrix} U_{\text{Alice}} \\ U_{\text{Bob}} \\ U_{\text{Eve}} \end{bmatrix}$$
their utilities are some eigenvector of W. There are three eigenvectors:
$$\begin{bmatrix} 0.06 \\ 0.52 \\ 0.42 \end{bmatrix}, \qquad \begin{bmatrix} 0.4 \\ 3.4 \\ -2.8 \end{bmatrix}, \qquad \begin{bmatrix} 2.4 \\ 0.0 \\ -1.4 \end{bmatrix}$$
Alice would prefer they choose the last one, Bob the second, and Eve the first, so this is a fundamental disagreement. I think the only difference that makes sense is to define the selfish egoist as someone who will fight for their preferred utility function, and the utilitarian as someone who will fight for whichever has the highest eigenvalue.
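Here is a minimal numpy sketch of that computation, taking W exactly as written above. Eigenvectors are only defined up to scale and numpy’s ordering is arbitrary, so the raw output will not necessarily line up with the rounded vectors quoted above.

```python
import numpy as np

# The weight matrix from above: row i holds person i's weights on everyone's preferences.
W = np.array([[ 0.9,  0.2, -0.1],
              [ 0.1,  0.8,  0.1],
              [-0.5, -0.5,  2.0]])

eigenvalues, eigenvectors = np.linalg.eig(W)  # columns of `eigenvectors` are eigenvectors of W

for lam, v in zip(eigenvalues, eigenvectors.T):
    print(f"eigenvalue {lam: .2f}  ->  candidate utilities {np.round(v, 2)}")

# The "utilitarian" as defined here backs the eigenvector with the largest eigenvalue;
# an egoist backs whichever eigenvector is personally best for them.
best = eigenvectors[:, np.argmax(eigenvalues)]
print("highest-eigenvalue utilities:", np.round(best, 2))
```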
Thank you, that was interesting. I may not be able to maintain the level of formality you are expecting (I think imprecise explanations that still let you win are valid), but I will try to explain things in a way that lets us understand each other.
We diverged at the point:
but you cannot construct this simple option. It is impossible to choose a random number out of infinity where each number appears equally likely, so there must be some weighting mechanism. This gives you a mechanism to choose who you would be born as!
I understand why it might seem that infinities break probability theory. Let me clarify what I meant when I said that you are a random consciousness from a “virtual infinite queue”. My simplest model of reality posits that there is a finite number of snapshots of consciousness in the universe—unless, for example, AI somehow defeats entropy, unless we account for other continuums, and so on. I hope you don’t have an issue with the idea that you could be a random snapshot from an unknown, but finite, set of them.
(But I also suppose that you can use the mathematical expectation of finding yourself as a random consciousness from an infinite series, if the variance of that series is defined).
But the queue of consciousnesses you could be is “virtually (or potentially) infinite” because there is no finite number of consciousnesses you could find yourself generating after which the pool of consciousnesses would be empty. Probabilities exist on a map, not on the territory: the universe has already created all the possible snapshots. But what you discover yourself to be influences the subjective distribution of probabilities for how many snapshots of consciousness there are in the universe—if I discover myself maximizing their number, my expectation of the number of snapshots increases. The question is whether I find this maximization useful (and I do).
Now, regarding “the choice of who to be born as”. I understand your definition of “yourself as a policy” and why it is useful: timeless decision theory often enables easy coordination with agents who are “similar enough to you”, allowing for mutual modeling. However, I don’t understand why you think this definition is relevant if, at the same time, you acknowledge that you are a snapshot.
As a snapshot, you don’t move through time. You discovered yourself to be this particular snapshot by chance, not some other, and you did not control this process, just as you did not control who you would be born as.
I suppose you can increase the probability of being found as a snapshot like yourself through evolutionary principles (“the better I am at multiplying myself, the more of me there is in the universe, so I have a better chance of being found as myself, surviving and reproducing”), but you could have been born as any other agent trying to maximize something else (for example, its own copies), and you can hardly expect to be SO successful at evolution that you wipe out all other consciousnesses and spawn forks of yourself, making the existence of the non-self a statistical anomaly.
If you truly believe that you can dominate the future snapshots so effectively that you entirely displace other consciousnesses, then yes, in some sense you could speak of having “the choice of who to be born as”. But in this case, after this process is complete, you will have no other option but to maximize the pleasure of these snapshots, and you will still arrive at total hedonistic utilitarianism.
In other words, if you are effective enough to spawn forks of yourself, the next logical step will be to switch to maximizing their pleasure, and at that point your current stage of competition will be just an inefficient use of resources, since you could focus on creating a hedonium shockwave instead of forking.
I believe that hedonistic utilitarianism is the ultimate evolutionary goal for rational agents, the attractor into which we will fall, unless we destroy ourselves beforehand. It is a rare strategy due to its complexity, but ultimately, it is selfishly efficient.
I suppose you could use the “finite and infinite” argument to say that you’re an “average” hedonistic utilitarian who does not want to spawn new snapshots: the ideal would be one super-happy snapshot per universe, which you would have a 100% chance of finding yourself as, but since lesser, unhappy consciousnesses already exist, you need to “outweigh” the chance of finding yourself as them. That would be interesting, and a small update for me, but it’s hardly what you’re promoting.
When I say, “me,” I’m talking about my policy, so I’m a little confused when you say I could have been a different snapshot. Tautologically, I cannot. So, if I’m trying to maximize my pleasure, a Veil of Ignorance doesn’t make sense. The only case it really applies is when I make pacts like, “if you help bring me into existence, I’ll help you maximize your pleasure,” except those pacts can’t actually form. What really happens is existing people try to bring into existence people that will help them maximize their pleasure, either by having similar policies to their own, or being willing to serve them.
I understand that you say you are a policy and not a snapshot, but I don’t understand why exactly you consider yourself a policy if you say “I also hold to your timeless snapshot theory”. Even from a policy perspective, the snapshot you find yourself in is the “standard” by which you judge the divergence of other snapshots. I think you might underestimate how different you are even from yourself in different states and at different ages. Would you not wish happiness on your child-self or old-self if they were too different from you in terms of “policy”? Would you feel “the desire to help another person as yourself” if they were similar enough to you?
And I still don’t understand what you mean by a “mechanism to choose who you would be born as” (other than killing everyone and making your forks the most common life form in the universe). Even if we consider you not as a snapshot but as a “line of continuity of consciousness”/policy/person in the standard sense, you could have been born a different person/policy. And in the absence of such a mechanism, I think utilitarianism is “selfishly” rational. I don’t understand why timeless pacts can’t form either; it’s basically the foundation of TDT, and you already don’t believe in time.
Maybe it’s my genome’s fault that I care so much about future me. My genome is very similar to its future self, and so it forces me to help that future self survive, even if it lives in a very different person than I am today.