I haven’t read this all yet, but what I want to say about anthropics and immortality doesn’t seem included at a quick glance.
If we live in a multiverse, everyone is mostly immortal anyway. Or rather, almost everyone who could potentially be saved by anything you may do will live forever (or for as long as physical reality allows) somewhere. I also don’t think that measure matters for utility, only for probability estimates (yes, I really have to do a write-up on all this eventually). (This means that I should be neutral to the measure of worlds I exist in, and should neither pay 1 utilon to split the world into 2 (thus doubling my measure), nor to prevent such a split.)
Those two claims taken together seem to imply that cryonics is actually a bad idea. If (let’s say) you, along with X% of humanity, sign up, what you have done is ensure that you likely end up in a world where fewer people exist (namely, only those who signed up). However, if you don’t sign up, then the whole world is in it “together”, and the world that survives likely has more members in total. A world where you survive without signing up for cryonics is likely to have something in it that causes many more people to survive (FAI, a breakthrough in research, or something I haven’t thought of). By making it easier for you to be saved, you make it harder for others.
It will depend on your actual utility function, but the ones I’ve considered (selfish, altruistic towards one’s own world) seem to agree on this. Altruism towards all possible worlds is a little more complicated, and also seems to be the one most people’s intuitions use by default. Then, it would depend on how much you expect yourself to be worth to others in the worlds where you survive only if you take cryonics, and how exactly you weigh worlds where you don’t exist.
But it’s not as simple as saying: cryonics has a chance at working, I want to be immortal, therefore cryonics is a good idea. (If the arguments for cryonics are different, I haven’t heard any others that are relevant to my argument.)
On the other hand, if you don’t have too much confidence in Big Worlds, then cryonics may still work out as positive utility.
(I’m aware this basic argument has been made in the selfish case by Yvain. I had all of this worked out myself before seeing that, and also he doesn’t really make the same case I do. He argues that it is neutral, while I argue for harmfulness. I will note that it is only harmful to those that sign up under my theory.)
I had to read Yvain and then piece together a bunch of missing parts to figure out what exactly you meant. You’re assuming not just many worlds, but an infinite number of worlds, or at least a large enough number of them that every possible variation relying on our laws of physics would be included. This, for starters, is a huge assumption.
If I don’t get cryonics in this universe, another me in another universe will. By reducing the number of possible universes I might be in, I increase the possibility that I am living in as close to the optimal universe as possible, as long as I don’t eliminate the best possible optimum. If I’m on the show Deal or No Deal, as long as I don’t get rid of the million dollar case, any case I get rid of improves my odds of getting the million dollar case.
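That case-elimination intuition can be checked with a quick toy simulation. A minimal sketch, assuming 26 cases, one of them holding the million, and that the eliminated cases are opened uniformly at random (all of these numbers are just illustrative):

```python
import random

# Toy model of the Deal or No Deal point above: 26 cases, one holds the
# million. I keep one case, open `opened` others at random, and only count
# the runs where none of the opened cases turns out to hold the million.
def posterior_estimate(num_cases=26, opened=20, trials=200_000):
    kept_wins = 0
    surviving_runs = 0
    for _ in range(trials):
        million = random.randrange(num_cases)
        kept = random.randrange(num_cases)
        others = [c for c in range(num_cases) if c != kept]
        opened_cases = random.sample(others, opened)
        if million in opened_cases:
            continue  # the million got eliminated; we condition on this not happening
        surviving_runs += 1
        kept_wins += (kept == million)
    return kept_wins / surviving_runs

# Prior chance of holding the million is 1/26 (about 0.038); conditional on 20
# random eliminations all missing it, the estimate comes out near 1/6 (about 0.167).
print(posterior_estimate())
```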
However, it’s not clear that helping a bunch of random strangers in my universe is any better than helping a bunch of random strangers in another universe. You’re presuming a set of beliefs in which I am an EA within my universe, but an egoist towards other universes. If I don’t distinguish between a man living in China and a man living in China on Earth 2, I should get cryonics.
You’re assuming not just many worlds, but an infinite number of worlds, or at least a large enough number of them that every possible variation relying on our laws of physics would be included. This, for starters, is a huge assumption.
If our universe is spatially infinite, that should be enough.
One of my reasons for believing in multiverses is the anthropic argument from fine-tuning, which would seem to offer a large enough range for this to be relevant.
If I don’t get cryonics in this universe, another me in another universe will. By reducing the number of possible universes I might be in, I increase the possibility that I am living in as close to the optimal universe as possible, as long as I don’t eliminate the best possible optimum. If I’m on the show Deal or No Deal, as long as I don’t get rid of the million dollar case, any case I get rid of improves my odds of getting the million dollar case.
This seems correct.
However, it’s not clear that helping a bunch of random strangers in my universe is any better than helping a bunch of random strangers in another universe. You’re presuming a set of beliefs in which I am an EA within my universe, but an egoist towards other universes. If I don’t distinguish between a man living in China and a man living in China on Earth 2, I should get cryonics.
I did note that it depends on utility functions, so my “presumptions” were explicit. Even my limited argument still implies that selfish people shouldn’t sign up, while I’m sure some people who have signed up identify as selfish. I also think that altruism towards only your world is a position many people would agree with. It’s at least not clear that it’s not right.
I’m thinking now that it may not matter after all. My argument is that for any person Y, U(Y | Y gets cryonics) is lower than U(Y | Y doesn’t get cryonics). Their personal utility is higher without cryonics regardless of utility function, so only outside considerations matter here. But outside considerations may matter only to those who expect to have a high impact on the rest of the world, and even then, it’s hard to see how much value you could really provide in those worlds that would make it worth sacrificing your own utility.
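Here is a toy illustration of the shape of that inequality, with entirely made-up numbers, taking the Big-World survival premise above for granted; it shows the structure of the claim, not an argument that the numbers are right:

```python
# Hypothetical numbers only: under the Big-World premise, Y survives either way;
# what changes is which kind of surviving world Y mostly finds themself in.
# "cryo_only" worlds are ones where revival tech works but little else went right,
# so fewer people are around; "everyone_saved" worlds got FAI or a general breakthrough.

def personal_utility(p_cryo_only, p_everyone_saved, u_cryo_only, u_everyone_saved):
    # Expected personal utility over the kinds of surviving world Y might wake up in.
    return p_cryo_only * u_cryo_only + p_everyone_saved * u_everyone_saved

# If Y signs up, more of Y's surviving measure sits in the sparse cryo-only worlds.
signed_up    = personal_utility(p_cryo_only=0.7, p_everyone_saved=0.3,
                                u_cryo_only=40, u_everyone_saved=100)
did_not_sign = personal_utility(p_cryo_only=0.0, p_everyone_saved=1.0,
                                u_cryo_only=40, u_everyone_saved=100)

print(signed_up, did_not_sign)  # 58.0 < 100.0 under these made-up weights
```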
As I said before:
Then, it would depend on how much you expect yourself to be worth to others in the worlds where you survive only if you take cryonics, and how exactly you weigh worlds where you don’t exist.
This really needs a full theory of anthropics (and metaethics), which this margin is too narrow to contain. Just wanted to get the idea out there.
I question whether anyone would truly be neutral to a world being split into two. Being neutral would imply that one might prefer receiving a small pleasure to giving an incomprehensibly large number of individuals fulfilling lives, if said individuals would be in a different universe. Though I know people tend to have scope insensitivity and care less about those who are outside their own “group,” the level of discrimination you’re suggesting seems hard to believe.
“world being split into two” here means something that no one can really notice, even with access to all existing observations: it’s a transition from “1 world that looks like X” to “2 worlds that look like X”. It’s meaningless, in my view, but it seems to be the basis of “measure”.
I would prefer receiving a small pleasure and having my measure halved than neither. This has nothing to do with selfishness.
Being neutral would imply that one might prefer receiving a small pleasure to giving an incomprehensibly large number of individuals fulfilling lives, if said individuals would be in a different universe.
That’s associated with being altruistic only towards your universe, or being selfish, which is distinct from the measure claim.
I know my argument is not too rigorous, and should not be used by someone deciding about cryonics now, but I think it deserves a rigorous response: if it’s wrong, it should be provably so. I would love to have an anthropic theory over different utility functions that was clear about when my argument works and when not.
“world being split into two” here means something that no one can really notice, even with access to all existing observations: it’s a transition from “1 world that looks like X” to “2 worlds that look like X”. It’s meaningless, in my view, but it seems to be the basis of “measure”.
Meaningless as in the word has literally no meaning, or as in the concept is unimportant?
I know my argument is not too rigorous, and should not be used by someone deciding about cryonics now, but I think it deserves a rigorous response: if it’s wrong, it should be provably so. I would love to have an anthropic theory over different utility functions that was clear about when my argument works and when not.
You haven’t really given an argument at all for or against your value system, nor have I. As far as I know, there’s no way to prove a value system is correct, because values are entirely subjective.
That’s associated with being altruistic only towards your universe, or being selfish, which is distinct from the measure claim.
Actually, it is associated with the measure claim, since the fulfilling lives could be caused by the universe splitting into two.
Meaningless as in the word has literally no meaning, or as in the concept is unimportant?
The latter. I’m willing to say that there could be some state of reality that corresponds to 1 world splitting into 2 identical worlds, but I don’t think that should factor into any utility function.
You haven’t really given an argument at all for or against your value system, nor have I. As far as I know, there’s no way to prove a value system is correct, because values are entirely subjective.
What I want to see is a rigorous argument for or against cryonics over popular value systems. I’m not sure that even EA over all existing universes would say to get it.
I will note that conventional wisdom (in cryonics) seems to be that selfish people should sign up, while my theory disagrees, so there is something to be analysed there.
Actually, it is associated with the measure claim, since the fulfilling lives could be caused by the universe splitting into two.
If my measure goes from, say, .1 to .05, and .05 worth of universes ceases to exist, that still shouldn’t matter to my utility, as long as the universes that cease to exist are exactly identical to the .05 that still exist with me.
When you say “universe splitting into two”, it refers to 2 universes that evolve in exactly the same way. I can’t gain something in one world in exchange for losing it in the other, or the 2 would be counted separately in my measure.
In your example of taking a pleasure and destroying worlds, every single person who would have led a “fulfilling life” with probability X if I refused the pleasure still leads a fulfilling life with the same probability X. The only thing that changes is their measure, which doesn’t change anything, not even probabilities, as long as all measures change together. So even an altruist towards all worlds could say that it doesn’t matter how many copies of the exact same worlds there are, as long as the relative ratios are the same.
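A minimal sketch of that ratio point (the world labels and measures are made up): rescaling every measure by the same factor leaves the normalized probabilities, and hence anything computed from them, untouched.

```python
# Made-up measures over three kinds of world; the point is only arithmetic:
# rescaling all measures together leaves the normalized probabilities alone.
def normalize(measures):
    total = sum(measures.values())
    return {world: m / total for world, m in measures.items()}

before = {"fulfilling": 6, "mediocre": 3, "empty": 1}
after  = {world: 0.5 * m for world, m in before.items()}  # every measure halved

print(normalize(before))  # {'fulfilling': 0.6, 'mediocre': 0.3, 'empty': 0.1}
print(normalize(after))   # identical: relative ratios, and so probabilities, unchanged
```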
What I want to see is a rigorous argument for or against cryonics over popular value systems. I’m not sure that even EA over all existing universes would say to get it.
Ok, I misunderstood what you were referring to when you were talking about the proof. Please PM me if you ever formalize it; I’d like to read it.
Anyway, I see our utility functions are radically different. I suppose there’s no use arguing about them.